Finding the Areas of Rhombuses

Question Video: Finding the Areas of Rhombuses — Mathematics

A diagonal of a rhombus has length 2.1, while the other one is four times as long. What is its area?

Video Transcript

A diagonal of a rhombus has length 2.1, while the other one is four times as long. What is its area?

We start by remembering that a rhombus is a quadrilateral with all four sides equal in length. So, when we model our rhombus, we’ll need to have four equal sides. Instead of being given any information about the length of the sides of this rhombus, we’re given information about the diagonals. We’re given that one of these diagonals has a length of 2.1 units, and the other one is four times as long. Four multiplied by 2.1 will give us 8.4. Looking at our diagram, we can see that there’s a shorter diagonal and a longer diagonal. And therefore, the shorter one will be 2.1 units, and the longer one will be 8.4 units.

There are two formulas that we can use to find the area of a rhombus. One involves the base and the perpendicular height, and the other one involves the diagonals. As we’re only given the length of the diagonals here, it would be sensible to use that formula. So, we remember that the area of a rhombus is equal to 𝑑 sub one multiplied by 𝑑 sub two over two, where 𝑑 sub one and 𝑑 sub two are the lengths of the two diagonals.

We can plug in the values of our two diagonals to give us 2.1 multiplied by 8.4 over two. Simplifying our calculation, we’ll need to work out 2.1 multiplied by 4.2, which we can do without a calculator. We work out 21 multiplied by 42 using whatever multiplication method we choose. And then, as our two values had a total of two decimal places, then so will our answer. We weren’t given any length units in the question, but as we’ve worked out an area, we would be using square units. And so, our answer is that the area of this rhombus is 8.82 square units.
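The diagonal formula used in the transcript is easy to sanity-check with a few lines of code. The following Python snippet is an illustration added here, not part of the original video:

```python
def rhombus_area(d1: float, d2: float) -> float:
    """Area of a rhombus from its two diagonals: (d1 * d2) / 2."""
    return d1 * d2 / 2

short_diagonal = 2.1
long_diagonal = 4 * short_diagonal  # "four times as long" -> 8.4

# Round to two places to avoid floating-point noise; matches the hand calculation.
print(round(rhombus_area(short_diagonal, long_diagonal), 2))  # 8.82
```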
{"url":"https://www.nagwa.com/en/videos/784196235382/","timestamp":"2024-11-04T02:45:43Z","content_type":"text/html","content_length":"242258","record_id":"<urn:uuid:2d9a5191-cb1b-4d8f-9167-7eb50da418bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00597.warc.gz"}
How to do I solve this problem? 3x^2-12=0 | HIX Tutor

How do I solve this problem? #3x^2-12=0#

I can't remember how to do these and just need step by step so I can finish the problem. This is only a portion of my calculus problem but I forget how to do this part. The whole problem is f(x) = x^3 - 12x + 17: find the relative extreme points of the function if they exist. I did the first part and found the derivative of the problem but now I am stuck at the next step, which is what I posted for my question. It's been years since I took algebra so I forget. Thanks!

Answer 1

You need to factor #3x^2-12#, which ends up being #3(x+2)(x-2)#, and then set each of these factors (but ignore the #3#) equal to zero and solve:

#x + 2 = 0# gives #x = -2#, and #x - 2 = 0# gives #x = 2#.

So #x = 2, -2#

Answer 2

To solve the equation (3x^2 - 12 = 0), you can follow these steps:

1. Add 12 to both sides to isolate the term with (x^2): (3x^2 = 12)
2. Divide both sides by 3 to solve for (x^2): (x^2 = \frac{12}{3})
3. Simplify: (x^2 = 4)
4. Take the square root of both sides: (x = \pm \sqrt{4})
5. Simplify: (x = \pm 2)

So, the solutions to the equation are (x = 2) and (x = -2).
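Answer 2's isolate-and-root steps can be mirrored programmatically. This short Python sketch (not from the original page) follows the same five steps, and assumes the equation has real solutions (i.e. −c/a ≥ 0):

```python
import math

def solve_simple_quadratic(a: float, c: float) -> tuple:
    """Solve a*x^2 + c = 0 for real x: isolate x^2, then take the ± square root."""
    x_squared = -c / a            # steps 1-3: a*x^2 = -c, so x^2 = -c/a
    root = math.sqrt(x_squared)   # steps 4-5: x = ±sqrt(-c/a)
    return (root, -root)

print(solve_simple_quadratic(3, -12))  # (2.0, -2.0)
```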
{"url":"https://tutor.hix.ai/question/how-to-do-i-solve-this-problem-3x-2-12-0-x-8f9af92d53","timestamp":"2024-11-14T11:35:46Z","content_type":"text/html","content_length":"579298","record_id":"<urn:uuid:8b2ddaa1-da85-4018-af3f-9ca5dca73238>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00609.warc.gz"}
How would I make this kind of system?

So basically I am bored and decided to make a card system, where you open up a pack of cards and it gives you a random one depending on their rarity. So let's say I have something like this:

local cards = {
	ProCard = { rarity = 30 },
	noobCard = { rarity = 50 },
	ExtremeCard = { rarity = 0.5 }
}

And I want to loop through all the cards and pick one depending on its rarity. Basically the rarer it is, the less chance you will get it (like a loot box lol). How would I do this?

1 Like

Well, the way you want it is not the way everyone makes something like that. But it's close. Here is a small code which will give you the results you want.

local cards = {
	ProCard = { rarity = 30 },
	noobCard = { rarity = 50 },
	ExtremeCard = { rarity = 0.5 }
}

local Max = 100

function PickRandomCard(Max)
	local Random = math.random(1, Max)
	local Rarity, MaxRarity
	for i, v in pairs(cards) do
		if v.rarity > MaxRarity then
			MaxRarity = v.rarity
			Rarity = v
		end
	end
	return Rarity
end

The higher the rarity number, the lower the chance of getting the card.

EDIT: I fixed the error I think, try now

1 Like

Well how do I get it so it will pick a random one, depending on how rare it is? Cause I already can get how high the number is. I just need to know how to pick the actual card, depending on its rarity.

1 Like

It will pick a random card depending on its rarity. Read the code, it does exactly what you want

1 Like

I played around with it for a little while. But it is not picking doubles. This is what I have right now:

local cards = {
	ProCard = { rarity = 51 },
	noobCard = { rarity = 50 },
	ExtremeCard = { rarity = 10 }
}

local Max = 100

function PickRandomCard(Max)
	local Random = math.random(1, Max)
	local Rarity = 0
	local maxrarity = 0
	for i, v in pairs(cards) do
		if v.rarity > maxrarity then
			maxrarity = v.rarity
			Rarity = v
		end
	end
	if maxrarity == cards.ExtremeCard.rarity then
		warn('LEGENDARY CARD!!')
	end
end

while true do

And basically it always prints 50 then 51. It never does 50 then 50 again, or 51 then 51. So any way to fix this?
Also don't ask why I'm making it run multiple times. I just want to see what it will give me

The difference is in this:

if maxrarity == cards.ExtremeCard.rarity then
	warn('LEGENDARY CARD!!')
end

You use the == operator, of course it will be way too rare, so use the < operator instead

1 Like

It's still doing the thing where it always prints 50 then 51, and never doubles, like 50 and 50, etc.

1 Like

You are doing a loop in a loop; get rid of that, and it will be better. In fact, I designed this function to be used with return. The way you made it just broke the entire purpose of what I did

1 Like

Assuming you want to throw in probability distribution, we can write a function to calculate that.

local cards = {
	ProCard = 30,
	noobCard = 50,
	ExtremeCard = 0.5
}

local function createDistribution(cards)
	local distribution = {}
	local total = 0
	for card, rarity in pairs(cards) do
		total = total + rarity
	end
	local accumulate = 0
	for card, rarity in pairs(cards) do
		accumulate = accumulate + rarity
		distribution[card] = accumulate / total
	end
	return distribution
end

Then we can make a simple function to draw a card using this distribution.

local function drawCard(distribution)
	local rand = math.random()
	for card, prob in pairs(distribution) do
		if rand <= prob then
			return card
		end
	end
end

Here is the usage:

local distribution = createDistribution(cards)
local card = drawCard(distribution)

Please do note, the higher the rarity value, the more common the card (it occupies a larger portion of the probability distribution). If you want to make higher rarity values mean a card is less common, you could invert the rarity in the createDistribution function, for example by changing total = total + rarity to total = total + 1/rarity and accumulate = accumulate + rarity to accumulate = accumulate + 1/rarity.
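The inversion suggested above (weighting each card by 1/rarity so that bigger rarity numbers become less common) can be sketched in a few lines. This is a Python illustration with made-up card names, not code from the thread:

```python
import random

def create_distribution(cards: dict) -> dict:
    """Cumulative distribution using 1/rarity as the weight,
    so a HIGHER rarity number means a LESS common card."""
    weights = {card: 1 / rarity for card, rarity in cards.items()}
    total = sum(weights.values())
    distribution, accumulate = {}, 0.0
    for card, weight in weights.items():
        accumulate += weight
        distribution[card] = accumulate / total  # cumulative share in (0, 1]
    return distribution

def draw_card(distribution: dict) -> str:
    """Pick the first card whose cumulative share covers a uniform roll."""
    roll = random.random()
    for card, cumulative in distribution.items():
        if roll <= cumulative:
            return card
    return next(reversed(distribution))  # guard against float rounding

# Hypothetical cards: Legendary has the biggest rarity number -> smallest weight.
cards = {"Common": 1, "Rare": 10, "Legendary": 100}
distribution = create_distribution(cards)
print(draw_card(distribution))  # usually "Common"
```

With these weights, Common takes roughly 90% of the distribution and Legendary under 1%, which matches the "higher number = rarer" intent.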
Here is the output:

local cards = {
	ProCard = 30,
	noobCard = 50,
	ExtremeCard = 10
}

local function createDistribution(cards)
	local distribution = {}
	local total = 0
	for card, rarity in pairs(cards) do
		total = total + rarity
	end
	local accumulate = 0
	for card, rarity in pairs(cards) do
		accumulate = accumulate + rarity
		distribution[card] = accumulate / total
	end
	return distribution
end

local function drawCard(distribution)
	local rand = math.random()
	for card, prob in pairs(distribution) do
		if rand <= prob then
			return card
		end
	end
end

local distribution = createDistribution(cards)
local dict = {}
for n = 0, 100 do
	local card = drawCard(distribution)
	if dict[card] then
		dict[card] = dict[card] + 1
	else
		dict[card] = 0
	end
end
for i, v in next, dict do
	print(i, v / 100, "\n")
end

ProCard 0.37
noobCard 0.51
ExtremeCard 0.1

2 Likes

Ok, I fixed the code. Should work fine now, sorry for the errors.

local cards = {
	ProCard = { rarity = 30 },
	noobCard = { rarity = 50 },
	ExtremeCard = { rarity = 0.5 }
}

local Max = 100

function PickRandomCard(Max)
	local Random = math.random(1, Max)
	local Rarity
	for i, v in pairs(cards) do
		if v.rarity > Random then
			Rarity = v
		end
	end
	return Rarity
end

2 Likes

Yeah this works perfectly. Thank you @towerscripter1386 also for the help! I noticed that it pulls the same card a lot sometimes. I got the noobCard 15 times lol. I also noticed that yes it did give me 1 single extreme card, so yes it is very rare lol!

2 Likes

As a side note, while it's possible to store additional information with the card rarity (like the table structure you initially proposed), it's generally better to keep the data structure simple and specific to its purpose to avoid unnecessary complexity. This follows the single responsibility principle, making your code more maintainable and scalable. For instance, a separate table or dictionary could be used to store the card stats, keeping the draw chance and card stats distinct.
To summarize that using code:

Not good:

local cards = {
	ProCard = { rarity = 30 },
	noobCard = { rarity = 50 },
	ExtremeCard = { rarity = 0.5 }
}

Good, because it adheres to one principle and takes up less space in memory:

local cards = {
	ProCard = 30,
	noobCard = 50,
	ExtremeCard = 0.5
}

1 Like

Well yes, true. The reason I did this is cause I was also going to have other values, so that's why. Also, I changed it so it loops through cards in a folder in ServerStorage. But for some reason it keeps picking the legendary card instead of picking the common card, and I have no idea why this is happening. Here's my code. I didn't really change anything. It still should be picking the smallest value the littlest amount of time.

local cards = game:GetService('ServerStorage'):FindFirstChild('Cards'):GetChildren()
local cardFolder = game:GetService('ServerStorage'):FindFirstChild('Cards') -- get right cards

local function createDistribution(cards) -- get the cards so we can draw from cards later
	local distribution = {}
	local total = 0
	for card, item in pairs(cards) do
		total = total + item.Rarity.Value
	end
	local accumulate = 0
	for card, item in pairs(cards) do
		accumulate = accumulate + item.Rarity.Value
		local cardName = item.Name
		local cardDetails = {
			card = item,
			rarityValue = accumulate / total
		}
		distribution[cardName] = cardDetails
	end
	return distribution
end

local function drawCard(distribution) -- draw a card from the cards.
	local rand = math.random()
	for card, prob in pairs(distribution) do
		if rand <= prob.rarityValue then
			return card -- gets the card!
		end
	end
end
local function getCard()
	local distribution = createDistribution(cards)
	local card = drawCard(distribution)
	return card
end

local function displayCards(cards, player)
	local debris = game:GetService('Debris')
	local ui = player.PlayerGui:FindFirstChild('DisplayCards')
	local sf = ui:FindFirstChild('SF')
	local template = script:FindFirstChild('CardTemplate') -- the template of cards
	ui.Enabled = true
	for _, card in pairs(cards) do -- loop through the cards and display them
		local cardItem = cardFolder:FindFirstChild(card) -- get the card
		local Item = template:Clone() -- make the template
		Item.Visible = true
		-- set values
		Item.Rarity.Text = cardItem.Rarity.Value
		Item.ItemName.Text = cardItem.Name
		Item.Parent = sf
	end
	ui.Enabled = false
end

local cards = {getCard(), getCard(), getCard()}

local cards = {getCard(), getCard(), getCard(), getCard(), getCard(), getCard(), getCard(), getCard(), getCard(), getCard()}

I need the place file with the cards, because the code I sent earlier works.

Here ya go. CardGameLol.rbxl (56.9 KB) The game file.

1 Like

Here is the solution CardGameLol.rbxl (57.4 KB)

1 Like

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
{"url":"https://devforum.roblox.com/t/how-would-i-make-this-kind-of-system/2334436","timestamp":"2024-11-06T23:51:36Z","content_type":"text/html","content_length":"65223","record_id":"<urn:uuid:287d26f5-1abe-4bd8-808d-3777aba2357b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00016.warc.gz"}
10 NO-CALCULATOR TIPS FOR SAT MATH

You’re sitting in class when your teacher asks “What is 168 divided by 34?” In a matter of seconds a couple students whip out their handy calculators… it’s almost a race. But you have the upper hand! Your calculator is sitting right on top of your desk. Before anyone else can, you punch in those numbers and read the answer displayed neatly on the small screen. You’re rounding, of course. The actual answer is something like 4.94117647058824.

Pretty simple, right? Calculators are AH-mazing tools that we can use today to answer the majority of math problems. But what if you didn’t have one? What if it’s not ALLOWED?

On the SAT you will be given 25 minutes to answer 15 math questions. Here’s the catch. For these 15 questions, NO CALCULATOR ALLOWED. Don’t panic! Here are some no-calculator tips for your SAT math.

1. If it’s taking too long, it’s wrong.

The no-calculator questions on the SAT are just that — questions that don’t require a calculator. You’re not set up to fail here! These problems ARE possible to solve without pulling out your favorite tool. Which means if they’re taking a long time to solve without a calculator, you’ve probably taken a wrong turn.

2. Work with the smallest possible numbers

This trick will save you A LOT of time and stress.

Remove the 0’s. Let’s say you need to divide 280 by 30. Cancel a zero from each number and the problem becomes 28 divided by 3, which is far easier to handle.

Find a common denominator. In fractions, you can often determine a common denominator, which helps you simplify the problem into something more easily solvable. Check out this Khan Academy video that explains how to find the least common denominator of fractions!

3. Memorize common fractions

Below is a chart you can reference for memorizing these. It includes the decimal and percentage forms of each fraction. You should know that if you divide 1 by 2, it will give you .50 as the decimal of 1/2. When you multiply .50 x 100, you get 50%. If you get stuck, you can try checking this way.

4.
Know your multiplication tables through 15

You should practice the multiplication tables listed below to have them fresh in your mind when it comes to taking the SAT. There are plenty of great sites out there to help you brush up on your multiplication skills.

• math-drills.com: Worksheets you can print out and time yourself on.
• multiplication.com: Multiplication games you can play to have fun while you’re re-learning.
• aplusmath.com: Flashcards where you can specify the range to be 1-18

5. Memorize the squares of integers

I recommend that you memorize the squares of integers up to 15. It’ll save you a lot of time trying to multiply these out by hand! If you’re interested in learning some quick tricks to calculate the square roots of other numbers, check out the tecmath channel on YouTube; there are some great videos there.

6. Brush up on your Pythagorean triples

If you already forgot what Pythagorean triples are, they’re a set of 3 whole numbers that make up the sides of a right triangle. The smallest Pythagorean triple is 3, 4, 5. The test for whether or not three numbers form a Pythagorean triple is a formula: a² + b² = c². This is why it helps to have your square integers memorized! mathisfun.com is another great website that can help answer any questions you may have about this triangle business.

7. Don’t forget about triangles!

There are 3 types of triangles in geometry, classified by their sides:

1. Equilateral
2. Isosceles
3. Scalene

You may also want to take a look at the angles of different triangles. …BECAUSE, you can combine the two! Again, head over to mathisfun to play with their interactive triangle — change the angles to change the triangle and learn more about calculating things like area and height!

8. Target your weak points and study those concepts

Check in on your overall math skills in arithmetic of fractions, negative numbers, and decimals to make sure that you don’t have any gaps in understanding. If you do, you’ll want to fill those in so they don’t trip you up!
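Tip 6’s a² + b² = c² check is mechanical enough to script. Here’s a small Python illustration (added for this write-up, not part of the original article):

```python
def is_pythagorean_triple(a: int, b: int, c: int) -> bool:
    """True when legs a, b and hypotenuse c satisfy a^2 + b^2 = c^2."""
    return a * a + b * b == c * c

print(is_pythagorean_triple(3, 4, 5))    # True  (the smallest triple)
print(is_pythagorean_triple(5, 12, 13))  # True
print(is_pythagorean_triple(4, 5, 6))    # False
```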
Take a practice SAT test and note any problems that you struggle with or take longer than others on.

9. Practice doing problems without a calculator

Sometimes it can feel like our calculator is a security blanket. When you go to math class without it, you feel lost and useless… So try working without a calculator for a week or so leading up to your SAT to build your confidence in solving math problems by pencil. You can still use it to double-check your answers, but really aim to solve everything as if it weren’t there.

10. Don’t let the problems scare you

Remember, the very FIRST tip I gave you in this article was that the problems are MADE to be solved without a calculator. It’s waaay too easy to look at a foreign or complex-seeming problem on a test and think “That’s not what I studied! I don’t know this!” Cue the freakout…

When you encounter problems like these, take a deep breath. The worst you can do is answer the question completely wrong, and remember that those points won’t count against you! And, pace yourself. If you really don’t know how to solve a problem, don’t spend all your time stuck on it when there are other problems you can answer right away. Come back to it at the end.

I hope after giving you these tips that I’ve at least somewhat put your mind at ease about not having a calculator on a portion of the SAT. You can do it, I promise!

Know of any other tips or tricks for doing well on the no-calculator SAT math section? Tell us about it in the comments below!

*This article is written by Renae Hintze
{"url":"https://ivyanswers.com/10-no-calculator-tips-for-sat-math/","timestamp":"2024-11-11T19:28:52Z","content_type":"text/html","content_length":"55152","record_id":"<urn:uuid:2ca96e8a-89f6-4c92-b469-b3eaba1c379e>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00525.warc.gz"}
Common Difference in Sequences - GRE Math

Example Question #1: How To Find The Common Difference In Sequences

A sequence increases by 20 from each element to the next, but no starting value is given. What is the difference between the 20th and the 30th elements of the sequence?

For this sequence, you do not have a starting point; however, none is needed. To find the difference between the 20th and the 30th elements, it is merely necessary to count the number of twenties that would be added between those elements. For instance, the difference between the 21st and the 20th elements is 20. Between the 20th and the 30th elements there are ten such increases, so the difference is 10 × 20 = 200.

Example Question #12: Sequences

Which of the following defines any term in a linear sequence, given its first and ninth terms?

Since this sequence is linear, we know that it will add the same amount for each element. This means that you can evenly divide the difference between the first and the ninth terms. Be careful! There will be eight total increases between these terms. (Think this through: 1 to 2, 2 to 3, 3 to 4, etc.) Dividing the total difference between the first and ninth terms among the eight increases that happen gives the amount we add for each element.
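The “eight increases between the first and ninth terms” reasoning generalizes: in a linear (arithmetic) sequence, the common difference is the change in value divided by the number of steps. A quick Python sketch, using hypothetical endpoint values since the page’s actual numbers did not survive:

```python
def common_difference(a_m: float, a_n: float, m: int, n: int) -> float:
    """Common difference of an arithmetic sequence from two known terms:
    the total change a_n - a_m is spread over n - m equal increases."""
    return (a_n - a_m) / (n - m)

# Hypothetical linear sequence with 1st term 5 and 9th term 29: eight increases.
d = common_difference(5, 29, 1, 9)
print(d)  # 3.0

# The gap between the 20th and 30th elements is then ten steps of d.
print(10 * d)  # 30.0
```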
{"url":"https://cdn.varsitytutors.com/gre_math-help/sequences/integers/arithmetic/common-difference-in-sequences","timestamp":"2024-11-02T20:07:44Z","content_type":"application/xhtml+xml","content_length":"143727","record_id":"<urn:uuid:76399b85-ce07-4a25-87e0-e489f20a231f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00560.warc.gz"}
How to Use Higher-Order Functions In Haskell?

In Haskell, higher-order functions are a fundamental concept that allows us to write more concise and modular code. They provide a way to take functions as arguments, return functions as results, or even both. Here is how you can use higher-order functions in Haskell:

1. Passing functions as arguments: You can pass functions as arguments to other functions by declaring the argument type as a function type. For example:

applyFunction :: (a -> b) -> a -> b
applyFunction f x = f x

In the above code, applyFunction takes a function f (of type a -> b) and an argument x (of type a), and applies the function f to x to get the result b. You can use this function with any other functions that have compatible argument and result types.

2. Returning functions as results: You can also define functions that return other functions as results. This is particularly useful for creating functions with partially applied arguments. For example:

multiplyBy :: Int -> (Int -> Int)
multiplyBy x = \y -> x * y

In the above code, multiplyBy takes an integer x and returns a lambda function that takes another integer y and multiplies it by x. This allows you to create specialized functions that are based on a common pattern but have different constant values.

3. Combining multiple functions: Higher-order functions can also be used to combine or compose multiple functions together. For example, you can use the . operator to compose two functions. The resulting function first applies the right-hand function, then applies the left-hand function to its output. Here is an example:

square :: Int -> Int
square x = x * x

tripleAndSquare :: Int -> Int
tripleAndSquare = square . (* 3)

In the above code, tripleAndSquare is a composition of square and the function (* 3). It first triples the input value and then squares the result.

Higher-order functions in Haskell enable powerful abstractions and facilitate functional programming techniques.
They help in writing more reusable, expressive, and concise code, making Haskell a highly expressive and elegant programming language.

How to use the map function with higher-order functions in Haskell?

In Haskell, the map function is a higher-order function that applies a given function to each element of a list. Here is how you can use the map function with higher-order functions in Haskell:

1. Define a function that you want to apply to each element of a list. For example, let's say you want to multiply each element of a list by 2. You can define a function called double as follows:

double :: Int -> Int
double x = x * 2

2. Use the map function to apply the defined function to each element of a list. The syntax for using the map function is map functionName list. For example, to apply the double function to each element of a list called numbers, you can use the following code:

numbers = [1, 2, 3, 4, 5]
doubledNumbers = map double numbers

In this code, the map function takes the double function and the numbers list as arguments and returns a new list called doubledNumbers. The result of the map function in this example would be [2, 4, 6, 8, 10], as each element of the numbers list is multiplied by 2.

Note that the function you provide to the map function can also be an anonymous lambda function. Here's an example using a lambda function:

numbers = [1, 2, 3, 4, 5]
doubledNumbers = map (\x -> x * 2) numbers

This code achieves the same result as the previous code snippet, but it uses a lambda function instead of a named function.

How to create a higher-order function that takes multiple arguments in Haskell?

In Haskell, a higher-order function can be created that takes multiple arguments using currying or by using tuples. Here are examples of both methods:

1. Currying: Currying is the process of transforming a function that takes multiple arguments into a series of functions that take one argument each.
Here's an example of creating a higher-order function that takes two arguments:

mult :: Int -> Int -> Int
mult x y = x * y

higherOrderFunc :: (Int -> Int -> Int) -> Int -> Int
higherOrderFunc f x = f x 10

-- Usage:
-- ghci> higherOrderFunc mult 5
-- 50

In the above example, the higherOrderFunc takes a function f as the first argument, which takes two integers and returns an integer. It then applies the function f to the x argument and the constant 10 (here, 5 multiplied by 10 gives 50).

2. Using Tuples: Tuples can be used to pass multiple arguments to a higher-order function. Here's an example:

add :: (Int, Int) -> Int
add (x, y) = x + y

higherOrderFunc :: ((Int, Int) -> Int) -> (Int, Int) -> Int
higherOrderFunc f args = f args

-- Usage:
-- ghci> higherOrderFunc add (5, 10)
-- 15

In the above example, the higherOrderFunc takes a function f as the first argument, which takes a tuple (Int, Int) as an argument and returns an integer. It then passes the args tuple to the function f.

How to use the filter function with higher-order functions in Haskell?

In Haskell, the filter function is a higher-order function that takes a predicate (a function that returns a Boolean value) and a list as arguments, and returns a new list that contains only the elements for which the predicate returns True. Here is the general syntax for using the filter function in Haskell:

filter :: (a -> Bool) -> [a] -> [a]

To use the filter function with higher-order functions in Haskell, you can pass it a lambda function or a named function that acts as the predicate. Here are a few examples:

1. Using a lambda function:

filter (\x -> x `mod` 2 == 0) [1,2,3,4,5,6] -- Output: [2,4,6]

In this example, the lambda function \x -> x `mod` 2 == 0 acts as the predicate. It checks if an element is divisible by 2 (i.e., even).

2.
Using a named function:

isPositive :: Int -> Bool
isPositive x = x > 0

filter isPositive [-2,-1,0,1,2,3] -- Output: [1,2,3]

In this example, the named function isPositive acts as the predicate. It checks if an element is greater than 0.

Both of these examples demonstrate how to use the filter function with higher-order functions in Haskell. Simply define a predicate function and pass it as the first argument to the filter function along with the list as the second argument.

What is the purpose of the filter function in Haskell?

The filter function is used in Haskell to selectively extract elements from a list that satisfy a given condition. Its purpose is to take a predicate (a function that evaluates to a Boolean value) and a list, and return a new list containing only the elements for which the predicate returns true. The general type signature of the filter function is:

filter :: (a -> Bool) -> [a] -> [a]

It takes a predicate function of type (a -> Bool), where 'a' is the type of elements in the list, and a list of type [a]. The resulting list is also of type [a].

For example, if we have a list of numbers [1, 2, 3, 4, 5] and want to filter out only the even numbers, we can define a predicate function using the modulus operator:

evenNum :: Int -> Bool
evenNum x = x `mod` 2 == 0

Using the filter function with this predicate:

filter evenNum [1, 2, 3, 4, 5]

will return the list [2, 4]. The filter function goes through each element of the list, applies the predicate function to it, and includes it in the resulting list if the predicate returns true. In essence, the purpose of the filter function is to provide a concise and functional way of extracting elements from a list based on a condition. It helps in writing more expressive and readable code.

How to use the foldr function with higher-order functions in Haskell?

In Haskell, the foldr function is used to apply a binary function to a list from right to left, reducing it to a single value.
It takes three arguments: the binary function, the initial accumulator value, and the list to be folded. Here is an example of how to use foldr with higher-order functions in Haskell:

-- Example 1: Summing a list of numbers using foldr
sumList :: [Int] -> Int
sumList xs = foldr (+) 0 xs

-- Example 2: Concatenating a list of strings using foldr
concatList :: [String] -> String
concatList xs = foldr (++) "" xs

-- Example 3: Filtering odd numbers from a list using foldr and a lambda function
filterOdd :: [Int] -> [Int]
filterOdd xs = foldr (\x acc -> if odd x then x : acc else acc) [] xs

In these examples:

• sumList uses foldr with the addition operator (+) and an initial accumulator value of 0 to calculate the sum of a list of Ints.
• concatList uses foldr with the string concatenation operator (++) and an initial accumulator value of "" to concatenate a list of strings.
• filterOdd uses foldr with a lambda function as the binary operator to filter odd numbers from a list. The lambda function takes an element x and an accumulator acc and checks if x is odd. If x is odd, it is prepended to the accumulator acc; otherwise, acc is returned without modification.

You can use foldr with other higher-order functions in a similar fashion by passing appropriate binary functions and initial accumulator values.
{"url":"https://infervour.com/blog/how-to-use-higher-order-functions-in-haskell","timestamp":"2024-11-15T00:29:25Z","content_type":"text/html","content_length":"349777","record_id":"<urn:uuid:8084687b-aef2-46bc-bbdd-befcdf6b33a5>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00429.warc.gz"}
US20150049056A1 - Interaction Sensing - Google Patents

Interaction Sensing

Publication number: US20150049056A1 (application number US14/458,102)

Legal status: (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)

Inventors: Ernest Rehmi Post, Olivier Bau, Iliya Tsekov, Sajid Sadi, Mike Digman, Vatche Attarian, Sergi Consul

Current Assignee: Samsung Electronics Co Ltd (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Samsung Electronics Co Ltd

Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)

Priority and related filings:
- Priority to US14/458,102, published as US10042504B2 (en); application filed by Samsung Electronics Co Ltd
- Priority to BR112015007261-5A (BR112015007261B1, en)
- Priority to AU2014307232A (AU2014307232B2, en)
- Priority to PCT/KR2014/007544 (WO2015023131A1, en)
- Priority to KR1020157015347A (KR102214438B1, en)
- Priority to CN201480007036.0A (CN104969156B, en)
- Priority to JP2016534531A (JP6422973B2, en)
- Priority to EP14836371.6A (EP2880515B1, en)

Publication of US20150049056A1 (en)

Assigned to SAMSUNG ELECTRONICS COMPANY, LTD.
ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CONSUL, Sergi; ATTARIAN, Vatche; BAU, Olivier; DIGMAN, Mike; POST, Ernest Rehmi; SADI, Sajid; TSEKOV, Iliya; WORTHAM, Cody. Application granted. Publication of US10042504B2. Legal status: Active.
Classifications:
☆ G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
☆ G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
☆ G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
☆ G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
☆ G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
☆ G06F3/0416—Control or interface arrangements specially adapted for digitisers
☆ G06F3/0418—Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
☆ G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
☆ G06F3/046—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by electromagnetic means
☆ H03K17/94—Electronic switching or gating, i.e. not by contact-making and -breaking, characterised by the way in which the control signals are generated
☆ H03K17/96—Touch switches
☆ G06F2203/041—Indexing scheme relating to G06F3/041 - G06F3/045
☆ G06F2203/04106—Multi-sensing digitiser, i.e. digitiser using at least two different sensing technologies simultaneously or alternatively, e.g.
for detecting pen and finger, for saving power or for improving position detection □ This disclosure generally relates to electronic devices that detect interactions with objects, and more particularly to devices that use surface contact sensors or proximity sensors to detect such interactions. □ a touch sensor can detect the presence and location of a touch or object or the proximity of an object (such as a user's finger or a stylus) within a touch-sensitive area of the touch sensor overlaid on a display screen, for example. □ the touch sensor may enable a user to interact directly with what is displayed on the screen, rather than indirectly with a mouse or touch pad. □ a touch sensor may be attached to or provided as part of a desktop computer, laptop computer, tablet computer, personal digital assistant (PDA), smartphone, satellite navigation device, portable media player, portable game console, kiosk computer, point-of-sale device, or other suitable device. □ a control panel on a household or other appliance may include a touch sensor. □ there are a number of different types of touch sensors such as, for example, resistive touch screens, surface acoustic wave touch screens, and capacitive touch screens. □ reference to a touch sensor may encompass a touch screen, and vice versa, where appropriate. □ a touch-sensor controller may process the change in capacitance to determine its position on the touch screen. □ Capacitive touch operates by sending a signal from an electrode, and then measuring the variation caused by the presence of intervening materials. Actively emitting an electric field adds to the energy usage of the device and slows down responsiveness. Additionally, scaling the capacitive touch sensor to very large areas can be cost-prohibitive. □ FIGS. 1A and 1B illustrate an example TriboTouch system, which can determine positions of an object based on triboactivity. □ FIGS. 2A-2E illustrate an example interaction between a finger and a TriboTouch sensor. □ FIG.
3 illustrates an example architecture of a TriboTouch system. □ FIG. 4 illustrates an example alternative analog front-end. □ FIG. 5 illustrates principles of TriboTouch operation. □ FIG. 6 illustrates an example process for determining types of contacts based on signal profiles. □ FIG. 7 illustrates an example of combining the capabilities of capacitive sensing and TriboTouch. □ FIG. 8 illustrates an example of capacitively coupling a transmitter to an electrode while using the same receiver system for both capacitive and TriboTouch sensing. □ FIG. 9 illustrates a triboactive surface covered with an array of different materials. □ FIGS. 10A-10C illustrate different positive and negative charge patterns generated when different objects make contact with the same patterned array of sensors. □ FIG. 11 illustrates an example configuration of a NoiseTouch system relative to a user and the environment. □ FIG. 12 illustrates an example NoiseTouch system architecture. □ FIG. 13 illustrates an example process that determines hand poses or positions. □ FIG. 14 illustrates an example method of separating touch and stylus data. □ FIG. 15 illustrates detection of the signal modification characterizing the modification of ambient noise by the contact by a stylus or pen. □ FIG. 16 illustrates an example process of passively sensing the environment and context of a user. □ FIG. 17 illustrates examples of noise contexts that can be passively sensed. □ FIG. 18 illustrates an example process of using the context sensing system to communicate with a device having a NoiseTouch sensor. □ FIG. 19 illustrates an example architecture of a TriboNoiseTouch system. □ FIG. 20 illustrates an example method of separating triboactive data from noise data. □ FIGS. 21-23 illustrate example TriboNoiseTouch processes for identifying triboelectricity-related events and noise-related events. □ FIG. 
24 illustrates a triboactive subsystem producing example high-resolution data based on individual micro-contacts with a surface of a touch sensor, while a noise-based sensing subsystem produces an example blob around the area of contact or hover as well as a “shadow” of a hand hovering over the surface. □ FIG. 25 illustrates an example method enhancing the accuracy of finger contact. □ FIG. 26 illustrates an example method for detecting a finger contact and isolating it from a non-conductive pen contact. □ FIG. 27 illustrates example estimation of a pen or hand pose by detecting a hover shadow of the hand making contact or holding the pen. □ FIG. 28 illustrates example TriboTouch sensing for providing high resolution stylus sensing and example TriboNoise sensing for detecting a specifically designed stylus that features buttons to trigger menus and functions. □ FIG. 29 illustrates an example method for improving a dynamic range for hover sensing. □ FIG. 30 illustrates example single-touch electrode components. □ FIG. 31 illustrates two electrodes in an example interleaved pattern. □ FIG. 32 illustrates a row-column electrode grid that can be used to detect position of two touch points. □ FIGS. 33 and 34 illustrate array multitouch configurations using single-touch electrodes in a grid. □ FIG. 35 illustrates an example of continuous passive position sensing using a resistive sheet electrode. □ FIGS. 36 and 37 illustrate an example of continuous two-dimensional passive position sensing. □ FIGS. 38-40 illustrate example electrode-sheet configurations. □ FIG. 41 illustrates an example of dielectric-encoded passive position sensing. □ FIGS. 42 and 43 illustrate an example of continuous passive position sensing using an array of non-linear elements. □ FIG. 44 illustrates an example of spatially-distributed coordinated encoding. □ FIG. 45 illustrates an example combination of TriboTouch with resistive touch sensors. □ FIGS. 
46 and 47 illustrate example combinations of TriboTouch with inductive touch sensors. □ FIG. 48 illustrates an example computer system 4300. □ FIGS. 1A and 1B illustrate an example TriboTouch system, which can determine positions of an object based on triboactivity. □ FIG. 1A shows an insulator surface adjacent to an electrode. The electrode is connected to TriboTouch hardware, which determines the positions of an object 130 such as a finger when contact of the object with the insulator produces a local charge displacement, as shown in FIG. 1B. □ the charge displacement is not a net current flow, but rather a charge displacement that is reversed when contact with the object is removed. □ This distortion of the internal electric field of the insulator can be picked up by the TriboTouch hardware, and interpreted as contact and separation events. Additionally, the distortion spreads out over a region from the point of contact, allowing for a continuous estimate of position. □ FIGS. 2A-2E illustrate an example interaction between a finger and a TriboTouch sensor, which can be used to determine the finger's position based on triboactivity. □ TriboTouch: When two objects come into contact, charge can be transferred between them due to the interaction of electron clouds around the surface atoms. This effect is known by various names, including triboelectricity, contact potential difference, and work function. In the semiconductor industry these phenomena lead to electrostatic discharge (ESD) events which can damage sensitive electronic devices. Rather than attempting to mitigate these effects, techniques disclosed herein referred to by the name “TriboTouch” use this charging mechanism to detect surface contact and motion effects. □ TriboTouch can directly sense insulators (e.g., gloves, brushes, etc.) as well as conductors or strong dielectrics (e.g., fingers, conductive rubber, etc.)
to enable modes of interaction with sensing surfaces such as those described herein. □ TriboTouch uses local charge transfer caused by contact and need not emit an electric field to be measured. □ TriboTouch works by measuring the charge displaced when two objects come into contact or separate. No secondary mechanism is needed to induce charge in the object being sensed. There is no need to transmit a signal to be measured. Instead, charge is generated and received when an object contacts the sensing surface. □ FIG. 2A shows a finger above an insulating surface. □ In FIG. 2B, as the finger contacts the surface, charge flows and current is sensed. □ In FIG. 2C, current ceases at equilibrium. □ In FIG. 2D, finger separation causes charge redistribution and an opposite current. □ In FIG. 2E, equilibrium has been restored. □ Charge transfer can occur between combinations of insulators, semi-conductors, and conductors with dissimilar surface properties (e.g., composition, surface microstructure, etc.). □ the polarity, surface charge density, and rate of charge transfer (“contact current”) depend on the particular materials involved. □ the amount of charge transferred between two materials can be estimated from their relative positions in an empirically-determined “triboelectric series”. □ a commonly-accepted series is: air, human skin or leather, glass, human hair, nylon, wool, cat fur, silk, aluminum, paper, cotton, steel, wood, acrylic, polystyrene, rubber, nickel or copper, silver, acetate or rayon, Styrofoam, polyurethane, polyethylene, polypropylene, vinyl (PVC), silicon, and Teflon (PTFE). □ TriboTouch allows detection of contact by essentially any solid material. □ FIG. 3 illustrates an example architecture of a TriboTouch system. □ a high impedance amplifier 306 amplifies incoming signals 305 received from an input electrode 304 in response to a surface contact 302, and a subsequent analog to digital converter (ADC) converts this signal 305 to digital form.
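The series above lends itself to a small lookup sketch: a material earlier in the series charges positive against a later one, and the distance between the two entries is a crude proxy for the amount of charge transferred. The list below is copied from the text (collapsing the "X or Y" entries to a single name); the two helper functions are illustrative only and not part of the disclosure:

```python
# Empirically-determined triboelectric series from the text, most
# tribopositive first (simplified: one name per entry).
SERIES = ["air", "human skin", "glass", "human hair", "nylon", "wool",
          "cat fur", "silk", "aluminum", "paper", "cotton", "steel",
          "wood", "acrylic", "polystyrene", "rubber", "nickel", "silver",
          "acetate", "styrofoam", "polyurethane", "polyethylene",
          "polypropylene", "vinyl", "silicon", "teflon"]

def contact_polarity(a, b):
    """Sign of the charge material `a` acquires on contact with `b`:
    the material earlier in the series charges positive."""
    ia, ib = SERIES.index(a), SERIES.index(b)
    if ia == ib:
        return 0
    return 1 if ia < ib else -1

def transfer_magnitude(a, b):
    """Crude proxy: materials farther apart in the series are expected
    to transfer more charge on contact."""
    return abs(SERIES.index(a) - SERIES.index(b))
```

For example, human skin and Teflon sit near opposite ends of the series, which is consistent with a bare finger transferring charge readily to a PTFE-coated surface.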
□ the input electrode 304 , high impedance amplifier 306 , and ADC 308 convert the signal 305 as seen at the electrode 304 accurately to digital form. □ Other embodiments can use sigma-delta approaches, charge counting, charge balancing, or other means of measuring small charge displacements, as shown in FIG. 4 . □ a gain control system 310 can optionally be used to maintain the values within the prescribed range of the system. □ the components that receive and convert the input signal to digital form are referred to herein as an analog front-end. □ the analog front-end can include the input electrode 304 , amplifier 306 , ADC 308 , and gain control 310 , or a subset of those components. □ a processing system 312 receives the digital signal and generates position data 332 . □ the processing system 312 can be implemented using hardware, software, or a combination of hardware and software. □ the processing system 312 starts at block 314 and performs initial calibration at block 316 . Then the baseline can be determined by an adaptive method 318 . □ the adaptive method can be, for example, a running average, a differential measurement with respect to a shield electrode, or a composite measure computed from an aggregate of measurement sites, or other methods. This may be triggered as the system is first initialized, or when the system detects that there is a drift in the signal, as indicated by a constant offset of values over a long period. Once this baseline is subtracted at block 320 , the noise in the signal (for example to detect common 50/60 Hz noise, and frequencies above and below the expected range of the system) is modeled and rejected at block 322 , leaving a signal due to contact charging effects. □ Contact charging events are then detected and classified, at block 326 , as contact, separation, or motion by their time domain profile, using methods such as matched filters, wavelet transforms, or time-domain classifiers (e.g. support vector machines). 
These events are then integrated by a state machine at block 328 to create a map of contact states on the sensing surface, which allows the system to track when and where contact and release events take place. Finally, this map is used to estimate event types and coordinates at block 330. Note that TriboTouch does not ordinarily produce a continuous signal when a contact is stationary. However, it does produce opposite-polarity signals on contact and removal. These opposite-polarity signals can be used to keep track of how additional contacts are formed and removed in the vicinity of an existing contact point. □ the pattern of contacts can be understood by an analogy to the effects of dragging a finger through sand, in which a wake is formed before and after the finger. Similarly, a “charge wake” is seen by the system, and the charge wake is used to determine motion. □ the final output is a high-level event stream 333 describing the user's actions. □ the output can include position data 332. □ For large objects (e.g., a finger), the TriboTouch system can keep track of which contacts belong together. Even when two objects are in close proximity, as in a pinch gesture, for example, the sensor actually detects two contact “peaks” very close together. Therefore the contact relationships can be maintained. □ FIG. 4 illustrates an example alternative analog front-end. While the description of FIG. 3 has been related to the use of a high-impedance amplifier 306 followed by an analog-to-digital converter 308, TriboTouch can also employ a charge-balancing sigma-delta converter, or it can combine both approaches.
□ a capacitor 406 is switched by a switch 404 between a reference voltage source (Vref) 408 and the input electrode 402 to transfer packets of charge, thereby keeping the input electrode potential within the range of the input amplifier 410 (or comparator in the case of a 1-bit sigma-delta ADC). □ the subsequent signal processing chain combines the output 315 of the ADC 412 and output of the automatic gain control (AGC) 414 to reconstruct the input current with a higher dynamic range than would be possible with the input amplifier and ADC alone. □ the reconstructed input current is provided to TriboTouch signal processing 416, which can be the processing system 312 or another signal processing system. □ TriboTouch can sense signals directly generated by physical contact and need not transmit signals to be sensed. Therefore, the system does not emit spurious signals as a result of its activities outside of what may be normally expected from any electronic circuit, simplifying compliance with EMI regulations and design of noise-sensitive electronics positioned nearby. □ An additional benefit is the power savings from this design. There are direct savings from not having to transmit a field. Additionally, the system benefits from a simplified architecture, which means there are fewer electronic devices to power. Further, since there is no need to perform extensive noise rejection in hardware, there can be additional savings from reduction of such hardware. □ FIG. 5 illustrates principles of TriboTouch operation. □ the tribocharging caused by contact with the insulating surface is coupled capacitively to the electrode via dielectric polarization. □ TriboTouch is thus capable of detecting contact, motion, and separation of objects at the surface of the insulator. □ a data processing system can determine the type of object that interacts with the surface using an event detection and classification component 506.
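The charge-balancing idea of FIG. 4 (capacitor 406 switched against Vref 408) can be put in numbers: each switched packet moves Q = C·Vref of charge, and the packet count combined with the residual voltage the ADC still sees reconstructs the displaced input charge with more dynamic range than the ADC alone. The component values and function below are illustrative assumptions, not values from the disclosure:

```python
C_BAL = 1e-12     # balancing capacitor (illustrative 1 pF)
V_REF = 1.0       # reference voltage Vref (illustrative)

def reconstruct_charge(packet_count, residual_volts, input_cap=10e-12):
    """Displaced input charge = charge moved in balancing packets plus
    the residual still on the input node, as digitized by the ADC."""
    return packet_count * C_BAL * V_REF + residual_volts * input_cap
```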
□ the event detection and classification component 506 uses classification characteristics 504 to determine contact type data 508 , which identifies the type of the object. □ the classification characteristics 504 can include one or more signal patterns 502 that correspond to different types of objects. □ a first signal pattern 512 can correspond to a finger, a second signal pattern 514 to a glove, a third signal pattern 516 to a plastic stylus, a fourth signal pattern 518 to a paint brush, and so on. □ the event detection and classification component 506 can, for example, compare the detected tribocharging signal to the signal patterns 502 and select one of the signal patterns 502 that best matches the detected signal. □ the event detection and classification component 506 can also estimate the position 510 of the detected signal, as described above with reference to FIG. 3 . □ FIG. 6 illustrates an example process for determining types of contacts based on signal profiles. Because TriboTouch can sense contact with, motion across, and separation of an object from the sensing surface, it is not necessary to algorithmically derive these events from capacitance measurements. TriboTouch can therefore produce more accurate identification of these events than capacitive sensing can ordinarily provide. Additionally, because of the localized nature of tribocharging, the position estimation algorithms can yield higher spatial and temporal resolution than capacitive sensing methods. This higher resolution can be used, for example, to perform palm rejection or other inadvertent contact rejection using the process shown in FIG. 6 . The process of FIG. 6 detects and classifies events at block 602 . □ Block 604 integrates events, e.g., by using a state machine to create a map of contact states on the sensing surface, as described above with reference to FIG. 3 . 
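The best-match selection performed by component 506 might be sketched as cosine similarity against stored templates; this stands in for the matched filters and time-domain classifiers named earlier, and the template waveforms are made-up placeholders (the disclosure itself notes its example profiles are not indicative of actual material data):

```python
def _unit(v):
    mag = sum(x * x for x in v) ** 0.5
    return [x / mag for x in v] if mag else list(v)

def classify_contact(signal, templates):
    """Select the stored signal pattern that best matches the detected
    tribocharging waveform (cosine similarity as a simple stand-in for
    matched filtering)."""
    s = _unit(signal)
    scores = {name: sum(a * b for a, b in zip(s, _unit(tpl)))
              for name, tpl in templates.items()}
    return max(scores, key=scores.get)

# Hypothetical waveform templates for patterns 512-518.
TEMPLATES = {
    "finger": [0, 4, 8, 4, 0],
    "glove":  [1, 2, 2, 2, 1],
    "stylus": [0, 9, 1, 0, 0],
    "brush":  [1, 1, 1, 1, 1],
}
```

Normalizing both waveforms first makes the match insensitive to contact strength, so a light and a firm finger tap map to the same template.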
□ the map can be used to track when and where contact and release events take place, and to estimate event type and coordinates. □ Block 608 estimates event positions to generate position data 612. □ Block 610 detects poses to generate hand and stylus pose data 614. □ different types of contacts can have different characteristic signal profiles (examples shown are not indicative of actual material data) and the received signal's characteristics can be used to detect, for example, inadvertent palm contact while sketching with a stylus. □ different object types can be detected based on the contact profiles of the objects without a special pickup design. These profiles can be expressed either in terms of example waveforms, or in terms of algorithmic approaches that capture distinctive features of the waveform. □ the TriboTouch system can use one instance of the above-mentioned hardware for each touch position, or it can use a continuous larger electrode, and estimate the position based on the distance-dependent change in signal through the electrode. □ the change can be caused by material properties of the covering material, resistance of the electrode body, reactive impedance of the electrode, or any other method. □ TriboTouch can therefore distinguish position at a resolution higher than that of its electrode structure. □ When an instance of the hardware is used for each touch position, the hardware instances operate in parallel, so that each electrode is handled individually. □ the parallel arrangement allows faster read speeds, but increases hardware complexity. □ scanning through each electrode in sequence offers different tradeoffs, because the digitization system should be faster (and thus consume more power), but the overall system is more compact (which can reduce the power consumption).
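Distinguishing position at a resolution finer than the electrode pitch can be illustrated with a standard centroid interpolation over neighboring electrode signals; this is an assumption about how the distance-dependent signal change could be exploited, not a method stated in the text:

```python
def interpolate_position(electrode_signals, pitch_mm=5.0):
    """Centroid (weighted average) of per-electrode signal magnitudes
    gives a continuous position estimate between discrete electrodes."""
    total = sum(electrode_signals)
    if total == 0:
        return None                      # no contact detected
    centroid = sum(i * s for i, s in enumerate(electrode_signals)) / total
    return centroid * pitch_mm           # convert electrode index to mm
```

A contact centered between electrodes 1 and 2 of a 5 mm-pitch array then reads as 7.5 mm rather than snapping to either electrode.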
□ TriboTouch can be configured for single or multiple touch points, and additionally can be configured for either continuous position sensing (such as a phone or tablet), or discrete position sensing (such as a button). That is, position and motion can be sensed, as in a touchscreen, or discrete switches can be used. □ a 4-contact resistive-pickup system can be used. □ a row-column system that detects 2 simultaneous contacts can be used. □ pickups can be added to a resistive system. □ an array of pickups can be used to detect 5 contacts. □ the specific pickup configuration is a design option for the pickups and electronics. □ TriboTouch can provide the benefit of robust input without the need for additional precautions necessary for traditional capacitive sensing. □ FIG. 7 illustrates an example of combining the capabilities of capacitive sensing and TriboTouch (e.g. direct sensing of contact with conductive and non-conductive objects). Because both strategies use sensitive measurements of charge displacement, it is possible to combine them using essentially the same analog front end hardware. □ FIG. 7 illustrates the basic principle whereby an electrode 702 can be shared between these two sensing methods. Capacitive sensing works by transmitting a balanced AC signal at high frequencies (typically >125 kHz) using a transmitter 706 into an electrode, and measuring either the transmit loading or the signal received at other electrodes. The capacitive measurement can be performed by a capacitive receiver 708. □ TriboTouch works by measuring the local charge displacement using a receiver 712 at low frequencies (ordinarily <1 kHz).
By capacitively decoupling the electrode 702 from the capacitive sensing circuit 708 using a capacitor 704, the triboelectric charge displacement can be maintained and measured separately, either by time-multiplexing the two sensing modes or by filtering out the transmit signal at the TriboTouch analog front-end or in subsequent signal processing. When time-multiplexing, the capacitive system 710 suspends access to the electrode 702 while the TriboTouch system 714 measures, and vice versa. □ the TriboTouch system 714 uses a filter and knowledge of the signal being sent by the capacitive system 710 to remove the effects of capacitive measurements during the noise rejection phase of processing. Further examples of combining other types of touch sensors, such as resistive, capacitive, and inductive sensors, are described below with reference to FIGS. 45-47. □ FIG. 8 illustrates an example of capacitively coupling a transmitter 804 to an electrode 802 while using the same receiver system 806 for both capacitive and TriboTouch sensing. □ the capacitive software and the TriboTouch software can be combined into a single system 808. □ the capacitive software uses the same hardware as the TriboTouch software, taking turns to use the shared resources. □ FIG. 9 illustrates a triboactive surface covered with an array 900 of different materials. □ the embodiment shown in FIG. 9 makes it possible to distinguish between different contacting materials (e.g. skin, graphite, rubber, nylon, etc.) by patterning materials with different tribonegativity over sensing sites 902, 904, 906 on the TriboTouch surface. □ the principle is similar to that of a color CMOS image sensor, in which a color filter mask is superimposed over pixel sensors. □ the triboactive surface is covered with an array 900 of four different materials, ranging from strongly tribopositive (++) materials 902 to strongly tribonegative (−−) materials 906. □ the array can be laid over different electrodes.
These electrodes can be clustered closely together, such that a small motion is sufficient to cross multiple electrodes. Differentiation between different material types can be performed with fewer material types to speed up type detection. □ FIGS. 10A-10C illustrate different positive and negative charge patterns generated when different objects make contact with the same patterned array 1008 of sensors. □ contact with the finger 1002 generates negative charge patterns on the −, +, and −− sensors, and a neutral charge pattern on the ++ sensor. Therefore, the finger 1002 is characterized by an overall strongly positive charge pattern. □ contact with the pencil 1004 generates positive charge patterns on the + and ++ sensors, and negative charge patterns on the − and −− sensors. Therefore, the pencil 1004 is characterized by an overall neutral charge pattern. □ contact with the eraser 1006 generates positive charge patterns on the +, −, and ++ sensors, and a neutral charge pattern on the −− sensor. The eraser 1006 is therefore characterized by a strongly positive charge pattern. □ These characteristic charge patterns can be used to identify an unknown object that makes contact with the sensor array 1008. □ TriboTouch allows for detecting a single contact, dual touch (e.g., detect two fingers simultaneously making contact), multi-touch (e.g., detect three or more fingers simultaneously making contact), the order of touch (e.g., detect the order where the index finger makes contact first and then the middle finger), the state of the object/finger where the first object/finger is in a first state and the second object/finger is in a second state (for example, when rotating, the first finger can be stationary while the second finger rotates about the first finger), detect adjacent fingers versus non-adjacent fingers, detect thumb versus fingers, and detect input from prosthetic devices. TriboTouch also allows for detecting motion, and also detecting the position of the touch/motion.
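The charge-sign signatures of FIGS. 10A-10C can be expressed as a lookup table keyed by the per-site polarity over the (−−, −, +, ++) pattern. The polarity marks in the source are partially mis-encoded, so the exact assignments below are a best-effort reading and should be treated as illustrative:

```python
# Per-site polarity over the (−−, −, +, ++) patterned array:
# 1 = positive charge pattern, -1 = negative, 0 = neutral.
SIGNATURES = {
    (-1, -1, -1, 0): "finger",   # negative on three sites, neutral on ++
    (-1, -1, 1, 1):  "pencil",   # negative on −−/−, positive on +/++
    (0, 1, 1, 1):    "eraser",   # neutral on −−, positive elsewhere
}

def identify(sample_polarities):
    """Match a measured polarity pattern against known signatures."""
    return SIGNATURES.get(tuple(sample_polarities), "unknown")
```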
□ TriboTouch allows for determining the shape of the object making the contact, the type of materials of the object making the contact, activating controls based on the type of materials that are detected, activating modalities based on the shape and type of materials detected (e.g., brush vs. eraser), using contact shape to depict contact realistically, using contact shape to detect an object to change modalities of an application, and using contact shape to improve position accuracy. □ dual touch detection allows for detecting a zoom gesture, a panning gesture, and rhythmic gestures to create shortcuts or codes. □ multi-touch detection allows panning gestures to control application switching or multi-finger controls for games. □ TriboTouch also allows the order of the touch to be detected so that, for example, rhythmic input can be used to create shortcuts or codes. Detecting adjacent fingers versus non-adjacent fingers can be used to detect input from a chorded keyboard where multiple keys together form a letter. Detecting thumb versus fingers can be used to provide a modified keyboard input mode, allow for chorded input, and allow an imprint of fingers to be used as a code. □ motion can be detected so that, for example, the following gestures can be detected: zoom in, zoom out, panning, dragging, scrolling, swipe, flick, slide, rotate clockwise, or rotate counterclockwise. □ the different types of contact, motion/gestures, and position described above can also be detected using NoiseTouch and TriboNoiseTouch. □ TriboTouch: In industrial settings, the noise resistance and distinctive signal characteristics of TriboTouch (and NoiseTouch) allow operation in noisy, humid, or dirty environments. These conditions generally prohibit the use of capacitive sensors, and as a result the systems currently used are relatively primitive (though robust), such as physical buttons, membrane switches, IR touchscreens, etc.
TriboTouch techniques enable the same type of interfaces available to consumer users to be used in industrial settings, such as easy-to-clean hard glass touch controls, and so on.

TriboTouch can be used to provide self-powered buttons, e.g., for transitioning from sleep mode to wakeup mode without capacitive sensing. When a contact takes place on a triboelectric control pickup, a small charge redistribution is triggered. This displacement current may be used to directly send a short message regarding the event. The device may collect power from static electricity produced during bulk motion, and later use that power to operate during relevant contact events. This may be coupled with a radio transmitter or similar device to allow for completely wireless and battery-less remote control of devices.

TriboTouch can provide indirect touch features, which can, for example, enable placing a paper on top of a touch screen and writing on the paper with a finger, stylus, brush, or the like. TriboTouch (and NoiseTouch) surfaces operate with an insulator between the electrode and the contacting object, and the charge displacement effect can occur on any material. Therefore, the touch surface can be covered by an additional material such as a sheet of paper or cloth, and operation will not necessarily be impeded as a result. Since the contact of any two materials can produce a triboelectric effect, the makeup of the two materials making contact (while in contact with the touch surface), whether paper and pencil or brush and canvas, is not an issue in contact detection.

Triboactive contact detection can be used to detect erasure, for example, by detecting the motion of an eraser on top of paper, thus mirroring the digital content to what is drawn on the paper itself. Attachments can also be made to the screen to speed up particular input.
For example, for gaming applications, a soft passive joystick that makes contact with the screen when pressed in different directions can be used to provide the user with better haptic feedback. Similarly, a keyboard template may be used to provide physical passive keys that can be used to quickly trigger actions in applications such as drawing or 3D graphics, where switching between different modes and tools in rapid succession is common. Because triboactive contact sensing can sense contact from non-conductive materials, the choice of materials for the attachments is greatly increased, and conductive or electrically active components are not required. This allows a much broader class of input experiences at much lower cost, and use of a greater set of materials such as plastic, paper, or wood for the attachments.

Gesture input can be provided using TriboTouch techniques. Triboelectric charge displacement is analogous to the displacement of sand as a finger is run through it: a change in the angle of the finger (whether leaning left or right, angle of lean, and so on) can affect the way the sand is disturbed. Similarly, a change in the angle of the finger can affect the charge displacement. This change in displacement can be measured to estimate the pose of the hand, including angle, handedness, and the like.

FIG. 11 illustrates an example configuration of a NoiseTouch system relative to a user and the environment. A person can be surrounded by an electric field emitted by devices in the environment. This field is normally considered part of the electromagnetic interference (EMI) in the environment. This field is carried throughout the body, and can be coupled capacitively to electrodes in the device. A technique referred to herein as "NoiseTouch" uses the noise that is conducted by the body and picked up by the electrodes of the touch sensor to detect the position of the user's touch.
Feature parity with capacitive sensors is maintained (hovering support, multitouch, etc.). NoiseTouch uses environmental noise and is thus immune to EMI, and does not need to emit an electric field to sense user interactions. NoiseTouch is scalable (i.e., it can be applied to surfaces of any shape and size), responsive, and has reduced complexity.

Environmental EMI sources 1106 can be coupled to the ground 1102 via impedance Z_in 1104, and to the human body 1110 via impedance Z_air 1108. The body 1110 is also connected to ground 1102 via an impedance Z_b 1112. The EMI 1106 is coupled to the electrode 1118 through an optional insulator 1116, and is then received by the NoiseTouch hardware 1120, which itself is coupled to ground 1102 via impedance Z_h 1114. The NoiseTouch system 1120 can detect touch by sensing the change in the characteristics of the noise received by the electrode 1118.

FIG. 12 illustrates an example NoiseTouch system architecture. Environmental noise sources such as power lines, appliances, mobile and computing devices, etc. continuously emit electric fields that contribute to the environmental electromagnetic interference (EMI, i.e., electronic noise). The human body is a slight conductor, and thus acts as an antenna for these signals. A high impedance amplifier 1208 amplifies the incoming signals, and a subsequent analog to digital converter (ADC) 1214 converts this signal to digital form.

The processing system 1216 (e.g., processing software running on a computer system) has two functions. Initially, the processing system 1216 characterizes the noise at block 1220 and adapts the gain at block 1218 so that the signal does not overwhelm the amplifier 1208. The data processing system 1224 then continues gain adaptation at block 1226, while rejecting unwanted signals at block 1228 and estimating positions at block 1230.
The gain adaptation information is fed back to a gain controller 1210, which can be a portion of the front-end hardware, to control the high-impedance amplifier 1208. The gain adaptation maintains the signal from the amplifier 1208 within the range of the ADC 1214.

The noise characterization system 1220 can be used to break the noise signal into bands and characterize the reliability of those bands based on how constantly they are available, and what variability they exhibit. Via this analysis, a profile of each band is created, which can then be used by the noise source selection system 1222 to select an appropriate band (or set of bands) for position estimation. The selection process can also decide to change the selection on a time-varying basis as the user's location and the noise environment around the user change. For example, when the user sits down in front of a TV, a particular band may be particularly fruitful. When leaving the home, this band may no longer be as useful as the band (or set of bands) produced by the car.

Block 1228 removes the unwanted bands of noise, and feeds the data to block 1230, which uses the signals to estimate where and how the user is approaching the surface. Block 1230 also carries out linearization, such that the position of the user is expressed as uniform values with relation to the edges of the surface. Linearization in TriboTouch is essentially de-noising of the position data generated by the array: because the positions are detected at each sensor, the position data is cleaned up and fit to a smoother motion. The linearization system mathematically maps the positions from the continuous range of values produced by the system to the Cartesian coordinates of the touch surface. In one or more embodiments, the process can be based on an individual mapping of touch positions to Cartesian coordinates.
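For illustration, the band profiling and selection performed by the noise characterization and noise source selection systems (blocks 1220 and 1222) might be sketched as follows. This is a minimal sketch, not the patented implementation; the band count, window size, and reliability score are assumptions.

```python
# Illustrative sketch (not the patented implementation): breaking the sampled
# noise signal into frequency bands and profiling each band's reliability by
# its average strength and its variability across time windows.
import numpy as np

def band_profiles(samples, n_bands=8, window=256):
    """Return (mean, std) of per-window band magnitudes over the recording."""
    frames = samples[: len(samples) // window * window].reshape(-1, window)
    spectra = np.abs(np.fft.rfft(frames, axis=1))      # per-window spectrum
    bands = np.array_split(spectra, n_bands, axis=1)   # group bins into bands
    mags = np.stack([b.sum(axis=1) for b in bands], axis=1)
    return mags.mean(axis=0), mags.std(axis=0)

def select_band(mean, std):
    """Prefer a band that is consistently available: strong mean, low variance."""
    return int(np.argmax(mean / (std + 1e-9)))
```

A steady mains-hum component, for example, would yield a band with a high mean and low variance, and would therefore be favored for position estimation.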
Noise from the environment can be picked up by the user's body. This noise is capacitively coupled by the electric field of the body to the input electrode 1206, and the signal from the electrode is then digitized. The digitization can be done by a number of methods, including the high-gain amplifier 1208 followed by the ADC 1214, as shown in FIG. 12. The signal can also be converted by other techniques, such as instrumentation amplifiers, sigma-delta converters, charge counters, current metering approaches, and so on. The gain of the high impedance amplifier 1208 can optionally be controlled by the gain adaptation component 1218 of the processing system 1216, though the high impedance amplifier 1208 and ADC 1214 can alternatively have sufficient resolution such that gain control is not necessary.

Block 1220 characterizes the noise into frequency bands. The system can also determine the periodicity of various noise bands. This determination allows for the selection of a reliable band (or bands) at block 1222, based on continuous availability and signal strength. That information can be used to reject unwanted signals at block 1228.

Block 1230 performs position estimation. During the course of processing, the signal characteristics may change. In case of such a change, the system can trigger additional gain adaptation 1226 or noise characterization 1220 to provide uninterrupted operation. Linearization can be performed at block 1230 to compensate for the nonlinear nature of a continuous sheet electrode, or a position can be estimated directly from the centroid of the activation seen in a row-column or matrix electrode array. Block 1230 then produces the resulting position data 1232.

The NoiseTouch system does not include facilities for transmission of signals. Signal transmission facilities can be omitted because NoiseTouch senses environmental signals, and does not need to transmit signals in order to sense environmental signals.
Since the receiving hardware is designed to accept EMI, it is resistant to interference from EMI sources. In addition, the system does not emit spurious signals as a result of its activities beyond what may be normally expected from any electronic circuit, simplifying compliance with EMI regulations and the design of noise-sensitive electronics positioned nearby. An additional benefit is the power savings from this design. On one hand, there are direct savings from not having to transmit a field. Additionally, the system benefits from a simplified architecture, which means there is simply less electronics to power to begin with. Since there is no need to carry out extensive noise rejection in hardware, there are additional savings from the reduction of complexity on that front as well.

FIG. 13 illustrates an example process that determines hand poses or positions. The EMI conducted by the body is coupled capacitively to the electrode via the electric field surrounding the body. The process can determine when the user is holding or touching the screen from the left or right side, which is pose information. An ADC 1306 converts the analog input signal from the amplifier 1302 to a digital signal.

NoiseTouch can detect the approach of a body part at a distance. As such, it is possible to distinguish when the user is hovering over the touch surface without making physical contact. The electrodes can be continuously scanned at multiple gain settings at block 1310, allowing for simultaneous detection of hover and touch. Multiple gain setting scanning may be used, for example, to allow for palm or inadvertent contact rejection, detection of holding pose (one-handed vs. two-handed, left vs. right handed, etc.), and the like. Block 1312 compares the signals read at different gains. Block 1314 uses pose heuristics to determine pose data, such as the pose of a hand.
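For illustration, the comparison of readings taken at different gains (block 1312) might be sketched as follows. The thresholds and signal levels are hypothetical, not values from the specification.

```python
# Illustrative sketch (hypothetical thresholds): comparing readings taken at
# two gain settings to distinguish a firm touch from a hover, as in the
# multi-gain surface scanning described above.

def classify_contact(low_gain_level, high_gain_level,
                     touch_threshold=0.8, hover_threshold=0.3):
    """Low gain only responds to close or firm contact; high gain also sees
    objects at a distance. Combining both separates touch from hover."""
    if low_gain_level >= touch_threshold:
        return "touch"
    if high_gain_level >= hover_threshold:
        return "hover"
    return "none"

classify_contact(0.9, 1.0)  # firm contact registers at both gain settings
classify_contact(0.1, 0.6)  # a distant hand registers only at high gain
```

In practice the same comparison would be made per electrode, producing the "blobs" described below rather than a single scalar decision.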
Block 1318 uses the results of the signal comparison 1312 to determine hover position. The multi-gain surface scanning can detect the pose of the hand as the user holds a device that contains a NoiseTouch sensor. Multi-gain scanning provides different sensing depths, with resolution decreasing as the gain increases. At high gain, the system can sense more distant objects, but does not determine position as exactly as when low gain is used.

Multi-gain scanning can enable the system to distinguish right-handed pen input from left-handed pen input by locating the position of the hovering hand relative to the contact position. The location can be determined using a higher gain surface scanning setting to sense the approximate position of the hand that is making contact. Multi-gain scanning can also help to sense whether one or two hands are hovering, which from the sensing perspective will produce, respectively, one or two sensed "blobs" at medium gain, or a small or large "blob" at high gain. Since the sensing field at high gains extends out from the device to some distance, it is also possible to detect how a device with a NoiseTouch screen is being held relative to the location of the screen.

Gestures that are part of "touch" can be separated from how the machine reacts to the presence of hover. For example, if a user is holding the phone in their right hand, the keyboard can automatically shift its touch points to the left so that the user can type more easily. Also, controls can appear on a tablet closer to the hand holding the tablet (or, alternatively, on the other side of the tablet, so that touching the tablet with the free hand is easier). Hover can thus be a contextual cue for the software.

FIG. 14 illustrates an example method of separating touch and stylus data. Similar to the NoiseTouch techniques described above in FIG. 12, an input signal can be received by an ADC 1420 and characterized by a noise characterization block 1422.
Noise separation is performed by a modified noise separation block 1424, and position data 1428 is determined by a position estimation and linearization block 1426. While the description above focuses on the appendages of a user, it should be noted that NoiseTouch can function equally well with conductive or partially-conductive objects. As such, it is possible to fashion styli, pens, or other devices 1410 that can be detected by NoiseTouch. The design of the stylus 1410 can include passive reactive elements, such as inductors or capacitors (or a combination of passive elements), which imprint the noise signal with a specific signature that is detectable by the position estimation 1426 and noise characterization 1422 blocks. Detecting the specific signature enables NoiseTouch to discern between the presence of a stylus 1410 and a finger. Therefore, separate finger position data 1428 and stylus position data 1430 can be generated by the position estimation and linearization block 1426.

FIG. 15 illustrates detection of the characteristic modification of ambient noise caused by contact from a stylus or pen 1510, showing example signals at different points of the analog front-end portion of the system shown in FIG. 14. The EMI sources emit an EMI signal, the stylus 1510 emits a signal, and another signal is received by the electrode 1418. The signal received by the electrode 1418 is different from the EMI received by the electrode at Z_air when the stylus is not in contact with the insulator 1416.

An alternative implementation of the device may produce a certain amount of controlled generalized EMI from the device, which is then used to detect position in areas where sufficient environmental EMI may not be available.
This capability may be automatically switched on by the automated gain control systems once the level of environmental EMI drops below a pre-programmed or dynamically selected threshold. The NoiseTouch system may be tuned to exclusively use the regulatorily-allowed EMI emissions of the device, thus rejecting other sources of noise. This increases the robustness of the device, since the EMI profile need not be dynamically characterized.

The NoiseTouch system may use one instance of the above-mentioned hardware for each touch position, or it may use a larger continuous electrode and estimate the position based on the distance-dependent change in signal through the electrode. The change may be caused by material properties of the covering material, resistance of the electrode body, reactive impedance of the electrode, or any other method. In this way, NoiseTouch can distinguish position at a higher resolution than the resolution of its electrode structure.

NoiseTouch can be configured for single or multiple touch points, and additionally can be configured for either continuous position sensing (such as in a phone or tablet) or discrete position sensing (such as a button). In the latter application, the strengths of the system remain in force, and make the system practical to use in many scenarios where environmental noise or contamination may be an issue, such as in automotive or marine uses, on factory floors, etc. In such cases, NoiseTouch can provide the benefit of a robust input solution without the need for the additional precautions necessary for traditional capacitive sensing.

FIG. 16 illustrates an example process of passively sensing the environment and context of a user. NoiseTouch can continuously sense and characterize the environmental EMI, and this capability can be used to passively sense the environment and context of the user.
For example, at home the user may be surrounded by EMI from the TV, mobile phone, and refrigerator, while at the office the user may be surrounded by the EMI from the desktop computer, office lighting, and office phone. The NoiseTouch system can capture this characteristic data and compare it to an internal database of noise and environments, using relevant similarities to deduce the user's location.

An input signal is fed from a signal acquisition system 1602 to a noise characterization module 1604. Block 1604 performs noise characterization to determine a current noise profile 1610. Block 1604 breaks the signal down into bands (e.g., using FFT or the like), and analyzes both the magnitudes of signals in different signal bands, as well as the time-domain change in those bands. The signals to be used for positioning are fed to block 1606. Block 1616 performs estimation and linearization to generate position data 1608, as described elsewhere herein. From user input 1606, as well as automatic sensing 1618, such as GPS, WiFi positioning, etc., block 1620 determines whether the device is in an environment of interest. If so, the current noise profile is stored in the Environment and Context Database 1622. The current profile and the entries in the database are used by the Environment and Context recognizer 1612 to detect when the environment or context is encountered again, and when it is recognized, events are generated accordingly.

FIG. 17 illustrates examples of noise contexts that can be passively sensed. Different rooms in a house or office can have different noise contexts. For example, a break room may include EMI from the coffee machine, while a meeting room may include EMI from a large TV or projector, as shown in FIG. 17. The device can then use the context estimates to make certain functionality easily accessible.
For example, the device can automatically print queued documents from the user when the user approaches the printer, or allow control of the projector when the user is in the same room. The user may additionally configure capabilities on a per-area or per-context basis to help streamline tasks.

The noise characterization can be performed based on external device activity 1702, such as whether a TV or lights are on or off. The context of interest can be based on automated context recognition 1704 or user input 1706. The automatic context recognition 1704 can determine that the context is "leaving the kitchen", "in the bedroom", or "driving", for example. The user input can be "watching TV", "reading in bed", or "washing clothes", for example. Based on these factors, the environment and context data 1708 is generated and used as input for context-relevant or context-dependent automation services 1710.

FIG. 18 illustrates an example process of using the context sensing system to communicate with a device having a NoiseTouch sensor. A device wishing to communicate may emit a capacitive signal 1804 by applying a voltage to a conductive surface, including the metal frame or shielding of the device itself. This signal 1804 is combined into the environmental EMI field 1802 and received by NoiseTouch. The signal can be encoded by a user or stylus at block 1808, and received by an electrode and ADC at block 1808. Noise characterization can be performed at block 1810, and position estimation and linearization can be performed at block 1812 to generate position data 1814. A signal detection system 1818 can be used to search for such signals, possibly with additional information 1816 about the context which allows a narrowing of search parameters. The noise signal is filtered at block 1820 to only include the bands where transmission is taking place, and the signal is demodulated at block 1822 to produce received data 1824.
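For illustration, the band filtering and demodulation at blocks 1820 and 1822 might be sketched as a simple on-off-keying receiver. This is an assumption for concreteness; the specification does not fix a modulation scheme, and the carrier frequency, symbol length, and threshold below are hypothetical.

```python
# Illustrative sketch (an assumed on-off-keying scheme, not a protocol from
# the specification): after filtering to the band where a nearby device is
# transmitting, per-symbol band energy is thresholded to recover data bits.
import numpy as np

def demodulate_ook(samples, fs, carrier_hz, symbol_len, threshold):
    """Recover bits by measuring carrier-band energy in each symbol window."""
    t = np.arange(symbol_len) / fs
    ref = np.exp(-2j * np.pi * carrier_hz * t)   # single-bin band filter
    bits = []
    for start in range(0, len(samples) - symbol_len + 1, symbol_len):
        window = samples[start:start + symbol_len]
        energy = np.abs(np.dot(window, ref)) / symbol_len
        bits.append(1 if energy > threshold else 0)
    return bits
```

The per-symbol correlation against `ref` plays the role of the band filter at block 1820; the threshold comparison plays the role of the demodulator at block 1822.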
The data may be used to uniquely identify a device (for example, to allow immediate and direct control of a nearby appliance), or to send contextual data (such as the time remaining for an oven to heat up, or that a refrigerator door has been left open). Such communication may be made bidirectional, such that a device which does not include the position-sensing capability may nonetheless include a NoiseTouch electrode for the purposes of receiving context and incoming data. Therefore, a non-touch-enabled device (such as a microwave) may receive NoiseTouch-based communication from a nearby device for the purposes of control or query of functionality.

Example scenarios in which environmental sensing can be used include changing a phone's home screen depending on sensed context, sending the user's location to external devices using context sensed by the phone, targeted sensing of the activity of external devices, and monitoring energy consumption. The sensor system may be located on a device such as a watch or Fitbit-type device that is worn by the user. The sensor system can also be on a laptop or a TV.

A phone's home screen can be changed depending on sensed context. Upon the user entering the house, the phone detects the noise signature of the home and provides a set of applications on the home screen that are dedicated to home control, e.g., alarm control, TV, audio system, etc. Similarly, a tablet or smartphone can display a home screen page that contains music applications when a headphone is plugged in. The controls for various appliances, lighting systems, TV and other electronics, home HVAC controls, etc.
can be brought up on a special page of the interface that makes access much more convenient. The home can be enabled to provide applications dedicated to the control of devices in each room, privileging TV controls when in the living room and a timer when in the kitchen, for example. The home screen can thus be changed depending on the sensed environmental context, and this technique can be applied on a per-room basis. The user may customize a page that displays business-related applications such as email and business document management software when the user is in the study, the TV remote and current TV schedule in the living room, and the baby monitor, security system, and AC controls in the bedroom.

A user's location can be sent to external devices using context sensed by the phone. The phone detects the current room the user is in, and sends the information to the devices in the current room. Lights can be turned on when the user carrying his phone enters a given room, and turned off when leaving it. A preset profile, e.g., certain music and lighting conditions, can be started automatically when the user enters the living room; the alarm can be deactivated when entering the house; and so on. The system may notify the TV when it detects the user has moved away. At that point, the TV may turn off a power-consuming display panel, but leave the sound on, saving energy. Likewise, the air conditioning may go into a power-saving mode when the user is away, and quickly cool a room when the user enters. The user may configure the devices to act in a particular way based on his or her presence in or absence from the vicinity.
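For illustration, the noise-profile context recognition underlying this per-room behavior might be sketched as follows. The profile values, band count, and distance threshold are hypothetical examples, not values from the specification.

```python
# Illustrative sketch (hypothetical profiles and metric): recognizing a
# previously stored environment by comparing the current per-band noise
# profile against an environment/context database.
import math

DATABASE = {
    "living room": [0.9, 0.1, 0.6, 0.2],  # e.g., TV and lighting bands strong
    "kitchen": [0.2, 0.8, 0.1, 0.7],      # e.g., refrigerator and appliances
}

def recognize(current_profile, max_distance=0.5):
    """Return the closest stored context, or None if nothing is similar."""
    def dist(profile):
        return math.dist(profile, current_profile)
    best = min(DATABASE, key=lambda name: dist(DATABASE[name]))
    return best if dist(DATABASE[best]) <= max_distance else None

recognize([0.85, 0.15, 0.55, 0.25])  # close to the living-room profile
```

Once a context is recognized, per-room actions (changing the home screen, notifying nearby devices, and so on) can be dispatched from the recognized label.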
In one or more embodiments, if the TV is on, the phone may look up favorite programs the user selected previously, and tell the user that a particular channel is showing his favorite show. Noise detection can also be used to target activity sensing of specific external devices, such as the TV, lights, audio system, etc. For example, a phone can detect that lights are left on when in the hallway before leaving a place, and notify the user. A phone can detect that a television is switched on and can provide recommendations, and the like.

Noise detection can sense the overall noise level of a home to monitor the activity of electronic devices and give a sense of global energy consumption. Using signal processing on the global noise level, energy monitoring can also be targeted and device-specific. All electronics, when active, can produce more EMI than when off. The system may determine when the user is generally using more or less energy, and provide overall feedback without necessarily detecting particular devices or knowing anything particular about those devices. Therefore, when the user is in a room, the sensing system can detect whether the lights are on or not. When the user moves to a different area, as noted by the system based on a change in the EMI environment, the system can notify the user that they left the lights on. This may be additionally gated by particular locations, such that it only applies to home, office, or otherwise. Note that in one or more embodiments this technique requires no special instrumentation of the lights or other infrastructure, and thus can be easily used with legacy unaugmented locations.

NoiseTouch and hover can be used to detect a single air touch/tap, dual air touch/tap, multi-finger air touch/tap, adjacent fingers hovering, or a hovering thumb versus fingers. Motion using hover can also be detected, such as, for example, zoom in, zoom out, panning, dragging, scrolling, swipe, flick, slide, rotation clockwise, or rotation counterclockwise.
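For illustration, the coarse energy monitoring described above, which treats overall EMI above an "everything off" baseline as a proxy for device activity, might be sketched as follows. All readings below are hypothetical.

```python
# Illustrative sketch (hypothetical readings): using the overall EMI level as
# a coarse proxy for how many devices are active, without instrumenting the
# devices themselves.

def activity_level(band_energies, baseline_energies):
    """Sum of per-band EMI energy above an 'everything off' baseline."""
    return sum(max(0.0, e - b) for e, b in zip(band_energies, baseline_energies))

baseline = [0.2, 0.1, 0.3]  # measured once with devices switched off
# More active devices raise more bands above the baseline:
activity_level([0.9, 0.1, 0.8], baseline)  # lights and TV on
activity_level([0.3, 0.1, 0.3], baseline)  # mostly idle
```

Comparing this level before and after the user leaves an area is one way to trigger the "lights left on" notification described above.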
Portions of content under the hovering object can be magnified or previewed. Objects can be recognized by detecting the conductive parts of the object. NoiseTouch also allows for detecting the tool angle, and the position of the hand relative to the object.

FIG. 19 illustrates an example architecture of a TriboNoiseTouch system. The TriboNoiseTouch techniques disclosed herein are based on a combination of the TriboTouch and NoiseTouch techniques. NoiseTouch uses the noise that is conducted by the human body and picked up by the electrodes of the touch sensor to detect the position of the user's touch. TriboTouch uses the charge displacement that occurs when two objects come in contact with each other. By measuring this displacement, TriboTouch can detect contact of the sensitive surface with any material. This is done using a sensing surface similar to capacitive sensors in use today, and requires no physical displacement (as resistive screens do).

TriboNoiseTouch combines the capabilities of TriboTouch and NoiseTouch using the same hardware, electrode geometry, and processing architecture. Therefore, the TriboNoiseTouch system has the capacitive touch features of NoiseTouch, and is also capable of sensing contact with a wide variety of materials using TriboTouch. TriboNoiseTouch opportunistically uses each methodology to offer improved capabilities, further improving the speed of contact detection over NoiseTouch, while providing non-contact and bulk contact (e.g., palm contact) sensing. TriboNoiseTouch uses environmental noise and surface interaction; it can thus be immune to EMI, and need not emit an electric field. TriboNoiseTouch can sense the contact of non-conductive materials. Additionally, TriboNoiseTouch uses a combination of two physical phenomena to detect touch and provide robustness, speed, and differentiation of contacts by different materials (e.g., finger vs. stylus).
The combination of NoiseTouch and TriboTouch technologies into a single panel can reduce complexity, provide savings in energy, and reduce hardware resource usage. A full panel reading can be performed with TriboTouch and then with NoiseTouch, or some of the electrodes on a panel can be used for TriboTouch and others for NoiseTouch, with optional switching of electrodes between TriboTouch and NoiseTouch for more continuous coverage.

Environmental noise sources 1902 such as power lines, appliances, mobile and computing devices, and the like, emit electric fields that contribute to the environmental electromagnetic interference (EMI, or colloquially, electronic noise). The human body 1904 is a slight conductor, and thus acts as an antenna for these signals. This signal is capacitively coupled to the input electrode 1906. In addition, the contact of the body or another object with the touch surface causes a triboelectric signal 1908 to be produced. Both signals are capacitively coupled to the electrode. A high-impedance amplifier or electrometer 1910 detects the incoming signals, and an analog to digital converter (ADC) 1912 subsequently converts this signal to digital form. The signal is then processed by a processing system 1916, which can be implemented as hardware, software, or a combination thereof.

The processing system 1916 can include a calibration, which can be done at startup and whenever internal heuristics determine that the signal is becoming intermittent or noisy. This is done, for example, by calculating mean and variance, and ensuring these values remain within a range. Deviations of the mean value may lead to gain adaptation, while excessive variance may cause the selection of a different noise band. The processing system 1916 has two stages of execution.
For the triboactive signal, the processing system 1916 characterizes the noise at block 1920 and adapts the gain at block 1918 so that the signal does not overwhelm the amplifier. This stage can be done separately for triboactive and noise signals, in which case the processing system 1916 characterizes the noise at block 1926 and adapts the gain at block 1924 for the noise signals. Additionally, offsets in the readings caused by charges adhered to the insulators or nearby objects can be offset for triboactive signals at block 1922. The initial conditions are calculated during the initialization phase. Noise source selection is performed at block 1928.

Block 1932 selects the measurement to make, and block 1934 separates the signals by applying initial filters specific to the signals required. The characteristics of the filters are suited to the selection of noise signals, as well as to the means of interleaving the two types of measurements. The process continues gain adaptation at block 1936 and rejects unwanted signals at block 1938. The gain and offset are adapted to compensate for environmental drift at blocks 1940 and 1942, respectively. The gain adaptation information is fed back to gain control block 1914 to control the high-impedance amplifier 1910, so that the signal from the amplifier 1910 remains within the range of the ADC block 1912. The outputs of both signal paths feed into the opportunistic position estimation and linearization block 1944, which uses the most reliable and time-relevant features of both measures to calculate position estimates 1946.

FIG. 20 illustrates an example method of separating triboactive data from noise data. A characteristic profile of noise and triboactive signals is created at blocks 2002 and 2008, respectively. Signal separation block 2014 characterizes the triboactive signal in the time and frequency domains, indicating which signals come from triboactivity.
The remaining signal is then analyzed by band, and appropriate bands are selected for the noise analysis at block 2016 . □ the system starts with an initialization phase during which specific initial signal bands are determined (possibly offline). □ Signal separation may operate in the time or frequency domain, and may be done by filtering specific frequency bands from the combined signal. □ the signals are separated according to the initialization characteristics determined, and the data is separated into independent streams for processing. □ the band selection may be dynamically changed based on location, signal strengths, etc. □ the TriboNoiseTouch system does not include facilities for transmission of signals. Signal transmission facilities can be omitted because TriboNoiseTouch senses signals in the environment as well as signals generated by the contact itself, and does not need to transmit signals to sense environmental signals. Since the receiving hardware is designed to accept EMI, it is resistant to interference from EMI sources. In addition, the system does not emit spurious signals as a result of its activities outside of what may be normally expected from any electronic circuit, simplifying compliance with EMI regulations and design of noise-sensitive electronics positioned nearby. An additional benefit is the power savings from this design. For example, there can be direct savings from not having to transmit a field. The system benefits from a simplified architecture, which means there is simply less electronics to power to begin with. Additionally, since there is no need to carry out extensive noise rejection in hardware, there is additional savings from the reduction of hardware complexity. □ FIGS. 21-23 illustrate example TriboNoiseTouch processes for identifying triboelectricity-related events and noise-related events. Three example processes for sequencing TriboNoise event-sensing are described herein. □ the process of FIG.
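Frequency-domain band separation of the kind described (filtering specific frequency bands from the combined signal and routing them into independent streams) can be sketched with a plain DFT; the band edges and sample rate below are assumptions, not values from the specification:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (adequate for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def band_energy(x, sample_rate, lo, hi):
    """Energy of signal x that falls in the frequency band [lo, hi) Hz."""
    n = len(x)
    spec = dft(x)
    return sum(abs(spec[k]) ** 2
               for k in range(n // 2 + 1)
               if lo <= k * sample_rate / n < hi)

def separate(x, sample_rate, tribo_band=(0, 20), noise_band=(50, 70)):
    """Report per-band energies so the pipeline can route the streams."""
    return {"tribo": band_energy(x, sample_rate, *tribo_band),
            "noise": band_energy(x, sample_rate, *noise_band)}
```

A production pipeline would use an FFT and dynamically chosen band edges, as the text notes the selection may change with location and signal strength.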
21 identifies triboelectricity-related events, then identifies noise-related events (i.e., TriboTouch first). □ the system can trigger the NoiseTouch subsystem when the TriboTouch portion of the system has received no signals after a period of time has elapsed. □ Each TriboTouch event transmits a touch-event or material classification event when detected. □ a timer can be used to reset the noise gain settings after no interruption has been sent by a TriboTouch-recognition pipeline after a given amount of time has passed. □ the process of FIG. 23 is an example sweep process that acquires a wide band signal and parallelizes triboelectricity sensing and noise sensing. □ the sweep process of FIG. 23 can be used, for example, when prioritization is to be set at a higher level, e.g., at application level. □ a painting application may be more closely related to triboelectricity-based sensing, while location/context dependent applications may be more closely related to noise-based sensing. □ TriboTouch and TriboNoise can be device- and application-dependent. □ the triboelectricity-first approach is well-suited for applications where touch surfaces are used heavily by the user, while the “noise-first” approach is well-suited for more general application devices, such as mobile devices, where context sensing on and above the surface interaction can be used simultaneously. □ context-dependent applications are likely to privilege noise-sensing, while drawing, painting, and other direct manipulation applications are likely to privilege triboelectricity-sensing. □ By combining noise and triboactive measurements, the system can detect materials that are not sufficiently conductive to be visible to noise-based or capacitive measurements. □ the characteristic contact reading involved in triboactive measurement obviates the need for extensive threshold estimations for detecting touch.
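The tribo-first sequencing described above, where the NoiseTouch subsystem is triggered only after TriboTouch has been silent for a period, can be sketched as a small state holder; the timeout value, class name, and method names are hypothetical:

```python
import time

class TriboFirstSequencer:
    """Tribo-first sequencing sketch: the NoiseTouch path is enabled only
    after the TriboTouch path has been silent for `timeout` seconds.
    Names and the timeout value are illustrative assumptions."""

    def __init__(self, timeout=0.5, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_tribo = clock()
        self.noise_active = False

    def on_tribo_event(self):
        """Called for each TriboTouch touch or material-classification event."""
        self.last_tribo = self.clock()
        self.noise_active = False  # tribo activity suppresses the noise path

    def poll(self):
        """Enable NoiseTouch once the silence timeout has elapsed."""
        if self.clock() - self.last_tribo >= self.timeout:
            self.noise_active = True
        return self.noise_active
```

The injectable `clock` makes the timer testable; the same structure inverted gives the "noise-first" variant.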
This means that the system is able to react to short contact events such as the user using a stylus to dot the lowercase letter “i”. □ the combination of the systems also allows for the detection of body parts and hand-held instruments such as styli. In such cases, the stylus can simply be made of an insulator that is “invisible” to noise-based measurements, which allows the system to detect whether a contact is made by, for example, resting the wrist on the touch surface, or by the stylus held in the same hand. □ FIG. 13 illustrates a process of simultaneously detecting hand pose information and hover position. □ TriboNoiseTouch systems can determine when true contact has occurred, thus preventing phantom readings from fingers held close to the touch surface from accidentally invoking commands. This is a side effect of the fact that triboactive signals are only generated by direct contact. However, it is also possible to simultaneously detect hovering as well, thus presenting additional means of interaction. Since the EMI being conducted by the body is coupled capacitively to the electrode via the electric field surrounding the body, by appropriately adjusting the gain of the system, NoiseTouch is capable of detecting approach of a body part at a distance. □ Because of the TriboNoiseTouch system's speed, it can continuously scan the electrodes at several gain settings, allowing for simultaneous detection of hover and touch. This may be used, for example, to allow for palm or inadvertent contact rejection, and detection of holding pose (one-handed vs. two-handed, left vs. right handed, and so on). □ the process shown in FIG. 13 can take readings from the electrodes with a variety of gain settings, usually above the nominal setting used to detect contact. At higher gain, weaker and more distant electric fields are detected. By stacking up these weaker images at different gains, the system can detect what is near the sensing surface.
For example, given a touch gain setting G, a finger hovering above would be detected at setting G+1, some of the knuckles at setting G+2, some of the hand and palm at gain setting G+3, and so on. Of course, further away objects cannot be “seen” by the sensor as well, but we can gather some information that then tells us if a user is hovering, which hand is holding the device, etc. □ TriboNoiseTouch hardware enables the detection of context, hover, contact, and material identification. Context dependent touch applications can then be provided. After context is sensed, specific touch applications and multi-material applications can be triggered, e.g. a remote control application when entering the living room, or a drawing application when entering the office. In addition, context can be used while the device is in standby to detect what applications and controls should be available to the user. Moreover, when TriboTouch is used to detect contact, NoiseTouch can be used as backup or shut down completely to save power. TriboNoiseTouch can also provide high precision input. Using the integration of both TriboTouch and NoiseTouch, contact sensing coordinates can be used for high precision input in, e.g. technical drawing applications, or in interaction on very high definition displays. □ An alternative implementation of the device may produce a certain amount of controlled generalized EMI from the device which is then used to detect position in areas where sufficient environmental EMI may not be available. □ This capability may be automatically switched on by the automated gain control systems once the level of environmental EMI drops below a pre-programmed or dynamically selected threshold. □ This logic may take into account the demands placed on the system, such that when hovering functionality is not necessary, the system can switch to using triboactive mode exclusively, maintaining sensitivity while excluding detection of contact type.
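The multi-gain scan described above (progressively more distant objects become visible at gains G+1, G+2, ...) can be sketched as a proximity classifier; the threshold, the number of extra gain steps, and the read-out callback interface are assumptions:

```python
def classify_proximity(read_at_gain, base_gain, threshold=0.5, max_extra=3):
    """Scan gains G, G+1, ..., G+max_extra and return the smallest gain
    step at which the electrode reading crosses `threshold`:
    0 means a contact-range signal, larger values mean a weaker field
    from a more distant object, and None means nothing was detected.
    `read_at_gain` is a hypothetical callback into the analog front-end."""
    for extra in range(max_extra + 1):
        if read_at_gain(base_gain + extra) >= threshold:
            return extra
    return None
```

Stacking these per-electrode classifications across the panel yields the layered "images" the text describes, from which hover state and holding hand can be inferred.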
□ the noise-sensitive component of the system may be tuned to specifically use the regulatorily-allowed EMI emissions of the device exclusively, thus rejecting other sources of noise. This increases the robustness of the device since the EMI profile need not be dynamically characterized. □ the TriboNoiseTouch system may use one instance of the above-mentioned hardware for each touch position, or it may use a continuous larger electrode and estimate the position based on the distance-dependent change in signal through the electrode. □ the change may be caused by material properties of the covering material, resistance of the electrode body, reactive impedance of the electrode, or any other method. □ TriboNoiseTouch may be able to distinguish position at a higher resolution than the resolution of its electrode structure. □ TriboNoiseTouch may be configured for single or multiple touch points, and additionally may be configured for either continuous position sensing (such as a phone or tablet), or discrete sensing (such as a button or slider). □ TriboNoiseTouch can provide the benefit of a robust input solution without the need for additional precautions necessary for traditional capacitive sensing. Additionally, the system remains sensitive even when the user is wearing a bulky glove or using a non-conductive tool to trigger the control, allowing greater flexibility in terms of method of use and environmental contamination or interference. □ TriboNoiseTouch's features that continuously sense and characterize the environmental EMI can be used to passively sense the environment and context of the user. 
For example, at home the user may be surrounded by EMI from the TV, mobile phone, and refrigerator, while at the office the user may be surrounded by the EMI from the desktop computer, office lighting, and office phone. □ the TriboNoiseTouch system can capture this characteristic data and compare it to an internal database of noise and environments, using relevant similarities to deduce the user's location. This process is illustrated in FIG. 16 . Note that different rooms in a house or office may have very different noise contexts. □ the break room may include EMI from the coffee machine, while the meeting room may include EMI from a large TV or projector. □ the device can then use the context estimates to make certain functionality easily accessible. For example, it may automatically print queued documents from the user when the user approaches the printer, or allow control of the projector when the user is in the same room. □ the user may additionally configure capabilities on a per-area or per-context basis to help streamline tasks. □ the triboactive portion of the system produces high-resolution data based on individual micro-contacts with the surface of the touch sensor, while the noise-based sensing subsystem produces a blob around the area of contact or hover as well as a “shadow” of the hand hovering over the surface (see FIG. 24 ). These three types of data can be combined to create additional capabilities that are not available to either sensing modality in isolation. □ TriboTouch normally will produce a cloud of contacts around a finger contact due to the micro-texture of the finger interacting with the sensing electrodes. □ the noise data can be used at the same time to give an accurate position for the centroid of the contact, thus allowing the tribo data to be cleanly segmented to be inside the noise blob. □ the exact tribo contact positions can then be used to estimate the shape, size, and intended exact contact position. □ FIG.
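Comparing a captured EMI signature against an internal database of environments, as described above, can be sketched as a nearest-neighbor match; the cosine-similarity metric and band-energy feature vectors are illustrative choices, not the method specified in the text:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_context(signature, database):
    """Return the environment label whose stored EMI signature best
    matches the measured one (database: label -> feature vector)."""
    return max(database,
               key=lambda name: cosine_similarity(signature, database[name]))
```

The feature vectors here would be per-band noise energies; a deployed system would also track confidence so that ambiguous signatures do not trigger context-specific functionality.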
25 shows the method for doing this refinement. □ a finger contact can be detected and isolated from a non-conductive pen contact. Since the pen is not conductive, it will not register in the noise-based sensing, while finger contact will produce both types of contact data. This can be used to control different refinement algorithms based on pen or finger contact, and to allow the simultaneous use of fingers and pens. □ the algorithm is shown in FIG. 26 . □ the system provides both enhanced position based on type of contact, as well as notification of the type of contact event. □ the pen or hand pose can be estimated by detecting the hover shadow of the hand making contact or holding the pen. □ the overall shape of the hand, as well as the shape of the hand while holding a pen, can be detected by using a pattern matching algorithm or heuristic, and this can be used to detect whether a contact is made by the left or right hand, as well as an estimate of pen or finger tilt. □ Tilt is calculated by estimating the point where the stylus or pen is held, and the actual point of contact. The same approximate measurement can be made about finger contact and finger tilt. □ the algorithm is shown in FIG. 27 . □ Additional data can be made available to client programs to detect over-screen gestures, as well as disambiguation of left and right-handed contact. This can allow, for example, control of tool type with one hand while the other is used for manipulation, without two contacts accidentally triggering pinching gesture heuristics. □ the TriboTouch system can be used to detect the material making contact by examining the differences in charge displacement caused by various materials. Noise signals are transmitted through conductive and resistive objects. As a result, noise sensing can help the classification of materials done by TriboNoiseTouch hardware by quickly discriminating materials depending on their conductivity.
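The refinement step described above (segmenting the tribo micro-contact cloud so it lies inside the noise blob, then using the exact tribo positions to estimate the intended contact) can be sketched as follows; the circular blob model and the averaging step are simplifying assumptions:

```python
def refine_contact(tribo_points, blob_center, blob_radius):
    """Keep only the tribo micro-contacts that fall inside the noise blob,
    then average them to estimate the intended contact position.  Assumes
    a circular blob; a real segmentation may use the full blob shape."""
    cx, cy = blob_center
    inside = [(x, y) for x, y in tribo_points
              if (x - cx) ** 2 + (y - cy) ** 2 <= blob_radius ** 2]
    if not inside:
        return blob_center  # no tribo data: fall back to the noise centroid
    n = len(inside)
    return (sum(x for x, _ in inside) / n, sum(y for _, y in inside) / n)
```

The same inside/outside test also separates a conductive finger (which produces a blob) from a non-conductive pen (which produces tribo points with no enclosing blob).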
For example, when interacting with the TriboNoiseTouch enabled display, the tip of the pencil could be detected to automatically trigger the drawing tool, while using the eraser of the pencil will trigger the erasing function. In this scenario, the NoiseTouch hardware will be able to detect the use of the tip of the pencil because it is conductive and will trigger both noise and tribo signals. On the other hand, the eraser will only generate tribo-electric signals. □ TriboNoiseTouch can be configured such that NoiseTouch is triggered only after contact has been sensed by the TriboTouch hardware. This system will only focus on contact-based interaction, such as touch and pen interaction, and will not be able to sense interaction above the surface such as hover. However, this will enable power savings and prevent both Tribo and Noise hardware (and their respective signal processing pipelines) from actively waiting for interaction events. While the same front end is used for both, the reduction in calculations reduces the dynamic power usage of the digital logic used to run the triboactive and noise-based position calculations. □ TriboTouch sensing can provide high resolution stylus sensing. □ TriboNoise can be used to detect a specifically designed stylus that features buttons to trigger menus and functions. □ the stylus will use tribo and noise signals together to detect position, where for example triboelectric signals will enable sensing contact, release and dragging states, while sensing noise will help to recover position during dragging states and holds, as well as get information from button presses (see FIG. 28 ). □ the core of the stylus consists of an antenna that transmits noise signal to the panel when the pen is in contact with the surface. □ the button enables adding to the antenna path a filtering circuit that will affect the noise signal in a predictable way, by adding a complex impedance or nonlinear behavior (like a diode) to the signal path.
□ the system can detect whether the button has been pressed. In the case of a change to impedance caused by a button, a change in phase or amplitude at a certain frequency will be the indicator of a button press. In the case of a diode or other non-linear element, harmonics of a certain frequency will be sensed when the button is pressed due to clipping or shaping of the incoming noise. □ Because triboelectric charging occurs when objects make or break contact, it is possible to detect these events more precisely using TriboTouch alone or in combination with NoiseTouch or other sensing methods. □ NoiseTouch alone uses a threshold value (that may be adaptive) to determine when contact occurs. Because the tribocharge distribution and polarity depend on the direction of motion (toward, away from, and along the surface), these events can be distinguished from hovering or near-contact events. This allows a finer control over the range of values considered for hovering, and thus improves the dynamic range for hover sensing (see FIG. 29 ). □ While TriboTouch is good at detecting contact, separation, and motion, it cannot detect static objects. Therefore it is complemented by the use of NoiseTouch to detect the position and shape of conductive objects during long static contacts. □ Another scenario is the simultaneous use of a nonconductive stylus, brush, or other object detected solely by TriboTouch in combination with finger gestures detected by both TriboTouch and NoiseTouch. □ An application can distinguish between the fingers and the stylus because of the differences in their TriboTouch and NoiseTouch characteristics, and therefore process their corresponding events differently. □ stylus input can be used to draw and brush input to paint □ finger input can be used to manipulate the image.
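The diode-based button detection described above (a nonlinear element in the stylus antenna path generates harmonics of the coupled noise) can be sketched by comparing the second-harmonic magnitude to the fundamental; the bin-ratio threshold is an assumed value:

```python
import cmath
import math

def bin_magnitude(x, k):
    """Magnitude of DFT bin k of signal x."""
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

def button_pressed(signal, fundamental_bin, ratio=0.1):
    """A diode in the stylus antenna path rectifies the coupled noise,
    creating even harmonics; report a press when the 2nd harmonic is a
    significant fraction of the fundamental (ratio is an assumed value)."""
    f = bin_magnitude(signal, fundamental_bin)
    h2 = bin_magnitude(signal, 2 * fundamental_bin)
    return f > 0 and h2 / f >= ratio
```

For the complex-impedance variant, the same structure would instead compare the phase or amplitude of `bin_magnitude` at the affected frequency against a pressed/unpressed reference.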
□ this allows the user to zoom using hover and simultaneously use a plastic stylus to draw; to adjust the drawing space as the user is drawing; to scale with fingers while drawing with the stylus; or to control a drawing parameter such as brush color intensity with hover while simultaneously drawing with a stylus. □ By patterning conductive and non-conductive materials onto an object, information may be encoded to allow recognition of the object. For example, the bottom of a game piece may be encoded with a pattern of materials that allow its identity and orientation to be detected. □ FIG. 30 illustrates example single-touch electrode components, which are one type of electrode configuration that can be used with the TriboTouch, NoiseTouch, and TriboNoiseTouch techniques disclosed herein. □ Other electrode configurations can also be used. □ the electrode types disclosed herein include (1) single-touch electrodes, (2) dual-touch electrodes, (3) array multi-touch electrodes, including the multiple-electrode configuration shown in FIG. 34 , (4) continuous passive position sensing, (5) continuous two-dimensional passive position sensing, (6) dielectric-encoded passive position sensing, (7) continuous passive position sensing using an array of non-linear elements, and (8) spatially-distributed coordinate encoding. □ Types (1)-(7) can be used with any of TriboTouch, NoiseTouch, and TriboNoiseTouch. □ Type (8) can be used with TriboTouch or TriboNoiseTouch. Any of these valid electrode-detection combinations (e.g., a combination of one or more of the electrodes (1)-(8) and one of the TriboTouch, TriboNoise, and TriboNoiseTouch detection techniques) can be used with the same analog front-end, such as the analog front-end described above with reference to FIG. 3 . □ a single-touch electrode can be designed to act as a switch, or can be arranged in an array as an element of a larger surface.
□ a single-touch electrode with these components is shown in FIG. 30 . □ the components include an insulator layer and sense electrodes. □ the shield electrode and ground shield electrodes may be omitted at the cost of degraded performance, though performance may remain sufficient for touch detection. □ the shield electrode may be inter-digitated with the sense electrode such that the distance between the lines of the two electrodes is minimized. This may be done with simple inter-digitation, or via the use of a space-filling curve. A specific instantiation is the use of an inter-digitated Hilbert curve. □ inter-digitated electrodes are used to reduce the parasitic capacitance of the electrode relative to the environment by actively driving the electrode using the output of the high-impedance amplifier of the sensing system. □ An additional shield electrode may be used to reject input to the system from the direction opposed to the front of the surface. This prevents spurious detection of contact due to EMI produced by nearby electronics, such as the display in the case of a transparent touch surface application such as a tablet. □ FIG. 31 illustrates two electrodes ( 2602 and 2604 ) in an example interleaved pattern. □ In the interleaved electrode example, only the shield and pickup electrodes are shown. Electrodes may be used interchangeably for pickup or shield. This is a simple example of interleaved patterns, and the conductive portions of the electrodes may be more complexly intertwined. □ FIG. 32 illustrates a row-column electrode grid that can be used to detect the position of two touch points. Note that unlike capacitive touch sensors, row-column configurations do not directly offer the ability to sense multiple touch positions, since the electrodes are used as sense electrodes, and in the triboactive and noise-based sensors, transmit electrodes may not be present. In this configuration, two touch points can be distinguished, though their exact positions can be lost.
However, this is sufficient for common gestures such as two-finger tap or pinch/expansion gestures. Other example gestures can be a wave or sweep motion made over the screen without contact, or a hovering motion over a control (which can elicit a highlighting feedback). □ FIGS. 33 and 34 illustrate array multitouch configurations using single-touch electrodes in a grid. □ Each electrode individually picks up contact near it. □ the field can be detected by nearby electrodes as well, as shown in FIG. 34 . □ the position of the contact can be interpolated between the electrodes that receive the signal. □ the noise-based sensor can detect the presence of a hovering conductive body such as the user's finger, allowing for hover sensing. □ FIG. 35 illustrates an example of continuous passive position sensing using a resistive sheet electrode. □ a sheet electrode with some known uniform resistance per unit of area can be used alongside pickup electrodes that are placed on this resistive sheet 3002 . □ the configuration shown in FIG. 35 involves a linear sensor with two pickup electrodes. □ Continuous passive position sensing is performed by detecting the apportionment of charge displacement from a contact. When the impedance of the sheet matches (approximately) the impedance of the system, the value sensed at each pickup is some function of distance to the contact charge cloud. □ By characterizing and linearizing the readings from the pickups it is possible to detect the position of the contact continuously at any position up to the accuracy and precision of the digitization electronics and the noise characteristics of the system itself. □ the position of contact can be calculated based on the proportion of output at each pickup relative to the total signal captured. □ a global noise pickup layer may be laid under the resistive layer to sense the total amount of charge injected into the surface, thus allowing a direct comparison. □ FIGS.
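For the linear two-pickup sensor of FIG. 35, the apportionment calculation reduces to a simple ratio. This sketch assumes an ideally linear sheet; a real sheet would need the characterization and linearization step mentioned above:

```python
def linear_position(s_left, s_right):
    """Position of a contact along a resistive strip with a pickup at
    each end, from the apportionment of the injected charge: the nearer
    pickup receives the larger share of the signal.  Returns a coordinate
    in [0, 1] measured from the left pickup, assuming ideal linear
    apportionment (a simplification of the real transfer function)."""
    total = s_left + s_right
    if total == 0:
        raise ValueError("no signal captured")
    return s_right / total
```

The global noise pickup layer mentioned in the text would supply `total` directly, so the per-pickup readings can be compared against the full injected charge rather than against each other.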
36 and 37 illustrate an example of continuous two-dimensional passive position sensing. □ the passive position sensing technique shown in FIG. 35 can be extended to two dimensions, as shown in FIG. 36 . □ the two-dimensional technique can sense n points of touch 3104 from the signals induced in a resistive sheet 3102 with a known distribution of m pickup points 3106 . □ the inputs to the touch surface at time t are n independent voltages Vi(t) at coordinates (xi, yi) 3212 for each point of touch, as shown in FIG. 37 . Voltages are measured at m known pickup points 3204 , 3206 , 3208 , 3210 , 3212 on the edges of the resistive sheet 3102 . □ the resistance between a pickup point and a touch point may be found. □ the resistance between a given pickup point and a touch point is used to determine the voltage at that pickup point. □ the resulting equation represents the dependence of the voltage level at a pickup location on the coordinates and input voltages at the touch points. From this system of equations for voltage levels at pickup points, the touch point coordinates (xi, yi) and input voltages Vi(t) are found. □ the number of required pickup point locations m is at least 3n; a larger number of pickups may be used to reduce errors due to numerical approximations and measurement error. □ the known distribution of pickup points and the non-linearity of the resistive sheet allow separation of the touch points and their distribution. □ This method can be further generalized from points of contact (x i , y i ) to points of hover (x i , y i , z i ) by solving for a third unknown coordinate. □ This generalization to points of hover increases the minimum number of pickups m from 3n to 4n. □ FIGS. 38-40 illustrate example electrode-sheet configurations. □ the electrodes can be designed with pickups and resistive sheet on different layers, or on the same layer, as shown in FIG. 38 and FIG. 39 respectively. □ FIG.
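Solving the system of pickup-voltage equations can be sketched for a single touch (n = 1, with m = 4 ≥ 3n pickups) using a brute-force grid search. The attenuation model v_j = V / (1 + k·d_j) is an illustrative assumption standing in for the patent's actual resistance equations:

```python
import math

def solve_touch(pickups, readings, k=1.0, grid=100):
    """Recover one touch (x, y, V) on the unit square from m >= 3 pickup
    voltages by brute-force grid search, assuming the illustrative
    attenuation model v_j = V / (1 + k * d_j), where d_j is the distance
    from the touch to pickup j."""
    best = None
    for ix in range(grid + 1):
        for iy in range(grid + 1):
            x, y = ix / grid, iy / grid
            g = [1.0 / (1.0 + k * math.hypot(x - px, y - py))
                 for px, py in pickups]
            denom = sum(gi * gi for gi in g)
            # Least-squares estimate of V for this candidate position.
            v = sum(m * gi for m, gi in zip(readings, g)) / denom
            err = sum((m - v * gi) ** 2 for m, gi in zip(readings, g))
            if best is None or err < best[0]:
                best = (err, x, y, v)
    return best[1], best[2], best[3]
```

For multiple touches the unknowns grow to 3n (or 4n with hover), which is why the text requires at least that many pickups; a practical solver would use nonlinear least squares rather than exhaustive search.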
38 shows the pickups 3306 and resistive sheet 3302 as different layers, separated by pickup contacts 3304 . Additionally, to increase the resolution of contact readouts, several of these patches may be arrayed next to each other with minimal gaps between them for pickup electrodes to create a single layer high-resolution touch surface. □ FIG. 39 shows the pickup contacts 3402 on the same layer as the resistive sheet 3404 . □ the contacts 3502 can be placed in the interior rather than the edge of the resistive sheet 2504 using a two-layer approach, effectively allowing some electrodes such as contact 3502 to be used for multiple patches 3506 , 3508 . □ FIG. 41 illustrates an example of dielectric-encoded passive position sensing. □ a position of contact 3602 , 3612 can be encoded to a single pickup electrode by a dielectric code printed on the touch surface. Since the signal from the contact is capacitively transferred to the electrode, it is possible to encode a dielectric pattern onto the surface that modifies the signal as it is transferred to the pickup electrodes. This dielectric pattern may be produced by etching, screen printing, subtractive lithography, mechanical, or other means. By knowing the dielectric pattern, it is possible to recover the position from a single electrode by the results of de-convolution or other inverse transforms 3610 , 3614 . Depending on the necessary contact area and resolution, multiple such patches 3606 , 3608 can be placed next to each other to produce a complete touch surface, simplifying the code used in each patch and increasing the size of the code relative to the size of the patch. □ FIGS. 42 and 43 illustrate an example of continuous passive position sensing using an array 3702 of non-linear elements 3704 . □ the continuous passive position sensing approach can be combined with row-column grid-based position sensing to calculate the position of fingers.
Due to the non-linear response of the system to touch position, multiple touches on the same row or column can be distinguished. Therefore, it becomes possible to use a row-column grid to calculate high-resolution multi-touch position. □ Instead of a continuous resistive sheet, it is possible to use a lattice of nonlinear reactive elements or a sheet material that has a nonlinear reactance. □ FIG. 42 shows a one-dimensional lattice for simplicity; similar principles apply to two-dimensional lattices. □ a signal injected into this medium decomposes into a group of solitons (solitary excitations) that exhibit a distance- and frequency-dependent relative phase shift as they travel through the medium. □ each line pattern shows increasing distance from pickup. □ the soliton phase shifts can then be used to calculate the distance from each pickup point to the event, allowing determination of the event location. □ a nonlinear transmission line (lattice of nonlinear reactive elements) can be used with a multitude of pickup points. □ the surface can be broken into zones or strips, with one array covering each strip. □ the arrays may also be joined linearly, or in a matrix configuration with more than two connections to nearby elements. □ FIG. 44 illustrates an example of spatially-distributed coordinate encoding. □ the position of a contact or motion event at the sensing surface can be determined by encoding coordinates in physical variations of the surface which are then decoded from the signal generated by the event. □ An example of this is shown in cross-section in FIG. 44 : as a finger 3902 moves across a surface 3904 with a varying height profile 3906 , the detected signal 3908 reflects the profile variations along the direction of motion.
□ Position information can be encoded in these variations using a two-dimensional self-clocking code, and subsequent signal processing by a coordinate decoder 3910 can reconstruct the position and velocity of points along the trajectory 3912 . □ This technique advantageously replaces an array of electrodes and associated electronics with a single electrode and amplifier, plus a textured surface to capture input motion, resulting in low-cost gestural input surfaces. □ FIG. 45 illustrates an example combination of TriboTouch with resistive touch sensors. □ TriboTouch can be combined with additional sensing approaches in order to use the existing physical designs, while upgrading the capabilities of the system with the benefits that TriboTouch technology offers, or to use the benefits of both approaches. □ Resistive sensors ordinarily use two layers 4002 , 4004 coated with a resistive material, and separated by a small distance. □ the electrodes can be used alternatively as receiver and as a voltage source to determine the vertical and horizontal position of the touch. □ TriboTouch can be combined with resistive sensors by placing pickups 4010 on the top resistive sheet 4002 used in a resistive sensor. □ the pickups 4010 can be used to derive the position of contacts on the top surface 4002 . Note that since resistive sensors often use a full edge as a connector, additional or separable contacts may be needed. □ the resistive sensing capability can be maintained by interleaving the processing of the signals. □ the bottom layer 4004 can be connected to a voltage source, while the top layer 4002 is used for TriboTouch. □ the TriboTouch system can detect the sudden large offset caused by contact with the bottom layer, hand off to the resistive system for resistive position detection, and begin interleaving at that time to use both systems. Such an approach allows for reduced switching and reduced power expenditure. 
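Decoding position from the signal trace of FIG. 44 can be sketched as normalized correlation of the detected trace against the known surface profile. This is a toy one-dimensional stand-in: a real decoder would use the two-dimensional self-clocking code described above to recover both position and velocity:

```python
import math

def locate_trace(detected, profile):
    """Find where along the known height profile a detected signal trace
    was produced, by sliding normalized correlation.  The pickup signal
    tracks the surface texture along the motion path, so the
    best-matching offset recovers the position of the stroke."""
    n = len(detected)
    dn = math.sqrt(sum(v * v for v in detected)) or 1.0
    best_pos, best_score = 0, float("-inf")
    for pos in range(len(profile) - n + 1):
        seg = profile[pos:pos + n]
        sn = math.sqrt(sum(v * v for v in seg)) or 1.0
        score = sum(a * b for a, b in zip(detected, seg)) / (dn * sn)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

Normalizing by both vector magnitudes makes the match independent of signal amplitude, which varies with contact pressure and speed.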
□ TriboTouch can also be combined with capacitive touch sensors. As shown in FIGS. 7 and 8 , capacitive sensors operate by detecting the change in a transmitted electric field. In order to allow cooperation between the two systems, it is possible to connect a capacitive sensor ASIC directly to the same pads as a TriboTouch system and achieve coexistence by interleaved sensing. Since TriboTouch is capable of high-speed operation, it is possible to use existing capacitive technology without significant change. Note that capacitive signals are of a known form and frequency. Therefore, it is possible to operate non-transmitting electrodes in TriboTouch mode while they concurrently receive the signal being transmitted by other electrodes. In such a case, filters may be used to reject the capacitive signals from the TriboTouch processing system, either using traditional frequency-domain filtering, or by using synchronous filtering in cooperation with the excitation signal produced by the capacitive sensor. □ FIGS. 46 and 47 illustrate example combinations of TriboTouch with inductive touch sensors. □ Inductive sensors operate by exciting an active stylus with a pulse of current using a matrix of wires. □ When a line is not being used to provide excitation, it is possible to use these lines as TriboTouch receiver lines. Since TriboTouch does not transmit any signals, the lines can be directly connected to the TriboTouch system. Note that if one end of the line is permanently attached to a fixed potential rail 3902 , the rail should be disconnected so that the TriboTouch signal can be read. This disconnection can be achieved through an electronic switch 3904 . □ Alternatively, the inductive system can be coupled capacitively, e.g., via capacitors 4202 , 4204 , to the touch surface such that a continuous connection to a power rail does not exist.
□ An additional benefit of incorporating TriboTouch technology is the reduction in power use. Since inductive sensing uses current flow to form a magnetic field, it is power-hungry. By detecting initial contact with the low-power TriboTouch technology, the inductive sensor can be disabled when there is no contact, leading to significant energy savings when the system is idle. □ TriboTouch, TriboNoise, TriboNoiseTouch, or combinations of those can be combined with other touch sensor types, such as surface acoustic wave (SAW), infrared, or acoustic touch sensors, as well as with any of the resistive, capacitive, and inductive sensors described above. □ TriboTouch, TriboNoise, and TriboNoiseTouch can also use the electrode types described herein, except for spatially-distributed coordinate encoding electrodes, which can be used with TriboTouch and TriboNoiseTouch, as discussed above with reference to FIG. 30. □ SAW touch sensors use transducers to produce an ultrasonic wave that is absorbed when a finger makes contact. □ The surface is ordinarily glass or a similar hard material. This surface can be patterned with a transparent conductive material to provide pickups for the TriboTouch system. No interleaving is necessary, since SAW systems do not use electrical signals transiting the surface itself to detect position. □ Infrared touch sensors produce infrared light that is absorbed when a finger makes contact. □ This surface can be patterned with a transparent conductive material to provide pickups for the TriboTouch system. No interleaving is necessary, since infrared systems do not use electrical signals transiting the surface itself to detect position. □ Acoustic touch sensors detect position from the specific sounds produced when an object touches the sensed surface. □ This surface can be patterned with a transparent conductive material to provide pickups for the TriboTouch system.
No interleaving is necessary, since acoustic systems do not use electrical signals transiting the surface itself to detect position. □ FIG. 48 illustrates an example computer system 4300. □ One or more computer systems 4300 perform one or more steps of one or more methods described or illustrated herein. □ The processes and systems described herein, such as the processing system 312 of FIG. 3, the noise processing system 1216 of FIG. 12, or the TriboNoiseTouch processing system 1916 of FIG. 19, can be implemented using one or more computer systems 4300. □ One or more computer systems 4300 provide functionality described or illustrated herein. □ Software running on one or more computer systems 4300 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. □ The TriboNoiseTouch processing system 1916 of FIG. 19 can be implemented as one or more methods performed by software running on the one or more computer systems 4300. □ Particular embodiments include one or more portions of one or more computer systems 4300. □ Reference to a computer system may encompass a computing device, and vice versa, where appropriate. □ Reference to a computer system may encompass one or more computer systems, where appropriate. □ Computer system 4300 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these.
□ Computer system 4300 may include one or more computer systems 4300; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. □ One or more computer systems 4300 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. □ One or more computer systems 4300 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. □ One or more computer systems 4300 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. □ Computer system 4300 includes a processor 4302, memory 4304, storage 4306, an input/output (I/O) interface 4308, a communication interface 4310, and a bus 4312. □ Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. □ Processor 4302 includes hardware for executing instructions, such as those making up a computer program. □ Processor 4302 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 4304, or storage 4306; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 4304, or storage 4306. □ Processor 4302 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 4302 including any suitable number of any suitable internal caches, where appropriate.
□ Processor 4302 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 4304 or storage 4306, and the instruction caches may speed up retrieval of those instructions by processor 4302. Data in the data caches may be copies of data in memory 4304 or storage 4306 for instructions executing at processor 4302 to operate on; the results of previous instructions executed at processor 4302 for access by subsequent instructions executing at processor 4302 or for writing to memory 4304 or storage 4306; or other suitable data. The data caches may speed up read or write operations by processor 4302. The TLBs may speed up virtual-address translation for processor 4302. □ Processor 4302 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 4302 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 4302 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 4302. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. □ Memory 4304 includes main memory for storing instructions for processor 4302 to execute or data for processor 4302 to operate on. □ Computer system 4300 may load instructions from storage 4306 or another source (such as, for example, another computer system 4300) to memory 4304. □ Processor 4302 may then load the instructions from memory 4304 to an internal register or internal cache. □ Processor 4302 may retrieve the instructions from the internal register or internal cache and decode them.
□ Processor 4302 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. □ Processor 4302 may then write one or more of those results to memory 4304. □ Processor 4302 executes only instructions in one or more internal registers or internal caches or in memory 4304 (as opposed to storage 4306 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 4304 (as opposed to storage 4306 or elsewhere). □ One or more memory buses (which may each include an address bus and a data bus) may couple processor 4302 to memory 4304. □ Bus 4312 may include one or more memory buses, as described below. □ One or more memory management units reside between processor 4302 and memory 4304 and facilitate accesses to memory 4304 requested by processor 4302. □ Memory 4304 includes random access memory (RAM). □ This RAM may be volatile memory, where appropriate, and this RAM may be dynamic RAM (DRAM) or static RAM (SRAM), where appropriate. Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. □ Memory 4304 may include one or more memories 4304, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. □ Storage 4306 includes mass storage for data or instructions. □ Storage 4306 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. □ Storage 4306 may include removable or non-removable (or fixed) media, where appropriate. □ Storage 4306 may be internal or external to computer system 4300, where appropriate. □ In particular embodiments, storage 4306 is non-volatile, solid-state memory. □ In particular embodiments, storage 4306 includes read-only memory (ROM).
□ This ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these. □ This disclosure contemplates mass storage 4306 taking any suitable physical form. □ Storage 4306 may include one or more storage control units facilitating communication between processor 4302 and storage 4306, where appropriate. □ Storage 4306 may include one or more storages 4306. □ Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. □ I/O interface 4308 includes hardware, software, or both, providing one or more interfaces for communication between computer system 4300 and one or more I/O devices. □ Computer system 4300 may include one or more of these I/O devices, where appropriate. □ One or more of these I/O devices may enable communication between a person and computer system 4300. □ An I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device, or a combination of two or more of these. □ An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 4308 for them. □ I/O interface 4308 may include one or more device or software drivers enabling processor 4302 to drive one or more of these I/O devices. □ I/O interface 4308 may include one or more I/O interfaces 4308, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
□ Communication interface 4310 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 4300 and one or more other computer systems 4300 or one or more networks. □ Communication interface 4310 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network, or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. □ Computer system 4300 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a body area network (BAN), or one or more portions of the Internet, or a combination of two or more of these. □ One or more portions of one or more of these networks may be wired or wireless. □ Computer system 4300 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network, or a combination of two or more of these. □ Computer system 4300 may include any suitable communication interface 4310 for any of these networks, where appropriate. □ Communication interface 4310 may include one or more communication interfaces 4310, where appropriate.
□ Bus 4312 includes hardware, software, or both coupling components of computer system 4300 to each other. □ Bus 4312 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus, or a combination of two or more of these. □ Bus 4312 may include one or more buses 4312, where appropriate. □ A computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate.
□ An apparatus or system, or a component of an apparatus or system, being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. □ In particular embodiments, a method includes receiving a signal from an electrode of a touch sensor that senses a charge displacement, and detecting, based on the signal, a contact or separation input to the touch sensor. □ This application claims the benefit, under 35 U.S.C. §119(e), of the following, which are all incorporated herein by reference: U.S. Provisional Patent Application No. 61/865,448, filed 13 Aug. 2013; U.S. Provisional Patent Application No. 61/924,558, filed 7 Jan. 2014; U.S. Provisional Patent Application No. 61/924,604, filed 7 Jan. 2014; U.S. Provisional Patent Application No. 61/924,625, filed 7 Jan. 2014; U.S. Provisional Patent Application No. 61/924,637, filed 7 Jan. 2014; U.S.
Provisional Patent Application No. 61/969,544, filed 24 Mar. 2014; U.S. Provisional Patent Application No. 61/969,558, filed 24 Mar. 2014; U.S. Provisional Patent Application No. 61/969,590, filed 24 Mar. 2014; U.S. Provisional Patent Application No. 61/969,612, filed 24 Mar. 2014; and U.S. Provisional Patent Application No. 62/000,429, filed 19 May 2014. TECHNICAL FIELD □ This disclosure generally relates to electronic devices that detect interactions with objects, and more particularly to devices that use surface contact sensors or proximity sensors to detect such interactions. □ A touch sensor can detect the presence and location of a touch or the proximity of an object (such as a user's finger or a stylus) within a touch-sensitive area of the touch sensor overlaid on a display screen, for example. In a touch-sensitive display application, the touch sensor may enable a user to interact directly with what is displayed on the screen, rather than indirectly with a mouse or touch pad. A touch sensor may be attached to or provided as part of a desktop computer, laptop computer, tablet computer, personal digital assistant (PDA), smartphone, satellite navigation device, portable media player, portable game console, kiosk computer, point-of-sale device, or other suitable device. A control panel on a household or other appliance may include a touch sensor. □ There are a number of different types of touch sensors, such as, for example, resistive touch screens, surface acoustic wave touch screens, and capacitive touch screens. Herein, reference to a touch sensor may encompass a touch screen, and vice versa, where appropriate. When an object touches or comes within proximity of the surface of a capacitive touch screen, a change in capacitance may occur within the touch screen at the location of the touch or proximity.
A touch-sensor controller may process the change in capacitance to determine its position on the touch screen. □ Capacitive touch operates by sending a signal from an electrode, and then measuring the variation caused by the presence of intervening materials. Actively emitting an electric field adds to the energy usage of the device and slows down responsiveness. Additionally, scaling the capacitive touch sensor to very large areas can be cost-prohibitive. BRIEF DESCRIPTION OF THE DRAWINGS □ FIGS. 1A and 1B illustrate an example TriboTouch system, which can determine positions of an object based on triboactivity. □ FIGS. 2A-2E illustrate an example interaction between a finger and a TriboTouch sensor. □ FIG. 3 illustrates an example architecture of a TriboTouch system. □ FIG. 4 illustrates an example alternative analog front-end. □ FIG. 5 illustrates principles of TriboTouch operation. □ FIG. 6 illustrates an example process for determining types of contacts based on signal profiles. □ FIG. 7 illustrates an example of combining the capabilities of capacitive sensing and TriboTouch. □ FIG. 8 illustrates an example of capacitively coupling a transmitter to an electrode while using the same receiver system for both capacitive and TriboTouch sensing. □ FIG. 9 illustrates a triboactive surface covered with an array of different materials. □ FIGS. 10A-10C illustrate different positive and negative charge patterns generated when different objects make contact with the same patterned array of sensors. □ FIG. 11 illustrates an example configuration of a NoiseTouch system relative to a user and the environment. □ FIG. 12 illustrates an example NoiseTouch system architecture. □ FIG. 13 illustrates an example process that determines hand poses or positions. □ FIG. 14 illustrates an example method of separating touch and stylus data. □ FIG. 15 illustrates detection of the signal modification characterizing the modification of ambient noise by the contact by a stylus or pen. □ FIG.
16 illustrates an example process of passively sensing the environment and context of a user. □ FIG. 17 illustrates examples of noise contexts that can be passively sensed. □ FIG. 18 illustrates an example process of using the context sensing system to communicate with a device having a NoiseTouch sensor. □ FIG. 19 illustrates an example architecture of a TriboNoiseTouch system. □ FIG. 20 illustrates an example method of separating triboactive data from noise data. □ FIGS. 21-23 illustrate example TriboNoiseTouch processes for identifying triboelectricity-related events and noise-related events. □ FIG. 24 illustrates a triboactive subsystem producing example high-resolution data based on individual micro-contacts with a surface of a touch sensor, while a noise-based sensing subsystem produces an example blob around the area of contact or hover as well as a “shadow” of a hand hovering over the surface. □ FIG. 25 illustrates an example method enhancing the accuracy of finger contact. □ FIG. 26 illustrates an example method for detecting a finger contact and isolating it from a non-conductive pen contact. □ FIG. 27 illustrates example estimation of a pen or hand pose by detecting a hover shadow of the hand making contact or holding the pen. □ FIG. 28 illustrates example TriboTouch sensing for providing high resolution stylus sensing and example TriboNoise sensing for detecting a specifically designed stylus that features buttons to trigger menus and functions. □ FIG. 29 illustrates an example method for improving a dynamic range for hover sensing. □ FIG. 30 illustrates example single-touch electrode components. □ FIG. 31 illustrates two electrodes in an example interleaved pattern. □ FIG. 32 illustrates a row-column electrode grid that can be used to detect position of two touch points. □ FIGS. 33 and 34 illustrate array multitouch configurations using single-touch electrodes in a grid. □ FIG. 
35 illustrates an example of continuous passive position sensing using a resistive sheet electrode. □ FIGS. 36 and 37 illustrate an example of continuous two-dimensional passive position sensing. □ FIGS. 38-40 illustrate example electrode-sheet configurations. □ FIG. 41 illustrates an example of dielectric-encoded passive position sensing. □ FIGS. 42 and 43 illustrate an example of continuous passive position sensing using an array of non-linear elements. □ FIG. 44 illustrates an example of spatially-distributed coordinated encoding. □ FIG. 45 illustrates an example combination of TriboTouch with resistive touch sensors. □ FIGS. 46 and 47 illustrate example combination of TriboTouch with inductive touch sensors. □ FIG. 48 illustrates an example computer system 4300. DESCRIPTION OF EXAMPLE EMBODIMENTS □ FIGS. 1A and 1B illustrate an example TriboTouch system, which can determine positions of an object based on triboactivity. FIG. 1A shows an insulator surface adjacent to an electrode. The electrode is connected to TriboTouch hardware, which determines the positions of an object 130 such as a finger when contact of the object with the insulator produces a local charge displacement, as shown in FIG. 1B. The charge displacement is not a net current flow, but rather a charge displacement that is reversed when contact with the object is removed. This distortion of the internal electric field of the insulator can be picked up by the TriboTouch hardware, and interpreted as contact and separation events. Additionally, the distortion spreads out over a region from the point of contact, allowing for a continuous estimate of position. □ FIGS. 2A-2E illustrate an example interaction between a finger and a TriboTouch sensor, which can be used to determine the finger's position based on triboactivity. When two objects come into contact, charge can be transferred between them due to the interaction of electron clouds around the surface atoms. 
This effect is known by various names, including triboelectricity, contact potential difference, and work function. In the semiconductor industry these phenomena lead to electrostatic discharge (ESD) events which can damage sensitive electronic devices. Rather than attempting to mitigate these effects, techniques disclosed herein referred to by the name “TriboTouch” use this charging mechanism to detect surface contact and motion effects. TriboTouch can directly sense insulators (e.g., gloves, brushes, etc.) as well as conductors or strong dielectrics (e.g., fingers, conductive rubber, etc.) to enable modes of interaction with sensing surfaces such as those described herein. In one aspect, TriboTouch uses local charge transfer caused by contact and need not emit an electric field to be measured. □ Aspects of a TriboTouch system are illustrated in FIGS. 2A-2E using a finger, but any conductive or non-conductive object can have the same effect. In one aspect, TriboTouch works by measuring the charge displaced when two objects come into contact or separate. No secondary mechanism is needed to induce charge in the object being sensed. There is no need to transmit a signal to be measured. Instead, charge is generated and received when an object contacts the sensing surface. FIG. 2A shows a finger above an insulating surface. In FIG. 2B, as the finger contacts the surface, charge flows and current is sensed. In FIG. 2C, current ceases at equilibrium. In FIG. 2D, finger separation causes charge redistribution and an opposite current. In FIG. 2E, equilibrium has been restored. □ Charge transfer can occur between combinations of insulators, semi-conductors, and conductors with dissimilar surface properties (e.g., composition, surface microstructure, etc.). The polarity, surface charge density, and rate of charge transfer (“contact current”) depend on the particular materials involved.
The amount of charge transferred between two materials can be estimated from their relative positions in an empirically-determined “triboelectric series”. A commonly-accepted series, ordered from most positive to most negative, is: air, human skin or leather, glass, human hair, nylon, wool, cat fur, silk, aluminum, paper, cotton, steel, wood, acrylic, polystyrene, rubber, nickel or copper, silver, acetate or rayon, Styrofoam, polyurethane, polyethylene, polypropylene, vinyl (PVC), silicon, and Teflon (PTFE). TriboTouch allows detection of contact by essentially any solid material. □ FIG. 3 illustrates an example architecture of a TriboTouch system. A high impedance amplifier 306 amplifies incoming signals 305 received from an input electrode 304 in response to a surface contact 302, and a subsequent analog to digital converter (ADC) converts this signal 305 to digital form. The input electrode 304, high impedance amplifier 306, and ADC 308 convert the signal 305 as seen at the electrode 304 accurately to digital form. Other embodiments can use sigma-delta approaches, charge counting, charge balancing, or other means of measuring small charge displacements, as shown in FIG. 4. A gain control system 310 can optionally be used to maintain the values within the prescribed range of the system. In one or more embodiments, the components that receive and convert the input signal to digital form are referred to herein as an analog front-end. The analog front-end can include the input electrode 304, amplifier 306, ADC 308, and gain control 310, or a subset of those components. A processing system 312 receives the digital signal and generates position data 332. The processing system 312 can be implemented using hardware, software, or a combination of hardware and software. The processing system 312 starts at block 314 and performs initial calibration at block 316. Then the baseline can be determined by an adaptive method 318. 
The adaptive method can be, for example, a running average, a differential measurement with respect to a shield electrode, a composite measure computed from an aggregate of measurement sites, or another method. This may be triggered as the system is first initialized, or when the system detects that there is a drift in the signal, as indicated by a constant offset of values over a long period. Once this baseline is subtracted at block 320, the noise in the signal (for example, common 50/60 Hz mains noise, and frequencies above and below the expected range of the system) is modeled and rejected at block 322, leaving a signal due to contact charging effects. Contact charging events are then detected and classified, at block 326, as contact, separation, or motion by their time-domain profile, using methods such as matched filters, wavelet transforms, or time-domain classifiers (e.g., support vector machines). These events are then integrated by a state machine at block 328 to create a map of contact states on the sensing surface, which allows the system to track when and where contact and release events take place. Finally, this map is used to estimate event types and coordinates at block 330. Note that TriboTouch does not ordinarily produce a continuous signal when a contact is stationary. However, it does produce opposite-polarity signals on contact and removal. These opposite-polarity signals can be used to keep track of how additional contacts are formed and removed in the vicinity of an existing contact point. The pattern of contacts can be understood by an analogy to the effect of dragging a finger through sand, in which a wake is formed before and after the finger. Similarly, a “charge wake” is seen by the system, and the charge wake is used to determine motion. The final output is a high-level event stream 333 describing the user's actions. The output can include position data 332.
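The core of the pipeline above, adaptive baseline subtraction followed by classification of opposite-polarity pulses as contact or separation events, can be sketched in a few lines. This is a deliberately minimal illustration: the threshold, filter constant, and polarity convention are assumptions, and a real system would use the matched filters or trained classifiers the text describes rather than a simple threshold.

```python
def classify_events(samples, threshold=1.0, alpha=0.05):
    """Toy event detector: subtract a running-average baseline, then call
    a positive excursion "contact" and a negative excursion "separation"
    (illustrative polarity convention). Returns (index, event) pairs."""
    baseline, events = 0.0, []
    for k, s in enumerate(samples):
        x = s - baseline
        baseline += alpha * x          # adaptive running-average baseline
        if x > threshold:
            events.append((k, "contact"))
        elif x < -threshold:
            events.append((k, "separation"))
    return events
```

A downstream state machine would then integrate these events into a map of contact states, pairing each contact with its later opposite-polarity separation.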
In one or more embodiments, large objects, e.g., a finger, can be distinguished from a collection of multiple touch sites because large objects tend to produce a larger “imprint” of contact that is received at substantially the same time. By correlating the contacts in time, the TriboTouch system can keep track of which contacts belong together. Even when two objects are in close proximity, as in a pinch gesture, for example, the sensor actually detects two contact “peaks” very close together. Therefore the contact relationships can be maintained. □ FIG. 4 illustrates an example alternative analog front-end. While the description of FIG. 3 has been related to the use of a high-impedance amplifier 306 followed by an analog-to-digital converter 308, TriboTouch can also employ a charge-balancing sigma-delta converter, or it can combine both approaches. In the configuration shown in FIG. 4, a capacitor 406 is switched by a switch 404 between a reference voltage source (Vref) 408 and the input electrode 402 to transfer packets of charge, thereby keeping the input electrode potential within the range of the input amplifier 410 (or comparator in the case of a 1-bit sigma-delta ADC). The subsequent signal processing chain combines the output 315 of the ADC 412 and the output of the automatic gain control (AGC) 414 to reconstruct the input current with a higher dynamic range than would be possible with the input amplifier and ADC alone. The reconstructed input current is provided to TriboTouch signal processing 416, which can be the processing system 312 or another signal processing system. □ As described above, TriboTouch can sense signals directly generated by physical contact and need not transmit signals to be sensed. Therefore, the system does not emit spurious signals as a result of its activities beyond what may be normally expected from any electronic circuit, simplifying compliance with EMI regulations and the design of noise-sensitive electronics positioned nearby.
An additional benefit is the power savings from this design. There is direct savings from not having to transmit a field. Additionally, the system benefits from a simplified architecture, which means there are fewer electronic devices to power. Further, since there is no need to perform extensive noise rejection in hardware, there can be additional savings from reduction of complexity. □ FIG. 5 illustrates principles of TriboTouch operation. The tribocharging caused by contact with the insulating surface is coupled capacitively to the electrode via dielectric polarization. TriboTouch is thus capable of detecting contact, motion, and separation of objects at the surface of the insulator. As such, it is possible to use any object (finger, glove, plastic stylus, paint brush, paper, etc.) to interact with the sensing surface. A data processing system can determine the type of object that interacts with the surface using an event detection and classification component 506. The event detection and classification component 506 uses classification characteristics 504 to determine contact type data 508, which identifies the type of the object. The classification characteristics 504 can include one or more signal patterns 502 that correspond to different types of objects. For example, a first signal pattern 512 can correspond to a finger, a second signal pattern 514 to a glove, a third signal pattern 516 to a plastic stylus, a fourth signal pattern 518 to a paint brush, and so on. The event detection and classification component 506 can, for example, compare the detected tribocharging signal to the signal patterns 502 and select one of the signal patterns 502 that best matches the detected signal. The event detection and classification component 506 can also estimate the position 510 of the detected signal, as described above with reference to FIG. 3. □ FIG. 6 illustrates an example process for determining types of contacts based on signal profiles. 
Because TriboTouch can sense contact with, motion across, and separation of an object from the sensing surface, it is not necessary to algorithmically derive these events from capacitance measurements. TriboTouch can therefore produce more accurate identification of these events than capacitive sensing can ordinarily provide. Additionally, because of the localized nature of tribocharging, the position estimation algorithms can yield higher spatial and temporal resolution than capacitive sensing methods. This higher resolution can be used, for example, to perform palm rejection or other inadvertent contact rejection using the process shown in FIG. 6. The process of FIG. 6 detects and classifies events at block 602. Block 604 integrates events, e.g., by using a state machine to create a map of contact states on the sensing surface, as described above with reference to FIG. 3. The map can be used to track when and where contact and release events take place, and to estimate event type and coordinates. Block 608 estimates event positions to generate position data 612. Block 610 detects poses to generate hand and stylus pose data 614. As shown in FIG. 5, different types of contacts can have different characteristic signal profiles (examples shown are not indicative of actual material data) and the received signal's characteristics can be used to detect, for example, inadvertent palm contact while sketching with a stylus. In one or more embodiments, different object types can be detected based on the contact profiles of the objects without a special pickup design. These profiles can be expressed either in terms of example waveforms, or in terms of algorithmic approaches that capture distinctive features of the waveform. □ The TriboTouch system can use one instance of the above-mentioned hardware for each touch position, or it can use a continuous larger electrode, and estimate the position based on the distance-dependent change in signal through the electrode. 
The change can be caused by material properties of the covering material, resistance of the electrode body, reactive impedance of the electrode, or any other method. TriboTouch can therefore distinguish position at a resolution higher than that of its electrode structure. In one or more embodiments, when an instance of the hardware is used for each touch position, the hardware instances operate in parallel, so that each electrode is handled individually. The parallel arrangement allows faster read speeds, but increases hardware complexity. Alternatively, scanning through each electrode in sequence offers different tradeoffs, because the digitization system should be faster (and thus consume more power), but the overall system is more compact (which can reduce the power consumption). □ TriboTouch can be configured for single or multiple touch points, and additionally can be configured for either continuous position sensing (such as a phone or tablet), or discrete position sensing (such as a button). That is, position and motion can be sensed, as in a touchscreen, or discrete switches can be used. In one example, a 4-contact resistive-pickup system can be used. Alternatively, a row-column system that detects 2 simultaneous contacts can be used. As another alternative, pickups can be added to a resistive system. In another example, an array of pickups can be used to detect 5 contacts. The specific pickup configuration is a design option for the pickups and electronics. In the discrete position sensing applications, the strengths of the system remain in force, and make the system practical to use in many scenarios where environmental noise or contamination may be an issue, such as in automotive or marine uses, in factory floors, etc. In such cases, TriboTouch can provide the benefit of robust input without the need for additional precautions necessary for traditional capacitive sensing. □ FIG. 
7 illustrates an example of combining the capabilities of capacitive sensing and TriboTouch (e.g. direct sensing of contact with conductive and non-conductive objects). Because both strategies use sensitive measurements of charge displacement, it is possible to combine them using essentially the same analog front end hardware. FIG. 7 illustrates the basic principle whereby an electrode 702 can be shared between these two sensing methods. Capacitive sensing works by transmitting a balanced AC signal at high frequencies (typically >125 kHz) using a transmitter 706 into an electrode, and measuring either the transmit loading or the signal received at other electrodes. The capacitive measurement can be performed by a capacitive receiver 708. TriboTouch, however, works by measuring the local charge displacement using a receiver 712 at low frequencies (ordinarily <1 kHz). By capacitively decoupling the electrode 702 from the capacitive sensing circuit 708 using a capacitor 704, the triboelectric charge displacement can be maintained and measured separately, either by time-multiplexing the two sensing modes or by filtering out the transmit signal at the TriboTouch analog front-end or in subsequent signal processing. When time-multiplexing, the capacitive system 710 suspends access to the electrode 702 while the TriboTouch system 714 measures, and vice versa. When filtering, the TriboTouch system 714 uses a filter and knowledge of the signal being sent by the capacitive system 710 to remove the effects of capacitive measurements during the noise rejection phase of processing. Further examples of combining other types of touch sensors, such as resistive, capacitive, and inductive sensors, are described below with reference to FIGS. 45-47. □ FIG. 8 illustrates an example of capacitively coupling a transmitter 804 to an electrode 802 while using the same receiver system 806 for both capacitive and TriboTouch sensing. 
The capacitive software and the TriboTouch software can be combined into a single system 808. In this case, the capacitive software uses the same hardware as the TriboTouch software, taking turns to use the shared resources. □ FIG. 9 illustrates a triboactive surface covered with an array 900 of different materials. The embodiment shown in FIG. 9 makes it possible to distinguish between different contacting materials (e.g. skin, graphite, rubber, nylon, etc.) by patterning materials with different tribonegativity over sensing sites 902, 904, 906 on the TriboTouch surface. The principle is similar to that of a color CMOS image sensor, in which a color filter mask is superimposed over pixel sensors. In FIG. 9, the triboactive surface is covered with an array 900 of four different materials, ranging from strongly tribopositive (++) materials 902 to strongly tribonegative (−−) materials 906. When an object interacts with a cluster of these sensors, a characteristic charge pattern is generated that allows determination of the object's material properties (i.e. tribospectroscopy). In one or more embodiments, the array can be laid over different electrodes. These electrodes can be clustered closely together, such that a small motion is sufficient to cross multiple electrodes. Differentiation between different material types can be performed with fewer material types to speed up type detection. □ FIGS. 10A-10C illustrate different positive and negative charge patterns generated when different objects make contact with the same patterned array 1008 of sensors. In FIG. 10A, contact with the finger 1002 generates negative charge patterns on the −−, +, and − sensors, and a neutral charge pattern on the ++ sensor. Therefore, the finger 1002 is characterized by an overall strongly positive charge pattern. In FIG. 10B, contact with the pencil 1004 generates positive charge patterns on the + and ++ sensors, and negative charge patterns on the − and −− sensors. 
Therefore, the pencil 1004 is characterized by an overall neutral charge pattern. In FIG. 10C, contact with the eraser 1006 generates positive charge patterns on the +, −, and ++ sensors, and a neutral charge pattern on the −− sensor. The eraser 1006 is therefore characterized by a strongly negative charge pattern. These characteristic charge patterns can be used to identify an unknown object that makes contact with the sensor array 1008. □ TriboTouch allows for detecting a single contact, dual touch (e.g., detect two fingers simultaneously making contact), multi-touch (e.g., detect three or more fingers simultaneously making contact), the order of touch (e.g., detect the order where index finger makes contact first and then middle finger), the state of the object/finger where the first object/finger is in a first state and the second object/finger is in a second state (for example, when rotating, the first finger can be stationary while the second finger rotates about the first finger), detect adjacent fingers versus non-adjacent fingers, detect thumb versus fingers, and detect input from prosthetic devices. TriboTouch also allows for detecting motion, and also detecting the position of the touch/motion. □ When contact is detected, TriboTouch allows for determining the shape of the object making the contact, the type of materials of the object making the contact, activating controls based on the type of materials that are detected, activating modalities based on the shape and type of materials detected (e.g., brush vs. eraser), using contact shape to depict contact realistically, using contact shape to detect object to change modalities of application, and using contact shape to improve position accuracy. □ The dual touch detection allows for detecting zoom gesture, panning gesture, and rhythmic gesture to create shortcuts or codes. In addition, multi-touch detection allows panning gestures to control application switching or multi-finger controls for games. 
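The signature matching implied by FIGS. 10A-10C can be sketched as a lookup from the signs of the charges induced on the (++, +, −, −−) sensor cluster to an object name. The sign tuples below follow the figure descriptions; the 0.1 dead-band used to call a reading "neutral" is an assumed value, not one from the specification.

```python
# Hypothetical tribospectroscopy lookup in the spirit of FIGS. 10A-10C.
# Tuple order is (++, +, -, --); entries follow the figure text, and the
# dead-band threshold is an illustrative assumption.

SIGNATURES = {
    (0, -1, -1, -1): "finger",  # FIG. 10A
    (1, 1, -1, -1): "pencil",   # FIG. 10B
    (1, 1, 1, 0): "eraser",     # FIG. 10C
}

def identify(charges, dead_band=0.1):
    """Quantize measured charges to signs, then look up the object."""
    signs = tuple(
        0 if abs(c) < dead_band else (1 if c > 0 else -1) for c in charges
    )
    return SIGNATURES.get(signs, "unknown")
```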
□ TriboTouch also allows the order of the touch to be detected so that, for example, rhythmic input can be used to create shortcuts or codes. Detecting adjacent fingers versus non-adjacent fingers can be used to detect input from chorded keyboard where multiple keys together form a letter. Detecting thumb versus fingers can be used to provide modified keyboard input mode, allow for chorded input, and allow imprint of fingers to be used as code. In addition, motion can be detected so that, for example, the following gestures can be detected: zoom in, zoom out, panning, dragging, scrolling, swipe, flick, slide, rotate clockwise, or rotate counterclockwise. The different types of contact, motion/gestures, and position described above can also be detected using NoiseTouch and TriboNoiseTouch. □ In industrial settings, the noise-resistance and distinctive signal characteristics of TriboTouch (and NoiseTouch) allow operation in noisy, humid, or dirty environments. These conditions generally prohibit the use of capacitive sensors, and as a result the systems currently used are relatively primitive (though robust), such as physical buttons, membrane switches, IR touchscreens, etc. TriboTouch techniques enable the same type of interfaces available to consumer users to be used in industrial settings, such as easy-to-clean hard glass touch controls, and so on. □ TriboTouch can be used to provide self-powered buttons, e.g., for transitioning from sleep mode to wakeup mode without capacitive sensing. When a contact takes place on a triboelectric control pickup, a small charge redistribution is triggered. As long as the electronics connected to the device are sufficiently low-power, this displacement current may be used to directly send a short message regarding the event. Alternatively, the device may collect power from static electricity produced during bulk motion, and later use that power to operate during relevant contact events. 
This may be coupled with a radio transmitter or similar device to allow for completely wireless and battery-less remote control of devices. □ TriboTouch can provide indirect touch features, which can, for example, enable placing a paper on top of a touch screen and writing on the paper with a finger, stylus, brush, or the like. TriboTouch (and NoiseTouch) surfaces operate with an insulator between the electrode and the contacting object. However, the charge displacement effect can occur on any material. Therefore, the touch surface can be covered by an additional material such as a sheet of paper or cloth, and operation will not necessarily be impeded as a result. Since the contact of any two materials can produce a triboelectric effect, the makeup of the two materials making contact (while in contact with the touch surface), whether paper and pencil or brush and canvas, is not an issue in contact detection. □ Triboactive contact detection can be used to detect erasure, for example, by detecting the motion of an eraser on top of paper, thus mirroring the digital content to what is drawn on the paper itself. Attachments can also be made to the screen to speed up particular input. For example, for gaming applications, a soft passive joystick that makes contact with the screen when pressed in different directions can be used to provide the user with better haptic feedback. Similarly, a keyboard template may be used to provide physical passive keys that can be used to quickly trigger actions in applications such as drawings or 3D graphics, where switching between different modes and tools in rapid succession is common. Because triboactive contact sensing can sense contact from non-conductive materials, the choice of materials for the attachments is greatly increased, and conductive or electrically active components are not required. 
This allows a much broader class of input experiences at much lower cost, and use of a greater set of materials such as plastic, paper or wood for the attachments. □ Gesture input can be provided using TriboTouch techniques. As discussed elsewhere herein, triboelectric charge displacement is analogous to the displacement of sand as a finger is run through it. A change in the angle of the finger (whether leaning left or right, angle of lean, and so on) can affect the way the sand is disturbed. Likewise, a change in the angle of the finger can affect the charge displacement. This change in displacement can be measured to estimate the pose of the hand, including angle, handedness, and the like. □ FIG. 11 illustrates an example configuration of a NoiseTouch system relative to a user and the environment. A person can be surrounded by an electric field emitted by devices in the environment. This field is normally considered part of the electromagnetic interference (EMI) in the environment. This field is carried throughout the body, and can be coupled capacitively to electrodes in the device. Rather than attempting to reject this noise, a technique referred to herein as “NoiseTouch” uses the noise that is conducted by the body and picked up by the electrodes of the touch sensor to detect the position of the user's touch. Feature parity with capacitive sensors is maintained (hovering support, multitouch, etc.). NoiseTouch uses environmental noise and is thus immune to EMI, and does not need to emit an electric field to sense user interactions. NoiseTouch is scalable, i.e., can be applied to surfaces of any shape and size, responsive, and has reduced complexity. □ Referring to FIG. 11, environmental EMI sources 1106 can be coupled to the ground 1102 via impedance Z_in 1104, and to the human body 1110 via impedance Z_air 1108. The body 1110 is also connected to ground 1102 via an impedance Z_b 1112. 
The EMI 1106 is coupled to the electrode 1118 through an optional insulator 1116, and is then received by the NoiseTouch hardware 1120, which itself is coupled to ground 1102 via impedance Z_h 1114. The differences between impedance values to ground of the different components in the system, and their exposure to EMI-induced electric field changes, result in a small potential difference sensed by the hardware 1120 from any source in the vicinity. In other words, when a large antenna, such as the human body 1110, is in proximity to the electrode 1118, the characteristic of the noise is different compared to when the human body 1110 is not in proximity. The NoiseTouch system 1120 can detect touch by sensing this change in the characteristics of the noise received by the electrode 1118. □ FIG. 12 illustrates an example NoiseTouch system architecture. Environmental noise sources (power lines, appliances, mobile and computing devices, etc.) continuously emit electric fields that contribute to the environmental electromagnetic interference (EMI, i.e., electronic noise). The human body is a slight conductor, and thus acts as an antenna for these signals. When the body closely approaches the electrode, e.g., when the body is hovering over or touching a touch panel, this signal is capacitively coupled to an input electrode 1206. A high impedance amplifier 1208 amplifies the incoming signals, and a subsequent analog to digital converter (ADC) 1214 converts this signal to digital form. □ In one or more embodiments, the processing system 1216 (e.g., processing software running on a computer system) has two functions. Initially, the processing system 1216 characterizes the noise at block 1220 and adapts the gain at block 1218 so that the signal does not overwhelm the amplifier 1208. The data processing system 1224 then continues gain adaptation at block 1226, while rejecting unwanted signals at block 1228 and estimating positions at block 1230. 
The gain adaptation information is fed back to a gain controller 1210, which can be a portion of the front-end hardware, to control the high-impedance amplifier 1208. The gain adaptation maintains the signal from the amplifier 1208 within the range of the ADC 1214. □ The noise characterization system 1220 can be used to break the noise signal into bands and characterize the reliability of those bands based on how constantly they are available, and what variability they exhibit. Via this analysis, a profile of each band is created, which can then be used by the noise source selection system 1222 to select an appropriate band (or set of bands) for position estimation. The selection process can also decide to change the selection on a time-varying basis as the user's location and the noise environment around the user change. For example, when the user sits down in front of a TV, a particular band may be particularly fruitful. When leaving the home, this band may no longer be as useful as the band (or set of bands) produced by the car. □ During operation, gain adaptation as described previously continues to occur as necessary to keep the signal within range of the hardware. Using the characterization data, block 1228 removes the unwanted bands of noise, and feeds the data to block 1230, which uses the signals to estimate where and how the user is approaching the surface. Block 1230 also carries out linearization, such that the position of the user is expressed as uniform values with relation to the edges of the surface. When used with an array of pickups, linearization in TriboTouch is essentially de-noising of the position data generated by the array. Because the positions are detected at each sensor, the position data is cleaned up and fit to a smoother motion. When used with the electrode pickup systems described herein (see, e.g., FIGS. 
32-47), the linearization system mathematically maps the positions from the continuous range of values produced by the system to the Cartesian coordinates of the touch surface. In one or more embodiments, the process can be based on an individual mapping of touch positions to Cartesian coordinates. □ In FIG. 12, noise from the environment can be picked up by the user's body. This noise is capacitively coupled by the electric field of the body to the input electrode 1206. The signal from the electrode is then digitized. The digitization can be done by a number of methods, including the high-gain amplifier 1208 followed by the ADC 1214, as shown in FIG. 12. The signal can also be converted by other techniques, such as instrumentation amplifiers, sigma-delta converters, charge counters, current metering approaches, and so on. The gain of the high impedance amplifier 1208 can be optionally controlled by the gain adaptation component 1218 of the processing system 1216, though the high impedance amplifier 1208 and ADC 1214 can alternatively have sufficient resolution such that gain control is not necessary. Following digitization, the data is fed to the processing system 1216, which can be implemented in hardware or software. An initial calibration is performed to set the gain if needed. Block 1220 characterizes the noise into frequency bands. The system can also determine the periodicity of various noise bands. This determination allows for the selection of a reliable band (or bands) at block 1222, based on continuous availability and signal strength. That information can be used to reject unwanted signals at block 1228. Block 1230 performs position estimation. During the course of processing, the signal characteristics may change. In case of such a change, the system can trigger additional gain adaptation 1226 or noise characterization 1220 to provide uninterrupted operations. 
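One way to realize the band characterization and selection of blocks 1220 and 1222 is to score each band by its mean strength penalized by its frame-to-frame variability, then pick the best-scoring band. The mean-minus-deviation score is an assumed heuristic for this sketch; the per-band magnitudes are presumed to come from an upstream frequency-domain analysis such as an FFT.

```python
# Hedged sketch of noise-band selection (cf. blocks 1220/1222). Each
# frame is a list of per-band magnitudes (e.g. from an FFT computed
# upstream); the scoring heuristic is an illustrative assumption.

def select_band(band_frames):
    """Return the index of the band with the steadiest strong signal."""
    per_band = list(zip(*band_frames))  # transpose: one tuple per band

    def score(mags):
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        return mean - var ** 0.5  # reward strength, penalize variability

    return max(range(len(per_band)), key=lambda b: score(per_band[b]))
```

A band that is merely strong in some frames loses to a slightly weaker band that is available constantly, matching the "how constantly they are available" criterion above.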
Depending on the electrode structure, linearization can be performed at block 1230 to compensate for the nonlinear nature of a continuous sheet electrode, or a position can be estimated directly from the centroid of the activation seen in a row-column or matrix electrode array. Block 1230 then produces the resulting position data 1232. □ In one or more embodiments, the NoiseTouch system does not include facilities for transmission of signals. Signal transmission facilities can be omitted because NoiseTouch senses environmental signals, and does not need to transmit signals in order to sense environmental signals. Since the receiving hardware is designed to accept EMI, it is resistant to interference from EMI sources. In addition, the system does not emit spurious signals as a result of its activities outside of what may be normally expected from any electronic circuit, simplifying compliance with EMI regulations and design of noise-sensitive electronics positioned nearby. An additional benefit is the power savings from this design. On one hand, there is direct savings from not having to transmit a field. Additionally, the system benefits from a simplified architecture, which means there is simply less electronics to power to begin with. Additionally, since there is no need to carry out extensive noise rejection in hardware, there is additional savings from the reduction of complexity on that front as well. □ FIG. 13 illustrates an example process that determines hand poses or positions. The EMI conducted by the body is coupled capacitively to the electrode via the electric field surrounding the body. As an example, the process can determine when the user is holding or touching the screen from the left or right side, which is pose information. An ADC 1306 converts the analog input signal from the amplifier 1302 to a digital signal. By appropriately adjusting the gain of the system at blocks 1308 and 1304, NoiseTouch can detect approach of a body part at a distance. 
As such, it is possible to distinguish when the user is hovering over the touch surface without making physical contact. Additionally, because of the speed afforded by the NoiseTouch system, the electrodes can be continuously scanned at multiple gain settings at block 1310, allowing for simultaneous detection of hover and touch. Multiple gain setting scanning may be used, for example, to allow for palm or inadvertent contact rejection, detection of holding pose (one-handed vs. two-handed, left vs. right handed, etc.), and the like. Block 1312 compares the signals read at different gains. Block 1314 uses pose heuristics to determine pose data, such as the pose of a hand. Block 1318 uses the results of the signal comparison 1312 to determine hover. □ The multi-gain surface scanning can detect the pose of the hand as the user holds a device that contains a NoiseTouch sensor. Multi-gain scanning provides different sensing depths, with resolution decreasing with the increase of the gain. At high gain, it can sense more distant objects, but does not determine position as exactly as when low gain is used. For example, multi-gain scanning can enable the system to distinguish right-handed pen input from left-handed pen input by locating the position of the hovering hand relative to the contact position. The location can be determined using a higher gain surface scanning setting to sense the approximate position of the hand that is making contact. Multi-gain scanning can also help to sense whether one or two hands are hovering, which from the sensing perspective will produce, respectively, one or two sensed “blobs” at medium gain, or a small or large “blob” at high gain. 
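The touch-versus-hover distinction described for multi-gain scanning can be reduced to a simple decision rule: a contact registers even at low gain (shallow sensing depth), while a hovering hand registers only at high gain. The two-level gain set and the threshold below are assumptions for illustration, not values from the specification.

```python
# Illustrative decision rule for multi-gain scanning (cf. block 1310).
# Gain levels and threshold are assumed, not specified by the text.

def classify_proximity(readings_by_gain, threshold=1.0):
    """readings_by_gain: {"low": mag, "high": mag} from two gain scans."""
    low = readings_by_gain["low"] > threshold
    high = readings_by_gain["high"] > threshold
    if low and high:
        return "touch"   # seen even at shallow sensing depth
    if high:
        return "hover"   # seen only at extended sensing depth
    return "none"
```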
Since the sensing field at high gains extends out from the device to some distance, it is also possible to detect how a device with a NoiseTouch screen is being held relative to the location of the user's hand. □ In one or more embodiments, gestures that are part of “touch” (e.g., multi-gain hover and the like) can be separated from how the machine can react to the presence of hover. For example, if a user is holding the phone in their right hand, the keyboard can automatically shift its touch points to the left so that the user can type more easily. Also, controls can appear on a tablet closer to the hand holding the tablet (or, alternatively, on the other side of the tablet, so that touching the tablet with the free hand is easier). In one aspect, hover can be a contextual cue for the software. □ FIG. 14 illustrates an example method of separating touch and stylus data. Similar to the NoiseTouch techniques described above in FIG. 12, an input signal can be received by an ADC 1420 and characterized by a noise characterization block 1422. Noise separation is performed by a modified noise separation block 1424, and position data 1428 is determined by a position estimation and linearization block 1426. While the description above focuses on the appendages of a user, it should be noted that NoiseTouch can function equally well with conductive or partially-conductive objects. As such, it is possible to fashion styli, pens, or other devices 1410 that can be detected by NoiseTouch. In such cases, the design of the stylus 1410 can include passive reactive elements, such as inductors or capacitors (or a combination of passive elements), which imprint the noise signal with a specific signature that is detectable by the position estimation 1426 and noise characterization 1422 blocks. Detecting the specific signature enables NoiseTouch to discern between the presence of a stylus 1410 and a finger. 
Therefore, separate finger position data 1428 and stylus position data 1430 can be generated by the position estimation and linearization block 1426. □ FIG. 15 illustrates detection of the modification of ambient noise caused by contact of a stylus or pen 1510. FIG. 15 illustrates example signals at different points of the analog front-end portion of the system shown in FIG. 14. The EMI sources emit an EMI signal. The stylus 1510 emits a signal, and another signal is received by the electrode 1418. The signal received by the electrode 1418 is different from the EMI received by the electrode at Z_air when the stylus is not in contact with the insulator 1416. □ An alternative implementation of the device may produce a certain amount of controlled generalized EMI from the device which is then used to detect position in areas where sufficient environmental EMI may not be available. This capability may be automatically switched on by the automated gain control systems once the level of environmental EMI drops below a pre-programmed or dynamically selected threshold. The NoiseTouch system may be tuned to specifically use the regulatorily-allowed EMI emissions of the device exclusively, thus rejecting other sources of noise. This increases the robustness of the device since the EMI profile need not be dynamically characterized. □ The NoiseTouch system may use one instance of the above-mentioned hardware for each touch position, or it may use a continuous larger electrode and estimate the position based on the distance-dependent change in signal through the electrode. The change may be caused by material properties of the covering material, resistance of the electrode body, reactive impedance of the electrode, or any other method. In this way, NoiseTouch can distinguish position at a higher resolution than the resolution of its electrode structure. 
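The stylus-signature detection of FIGS. 14-15 can be approximated by checking whether the band imprinted by the stylus's passive reactive elements stands out against the remaining noise bands. The signature band index and the dominance ratio below are hypothetical parameters introduced only for this sketch.

```python
# Hypothetical stylus-signature check in the spirit of FIGS. 14-15: a
# passive reactive network in the stylus imprints a characteristic band
# on the coupled noise. Band index and ratio are assumed parameters.

def detect_stylus(band_mags, signature_band, ratio=3.0):
    """True when the signature band dominates the remaining bands."""
    others = [m for i, m in enumerate(band_mags) if i != signature_band]
    mean = sum(others) / len(others)
    return band_mags[signature_band] > ratio * mean
```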
□ NoiseTouch can be configured for single or multiple touch points, and additionally can be configured for either continuous position sensing (such as a phone or tablet), or discrete position sensing (such as a button). In the latter application, the strengths of the system remain in force, and make the system practical to use in many scenarios where environmental noise or contamination may be an issue, such as in automotive or marine uses, in factory floors, etc. In such cases, NoiseTouch can provide the benefit of a robust input solution without the need for additional precautions necessary for traditional capacitive sensing. □ FIG. 16 illustrates an example process of passively sensing the environment and context of a user. NoiseTouch can continuously sense and characterize the environmental EMI, and this capability can be used to passively sense the environment and context of the user. For example, at home the user may be surrounded by EMI from the TV, mobile phone, and refrigerator, while at the office the user may be surrounded by the EMI from the desktop computer, office lighting, and office phone system. When the user makes contact with the NoiseTouch system, e.g., to awaken or unlock their device, the NoiseTouch system can capture this characteristic data and compare it to an internal database of noise and environments, using relevant similarities to deduce the user's location. □ In the process of FIG. 16, an input signal is fed from a signal acquisition system 1602 to a Noise Characterization module 1604. Block 1604 performs noise characterization to determine a current noise profile 1610. Block 1604 breaks the signal down into bands (e.g., using FFT or the like), and analyzes both the magnitudes of signals in different signal bands, as well as the time-domain change in those magnitudes. The signals to be used for positioning are fed to block 1616. Block 1616 performs estimation and linearization to generate position data 1608, as described elsewhere herein. 
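The comparison of a captured noise profile against the internal database of noise and environments can be sketched as a nearest-neighbor lookup with a distance cutoff for unrecognized environments. The profile representation (a per-band magnitude vector), the Euclidean metric, and the cutoff are illustrative assumptions.

```python
# Hedged sketch of environment/context matching for FIG. 16: the
# current per-band noise profile is matched to stored profiles by
# nearest Euclidean distance; metric and cutoff are assumptions.

def recognize_context(profile, database, max_dist=1.0):
    """database: {name: stored_profile}. Return best match or None."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best = min(database, key=lambda name: dist(profile, database[name]))
    return best if dist(profile, database[best]) <= max_dist else None
```

A profile far from every stored entry returns None, which is the case where the device would prompt for user input or automatic sensing before storing a new entry.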
From user input 1606, as well as automatic sensing 1618, such as GPS, WiFi positioning, etc., block 1620 determines whether the device is in an environment of interest. If so, the current noise profile is stored in the Environment and Context Database 1622. The current profile and the entries in the Database are used by the Environment and Context recognizer 1612 to detect when the environment or context is encountered again, and when recognized again, events are generated accordingly. □ FIG. 17 illustrates examples of noise contexts that can be passively sensed. Different rooms in a house or office can have different noise contexts. For example, a break room may include EMI from the coffee machine, while a meeting room may include EMI from a large TV or projector, as shown in FIG. 17. The device can then use the context estimates to make certain functionality easily accessible. For example, the device can automatically print queued documents from the user when the user approaches the printer, or allow control of the projector when the user is in the same room. The user may additionally configure capabilities on a per-area or per-context basis to help streamline tasks. The noise characterization can be performed based on external device activity 1702, such as whether a TV or lights are on or off. The context of interest can be based on automated context recognition 1704 or user input 1706. The automatic context recognition 1704 can determine that the context is “leaving the kitchen”, “in the bedroom”, or “driving”, for example. The user input can be “watching TV”, “reading in bed”, or “washing clothes”, for example. Based on these factors, the environment and context data 1708 is generated and used as input for context-relevant or context-dependent automation services 1710. □ FIG. 18 illustrates an example process of using the context sensing system to communicate with a device having a NoiseTouch sensor.
A device wishing to communicate may emit a capacitive signal 1804 by applying a voltage to a conductive surface, including the metal frame or shielding of the device itself. This signal 1804 is combined into the environmental EMI field 1802 and received by NoiseTouch. The signal can be encoded by a user or stylus at block 1808, and received by an electrode and ADC at block 1808. Noise characterization can be performed at block 1810, and position estimation and linearization can be performed at block 1812 to generate position data 1814. A signal detection system 1818 can be used to search for such signals, possibly with additional information 1816 about the context which allows a narrowing of search parameters. Thereafter, the noise signal is filtered at block 1820 to only include the bands where transmission is taking place, and the signal is demodulated at block 1822 to receive data 1824. Such data may be used to uniquely identify a device (for example, to allow immediate and direct control of a nearby appliance), or to send contextual data (such as the time remaining for an oven to heat up, or that a refrigerator door has been left open). Such communication may be made bidirectional, such that a device which does not include the position-sending capability may nonetheless include a NoiseTouch electrode for the purposes of receiving context and incoming data. Therefore a non-touch-enabled device (such as a microwave) may receive NoiseTouch-based communication from a nearby device for the purposes of control or query of functionality. □ Example scenarios in which environmental sensing can be used include changing a phone's home screen depending on sensed context, sending user location to external devices using context sensed by the phone, targeted sensing of activity of external devices, and monitoring energy consumption.
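The filter-and-demodulate path of blocks 1820 and 1822 could be sketched as a simple on/off-keying receiver: isolate the band where a device is transmitting, then recover bits from the envelope. The modulation scheme, carrier frequency, bit rate, and thresholding below are all illustrative assumptions, not specified by the source.

```python
import numpy as np

def demodulate_ook(signal, sample_rate, carrier_hz, bit_rate):
    """Isolate the transmission band (crude frequency-domain band-pass
    around the carrier), then slice the envelope into bit periods and
    threshold each period against the overall envelope mean."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / sample_rate)
    spectrum = np.fft.rfft(signal)
    mask = np.abs(freqs - carrier_hz) < 0.1 * carrier_hz  # keep +/-10% band
    band = np.fft.irfft(spectrum * mask, n)
    envelope = np.abs(band)
    samples_per_bit = int(sample_rate / bit_rate)
    bits = []
    for i in range(0, n - samples_per_bit + 1, samples_per_bit):
        bits.append(1 if envelope[i:i + samples_per_bit].mean() > envelope.mean() else 0)
    return bits

fs, f0 = 48_000, 4_000
t = np.arange(fs // 10) / fs                      # 100 ms of signal
data = [1, 0, 1, 1, 0]
bit_rate = 50                                     # 20 ms per bit
keying = np.repeat(data, fs // bit_rate)[:len(t)]
rx = keying * np.sin(2 * np.pi * f0 * t)          # carrier keyed on and off
decoded = demodulate_ook(rx, fs, f0, bit_rate)
```

In practice the carrier and band would come from the signal detection system 1818 rather than being fixed constants.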
The sensor system may be located on a device such as a watch or Fitbit-type device that is worn by the user. The sensor system can also be on a laptop or a TV. A phone's home screen can be changed depending on sensed context. For example, upon the user entering the house, the phone detects the noise signature of the home and provides a set of applications on the home screen that are dedicated to home control, e.g., alarm control, TV, audio system, etc. As another example, a tablet or smartphone can display a home screen page that contains music applications when a headphone is plugged in. Likewise, when the user is at home, the controls for various appliances, lighting systems, TV and other electronics, home HVAC controls, etc., can be brought up on a special page of the interface that makes access much more convenient. In another example, the home can be enabled to provide applications dedicated to the control of devices in each room, privileging TV controls when in the living room, and timer controls when in the kitchen, for example. When a user moves from room to room in the house, the home screen can be changed depending on the sensed environmental context. This technique can be applied on a per-room basis. For example, the user may customize a page that displays business-related applications such as email and business document management software when the user is in the study, the TV remote and current TV schedule in the living room, and the baby monitor, security system, and AC controls in the bedroom. These associations can be designed to be customized and managed by the user. □ A user's location can be sent to external devices using context sensed by the phone.
For example, the phone detects the current room the user is in, and sends the information to the devices in the current room. Lights can be turned on when the user carrying his phone enters a given room, and turned off when leaving it; a preset profile, e.g., certain music and lighting conditions, can be started automatically when the user enters the living room; the alarm can be deactivated when entering the house; and so on. For example, the system may notify the TV when it detects the user has moved away. At that point, the TV may turn off a power-consuming display panel, but leave the sound on, saving energy. The air conditioning may likewise go into a power-saving mode when the user is away, and quickly cool a room when the user enters. The user may configure the devices to act in a particular way based on his or her presence or absence from the vicinity. In one or more embodiments, if the TV is on, the phone may look up favorite programs the user selected previously, and tell the user that a particular channel is showing his favorite show. □ Noise detection can also be used to target activity sensing of specific external devices, such as TV, lights, audio system, etc. For example, a phone can detect that lights are left on when in the hallway before leaving a place, and notify the user. As another example, a phone can detect that a television is switched on and can provide recommendations, and the like. To perform energy consumption monitoring, noise detection can sense the overall noise level of a home to monitor the activity of electronic devices and give a sense of global energy consumption. Using signal processing on the global noise level, energy monitoring can also be targeted and device-specific. All electronics, when active, can produce more EMI than when off.
By sensing the overall changes in bulk EMI, the system may determine when the user is generally using more or less energy, and provide overall feedback without necessarily detecting particular devices or knowing anything particular about those devices. Therefore, when the user is in a room, the sensing system can detect whether the lights are on or not. When the user moves to a different area, as noted by the system based on a change in the EMI environment, the system can notify the user that they left the lights on. This may be additionally gated by particular locations, such that it only applies to home, office, or otherwise. Note that in one or more embodiments this technique requires no special instrumentation of the lights or other infrastructure, and thus can be easily used with legacy unaugmented locations. □ In addition, NoiseTouch and hover can be used to detect a single air touch/tap, dual air touch/tap, multi-finger air touch/tap, adjacent fingers hovering, or hovering thumb versus fingers. Furthermore, motion using hover can be detected such as, for example, zoom in, zoom out, panning, dragging, scrolling, swipe, flick, slide, rotation clockwise, or rotation counterclockwise. In addition, portions of content under the hovering object can be magnified or previewed. Also, objects can be recognized by detecting the conductive parts of the object. Furthermore, when holding insulating objects, NoiseTouch allows for detecting the tool angle, and the position of the hand relative to the object. □ FIG. 19 illustrates an example architecture of a TriboNoiseTouch system. In one or more embodiments, the TriboNoiseTouch techniques disclosed herein are based on a combination of the TriboTouch and NoiseTouch techniques. In one or more embodiments, NoiseTouch uses the noise that is conducted by the human body and picked up by the electrodes of the touch sensor to detect the position of the user's touch.
In one or more embodiments, TriboTouch uses the charge displacement that occurs when two objects come in contact with each other. By measuring this displacement, TriboTouch can detect contact of the sensitive surface with any material. This is done using a sensing surface similar to capacitive sensors in use today and requires no physical displacement (as resistive screens do). □ In one or more embodiments, TriboNoiseTouch combines the capabilities of TriboTouch and NoiseTouch using the same hardware, electrode geometry, and processing architecture. Therefore, the TriboNoiseTouch system has the capacitive touch features of NoiseTouch, and is also capable of sensing contact with a wide variety of materials using TriboTouch. TriboNoiseTouch opportunistically uses each methodology to offer improved capabilities, further improving the speed of contact detection over NoiseTouch, while providing non-contact and bulk contact (e.g., palm contact) sensing. TriboNoiseTouch uses environmental noise and surface interaction. TriboNoiseTouch can thus be immune to EMI, and need not emit an electric field. TriboNoiseTouch can sense the contact of non-conductive materials. Additionally, TriboNoiseTouch uses a combination of two physical phenomena to detect touch and provide robustness, speed, and differentiation of contacts by different materials (e.g., finger vs. stylus). The combination of NoiseTouch and TriboTouch technologies into a single panel can reduce complexity and provide savings in energy, and reduce hardware resource usage. □ While the sources of signals for noise and triboactive measurement are different, the characteristics of the signals have similarities. Both signals are ordinarily coupled to the electrode capacitively via an electric field, and are therefore ordinarily amplified by a high-impedance amplifier. This allows the hardware for triboactive and noise-based position sensing to be economically combined into a single TriboNoiseTouch system. 
The TriboTouch and NoiseTouch techniques can be combined using time multiplexing or space multiplexing. For example, a full panel reading can be performed with TriboTouch, and then with NoiseTouch, or some of the electrodes on a panel can be used for TriboTouch, and others for NoiseTouch, with optional switching of electrodes between TriboTouch and NoiseTouch for more continuous coverage. □ Referring to the example TriboNoiseTouch system shown in FIG. 19, environmental noise sources 1902, such as power lines, appliances, mobile and computing devices, and the like, emit electric fields that contribute to the environmental electromagnetic interference (EMI, or colloquially, electronic noise). The human body 1904 is a slight conductor, and thus acts as an antenna for these signals. When the body 1904 closely approaches the electrode 1906, e.g., when the body 1904 is hovering over or touching a touch panel, this signal is capacitively coupled to the input electrode 1906. At the same time, the contact of the body or another object with the touch surface causes a triboelectric signal 1908 to be produced. Both signals are capacitively coupled to the electrode. A high-impedance amplifier or electrometer 1910 detects the incoming signals, and an analog to digital converter (ADC) 1912 subsequently converts this signal to digital form. These components may have additional switchable characteristics that aid in the separation of the two signals. □ The signal is processed by a processing system 1916, which can be implemented as hardware, software, or a combination thereof. The processing system 1916 can perform a calibration, which can be done at startup, and whenever internal heuristics determine that the signal is becoming intermittent or noisy. This is done, for example, by calculating the mean and variance, and ensuring these values remain within a range.
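A minimal sketch of such a mean/variance calibration check follows. The threshold values and action names are illustrative placeholders, not from the source.

```python
def calibration_check(samples, mean_range=(-0.1, 0.1), max_variance=0.5):
    """Compute mean and variance of recent readings and report which
    corrective actions, if any, the calibration heuristic would take."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    actions = []
    if not (mean_range[0] <= mean <= mean_range[1]):
        actions.append("adapt_gain")        # mean drifted out of range
    if var > max_variance:
        actions.append("select_new_band")   # signal too noisy in this band
    return actions

ok = calibration_check([0.01, -0.02, 0.03, 0.0])    # within range: no action
drift = calibration_check([0.9, 1.1, 1.0, 0.95])    # mean drifted: adapt gain
```

In a running system this check would be re-run whenever the signal becomes intermittent or noisy, feeding its actions back to the gain control and band selection stages.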
Deviations of the mean value may lead to gain adaptation, while excessive variance may cause the selection of a different noise band. □ The processing system 1916 has two stages of execution. For the triboactive signal, the processing system 1916 characterizes the noise at block 1920 and adapts the gain at block 1918 so that the signal does not overwhelm the amplifier. This stage can be done separately for triboactive and noise signals, in which case the processing system 1916 characterizes the noise at block 1926 and adapts the gain at block 1924 for the noise signals. Additionally, offsets in the readings caused by charges adhered to the insulators or nearby objects can be offset for triboactive signals at block 1922. The initial conditions are calculated during the initialization phase. Noise source selection is performed at block 1928. □ After initialization is complete, the data processing portion of the system begins at block 1930. Block 1932 selects the measurement to make, and block 1934 separates the signals by applying initial filters specific to the signals required. The characteristics of the filters are suited to the selection of noise signals, as well as the means of interleaving the two types of measurements. For noise signals, the process continues gain adaptation at block 1936 and rejects unwanted signals at block 1938. For triboactive signals, the gain and offset are adapted to compensate for environmental drift at blocks 1940 and 1942, respectively. The gain adaptation information is fed back to gain control block 1914 to control the high-impedance amplifier 1910, so that the signal from the amplifier 1910 remains within the range of the ADC block 1912. The outputs of both signal paths feed into the opportunistic position estimation and linearization block 1944, which uses the most reliable and time-relevant features of both measures to calculate position estimates 1946. □ FIG.
20 illustrates an example method of separating triboactive data from noise data. As shown, during initialization a characteristic profile of noise and triboactive signals is created at blocks 2002 and 2008, respectively. At runtime, signal separation block 2014 characterizes the triboactive signal in time and frequency domains, indicating which signals come from triboactivity. The remaining signal is then analyzed by band and appropriate bands are selected for the noise analysis at block 2016. □ The system starts with an initialization phase in which specific initial signal bands are determined (possibly offline). Signal separation may operate in the time or frequency domain, and may be done by filtering specific frequency bands from the combined signal. At runtime, the signals are separated according to the initialization characteristics determined, and the data is separated into independent streams for processing. The band selection may be dynamically changed based on location, signal strengths, etc. □ In one or more embodiments, the TriboNoiseTouch system does not include facilities for transmission of signals. Signal transmission facilities can be omitted because TriboNoiseTouch senses signals from the environment as well as from the contact itself, and does not need to transmit signals to sense environmental signals. Since the receiving hardware is designed to accept EMI, it is resistant to interference from EMI sources. In addition, the system does not emit spurious signals as a result of its activities outside of what may be normally expected from any electronic circuit, simplifying compliance with EMI regulations and the design of noise-sensitive electronics positioned nearby. An additional benefit is the power savings from this design. For example, there can be direct savings from not having to transmit a field. The system benefits from a simplified architecture, which means there is simply less electronics to power to begin with.
Additionally, since there is no need to carry out extensive noise rejection in hardware, there is additional savings from the reduction of hardware complexity. □ FIGS. 21-23 illustrate example TriboNoiseTouch processes for identifying triboelectricity-related events and noise-related events. Three example processes for sequencing TriboNoise event-sensing are described herein. The process of FIG. 21 identifies triboelectricity-related events, then identifies noise-related events (i.e., TriboTouch first). In one or more embodiments, the system can trigger the NoiseTouch subsystem when the TriboTouch portion of the system has received no signals after a period of time has elapsed. Each TriboTouch event transmits a touch-event or material classification event when detected. □ The process of FIG. 22 identifies noise-related events, then identifies triboelectricity-related events (i.e., NoiseTouch first). In one or more embodiments, in the NoiseTouch-first setup, a timer can be used to reset the noise gain settings after no interruption has been sent by the TriboTouch-recognition pipeline after a given amount of time has passed. □ The process of FIG. 23 is an example sweep process that acquires a wide band signal and parallelizes triboelectricity sensing and noise sensing. The sweep process of FIG. 23 can be used, for example, when prioritization is to be set at a higher level, e.g., at the application level. For example, a painting application may be more closely related to triboelectricity-based sensing, while location/context dependent applications may be more closely related to noise-based sensing. □ The choice regarding the relative prioritizations of TriboTouch and TriboNoise can be device- and application-dependent.
The triboelectricity-first approach is well-suited for applications where touch surfaces are used heavily by the user, while the “noise-first” approach is well-suited for more general application devices, such as mobile devices, where context sensing and on- and above-surface interaction can be used simultaneously. Similarly, context-dependent applications are likely to privilege noise-sensing, while drawing, painting, and other direct manipulation applications are likely to privilege triboelectricity-sensing. □ By combining noise and triboactive measurements, it is possible to detect materials that are not sufficiently conductive to be visible to noise-based or capacitive measurements. In addition, the characteristic contact reading involved in triboactive measurement obviates the need for extensive threshold estimations for detecting touch. This means that the system is able to react to short contact events such as the user using a stylus to dot the lowercase letter “i”. The combination of the systems also allows for the detection of body parts and hand-held instruments such as styli. In such cases, the stylus can simply be made of an insulator that is “invisible” to noise-based measurements, which allows the system to detect whether a contact is made by, for example, resting the wrist on the touch surface, or by the stylus held in the same hand. □ FIG. 13, described in part above, illustrates a process of simultaneously detecting hand pose information and hover position. TriboNoiseTouch systems can determine when true contact has occurred, thus preventing phantom readings from fingers held close to the touch surface from accidentally invoking commands. This is a side effect of the fact that triboactive signals are only generated by direct contact. However, it is also possible to simultaneously detect hovering as well, thus presenting additional means of interaction.
Since the EMI being conducted by the body is coupled capacitively to the electrode via the electric field surrounding the body, by appropriately adjusting the gain of the system, NoiseTouch is capable of detecting the approach of a body part at a distance. Because of the TriboNoiseTouch system's speed, it can continuously scan the electrodes at several gain settings, allowing for simultaneous detection of hover and touch. This may be used, for example, to allow for palm or inadvertent contact rejection, or detection of holding pose (one-handed vs. two-handed, left- vs. right-handed, and so on). □ The process shown in FIG. 13 can take readings from the electrodes with a variety of gain settings, usually above the nominal setting used to detect contact. At higher gain, weaker and more distant electric fields are detected. By stacking up these weaker images at different gains, the system can detect what is near the sensing surface. For example, given a touch gain setting G, a finger hovering above would be detected at setting G+1, some of the knuckles at setting G+2, some of the hand and palm at gain setting G+3, and so on. Of course, farther-away objects cannot be “seen” by the sensor as clearly, but the system can gather some information that indicates whether a user is hovering, which hand is holding the device, etc. □ In one or more embodiments, TriboNoiseTouch hardware enables the detection of context, hover, contact, and material identification. Context-dependent touch applications can then be provided. After context is sensed, specific touch applications and multi-material applications can be triggered, e.g., a remote control application when entering the living room, or a drawing application when entering the office. In addition, context can be used while the device is in standby to detect what applications and controls should be available to the user. Moreover, when TriboTouch is used to detect contact, NoiseTouch can be used as a backup or shut down completely to save power.
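The multi-gain scan could be sketched as follows. The driver hook `read_frame`, the frame shape, and the thresholding are hypothetical; the point is only to show how frames taken at increasing gains can be stacked and interpreted.

```python
import numpy as np

def scan_at_gains(read_frame, gains):
    """Read the electrode array once per gain setting and stack the
    resulting frames. read_frame(gain) is a hypothetical driver hook
    returning a 2-D array of per-electrode magnitudes at that gain."""
    return np.stack([read_frame(g) for g in gains])

def classify(stack, touch_threshold=1.0):
    """The lowest-gain frame indicates direct contact; activity that only
    appears at higher gains indicates hover (weaker, more distant fields)."""
    touch = bool(stack[0].max() > touch_threshold)
    hover = bool((not touch) and stack[1:].max() > touch_threshold)
    return {"touch": touch, "hover": hover}

# A hovering finger: nothing at the nominal gain G, a blob at G+1 and above.
def fake_read(gain):
    frame = np.zeros((4, 4))
    if gain > 3:               # field only becomes visible above nominal gain
        frame[1, 1] = 1.5
    return frame

stack = scan_at_gains(fake_read, gains=[3, 4, 5, 6])
state = classify(stack)
```

The same stack could be pattern-matched against hand templates to estimate holding pose, as described above.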
TriboNoiseTouch can also provide high precision input. Using the integration of both TriboTouch and NoiseTouch, contact sensing coordinates can be used for high precision input in, e.g., technical drawing applications, or in interaction on very high definition displays. □ An alternative implementation of the device may produce a certain amount of controlled generalized EMI from the device which is then used to detect position in areas where sufficient environmental EMI may not be available. This capability may be automatically switched on by the automated gain control systems once the levels of environmental EMI drop below a pre-programmed or dynamically selected threshold. This logic may take into account the demands placed on the system, such that when hovering functionality is not necessary, the system can switch to using triboactive mode exclusively, maintaining sensitivity while excluding detection of contact type. The noise-sensitive component of the system may be tuned to specifically use the regulatorily-allowed EMI emissions of the device exclusively, thus rejecting other sources of noise. This increases the robustness of the device since the EMI profile need not be dynamically characterized. □ The TriboNoiseTouch system may use one instance of the above-mentioned hardware for each touch position, or it may use a continuous larger electrode and estimate the position based on the distance-dependent change in signal through the electrode. The change may be caused by material properties of the covering material, resistance of the electrode body, reactive impedance of the electrode, or any other method. By this means, TriboNoiseTouch may be able to distinguish position at a higher resolution than the resolution of its electrode structure. TriboNoiseTouch may be configured for single or multiple touch points, and additionally may be configured for either continuous position sensing (such as a phone or tablet), or discrete sensing (such as a button or slider).
In the latter application, the strengths of the system remain in force, and make the system practical to use in many scenarios where environmental noise or contamination may be an issue, such as in automotive or marine uses, in factory floors, etc. In such cases, TriboNoiseTouch can provide the benefit of a robust input solution without the need for additional precautions necessary for traditional capacitive sensing. Additionally, the system remains sensitive even when the user is wearing a bulky glove or using a non-conductive tool to trigger the control, allowing greater flexibility in terms of method of use and environmental contamination or interference. □ TriboNoiseTouch's features that continuously sense and characterize the environmental EMI can be used to passively sense the environment and context of the user. For example, at home the user may be surrounded by EMI from the TV, mobile phone, and refrigerator, while at the office the user may be surrounded by the EMI from the desktop computer, office lighting, and office phone system. When the user makes contact with the TriboNoiseTouch system, perhaps to awaken or unlock their device, the TriboNoiseTouch system can capture this characteristic data and compare it to an internal database of noise and environments, using relevant similarities to deduce the user's location. This process is illustrated in FIG. 16. Note that different rooms in a house or office may have very different noise contexts. For example, the break room may include EMI from the coffee machine, while the meeting room may include EMI from a large TV or projector. The device can then use the context estimates to make certain functionality easily accessible. For example, it may automatically print queued documents from the user when the user approaches the printer, or allow control of the projector when the user is in the same room. The user may additionally configure capabilities on a per-area or per-context basis to help streamline tasks.
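The database comparison step could be sketched as a nearest-profile lookup. The similarity measure (cosine similarity), the threshold, and the example profiles below are illustrative choices, not specified by the source.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize_context(current_profile, database, threshold=0.9):
    """Compare the current per-band noise profile against stored labelled
    profiles and return the best-matching label, or None if nothing in
    the database is similar enough."""
    best_label, best_score = None, threshold
    for label, stored in database.items():
        score = cosine_similarity(current_profile, stored)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

database = {
    "break_room":   [5.0, 1.0, 0.2, 0.1],   # e.g. coffee machine hum
    "meeting_room": [0.5, 0.3, 4.0, 2.0],   # e.g. projector switching noise
}
where = recognize_context([4.8, 1.1, 0.3, 0.1], database)
```

A recognized label would then drive the context-dependent events described above, such as surfacing printer or projector controls.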
□ The triboactive portion of the system produces high-resolution data based on individual micro-contacts with the surface of the touch sensor, while the noise-based sensing subsystem produces a blob around the area of contact or hover as well as a “shadow” of the hand hovering over the surface (see FIG. 24). These three types of data can be combined to create additional capabilities that are not available to either sensing modality in isolation. □ The accuracy of finger contact can be enhanced by using a combination of TriboTouch and NoiseTouch type sensing. TriboTouch-type sensing will normally produce a cloud of contacts around a finger contact due to the micro-texture of the finger interacting with the sensing electrodes. The noise data can be used at the same time to give an accurate position for the centroid of the contact, thus allowing the tribo data to be cleanly segmented to be inside the noise blob. The exact tribo contact positions can then be used to estimate the shape, size, and intended exact contact position. FIG. 25 shows the method for doing this refinement. □ Even if the touch sensing surface has not been treated to sense materials, or such algorithms are not active, a finger contact can be detected and isolated from a non-conductive pen contact. Since the pen is not conductive, it will not register in the noise-based sensing, while finger contact will produce both types of contact data. This can be used to control different refinement algorithms based on pen or finger contact, and to allow the simultaneous use of fingers and pens. The algorithm is shown in FIG. 26. The system provides both enhanced position based on type of contact, as well as notification of the type of contact event. □ The pen or hand pose can be estimated by detecting the hover shadow of the hand making contact or holding the pen.
The overall shape of the hand, as well as the shape of the hand while holding a pen, can be detected by using a pattern matching algorithm or heuristic, and this can be used to detect whether a contact is made by the left or right hand, as well as to estimate pen or finger tilt. Tilt is calculated by estimating the point where the stylus or pen is held, and the actual point of contact. The same approximate measurement can be made about finger contact and finger angle. The algorithm is shown in FIG. 27. □ Additional data can be made available to client programs to detect over-screen gestures, as well as disambiguation of left- and right-handed contact. This can allow, for example, control of tool type with one hand while the other is used for manipulation, without two contacts accidentally triggering pinching gesture heuristics. □ As noted previously, the TriboTouch system can be used to detect the material making contact by examining the differences in charge displacement caused by various materials. Noise signals are transmitted through conductive and resistive objects. As a result, noise sensing can help the classification of materials performed by TriboNoiseTouch hardware by quickly discriminating materials depending on their conductivity. For example, when interacting with a TriboNoiseTouch enabled display, the tip of the pencil could be detected to automatically trigger the drawing tool, while using the eraser of the pencil will trigger the erasing function. In this scenario, the NoiseTouch hardware will be able to detect the use of the tip of the pencil because it is conductive and will trigger both noise and tribo signals. On the other hand, the eraser will only generate tribo-electric signals. □ TriboNoiseTouch can be configured such that NoiseTouch is triggered only after contact has been sensed by the TriboTouch hardware.
This system will only focus on contact-based interaction, such as touch and pen interaction, and will not be able to sense interaction above the surface such as hover. However, this will enable power savings and prevent both the Tribo and Noise hardware (and their respective signal processing pipelines) from having to actively wait for interaction events. While the same front end is used for both, the reduction in calculations reduces the dynamic power usage of the digital logic used to run the triboactive and noise-based position calculations. □ While TriboTouch sensing can provide high resolution stylus sensing, TriboNoise can be used to detect a specifically designed stylus that features buttons to trigger menus and functions. The stylus will use tribo and noise signals together to detect position, where, for example, triboelectric signals will enable sensing of contact, release, and dragging states, while noise sensing will help to recover position during dragging and hold states, as well as convey information from button presses (see FIG. 28). The core of the stylus consists of an antenna that transmits the noise signal to the panel when the pen is in contact with the surface. The button adds to the antenna path a filtering circuit that will affect the noise signal in a predictable way, by adding a complex impedance or nonlinear behavior (like a diode) to the signal path. By analyzing the signal injected into the panel by the pen, the system can detect if the button has been pressed or not. In the case of a change to impedance caused by a button, a change in phase or amplitude at a certain frequency will be the indicator of a button press. In the case of a diode or other non-linear element, harmonics of a certain frequency will be sensed when the button is pressed due to clipping or shaping of the incoming noise signal.
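The harmonic-based button detection could be sketched as follows: a diode in the antenna path passes only one half of the waveform, so energy appears at even harmonics of a reference frequency f0 when the button is pressed. The test tone, bandwidth, and power-ratio threshold are illustrative assumptions.

```python
import numpy as np

def button_pressed(signal, sample_rate, f0, ratio_threshold=0.1):
    """Report a button press if the power near the second harmonic 2*f0
    is a significant fraction of the power near the fundamental f0."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / sample_rate)
    power = np.abs(np.fft.rfft(signal)) ** 2
    def band_power(f):
        return power[np.abs(freqs - f) < 5.0].sum()  # +/-5 Hz window
    return bool(band_power(2 * f0) > ratio_threshold * band_power(f0))

fs, f0 = 10_000, 100
t = np.arange(fs) / fs                        # one second of signal
clean = np.sin(2 * np.pi * f0 * t)            # button released: linear path
rectified = np.maximum(clean, 0.0)            # button pressed: diode passes
                                              # only the positive half-wave
released = button_pressed(clean, fs, f0)
pressed = button_pressed(rectified, fs, f0)
```

Real environmental noise is broadband rather than a single tone, so a practical detector would track harmonic relationships across the selected noise bands instead of one fixed f0.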
□ Because triboelectric charging occurs when objects make or break contact, it is possible to detect these events more precisely using TriboTouch alone or in combination with NoiseTouch or other sensing methods. By contrast, NoiseTouch alone uses a threshold value (which may be adaptive) to determine when contact occurs. Because the tribocharge distribution and polarity depend on the direction of motion (toward, away from, and along the surface), these events can be distinguished from hovering or near-contact events. This allows finer control over the range of values considered for hovering, and thus improves the dynamic range for hover sensing (see FIG. 29). □ While TriboTouch is good at detecting contact, separation, and motion, it cannot detect static objects. Therefore, it is complemented by NoiseTouch, which detects the position and shape of conductive objects during long static contacts. □ Another scenario is the simultaneous use of a nonconductive stylus, brush, or other object detected solely by TriboTouch, in combination with finger gestures detected by both TriboTouch and NoiseTouch. An application can distinguish between the fingers and the stylus because of the differences in their TriboTouch and NoiseTouch characteristics, and can therefore process their corresponding events differently. For example, stylus input can be used to draw and brush input to paint, while finger input can be used to manipulate the image. This allows the user, for example, to zoom using hover while simultaneously drawing with a plastic stylus; to adjust the drawing space while drawing; to scale with the fingers while drawing with the stylus; or to control a drawing parameter such as brush color intensity with hover while simultaneously drawing with a stylus. □ By patterning conductive and non-conductive materials onto an object, information may be encoded to allow recognition of the object.
For example, the bottom of a game piece may be encoded with a pattern of materials that allows its identity and orientation to be detected. □ FIG. 30 illustrates example single-touch electrode components, which form one type of electrode configuration that can be used with the TriboTouch, NoiseTouch, and TriboNoiseTouch techniques disclosed herein. Other electrode configurations can also be used. In particular, the electrode types disclosed herein include (1) single-touch electrodes, (2) dual-touch electrodes, (3) array multi-touch electrodes, including the multiple-electrode configuration shown in FIG. 34, (4) continuous passive position sensing, (5) continuous two-dimensional passive position sensing, (6) dielectric-encoded passive position sensing, (7) continuous passive position sensing using an array of non-linear elements, and (8) spatially-distributed coordinate encoding. Types (1)-(7) can be used with any of TriboTouch, NoiseTouch, and TriboNoiseTouch. Type (8) can be used with TriboTouch or TriboNoiseTouch. Any of these valid electrode-detection combinations (e.g., a combination of one or more of the electrode types (1)-(8) and one of the TriboTouch, TriboNoise, and TriboNoiseTouch detection techniques) can be used with the same analog front end, such as the analog front end described above with reference to FIG. 3. □ Referring again to FIG. 30, a single-touch electrode can be designed to act as a switch, or can be arranged in an array as an element of a larger surface. The components shown in FIG. 30 include an insulator layer and sense electrodes. The shield electrode and ground shield electrodes may be omitted at the cost of degraded performance, though performance may remain sufficient for touch detection. The shield electrode may be inter-digitated with the sense electrode such that the distance between the lines of the two electrodes is minimized.
This may be done with simple inter-digitation, or via the use of a space-filling curve; a specific instantiation is an inter-digitated Hilbert curve. Inter-digitation reduces the parasitic capacitance of the electrode relative to the environment, because the shield is actively driven by the output of the high-impedance amplifier of the sensing system. An additional shield electrode may be used to reject input to the system from the direction opposite the front of the surface. This prevents spurious detection of contact due to EMI produced by nearby electronics, such as the display in a transparent touch-surface application like a tablet. □ FIG. 31 illustrates two electrodes (2602 and 2604) in an example interleaved pattern. In the interleaved electrode, only the shield and pickup electrodes are shown. Electrodes may be used interchangeably for pickup or shield. This is a simple example of an interleaved pattern, and the conductive portions of the electrodes may be more intricately intertwined. □ FIG. 32 illustrates a row-column electrode grid that can be used to detect the position of two touch points. Note that, unlike capacitive touch sensors, row-column configurations do not directly offer the ability to sense multiple touch positions, since the electrodes are used only as sense electrodes; in the triboactive and noise-based sensors, transmit electrodes may not be present. In this configuration, two touch points can be distinguished, though their exact positions may be lost. This is nevertheless sufficient for common gestures such as a two-finger tap or pinch/expansion gestures. Other example gestures include a wave or sweep motion made over the screen without contact, or a hovering motion over a control (which can elicit highlighting feedback). □ FIGS. 33 and 34 illustrate array multitouch configurations using single-touch electrodes in a grid. Each electrode individually picks up contact near it.
However, since the electric fields, as well as the charge cloud produced by triboactivity, expand outward from the source charge deflection, the field can be detected by nearby electrodes as well, as shown in FIG. 34. As a result, the position of the contact can be interpolated between the electrodes that receive the signal. Likewise, because capacitive coupling takes place at some distance, the noise-based sensor can detect the presence of a hovering conductive body such as the user's finger, allowing for hover sensing. □ FIG. 35 illustrates an example of continuous passive position sensing using a resistive sheet electrode. For continuous passive position sensing, a sheet electrode with a known uniform resistance per unit area can be used alongside pickup electrodes placed on this resistive sheet 3002. The configuration shown in FIG. 35 involves a linear sensor with two pickup electrodes. Continuous passive position sensing is performed by detecting the apportionment of charge displacement from a contact. When the impedance of the sheet approximately matches the impedance of the system, the value sensed at each pickup is a function of the distance to the contact charge cloud. By characterizing and linearizing the readings from the pickups, it is possible to detect the position of the contact continuously at any position, up to the accuracy and precision of the digitization electronics and the noise characteristics of the system itself. This approach leads to simpler electronics and simpler patterning of the primary touch resistive sheet, which in turn lowers cost and complexity. The position of contact can be calculated from the proportion of the output of each pickup relative to the total signal captured. Alternatively, a global noise pickup layer may be laid under the resistive layer to sense the total amount of charge injected into the surface, thus allowing a direct comparison. □ FIGS.
36 and 37 illustrate an example of continuous two-dimensional passive position sensing. The passive position sensing technique shown in FIG. 35 can be extended to two dimensions, as shown in FIG. 36. The two-dimensional technique can sense n points of touch 3104 from the signals induced in a resistive sheet 3102 with a known distribution of m pickup points 3106. The inputs to the touch surface at time t are n independent voltages Vi(t) at coordinates (xi, yi) 3212 for each point of touch, as shown in FIG. 37. Voltages are measured at m known pickup points 3204, 3206, 3208, 3210, 3212 on the edges of the resistive sheet 3102. By approximating the resistive sheet as an M×N network of resistors and using known methods, the resistance between a pickup point and a touch point may be found. This resistance in turn determines the voltage at the pickup point. The resulting equation represents the dependence of the voltage level at a pickup location on the coordinates and input voltages at the touch points. From this system of equations for the voltage levels at the pickup points, the touch-point coordinates (xi, yi) and input voltages Vi(t) are found. The number of required pickup points m is at least 3n; a larger number of pickups may be used to reduce errors due to numerical approximation and measurement error. The known distribution of pickup points and the non-linearity of the resistive sheet allow separation of the touch points and their distribution. This method can be further generalized from points of contact (xi, yi) to points of hover (xi, yi, zi) by solving for a third unknown coordinate. This generalization to points of hover increases the minimum number of pickups m from 3n to 4n. □ FIGS. 38-40 illustrate example electrode-sheet configurations.
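The position-recovery computation described above (voltages at m known pickups as functions of the unknown touch coordinates) can be illustrated with a deliberately simplified sketch for a single touch point. The 1/(1+d) amplitude falloff model, the unit-square pickup placement, and the assumption that the touch voltage is known are all illustrative simplifications, not taken from this disclosure; a real implementation would solve the full system of equations for the coordinates and voltages jointly.

```python
import math

def sensed(v, touch, pickup):
    """Hypothetical readout model: amplitude at a pickup falls off with
    distance to the contact. The true falloff depends on sheet resistance
    and front-end impedance; 1/(1+d) is an illustrative stand-in."""
    return v / (1.0 + math.dist(touch, pickup))

def locate(readings, pickups, v=1.0, steps=100):
    """Grid-search the touch position minimizing squared readout error,
    a brute-force stand-in for solving the system of equations."""
    best, best_err = None, float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (i / steps, j / steps)
            err = sum((sensed(v, p, pk) - r) ** 2
                      for pk, r in zip(pickups, readings))
            if err < best_err:
                best, best_err = p, err
    return best

# Four pickups (m = 4 >= 3n for n = 1 touch) at the corners of a unit square.
pickups = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
touch = (0.3, 0.6)
readings = [sensed(1.0, touch, pk) for pk in pickups]
est = locate(readings, pickups)
```

With noiseless synthetic readings, the grid search recovers the touch position; with real measurements, the least-squares residual would absorb noise and model error.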
The electrodes can be designed with the pickups and resistive sheet on different layers, or on the same layer, as shown in FIG. 38 and FIG. 39, respectively. FIG. 38 shows the pickups 3306 and resistive sheet 3302 on different layers, separated by pickup contacts 3304. Additionally, to increase the resolution of contact readouts, several of these patches may be arrayed next to each other, with minimal gaps between them for pickup electrodes, to create a single-layer high-resolution touch surface. FIG. 39 shows the pickup contacts 3402 on the same layer as the resistive sheet 3404. Alternatively, as shown in FIG. 40, the contacts 3502 can be placed in the interior rather than at the edge of the resistive sheet 3504 using a two-layer approach, effectively allowing some electrodes, such as contact 3502, to be used for multiple patches 3506, 3508. □ FIG. 41 illustrates an example of dielectric-encoded passive position sensing. A position of contact 3602, 3612 can be encoded to a single pickup electrode by a dielectric code printed onto the touch surface. Since the signal from the contact is capacitively transferred to the electrode, it is possible to encode a dielectric pattern onto the surface that modifies the signal as it is transferred to the pickup electrodes. This dielectric pattern may be produced by etching, screen printing, subtractive lithography, mechanical processing, or other means. Knowing the dielectric pattern, it is possible to recover the position from a single electrode by de-convolution or other inverse transforms 3610, 3614. Depending on the necessary contact area and resolution, multiple such patches 3606, 3608 can be placed next to each other to produce a complete touch surface, simplifying the code in each patch and increasing the size of the code relative to the size of the patch. □ FIGS. 42 and 43 illustrate an example of continuous passive position sensing using an array 3702 of non-linear elements 3704.
The continuous passive position sensing approach can be combined with row-column grid-based position sensing to calculate the positions of fingers. Due to the non-linear response of the system to touch position, multiple touches on the same row or column can be distinguished, making it possible to use a row-column grid to calculate high-resolution multi-touch positions. Instead of using a continuous resistive sheet, the sheet can be replaced with a lattice of nonlinear reactive elements or a sheet material that has a nonlinear reactance. FIG. 42 shows a one-dimensional lattice for simplicity; similar principles apply to two-dimensional lattices. A signal injected into this medium decomposes into a group of solitons (solitary excitations) that exhibit a distance- and frequency-dependent relative phase shift as they travel through the medium. In FIG. 43, each line pattern shows increasing distance from the pickup. The soliton phase shifts can then be used to calculate the distance from each pickup point to the event, allowing determination of the event location. In one or more embodiments, a nonlinear transmission line (a lattice of nonlinear reactive elements) can be used with a multitude of pickup points. In such a case, the surface can be broken into zones or strips, with one array covering each strip. The arrays may also be joined linearly, or in a matrix configuration with more than two connections to nearby elements. □ FIG. 44 illustrates an example of spatially-distributed coordinate encoding. In one or more embodiments, the position of a contact or motion event at the sensing surface can be determined by encoding coordinates in physical variations of the surface, which are then decoded from the signal generated by the event. An example of this is shown in cross-section in FIG.
44: as a finger 3902 moves across a surface 3904 with a varying height profile 3906, the detected signal 3908 reflects the profile variations along the direction of motion. Position information can be encoded in these variations using a two-dimensional self-clocking code, and subsequent signal processing by a coordinate decoder 3910 can reconstruct the position and velocity of points along the trajectory 3912. This technique advantageously replaces an array of electrodes and associated electronics with a single electrode and amplifier, plus a textured surface that captures input motion, resulting in low-cost gestural input surfaces. □ FIG. 45 illustrates an example combination of TriboTouch with resistive touch sensors. TriboTouch can be combined with additional sensing approaches in order to reuse existing physical designs while upgrading the system with the capabilities that TriboTouch technology offers, or to combine the benefits of both approaches. Resistive sensors ordinarily use two layers 4002, 4004 coated with a resistive material and separated by a small distance. There can be electrodes 4006, 4008 along opposing edges of each layer, in the vertical and horizontal directions, respectively. When the layers make contact due to pressure from a touch, a touch position is sensed. The electrodes are used alternately as receivers and as a voltage source to determine the vertical and horizontal position of the touch. TriboTouch can be combined with resistive sensors by placing pickups 4010 on the top resistive sheet 4002 used in a resistive sensor. The pickups 4010 can be used to derive the position of contacts on the top surface 4002. Note that since resistive sensors often use a full edge as a connector, additional or separable contacts may be needed. The resistive sensing capability can be maintained by interleaving the processing of the signals.
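The interleaving just mentioned can be sketched as a simple time-multiplexing scheme. The phase names, ordering, and callback interface below are hypothetical, intended only to show how one electrode layer could be shared in alternating time slots between triboactive pickup reads and resistive scans.

```python
# Hypothetical scheduler time-multiplexing one electrode layer between
# TriboTouch pickup reads and resistive X/Y scans.
class InterleavedSensor:
    PHASES = ("tribo", "resistive_x", "resistive_y")

    def __init__(self, read_tribo, scan_resistive):
        self._read_tribo = read_tribo          # samples the tribo pickups
        self._scan_resistive = scan_resistive  # drives + reads one axis
        self._slot = 0

    def tick(self):
        """Run one time slot; return the phase name and its sample."""
        phase = self.PHASES[self._slot % len(self.PHASES)]
        self._slot += 1
        if phase == "tribo":
            return phase, self._read_tribo()
        return phase, self._scan_resistive(axis=phase[-1])

# Example with stub callbacks standing in for real hardware access.
sensor = InterleavedSensor(lambda: 0.0, lambda axis: (axis, 0.0))
schedule = [sensor.tick()[0] for _ in range(6)]
```

Over six ticks the scheduler cycles through the three phases twice, so both sensing modalities see the shared layer at a fixed duty cycle.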
Alternatively, in a quiescent state, the bottom layer 4004 can be connected to a voltage source while the top layer 4002 is used for TriboTouch. If a contact is of sufficient force to reach the bottom layer, the TriboTouch system can detect the sudden large offset caused by contact with the bottom layer, hand off to the resistive system for resistive position detection, and begin interleaving at that time so that both systems are used. Such an approach allows for reduced switching and reduced power expenditure. □ TriboTouch can also be combined with capacitive touch sensors. As shown in FIGS. 7 and 8, capacitive sensors operate by detecting the change in a transmitted electric field. In order to allow cooperation between the two systems, it is possible to connect a capacitive sensor ASIC directly to the same pads as a TriboTouch system and achieve coexistence by interleaved sensing. Since TriboTouch is capable of high-speed operation, it is possible to use existing capacitive technology without significant change. Note that capacitive signals are of a known form and frequency. Therefore, it is possible to operate non-transmitting electrodes in TriboTouch mode while they concurrently receive the signal being transmitted by other electrodes. In such a case, filters may be used to reject the capacitive signals from the TriboTouch processing system, either using traditional frequency-domain filtering or using synchronous filtering in cooperation with the excitation signal produced by the capacitive sensor. □ FIGS. 46 and 47 illustrate an example combination of TriboTouch with inductive touch sensors. Inductive sensors operate by exciting an active stylus with a pulse of current using a matrix of wires. When a line is not being used to provide excitation, it is possible to use these lines as TriboTouch receiver lines. Since TriboTouch does not transmit any signals, the lines can be directly connected to the TriboTouch system.
Note that if one end of the line is permanently attached to a fixed-potential rail 3902, the rail should be disconnected so that the TriboTouch signal can be read. This disconnection can be achieved through an electronic switch 3904. Alternatively, as shown in FIG. 47, if the inductive system is operated with current pulses, it can be coupled capacitively, e.g., via capacitors 4202, 4204, to the touch surface such that a continuous connection to a power rail does not exist. An additional benefit of incorporating TriboTouch technology is a reduction in power use. Since inductive sensing uses current flow to form a magnetic field, it is power-hungry. By detecting initial contact with the low-power TriboTouch technology, the inductive sensor can be disabled when there is no contact, leading to significant energy savings when the system is quiescent. □ In one or more embodiments, TriboTouch, TriboNoise, TriboNoiseTouch, or combinations thereof can be combined with other touch sensor types, such as surface acoustic wave, infrared, or acoustic touch sensors, as well as with any of the resistive, capacitive, and inductive sensors described above. TriboTouch, TriboNoise, and TriboNoiseTouch can also use the electrode types described herein, except that spatially-distributed coordinate encoding electrodes can be used only with TriboTouch and TriboNoiseTouch, as discussed above with reference to FIG. 30. □ Surface acoustic wave (SAW) touch sensors use transducers to produce an ultrasonic wave that is absorbed when a finger makes contact. The surface is ordinarily glass or a similar hard material. This surface can be patterned with a transparent conductive material to provide pickups for the TriboTouch system. No interleaving is necessary, since SAW systems do not use electrical signals transiting the surface itself to detect position. □ Infrared touch sensors produce infrared light that is absorbed when a finger makes contact.
This surface can be patterned with a transparent conductive material to provide pickups for the TriboTouch system. No interleaving is necessary, since infrared systems do not use electrical signals transiting the surface itself to detect position. □ Acoustic touch sensors detect the specific sounds produced when an object touches the sensed surface to detect position. This surface can be patterned with a transparent conductive material to provide pickups for the TriboTouch system. No interleaving is necessary, since acoustic systems do not use electrical signals transiting the surface itself to detect position. □ FIG. 48 illustrates an example computer system 4300. In particular embodiments, one or more computer systems 4300 perform one or more steps of one or more methods described or illustrated herein. The processes and systems described herein, such as the processing system 312 of FIG. 3, the noise processing system 1216 of FIG. 12 or the TriboNoiseTouch processing system 1916 of FIG. 19, can be implemented using one or more computer systems 4300. In particular embodiments, one or more computer systems 4300 provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 4300 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. For example, the processing system 312 of FIG. 3, the noise processing system 1216 of FIG. 12 or the TriboNoiseTouch processing system 1916 of FIG. 19 can be implemented as one or more methods performed by software running on the one or more computer systems 4300. Particular embodiments include one or more portions of one or more computer systems 4300. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate. 
□ This disclosure contemplates any suitable number of computer systems 4300. This disclosure contemplates computer system 4300 taking any suitable physical form. As an example and not by way of limitation, computer system 4300 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these. Where appropriate, computer system 4300 may include one or more computer systems 4300; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 4300 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 4300 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 4300 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. □ In particular embodiments, computer system 4300 includes a processor 4302, memory 4304, storage 4306, an input/output (I/O) interface 4308, a communication interface 4310, and a bus 4312. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
□ In particular embodiments, processor 4302 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 4302 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 4304, or storage 4306; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 4304, or storage 4306. In particular embodiments, processor 4302 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor 4302 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 4302 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 4304 or storage 4306, and the instruction caches may speed up retrieval of those instructions by processor 4302. Data in the data caches may be copies of data in memory 4304 or storage 4306 for instructions executing at processor 4302 to operate on; the results of previous instructions executed at processor 4302 for access by subsequent instructions executing at processor 4302 or for writing to memory 4304 or storage 4306; or other suitable data. The data caches may speed up read or write operations by processor 4302. The TLBs may speed up virtual-address translation for processor 4302. In particular embodiments, processor 4302 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor 4302 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 4302 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 4302. 
Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. □ In particular embodiments, memory 4304 includes main memory for storing instructions for processor 4302 to execute or data for processor 4302 to operate on. As an example and not by way of limitation, computer system 4300 may load instructions from storage 4306 or another source (such as, for example, another computer system 4300) to memory 4304. Processor 4302 may then load the instructions from memory 4304 to an internal register or internal cache. To execute the instructions, processor 4302 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 4302 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 4302 may then write one or more of those results to memory 4304. In particular embodiments, processor 4302 executes only instructions in one or more internal registers or internal caches or in memory 4304 (as opposed to storage 4306 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 4304 (as opposed to storage 4306 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 4302 to memory 4304. Bus 4312 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 4302 and memory 4304 and facilitate accesses to memory 4304 requested by processor 4302. In particular embodiments, memory 4304 includes random access memory (RAM). This RAM may be volatile memory, where appropriate, and this RAM may be dynamic RAM (DRAM) or static RAM (SRAM), where appropriate. Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. 
This disclosure contemplates any suitable RAM. Memory 4304 may include one or more memories 4304, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. □ In particular embodiments, storage 4306 includes mass storage for data or instructions. As an example and not by way of limitation, storage 4306 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 4306 may include removable or non-removable (or fixed) media, where appropriate. Storage 4306 may be internal or external to computer system 4300, where appropriate. In particular embodiments, storage 4306 is non-volatile, solid-state memory. In particular embodiments, storage 4306 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 4306 taking any suitable physical form. Storage 4306 may include one or more storage control units facilitating communication between processor 4302 and storage 4306, where appropriate. Where appropriate, storage 4306 may include one or more storages 4306. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. □ In particular embodiments, I/O interface 4308 includes hardware, software, or both, providing one or more interfaces for communication between computer system 4300 and one or more I/O devices. Computer system 4300 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 4300. 
As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 4308 for them. Where appropriate, I/O interface 4308 may include one or more device or software drivers enabling processor 4302 to drive one or more of these I/O devices. I/O interface 4308 may include one or more I/O interfaces 4308, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. □ In particular embodiments, communication interface 4310 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 4300 and one or more other computer systems 4300 or one or more networks. As an example and not by way of limitation, communication interface 4310 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 4310 for it. As an example and not by way of limitation, computer system 4300 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), body area network (BAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. 
As an example, computer system 4300 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 4300 may include any suitable communication interface 4310 for any of these networks, where appropriate. Communication interface 4310 may include one or more communication interfaces 4310, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. □ In particular embodiments, bus 4312 includes hardware, software, or both coupling components of computer system 4300 to each other. As an example and not by way of limitation, bus 4312 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 4312 may include one or more buses 4312, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. 
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.

Herein, "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A or B" means "A, B, or both," unless expressly indicated otherwise or indicated otherwise by context. Moreover, "and" is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, "A and B" means "A and B, jointly or severally," unless expressly indicated otherwise or indicated otherwise by context.

The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein.
Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.

Claims (32)

What is claimed is:

1. A method comprising: receiving a signal from an electrode of a touch sensor; and detecting a contact or separation input to the touch sensor; wherein the touch sensor senses a charge displacement, based on the signal, in order to detect the contact or separation.

2. The method of claim 1, further comprising: analyzing a noise component of the signal; rejecting at least a portion of the noise component of the signal; and based at least in part on a characteristic of the signal after the rejection of at least a portion of the noise component of the signal, classifying the contact or separation input as a contact event or a separation event.

3. The method of claim 2, wherein the characteristic of the signal comprises a time-domain profile of the signal.

4. The method of claim 3, wherein the time-domain profile of the signal comprises a signal waveform that comprises a peak and a trough.

5. The method of claim 2, further comprising determining a time and location of the contact or separation event.

6. The method of claim 1, further comprising determining a material of an implement that is a source of at least a portion of the signal.

7. The method of claim 1, wherein an object made the contact or separation and wherein the object is comprised of any type of material.

8. The method of claim 1, wherein an insulator is on the electrode, and another material is on the insulator.

9. A computer-readable non-transitory storage medium embodying logic that is operable when executed to: receive a signal from an electrode of a touch sensor; and detect a contact or separation input to the touch sensor; wherein the touch sensor is operable to sense a charge displacement, based on the signal, in order to detect the contact or separation.

10. The medium of claim 9, wherein the logic is further operable when executed to: analyze a noise component of the signal; reject at least a portion of the noise component of the signal; and based at least in part on a characteristic of the signal after the rejection of at least a portion of the noise component of the signal, classify the contact or separation input as a contact event or a separation event.

11. The medium of claim 10, wherein the characteristic of the signal comprises a time-domain profile of the signal.

12. The medium of claim 11, wherein the time-domain profile of the signal comprises a signal waveform that comprises a peak and a trough.

13. The medium of claim 10, wherein the logic is further operable when executed to determine a time and location of the contact or separation event.

14. The medium of claim 9, wherein the logic is further operable when executed to determine a material of an implement that is a source of at least a portion of the signal.

15. The medium of claim 9, wherein an object made the contact or separation and wherein the object is comprised of any type of material.

16. The medium of claim 9, wherein an insulator is on the electrode, and another material is on the insulator.

17. An apparatus comprising: an electrode of a touch sensor; and a computer-readable non-transitory storage medium embodying logic that is operable when executed to: receive a signal from the electrode; and detect a contact or separation input to the touch sensor; wherein the touch sensor is operable to sense a charge displacement, based on the signal, in order to detect the contact or separation.

18. The apparatus of claim 17, wherein the logic is further operable when executed to: analyze a noise component of the signal; reject at least a portion of the noise component of the signal; and based at least in part on a characteristic of the signal after the rejection of at least a portion of the noise component of the signal, classify the contact or separation input as a contact event or a separation event.

19. The apparatus of claim 18, wherein the characteristic of the signal comprises a time-domain profile of the signal.

20. The apparatus of claim 19, wherein the time-domain profile of the signal comprises a signal waveform that comprises a peak and a trough.

21. The apparatus of claim 18, wherein the logic is further operable when executed to determine a time and location of the contact or separation event.

22. The apparatus of claim 17, wherein the logic is further operable when executed to determine a material of an implement that is a source of at least a portion of the signal.

23. The apparatus of claim 17, wherein an object made the contact or separation and wherein the object is comprised of any type of material.

24. The apparatus of claim 17, wherein an insulator is on the electrode, and another material is on the insulator.

25. An apparatus comprising: means for receiving a signal from an electrode of a touch sensor; and means for detecting a contact or separation input to the touch sensor; wherein the touch sensor is operable to sense a charge displacement, based on the signal, in order to detect the contact or separation.

26. The apparatus of claim 25, further comprising: means for analyzing a noise component of the signal; means for rejecting at least a portion of the noise component of the signal; and means for, based at least in part on a characteristic of the signal after the rejection of at least a portion of the noise component of the signal, classifying the contact or separation input as a contact event or a separation event.

27. The apparatus of claim 26, wherein the characteristic of the signal comprises a time-domain profile of the signal.

28. The apparatus of claim 27, wherein the time-domain profile of the signal comprises a signal waveform that comprises a peak and a trough.

29. The apparatus of claim 26, further comprising means for determining a time and location of the contact or separation event.

30. The apparatus of claim 25, further comprising means for determining a material of an implement that is a source of at least a portion of the signal.

31. The apparatus of claim 25, wherein an object made the contact or separation and wherein the object is comprised of any type of material.

32. The apparatus of claim 25, wherein an insulator is on the electrode, and another material is on the insulator.
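The processing chain recited in the method claims (receive a signal from an electrode, reject part of its noise component, then classify a contact or separation event from the time-domain profile of the waveform, including its peak and trough) can be pictured with a short Python sketch. This is illustrative only and not part of the claimed invention: the moving-average noise rejection, the amplitude threshold, and the peak-before-trough polarity convention are assumptions chosen for demonstration, not details recited in the claims.

```python
# Illustrative sketch of claims 1-4 -- not the patented implementation.
# Assumptions: noise is rejected with a simple moving average, and a
# contact event drives the waveform's peak before its trough (separation
# drives the trough first). Real sensors would use hardware filtering
# and calibrated thresholds.

def moving_average(signal, window=3):
    """Reject at least a portion of the noise component of the signal."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def classify_event(signal, threshold=0.3):
    """Classify a charge-displacement waveform as a contact or
    separation event from its time-domain profile (peak and trough)."""
    filtered = moving_average(signal)
    peak_i = max(range(len(filtered)), key=lambda i: filtered[i])
    trough_i = min(range(len(filtered)), key=lambda i: filtered[i])
    # Too little charge displacement: nothing to classify.
    if filtered[peak_i] < threshold and filtered[trough_i] > -threshold:
        return "none"
    # Assumed polarity convention: peak first means contact.
    return "contact" if peak_i < trough_i else "separation"

# Synthetic example waveforms:
contact = [0, 0.2, 1.0, 0.4, -0.8, -0.3, 0]      # peak then trough
separation = [0, -0.2, -1.0, -0.4, 0.8, 0.3, 0]  # trough then peak
print(classify_event(contact))     # contact
print(classify_event(separation))  # separation
```

Under this sketch, the dependent claims map naturally onto the pieces: the `moving_average` step corresponds to the noise rejection of claim 2, and the peak/trough ordering corresponds to the time-domain profile of claims 3 and 4.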
Priority Applications (8)
Application Number / Publication / Priority Date / Filing Date / Title
US14/458,102 / US10042504B2 / 2013-08-13 / 2014-08-12 / Interaction sensing
AU2014307232A / AU2014307232B2 / 2013-08-13 / 2014-08-13 / Interaction sensing
PCT/KR2014/007544 / WO2015023131A1 / 2013-08-13 / 2014-08-13 / Interaction sensing
KR1020157015347A / KR102214438B1 / 2013-08-13 / 2014-08-13 / Interaction sensing
BR112015007261-5A / BR112015007261B1 / 2013-08-13 / 2014-08-13 / METHOD, NON-TRANSITORY COMPUTER READIBLE STORAGE, AND APPLIANCE
CN201480007036.0A / CN104969156B / 2013-08-13 / 2014-08-13 / Interaction sensor device and interaction method for sensing
JP2016534531A / JP6422973B2 / 2013-08-13 / 2014-08-13 / Interaction sensing
EP14836371.6A / EP2880515B1 / 2013-08-13 / 2014-08-13 / Interaction sensing

Applications Claiming Priority (11)
Application Number / Priority Date / Filing Date
US201361865448P / 2013-08-13 / 2013-08-13
US201461924604P / 2014-01-07 / 2014-01-07
US201461924637P / 2014-01-07 / 2014-01-07
US201461924625P / 2014-01-07 / 2014-01-07
US201461924558P / 2014-01-07 / 2014-01-07
US201461969590P / 2014-03-24 / 2014-03-24
US201461969612P / 2014-03-24 / 2014-03-24
US201461969544P / 2014-03-24 / 2014-03-24
US201461969558P / 2014-03-24 / 2014-03-24
US201462000429P / 2014-05-19 / 2014-05-19
US14/458,102 (US10042504B2) / 2013-08-13 / 2014-08-12 / Interaction sensing

Family Applications (5)
Application Number / Publication / Status / Priority Date / Filing Date / Title
US14/458,083 / US10108305B2 / Active / 2013-08-13 / 2014-08-12 / Interaction sensing
US14/458,110 / US9569055B2 / Active / 2013-08-13 / 2014-08-12 / Interaction sensing
US14/458,097 / US10318090B2 / Active 2036-04-05 / 2013-08-13 / 2014-08-12 / Interaction sensing
US14/458,102 / US10042504B2 / Active / 2013-08-13 / 2014-08-12 / Interaction sensing
US16/431,506 / US10955983B2 / Active 2034-09-09 / 2013-08-13 / 2019-06-04 / Interaction sensing

Family Applications Before (3)
US14/458,083 / US10108305B2 / Active / 2013-08-13 / 2014-08-12 / Interaction sensing
US14/458,110 / US9569055B2 / Active / 2013-08-13 / 2014-08-12 / Interaction sensing
US14/458,097 / US10318090B2 / Active 2036-04-05 / 2013-08-13 / 2014-08-12 / Interaction sensing

Family Applications After (1)
US16/431,506 / US10955983B2 / Active 2034-09-09 / 2013-08-13 / 2019-06-04 / Interaction sensing
Touch sensing output device US8514187B2 (en) 2009-09-30 2013-08-20 Motorola Mobility Llc Methods and apparatus for distinguishing between touch system manipulators JP5346769B2 (en) 2009-10-21 2013-11-20 株式会社ジャパンディスプレイ Touch panel and display device including the same CN102687103B (en 2009-10-28 2016-04-20 伊英克公司 There is the electro-optic displays of touch sensor US20120231248A1 2009-11-11 2012-09-13 Toray Industries, Inc. Conductive laminate and method of producing the same CN101719464B (en 2009-11-16 2011-04-27 江苏华创光电科技有限公司 Method for preparing ultra-shallow junction on surface of semiconductor chip through laser US8326395B2 (en) 2009-11-19 2012-12-04 Jack Edward Gratteau Electrode for electroretinographic use and method of application EP2338565B1 (en) 2009-12-22 2020-09-16 BIOTRONIK SE & Co. KG Switched protective device against electromagnetic interference WO2011105218A1 ( 2010-02-26 2011-09-01 Semiconductor Energy Laboratory Co., Ltd. Display device and e-book reader provided therewith KR101589762B1 ( 2010-03-03 2016-01-28 엘지전자 주식회사 Touch position detection apparatus and method of detecting touch position JP5427070B2 (en) 2010-03-05 2014-02-26 株式会社ワコム Position detection device WO2011121537A1 ( 2010-04-01 2011-10-06 Koninklijke Philips Electronics N.V. Signal measuring system, method for electrically conducting signals and a signal cable US8941395B2 (en) 2010-04-27 2015-01-27 3M Innovative Properties Company Integrated passive circuit elements for sensing devices KR101697342B1 ( 2010-05-04 2017-01-17 삼성전자 주식회사 Method and apparatus for performing calibration in touch sensing system and touch sensing system applying en) the same CN102243553B (en 2010-05-16 2015-06-10 宸鸿科技(厦门)有限公司 Capacitive touch panel and method for reducing visuality of metal conductor of capacitive touch panel ) * US9501145B2 (en) 2010-05-21 2016-11-22 Disney Enterprises, Inc. 
Electrovibration for touch surfaces KR20120003764A ( 2010-07-05 2012-01-11 엘지전자 주식회사 Touch sensor and method for recognizing a touch EP2405332B1 (en) 2010-07-09 2013-05-29 Elo Touch Solutions, Inc. Method for determining a touch event and touch sensitive device US8723834B2 (en) 2010-07-28 2014-05-13 Atmel Corporation Touch sensitive screen configurations US8614693B2 (en) 2010-08-27 2013-12-24 Apple Inc. Touch and hover signal drift compensation WO2012030183A2 ( 2010-09-01 2012-03-08 Lee Sung Ho Capacitive touch detection apparatus using level shift, detection method using level shift, and display en) device having the detection apparatus built therein WO2012039837A1 ( 2010-09-22 2012-03-29 Cypress Semiconductor Corporation Capacitive stylus for a touch screen US8451218B2 (en) 2010-10-13 2013-05-28 Toyota Motor Engineering & Manufacturing North Electronic control module interface system for a motor vehicle America, Inc. JP5666238B2 (en) 2010-10-15 2015-02-12 シャープ株式会社 Electronic device and display method WO2012057887A1 ( 2010-10-28 2012-05-03 Cypress Semiconductor Corporation Capacitive stylus with palm rejection US8564314B2 (en) 2010-11-02 2013-10-22 Atmel Corporation Capacitive touch sensor for identifying a fingerprint US8473433B2 (en) 2010-11-04 2013-06-25 At&T Intellectual Property I, L.P. 
Systems and methods to facilitate local searches via location disambiguation WO2012074059A1 ( 2010-12-02 2012-06-07 日東電工株式会社 Transparent conductive film and touch panel EP2464008A1 (en) 2010-12-08 2012-06-13 Fujitsu Semiconductor Limited Sampling circuitry JP5445438B2 (en) 2010-12-15 2014-03-19 Smk株式会社 Capacitive touch panel US9069421B2 (en) 2010-12-16 2015-06-30 Hung-Ta LIU Touch sensor and touch display apparatus and driving method thereof TWI437474B (en) 2010-12-16 2014-05-11 Hongda Liu Dual-modes touch sensor and touch display and driving method thereof US9244545B2 (en) 2010-12-17 2016-01-26 Microsoft Technology Licensing, Llc Touch and stylus discrimination and rejection for contact sensitive computing devices US8665210B2 (en) 2010-12-22 2014-03-04 Microsoft Corporation Sensing user input using the body as an antenna US8730190B2 (en) 2011-01-13 2014-05-20 Qualcomm Incorporated Detect motion generated from gestures used to execute functionality associated with a computer system KR20120085392A ( 2011-01-24 2012-08-01 삼성전자주식회사 Terminal having touch-screen and method for identifying touch event thereof JP2014507726A ( 2011-02-08 2014-03-27 ハワース,インコーポレイテッド Multimodal touch screen interaction apparatus, method and system JP5651036B2 (en) 2011-02-15 2015-01-07 株式会社日本自動車部品総合研究所 Operation detection device WO2012112561A1 ( 2011-02-18 2012-08-23 Proteus Biomedical, Inc. Wearable personal communicator apparatus, system, and method US8866490B1 (en) 2013-01-18 2014-10-21 Cypress Semiconductor Corporation Method and apparatus for eliminating tail effect in touch applications US8866491B2 (en) 2011-02-24 2014-10-21 Cypress Semiconductor Corporation Tail effect correction for SLIM pattern touch panels US9345110B2 (en) 2011-03-15 2016-05-17 Jack D. 
Miller Motion actuated fixture illuminator US9122325B2 (en) 2011-05-10 2015-09-01 Northwestern University Touch interface device and method for applying controllable shear forces to a human appendage EP2711818A1 (en) 2011-05-16 2014-03-26 Panasonic Corporation Display device, display control method and display control program, and input device, input assistance method and program US8798947B2 (en) 2011-06-20 2014-08-05 The University of Sussex-Falmer Apparatus and method for measuring charge density distribution JP5384598B2 (en) 2011-09-09 2014-01-08 シャープ株式会社 Capacitive touch sensor panel, capacitive touch sensor system using the same, and information input / output device JP2013020530A ( 2011-07-13 2013-01-31 Dainippon Printing Co Ltd Touch sensor panel member, display device with touch sensor panel member, and method of manufacturing en) touch sensor panel member US8878823B1 (en) 2011-07-27 2014-11-04 Cypress Semiconductor Corporation Dynamic shield electrode of a stylus US20130038565A1 2011-08-10 2013-02-14 Qualcomm Mems Technologies, Inc. Touch sensing integrated with display data updates (en) * US9470941B2 (en) 2011-08-19 2016-10-18 Apple Inc. In-cell or on-cell touch sensor with color filter on array KR101971067B1 ( 2011-08-31 2019-08-14 삼성전자 주식회사 Method and apparatus for providing of user interface in portable device US20130050143A1 2011-08-31 2013-02-28 Samsung Electronics Co., Ltd. 
Method of providing of user interface in portable terminal and apparatus thereof US20150233998A1 2011-09-15 2015-08-20 University Of Washington Through Its Center For Systems and methods for sensing environmental changes using emi signal sources as sensors (en) Commercialization WO2013040497A2 ( 2011-09-15 2013-03-21 University Of Washington Through Its Center For Systems and methods for sensing environmental changes using light sources as sensors en) Commercialization US9748952B2 (en) 2011-09-21 2017-08-29 Synaptics Incorporated Input device with integrated deformable electrode structure for force sensing KR102089505B1 ( 2011-09-23 2020-03-16 가부시키가이샤 한도오따이 에네루기 켄큐쇼 Semiconductor device en) * KR101790017B1 ( 2011-09-30 2017-10-25 삼성전자 주식회사 Controlling Method For Communication Channel Operation based on a Gesture and Portable Device System en) supporting the same US9596988B2 (en) 2011-10-12 2017-03-21 Purdue Research Foundation Pressure sensors for small-scale applications and related methods US9529479B2 (en) 2011-10-14 2016-12-27 Hung-Ta LIU Touch sensing method, module, and display US20130100043A1 2011-10-24 2013-04-25 General Electric Company Method for determining valid touch screen inputs US9958990B2 (en) 2011-10-28 2018-05-01 Atmel Corporation Authenticating with active stylus US9160331B2 (en) 2011-10-28 2015-10-13 Atmel Corporation Capacitive and inductive sensing US9134858B2 (en) 2011-11-03 2015-09-15 Innolux Corporation Touch panel for electrostatic discharge protection and electronic device using the same TWI465994B (en) 2011-12-09 2014-12-21 Nuvoton Technology Corp Touch sensing method and touch sensing apparatus using charge distribution manner US8599169B2 (en) 2011-12-14 2013-12-03 Freescale Semiconductor, Inc. 
Touch sense interface circuit US9482737B2 (en) 2011-12-30 2016-11-01 Elwha Llc Computational systems and methods for locating a mobile device US9110543B1 (en) 2012-01-06 2015-08-18 Steve Dabell Method and apparatus for emulating touch and gesture events on a capacitive touch sensor US20130176213A1 2012-01-09 2013-07-11 Nvidia Corporation Touch-Screen Input/Output Device Techniques TWI467453B (en) 2012-01-18 2015-01-01 Chunghwa Picture Tubes Ltd Dual-mode touch sensing apparatus IN2014DN07364A ( 2012-02-06 2015-04-24 Canatu Oy en) * CN104106024B (en 2012-02-10 2017-06-09 3M创新有限公司 For the lattice of touch sensor electrode US9043247B1 (en) 2012-02-25 2015-05-26 Symantec Corporation Systems and methods for classifying documents for data loss prevention US20130257804A1 2012-03-29 2013-10-03 Rutgers, The State University Of New Jersey Method, apparatus, and system for capacitive touch communication US20130265242A1 2012-04-09 2013-10-10 Peter W. Richards Touch sensor common mode noise recovery EP2958053A1 (en) 2012-04-10 2015-12-23 Idex Asa Biometric sensing KR102066017B1 ( 2012-05-11 2020-01-14 삼성전자주식회사 Coordinate indicating apparatus and coordinate measuring apparaturs which measures input position of en) * coordinate indicating apparatus US9665231B2 (en) 2012-05-18 2017-05-30 Egalax_Empia Technology Inc. Detecting method and device for touch screen US8723836B1 (en) 2012-06-07 2014-05-13 Rockwell Collins, Inc. Touch panel deactivation systems and methods KR101929427B1 ( 2012-06-14 2018-12-17 삼성디스플레이 주식회사 Display device including touch sensor US8896096B2 (en) 2012-07-19 2014-11-25 Taiwan Semiconductor Manufacturing Company, Ltd. 
Process-compatible decoupling capacitor and method for making the same KR101209514B1 ( 2012-07-25 2012-12-07 (주)이미지스테크놀로지 Touch input device for detecting changes in the magnetic field and capacitance KR101428568B1 ( 2012-08-08 2014-08-12 엘지디스플레이 주식회사 Display device with touch screen and method for driving the same en) * CN202795318U (en 2012-08-08 2013-03-13 上海天马微电子有限公司 Embedded touch display device US9035663B2 (en) 2012-09-11 2015-05-19 Atmel Corporation Capacitive position encoder US9122330B2 (en) 2012-11-19 2015-09-01 Disney Enterprises, Inc. Controlling a user's tactile perception in a dynamic physical environment US9229553B2 (en) 2012-11-30 2016-01-05 3M Innovative Properties Company Mesh patterns for touch sensor electrodes US9164607B2 (en) 2012-11-30 2015-10-20 3M Innovative Properties Company Complementary touch panel electrodes JP5565598B1 (en) 2013-02-01 2014-08-06 パナソニックインテレクチュアルプロパティコーポレ Electronic device, input processing method, and program KR101472203B1 ( 2013-03-04 2014-12-12 주식회사 동부하이텍 Signal processing circuit of a touch screen JP5858059B2 (en) 2013-04-02 2016-02-10 株式会社デンソー Input device CN104111725A (en 2013-04-22 2014-10-22 汪林川 Communication system and method on basis of capacitive touch screen device US9322847B2 (en) 2013-06-24 2016-04-26 The United States Of America As Represented By Apparatus and method for integrated circuit forensics The Secretary Of The Navy US9622074B2 (en) 2013-07-24 2017-04-11 Htc Corporation Method for continuing operation on mobile electronic device, mobile device using the same, wearable device using the same, and computer readable medium US10108305B2 (en 2013-08-13 2018-10-23 Samsung Electronics Company, Ltd. Interaction sensing US10042446B2 (en 2013-08-13 2018-08-07 Samsung Electronics Company, Ltd. Interaction modes for object-device interactions ) * US10141929B2 (en 2013-08-13 2018-11-27 Samsung Electronics Company, Ltd. 
Processing electromagnetic interference signal using machine learning ) * US10073578B2 (en 2013-08-13 2018-09-11 Samsung Electronics Company, Ltd Electromagnetic interference signal detection ) * US10101869B2 (en 2013-08-13 2018-10-16 Samsung Electronics Company, Ltd. Identifying device associated with touch event ) * KR102103987B1 ( 2013-09-02 2020-04-24 삼성전자주식회사 Textile-based energy generator en) * US20150084921A1 2013-09-23 2015-03-26 Touchplus Information Corp. Floating touch method and touch device US9298299B2 (en) 2013-10-02 2016-03-29 Synaptics Incorporated Multi-sensor touch integrated display driver configuration for capacitive sensing devices US20150109257A1 2013-10-23 2015-04-23 Lumi Stream Inc. Pre-touch pointer for control and data entry in touch-screen devices US9541588B2 (en) 2013-10-30 2017-01-10 Synaptics Incorporated Current-mode coarse-baseline-correction US9955895B2 (en) 2013-11-05 2018-05-01 The Research Foundation For The State University Wearable head-mounted, glass-style computing devices with EOG acquisition and analysis for human-computer Of New York interfaces US20150145653A1 2013-11-25 2015-05-28 Invensense, Inc. Device control using a wearable device KR102219042B1 ( 2014-02-26 2021-02-23 삼성전자주식회사 Electronic device, wearable device and method for the input of the electronic device US9239648B2 (en) 2014-03-17 2016-01-19 Google Inc. 
Determining user handedness and orientation using a touchscreen device CN103927013B (en 2014-04-16 2017-12-22 北京智谷睿拓技术服务有限公司 Exchange method and system US9582093B2 (en) 2014-05-13 2017-02-28 Synaptics Incorporated Passive pen with ground mass state switch US9594489B2 (en) 2014-08-12 2017-03-14 Microsoft Technology Licensing, Llc Hover-based interaction with rendered content KR102236314B1 ( 2014-10-29 2021-04-05 삼성디스플레이 주식회사 Touch display device for energy harvesting en) * CN107077213B (en 2014-11-03 2020-10-16 西北大学 Materials and structures for tactile displays with simultaneous sensing and actuation EP3186695B1 (en) 2015-06-22 2021-11-10 SigmaSense, LLC Multi-touch sensor and electrostatic pen digitizing system utilizing simultaneous functions for improved EP3350681B1 (en) 2015-09-16 2024-04-24 Samsung Electronics Co., Ltd. Electromagnetic interference signal detection KR102600148B1 ( 2016-08-23 2023-11-08 삼성전자주식회사 Triboelectric generator using surface plasmon resonance en) * KR102703712B1 ( 2016-11-23 2024-09-05 삼성전자주식회사 Triboelectric generator en) * Patent Citations (23) * Cited by examiner, † Cited by third party Publication Priority Publication Assignee Title number date date US6473072B1 (en 1998-05-12 2002-10-29 E Ink Corporation Microencapsulated electrophoretic electrostatically-addressed media for drawing device applications ) * US6570557B1 (en 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords ) * US20060267953A1 2005-05-31 2006-11-30 Peterson Richard A Jr Detection of and compensation for stray capacitance in capacitive touch sensors (en) * US20090160817A1 2007-12-24 2009-06-25 Wintek Corporation Transparent capacitive touch panel and manufacturing method thereof (en) * US20100026639A1 2008-08-04 2010-02-04 Samsung Electronics Co., Ltd. 
Liquid crystal display and touch sensing method thereof (en) * US20100053111A1 2008-09-04 2010-03-04 Sony Ericsson Mobile Multi-touch control for touch sensitive display (en) * Communications Ab US20100085325A1 2008-10-02 2010-04-08 Wacom Co., Ltd. Combination touch and transducer input system and method (en) * US20100220075A1 2009-03-02 2010-09-02 Au Optronics Corporation Touch sensing display panel and touch sensing substrate (en) * US20100265211A1 2009-04-15 2010-10-21 Rohm Co., Ltd. Touch-type input device (en) * US20130314106A1 2010-10-15 2013-11-28 Freescale Semiconductor, Inc. Decoder for determining a substance or material structure of a detected object based on signals of a capacitive sensor and (en) * method for determining a substance or material structure of a detected object based on signals of a capacitive sensor US20120242584A1 2011-03-22 2012-09-27 Nokia Corporation Method and apparatus for providing sight independent activity reports responsive to a touch gesture (en) * US20130042581A1 2011-08-17 2013-02-21 Case-Mate, Inc. Snap-on protective cover for electronic device (en) * US20130049531A1 2011-08-30 2013-02-28 Georgia Tech Research Triboelectric Generator (en) * Corporation US20130120284A1 Shenzhen China Star (en) * 2011-11-15 2013-05-16 Optoelectronics Technology Energy saving type touch-controlled liquid crystal display device Co., Ltd. US20130234978A1 2012-02-23 2013-09-12 Cypress Semiconductor False touch filtering for capacitance sensing systems (en) * Corporation US20140300248A1 2012-09-21 2014-10-09 Georgia Tech Research Single Electrode Triboelectric Generator (en) * Corporation US20150318800A1 2013-01-21 2015-11-05 Peking University Vibration generator and stacked-structure generator (en) * US20140210313A1 2013-01-28 2014-07-31 Samsung Electronics Co., Ltd. Energy harvesting device having self-powered touch sensor (en) * US20140313141A1 2013-04-23 2014-10-23 Samsung Electronics Co., Ltd. 
Smart apparatus having touch input module and energy generating device, and operating method of the smart apparatus (en) * CN103777803A ( 2013-08-12 2014-05-07 国家纳米科学中心 Single-electrode touch sensor and preparation method thereof en) * WO2015021761A1 2013-08-12 2015-02-19 北京纳米能源与系统研究所 Single-electrode touch sensor and preparation method therefor (en) * EP3035398A1 (en 2013-08-12 2016-06-22 Beijing Institute of Nanoenergy and Nanosystems Single-electrode touch sensor and preparation method therefor ) * CN103777803B ( 2013-08-12 2017-04-19 北京纳米能源与系统研究所 Single-electrode touch sensor and preparation method thereof en) * Non-Patent Citations (6) * Cited by examiner, † Cited by third party J. Zhong et al., "Finger typing driven triboelectric nanogenerator and its use for instantaneously lighting up LEDs", Nano Energy (2012) * Long Lin et al., "Triboelectric Active Sensor Array for Self-Powered Static and Dynamic Pressure Detection and Tactile Imaging" (08/19/2013) * Meng, Bo, "A transparent single-friction-surface triboelectric generator and self-powered touch sensor", Energy Environ. Sci., 2013, 6, 3235-3240 (2013) * Wang et al., "Transparent Triboelectric Nanogenerators and Self-Powered Pressure Sensors Based on Micropatterned Plastic Films", ACS Publications, Nano Letters, pp. 3109-3114 (05/11/2012) * Wang, Z.L., "Triboelectric-Generator-Driven Pulse Electrodeposition for Micropatterning", ACS Publications (2012) * Wang, Z.L., "Enhanced Triboelectric Nanogenerators and Triboelectric Nanosensor Using Chemically Modified TiO2 Nanomaterials", ACS Nano, pp. 4554-4560 (2013) * Cited By (32) * Cited by examiner, † Cited by third party Publication number Priority date Publication date Assignee Title US10042446B2 (en) 2013-08-13 2018-08-07 Samsung Electronics Company, Ltd. Interaction modes for object-device interactions US10318090B2 (en) * 2013-08-13 2019-06-11 Samsung Electronics Company, Ltd.
Interaction sensing US20150048846A1 (en) * 2013-08-13 2015-02-19 Samsung Electronics Company, Ltd. Interaction Sensing US10073578B2 (en) 2013-08-13 2018-09-11 Samsung Electronics Company, Ltd Electromagnetic interference signal detection US10141929B2 (en) 2013-08-13 2018-11-27 Samsung Electronics Company, Ltd. Processing electromagnetic interference signal using machine learning US10108305B2 (en) * 2013-08-13 2018-10-23 Samsung Electronics Company, Ltd. Interaction sensing US10101869B2 (en) 2013-08-13 2018-10-16 Samsung Electronics Company, Ltd. Identifying device associated with touch event US10955983B2 (en) * 2013-08-13 2021-03-23 Samsung Electronics Company, Ltd. Interaction sensing US20150138130A1 (en) * 2013-11-21 2015-05-21 Pixart Imaging Inc. Capacitive touch system and gain control method thereof US9170693B2 (en) * 2013-11-21 2015-10-27 Pixart Imaging Inc. Capacitive touch system and gain control method thereof US20150185923A1 (en) * 2014-01-02 2015-07-02 Samsung Electronics Co., Ltd. Method for processing input and electronic device thereof US10241627B2 (en) * 2014-01-02 2019-03-26 Samsung Electronics Co., Ltd. Method for processing input and electronic device thereof US20180246591A1 (en) * 2015-03-02 2018-08-30 Nxp B.V. Method of controlling a mobile device US10551973B2 (en) * 2015-03-02 2020-02-04 Nxp B.V. Method of controlling a mobile device US10165649B2 (en) * 2015-03-30 2018-12-25 Oledworks Gmbh LED device, LED driver, and driving method US20180242425A1 (en) * 2015-03-30 2018-08-23 Oledworks Gmbh Led device, led driver, and driving method US9729708B2 (en) * 2015-08-17 2017-08-08 Disney Enterprises, Inc. Methods and systems for altering features of mobile devices KR102084209B1 (en) * 2015-09-16 2020-04-23 삼성전자주식회사 Electromagnetic interference signal detection KR20180042454A (en) * 2015-09-16 2018-04-25 삼성전자주식회사 Electromagnetic disturbance signal detection WO2017048066A1 (en) * 2015-09-16 2017-03-23 Samsung Electronics Co., Ltd. 
Electromagnetic interference signal detection US9886098B2 (en) * 2015-12-08 2018-02-06 Georgia Tech Research Corporation Personality identified self-powering keyboard US20170160817A1 (en) * 2015-12-08 2017-06-08 Georgia Tech Research Corporation Personality identified self-powering keyboard US20180011572A1 (en) * 2015-12-22 2018-01-11 Wuhan China Star Optoelectronics Technology Co. Ltd. Touch display device with tactile feedback function and driving method thereof US9916024B2 (en) * 2015-12-22 2018-03-13 Wuhan China Star Optoelectronics Technology Co., Ltd. Touch display device with tactile feedback function and driving method thereof US10120512B2 (en) 2016-04-08 2018-11-06 Microsoft Technology Licensing, Llc Hover sensor US10503330B2 (en) * 2016-05-31 2019-12-10 Samsung Electronics Co., Ltd Method and electronic device for obtaining touch input US20170344155A1 (en) * 2016-05-31 2017-11-30 Samsung Electronics Co., Ltd. Method and electronic device for obtaining touch input US10503266B2 (en) 2016-10-17 2019-12-10 Samsung Electronics Co., Ltd. Electronic device comprising electromagnetic interference sensor US20190317642A1 (en) * 2018-04-13 2019-10-17 Tactual Labs Co. Capacitively coupled conductors US10908753B2 (en) * 2018-04-13 2021-02-02 Tactual Labs Co. Capacitively coupled conductors US11363700B2 (en) * 2018-09-29 2022-06-14 Anhui Djx Information Technology Co., Ltd. Intelligent control system for touch switch based on infrared sensor detection US20210109615A1 (en) * 2019-10-14 2021-04-15 RET Equipment Inc. 
Resistive pressure sensor device system

Legal Events (Code: Title. Description):
AS: Assignment. Owner name: SAMSUNG ELECTRONICS COMPANY, LTD., KOREA, REPUBLIC. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: POST, ERNEST REHMI; BAU, OLIVIER; TSEKOV, ILIYA; AND OTHERS; SIGNING DATES FROM 20151110 TO
STCF: Information on status: patent. Free format text: PATENTED CASE
MAFP: Maintenance fee payment. Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4
SIR Model for the Spread of Tuberculosis in Kudus Regency - World Scientific News

The purpose of this study was to analyze the spread of tuberculosis in Kudus Regency. One analysis that can be done is to construct a mathematical SIR model. The SIR model describes uninfected individuals who are susceptible to infection (Susceptible), individuals who are infected and can transmit the disease to others (Infectious), and individuals who have recovered or are free from the disease (Recovered). According to the Health Profile of Kudus Regency, the spread of tuberculosis increased in all cases from 2018 to 2019: the number of tuberculosis patients in Kudus Regency reached 3,133, and the number of recovered individuals reached 589. Based on the analysis of the SIR model, the equilibrium point (S, I) = (52632, 8614230) is found to be stable when R0 > 1. The basic reproduction number R0 obtained indicates that one infected individual can, on average, infect 2 other susceptible individuals.
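The SIR dynamics summarized in the abstract can be sketched numerically. The sketch below is illustrative only: the transmission rate `beta`, recovery rate `gamma`, and population figures are hypothetical values chosen so that R0 = beta/gamma = 2, matching the abstract's conclusion that one infected individual infects about 2 susceptible people on average; it is not the study's fitted model.

```python
# Minimal forward-Euler SIR simulation (illustrative sketch).
# All parameters are hypothetical, chosen so R0 = beta/gamma = 2.
def simulate_sir(s0, i0, r0_init, beta, gamma, days, dt=0.1):
    n = s0 + i0 + r0_init
    s, i, r = float(s0), float(i0), float(r0_init)
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # S -> I transitions this step
        new_rec = gamma * i * dt          # I -> R transitions this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

beta, gamma = 0.4, 0.2
R0 = beta / gamma
print(R0)  # 2.0 -> the epidemic grows while most of the population is susceptible

s, i, r = simulate_sir(s0=99990, i0=10, r0_init=0, beta=beta, gamma=gamma, days=200)
print(round(s + i + r))  # 100000: the total population is conserved
```

With R0 above 1, the infectious compartment grows at first and then burns out as susceptibles are depleted, which is the qualitative behavior the stability condition R0 > 1 refers to.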
Interpretation of Time Interest Earned Ratio in context of time interest earned ratio

26 Aug 2024

Title: An Examination of the Time Interest Earned Ratio: A Critical Analysis of its Interpretation and Implications

The Time Interest Earned (TIE) ratio is a widely used metric in accounting to assess an entity's ability to generate interest income from investments. However, the interpretation of this ratio has been subject to various criticisms and debates among scholars. This article provides a comprehensive analysis of the TIE ratio, its formula, and its implications for financial decision-making.

The Time Interest Earned (TIE) ratio is calculated as follows:

TIE Ratio = (Interest Income / Average Investment) x 360

This ratio measures an entity's ability to generate interest income from investments over a specific period. A higher TIE ratio indicates that the entity is generating sufficient interest income to cover its average investment, while a lower ratio suggests that the entity may not be earning enough interest income.

Interpretation of the TIE Ratio:

The interpretation of the TIE ratio has been subject to various criticisms and debates among scholars. Some argue that the TIE ratio is a useful metric for assessing an entity's ability to generate interest income, while others contend that it is a flawed measure that does not accurately reflect an entity's financial performance.

One criticism of the TIE ratio is that it does not take into account the time value of money. The formula assumes that the interest income earned by an entity is equivalent to the average investment multiplied by 360, ignoring the fact that interest income is typically earned over a specific period. This can lead to inaccurate conclusions about an entity's financial performance.

Another criticism of the TIE ratio is that it does not account for the risk associated with investments. The formula assumes that all investments are equally risky, which may not be the case in reality. This, too, can lead to inaccurate conclusions about an entity's ability to generate interest income from investments.

The interpretation of the TIE ratio has significant implications for financial decision-making. If the TIE ratio is used as the sole metric to assess an entity's ability to generate interest income, it may lead to inaccurate conclusions about the entity's financial performance. This can have serious consequences for investors and other stakeholders who rely on accurate information to make informed decisions.

In conclusion, the interpretation of the Time Interest Earned (TIE) ratio is a complex issue that requires careful consideration of various factors. While the TIE ratio may be a useful metric in certain contexts, it should not be relied upon as the sole measure of an entity's ability to generate interest income from investments. A more comprehensive approach that takes into account the time value of money and the risk associated with investments is necessary to accurately assess an entity's financial performance.

Note: The above article is a general outline and does not provide any numerical examples or specific data. It is intended to be a theoretical discussion on the interpretation of the Time Interest Earned ratio.
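The formula stated above can be expressed as a small function. This is a sketch of the article's own definition, (Interest Income / Average Investment) x 360, not the conventional EBIT-over-interest-expense form of the ratio; the function name and the sample figures are hypothetical.

```python
def tie_ratio(interest_income, average_investment):
    """TIE ratio as defined in the article:
    (Interest Income / Average Investment) x 360."""
    if average_investment == 0:
        raise ValueError("average investment must be non-zero")
    return interest_income / average_investment * 360

# Hypothetical figures: 5,000 of interest income earned on an
# average investment of 100,000 over the period.
print(tie_ratio(5_000, 100_000))  # 18.0
```

Note that the function, like the formula it implements, ignores both the timing of the interest cash flows and the riskiness of the underlying investments, which are exactly the two criticisms raised above.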
Understanding Place Value

Why is the number 6 so scared? Because 7 ate 9!

In this section, we will learn how to determine the value of digits in whole and decimal numbers. We will do this by working out the place, or position, of a digit in a number using the place value system. Each place, or position, in the place value system has a value of 10 times the place to its right. We will show the value of each digit in a number by expressing the number in words and by writing the number in expanded form.

What is place value?

Place value is part of a system in which the position of a digit shows us its value. The most common place value system is the decimal system, which has a base of ten; it is the system you most commonly use in everyday life. In the decimal system, each position has a value that is a power of ten: moving one position to the left multiplies the place value by 10, and moving one position to the right divides it by 10. A decimal point separates the places with non-negative powers of ten from those with negative powers.

How to find place value

Let's try some examples to figure out how to determine a digit's place value.

Question 1: Write the value of 5 in the number 6530.

All of the digits in this number are on the non-negative side of the decimal point. Imagine a decimal point placed after the number (6530.) and you can see how many positions to the left of it each digit sits, which gives you its place value. The lowest place value in this number is the 0, since it's at the far right. This 0 is in the ones position, one digit before the decimal point: we have 0 ones in this number (0 x 1 = 0). The 3 is in the tens position, meaning we have 3 tens (3 x 10 = 30). The 5 is in the hundreds position (3 digits to the left of the decimal point).
This can be written as 5 x 100 = 500. Therefore, we've found that the value of 5 is 500.

Question 2: Write the value of 5 in the number 53,694.

The lowest place value is the 4; being right before the decimal point, it means we have 4 ones in this number (4 x 1 = 4). The 9 is in the tens position, so its value is 9 x 10 = 90. The 6 is in the hundreds position, meaning we have 6 hundreds (6 x 100 = 600). Then we have 3 in the thousands position, which is 3 x 1000 = 3000. Now we finally reach the 5, which has the highest place value in this number: the ten thousands position. Since 5 x 10,000 = 50,000, the value of 5 is 50,000.

Question 3: Write the value of 5 in the following number.

Luckily for us, the 5 in this number is in the lowest place value. Once again, by placing an imaginary decimal point at the end of the number (all of the place values are non-negative powers of 10), we can solve this problem. The 5 is one position to the left of the decimal point, meaning it's in the ones position. That gives us 5 ones in this number, and since 5 x 1 = 5, the value of 5 is 5.

Question 4: Write the value of 5 in the following number.

When we look to the right of the decimal point, the first digit after the decimal is in the tenths position. What is the value of 5 in this case? Since it's the first digit after the decimal point, 5 is in the tenths position. In other words, we have 5 tenths, which can be written as 5 x 0.1, giving a final answer of 0.5. (The 0.1 stands for tenths.)

If you'd like to take a further look into place value, try breaking a few numbers of your own into their place values.
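The procedure used in the questions above — find a digit's position relative to the decimal point, then multiply the digit by the matching power of ten — can be written out as a short Python sketch. The function name is our own, not part of the lesson:

```python
def digit_values(number_str: str) -> dict:
    """Map each digit's index in the numeral to (digit, place value),
    where place value = digit x 10**position relative to the decimal point."""
    if "." not in number_str:
        number_str += "."          # imaginary decimal point at the end
    point = number_str.index(".")
    values = {}
    for i, ch in enumerate(number_str):
        if ch == ".":
            continue
        # positions left of the point get powers 0, 1, 2, ...;
        # positions right of the point get powers -1, -2, ...
        power = point - i - 1 if i < point else point - i
        values[i] = (ch, int(ch) * 10 ** power)
    return values

# Question 1 above: the 5 in 6530 sits in the hundreds place.
print(digit_values("6530")[1])  # ('5', 500)
```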
Two Avenues In order to make the capital of Berland a more attractive place for tourists, the great king came up with the following plan: choose two streets of the city and call them avenues. Certainly, these avenues will be proclaimed extremely important historical places, which should attract tourists from all over the world. The capital of Berland can be represented as a graph, the vertices of which are crossroads, and the edges are streets connecting two crossroads. In total, there are \(n\) vertices and \(m\) edges in the graph, you can move in both directions along any street, you can get from any crossroad to any other by moving only along the streets, each street connects two different crossroads, and no two streets connect the same pair of crossroads. In order to reduce the flow of ordinary citizens moving along the great avenues, it was decided to introduce a toll on each avenue in both directions. Now you need to pay \(1\) tugrik for one passage along the avenue. You don't have to pay for the rest of the streets. Analysts have collected a sample of \(k\) citizens, \(i^{th}\) of them needs to go to work from the crossroad \(a_i\) to the crossroad \(b_i\). After two avenues are chosen, each citizen will go to work along the path with minimal cost. In order to earn as much money as possible, it was decided to choose two streets as two avenues, so that the total number of tugriks paid by these \(k\) citizens is maximized. Help the king: according to the given scheme of the city and a sample of citizens, find out which two streets should be made avenues, and how many tugriks the citizens will pay according to this choice. Input format The first line contains two integers \(n\) and \(m\) \((3 \leq n \leq 200000, n − 1 \leq m \leq 200000, m \leq \frac{n(n − 1)}{2})\) — the number of crossroads and streets, respectively. The next \(m\) lines contain the description of streets. 
The \(i^{th}\) line contains two integers \(s_i\) and \(f_i\) \((1 \leq s_i, f_i \leq n, s_i \neq f_i)\) — indexes of crossroads which are connected by the \(i^{th}\) street. It is guaranteed that no two streets connect the same pair of crossroads, and you can get from any crossroad to any other by moving only along the streets.

The next line contains a single integer \(k\) \((1 \leq k \leq 200000)\) — the number of citizens in the sample.

The next \(k\) lines contain the description of citizens. The \(i^{th}\) line contains two integers \(a_i\) and \(b_i\) \((1 \leq a_i, b_i \leq n)\) — the \(i^{th}\) citizen goes to work from crossroad \(a_i\) to crossroad \(b_i\). Note that it is possible for \(a_i = b_i\).

Output format

Print the total amount of tugriks that will be paid by citizens.

Subtask 1 (score 6): \(n, m, k \leq 200\)
Subtask 2 (score 7): \(n, m, k \leq 2000\)
Subtask 3 (score 10): \(n - 1 = m\)
Subtask 4 (score 16): \(n = m\)
Subtask 5 (score 20): the graph is a cactus; each edge is in at most one simple cycle
Subtask 6 (score 41): no additional constraints

Sample Input 1
Sample Output 1
Sample Input 2
Sample Output 2
Sample Input 3
Sample Output 3
The Mobile Neural Network Lottery - Fritz ai

How to Find Your Winning Ticket

Note: The original research report on which this post is based can be found here

If you're one of those people with a large number of GPU compute nodes in your basement, or if you work for Google, training neural networks for challenging tasks in a short time is probably easy for you. However, if you don't happen to be one of those lucky ones, you're probably experiencing trade-offs between model size, training time, and performance. This problem is particularly pronounced on mobile and embedded systems, where computing power is very limited. Luckily, if you have powerful processors to train on and only care about doing inference/prediction on mobile devices, there are some techniques that have you covered.

Current approaches

One way would be to start with a large network and train it using certain regularizers that encourage many weights to converge to zero over the course of training, thereby effectively pruning the network. Another way would be to train the whole network until convergence and then prune it after training, removing the connections that contribute least to the predictive performance. Finally, you can also train a large performant model and then distill its knowledge into a smaller model by using it as a "teacher" for the smaller "student" model.

The common thread in all these methods is that you need to start training with a large network, which requires powerful computing architecture. Once the model is trained and pruned or distilled, you can efficiently make predictions on a mobile system. But what do you do if you want to also do the training on the mobile device? Imagine an application where you want to collect user data and build a predictive model from it, but you don't want to transfer the data to your servers, e.g. for privacy reasons.
Naïvely, one could imagine just using the pruned network architectures and training them from scratch on the device. Alas, this doesn't work well, and it's tricky to understand why. So how else could we tackle this problem?

The Lottery Ticket Hypothesis

It turns out that viewing pruning as identifying a sufficient model architecture could be misguided. Instead, we should perhaps view it as identifying a subnetwork that was randomly initialized with a good set of weights for the learning task. This recently published idea is called the "Lottery Ticket Hypothesis."

The hypothesis states that, before training, random initialization will mostly assign weights that are inefficient for the task at hand. By random chance, some subnetwork will be assigned a slightly better set of weights than the others. This subnetwork will therefore converge more quickly and will take on most of the responsibility for the actual prediction task over the course of training.

After training, it's highly likely that this subnetwork will be responsible for most of the network's performance, while the rest of the network might loosely support the task or be effective for some corner cases only. In line with the lottery metaphor, the authors of the paper call this special subnetwork the "winning ticket".

Using the lottery ticket hypothesis, we can now easily explain the observation that large neural networks are more performant than small ones, yet can still be pruned after training without much loss in performance. A larger network simply contains more subnetworks with randomly initialized weights. Therefore, the probability of finding a winning ticket at a certain performance level (or, equivalently, the average performance of the network's winning ticket) is higher. (Note that when we scale the size of all layers in a network linearly, the number of weights grows quadratically, but the number of credit assignment paths or subnetworks grows exponentially due to combinatorics!)
Empirical results

This is a compelling idea, but in order to know whether it's actually true, the authors had to conduct a few experiments. In their first experiment, they trained the LeNet architecture on MNIST and pruned it after training to varying degrees (they call this one-shot pruning). They then reinitialized the pruned networks with the exact same weights used in the first training and retrained the resulting smaller networks on the same task. They found that they can prune up to 80% of the weights while still retaining the same performance (Figure 1, left and middle). Moreover, they even achieve faster convergence with the pruned networks. If they initialize the pruned networks with new random weights, they do not observe this effect (Figure 1, right).

To test whether the structure of the pruned networks is actually also optimized for the task, as opposed to only the initial weights, the authors also compare the winning tickets to a setting where they keep the initial weights but rearrange the connections. They also try a different pruning method called iterative pruning, where they prune the networks only a little bit after training, then re-initialize, retrain, and repeat the whole process a couple of times. This is obviously more expensive than one-shot pruning, but they found it yields better results. Their results suggest that iteratively pruned winning tickets are superior to one-shot pruned ones, which are in turn superior to re-initializations of the weights and rearrangements of the connections, in terms of both performance and convergence speed (Figure 2).

Why does it work?

That all looks pretty neat, right? So what's the secret of the winning tickets? What do their weights look like?
It turns out that while the weights across the whole network are initialized from a Gaussian (so they have a mode at zero), the weights of the winning tickets have a bimodal distribution over the more extreme values of the initialization spectrum (Figure 3). This effect gets stronger with the amount of pruning, i.e. the most extreme values are the last ones standing after you have pruned away all the ones closer to zero. The effect is especially pronounced in the first layers of the network.

So does this intriguing phenomenon suggest that you should initialize your weights from such a bimodal distribution in the first place? Sadly, the authors do not report any experiment along these lines, but I would doubt that it's that easy. For subnetworks that are incidentally already aligned with your task objective, it makes sense that larger weights would make them even more effective. However, there are many more subnetworks that aren't randomly suited to your task already, and these would probably have a harder time learning the task if initialized with larger weights. Ultimately, the most successful strategy may well be very similar to what the authors of the paper did: initializing your weights from a Gaussian and then finding, in retrospect, the optimal subnetwork with large weights that are actually helpful for your task.

So what can we take away from this new research? If we want to build applications like the one described above, where we train neural networks on users' mobile devices directly, we should first train a five to ten times larger architecture on similar data on our GPU machines. We can then use iterative pruning to find the small winning-ticket subnetwork and record its initial weight assignments. Finally, we can deploy the winning tickets with fixed weight initializations to the mobile devices and train them on the user data on-device, achieving similar performance to our larger networks but with faster convergence times.
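The recipe just described — train, prune the smallest-magnitude weights, rewind the survivors to their initial values, and repeat — can be illustrated with a toy NumPy loop. This is our own sketch, not the authors' code; in particular, the multiplicative "training" step is a stand-in for real gradient descent, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def find_winning_ticket(init_w, train_step, rounds=3, prune_frac=0.2):
    """Iterative magnitude pruning with rewinding to initial weights.
    Each round: train the masked network, drop the prune_frac smallest
    surviving weights by magnitude, then reset survivors to their
    values at initialization."""
    mask = np.ones_like(init_w, dtype=bool)
    w = init_w.copy()
    for _ in range(rounds):
        w = train_step(w) * mask                  # "train" the masked network
        alive = np.abs(w[mask])
        cutoff = np.quantile(alive, prune_frac)   # magnitude threshold
        mask &= np.abs(w) > cutoff                # prune the smallest weights
        w = init_w * mask                         # rewind: keep original inits
    return mask, w

init = rng.normal(size=100)
ticket_mask, ticket_w = find_winning_ticket(init, train_step=lambda w: w * 1.1)
print(ticket_mask.mean())  # fraction of weights surviving
```

With 20% of the surviving weights dropped per round, roughly half the weights remain after three rounds, and those survivors still carry their original initialization: the "winning ticket".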
If this all works out, we can sell our great privacy-preserving mobile neural network model and get rich without even having to win the lottery. If you liked this story, you can follow me on Medium and Twitter. Discuss this post on Hacker News.
How to use the simsurv package

Usage examples

Example 1: Simulating under a standard parametric survival model

This first example shows how the simsurv package can be used to generate event times under a relatively standard Weibull proportional hazards model. This will be demonstrated as part of a simple simulation study. The simulated event times will be generated under the following conditions:

• a monotonically increasing baseline hazard function, achieved by specifying a Weibull baseline hazard with a \(\gamma\) parameter of 1.5;
• the effect of a protective treatment obtained by specifying a binary covariate with log hazard ratio of -0.5;
• a maximum follow up time by censoring any individuals with a simulated survival time larger than five years.

The objective of the simulation study will be to assess the bias and coverage of the estimated treatment effect. This will be achieved by:

• generating 100 simulated datasets (ideally it should be more than 100 datasets, but we don't want the vignette to take forever to build!), each containing \(N = 200\) individuals;
• fitting a Weibull proportional hazards model to each simulated dataset using the flexsurv package;
• calculating mean bias and mean coverage (of the estimated treatment effect) across the 100 simulated datasets.

The code for performing the simulation study and the results are shown below.
# Define a function for analysing one simulated dataset
sim_run <- function() {
  # Create a data frame with the subject IDs and treatment covariate
  cov <- data.frame(id = 1:200, trt = rbinom(200, 1, 0.5))
  # Simulate the event times
  dat <- simsurv(lambdas = 0.1, gammas = 1.5, betas = c(trt = -0.5),
                 x = cov, maxt = 5)
  # Merge the simulated event times onto covariate data frame
  dat <- merge(cov, dat)
  # Fit a Weibull proportional hazards model
  mod <- flexsurv::flexsurvspline(Surv(eventtime, status) ~ trt, data = dat)
  # Obtain estimates, standard errors and 95% CI limits
  est <- mod$coefficients[["trt"]]
  ses <- sqrt(diag(mod$cov))[["trt"]]
  cil <- est + qnorm(.025) * ses
  ciu <- est + qnorm(.975) * ses
  # Return bias and coverage indicator for treatment effect
  c(bias = est - (-0.5), coverage = ((-0.5 > cil) && (-0.5 < ciu)))
}

# Set seed for simulations
# Perform 100 replicates in simulation study
rowMeans(replicate(100, sim_run()))

##       bias   coverage
## 0.02842414 0.90000000

Here we see that there is very little bias in the estimates of the log hazard ratio for the treatment effect, and the 95% confidence intervals are near their intended level of coverage.

Example 2: Simulating under a flexible parametric survival model

Next, we will simulate event times under a slightly more complex parametric survival model that incorporates a flexible baseline hazard. In this example we will use the publically accessible German breast cancer dataset. This dataset is included with the simsurv R package (see help(simsurv::brcancer) for a description of the dataset). Let us look at the first few rows of the dataset:

##   id hormon rectime censrec
## 1  1      0    1814       1
## 2  2      1    2018       1
## 3  3      1     712       1
## 4  4      1    1807       1
## 5  5      0     772       1
## 6  6      0     448       1

Now let us fit two parametric survival models to the breast cancer data:

• one Weibull survival model; and
• one flexible parametric survival model.

The flexible parametric survival model will be based on the method of Royston and Parmar (2002); i.e.
restricted cubic splines are used to approximate the log cumulative baseline hazard. This model can be estimated using the flexsurvspline function from the flexsurv package (Jackson (2016)). We will use three internal knots (i.e. four degrees of freedom) for the restricted cubic splines, with the knot points placed at evenly spaced percentiles of the distribution of observed event times (obtained by specifying the argument k = 3 in the code below). We can also estimate the Weibull proportional hazards model using the flexsurvspline function from the flexsurv package, by specifying no internal knots (i.e. specifying k = 0).

# Fit the Weibull survival model
mod_weib <- flexsurv::flexsurvspline(Surv(rectime, censrec) ~ hormon,
                                     data = brcancer, k = 0)

# Fit the flexible parametric survival model
mod_flex <- flexsurv::flexsurvspline(Surv(rectime, censrec) ~ hormon,
                                     data = brcancer, k = 3)

Now let us compare the fit of the two models by plotting each of the fitted survival functions on top of the Kaplan-Meier survival curve.

par(mfrow = c(1,2), cex = 0.85) # graphics parameters
plot(mod_weib,
     main = "Weibull model", ylab = "Survival probability",
     xlab = "Time")
plot(mod_flex,
     main = "Flexible parametric model", ylab = "Survival probability",
     xlab = "Time")

There is evidence in the plots that the flexible parametric model fits the data better than the standard Weibull model. Therefore, if we wanted to simulate event times from a data generating process similar to that of the breast cancer data, then using a Weibull distribution may not be adequate. Rather, it would be more appropriate to simulate event times under the flexible parametric model. We will demonstrate how the simsurv package can be used to do this. The estimated parameters from the flexible parametric model will be used as the "true" parameters for the simulated event times.
The event times can be generated under a user-specified log cumulative hazard function that is equivalent to the Royston and Parmar specification used by the flexsurv package. First, the log cumulative hazard function for this model needs to be defined as a function in the R session. The user-defined function passed to simsurv must always have the following three arguments:

• t: scalar specifying the current time at which to evaluate the hazard
• x: a named list with the covariate data
• betas: a named list with the "true" parameters

Each of these arguments provides information that is used in evaluating the hazard \(h_i(t)\), log hazard \(\log h_i(t)\), cumulative hazard \(H_i(t)\), or log cumulative hazard \(\log H_i(t)\) (depending on which type of user-specified function is being provided). These three arguments (t, x, betas) can then be followed in the function signature by any additional arguments that may be necessary. For example, in the function definition below, the first three arguments are followed by an additional argument knots, which allows the calculation of the log cumulative hazard at time \(t\) to depend on the knot locations for the splines.

# Define a function returning the log cum hazard at time t
logcumhaz <- function(t, x, betas, knots) {
  # Obtain the basis terms for the spline-based log
  # cumulative hazard (evaluated at time t)
  basis <- flexsurv::basis(knots, log(t))
  # Evaluate the log cumulative hazard under the
  # Royston and Parmar specification
  res <- betas[["gamma0"]] * basis[[1]] +
    betas[["gamma1"]] * basis[[2]] +
    betas[["gamma2"]] * basis[[3]] +
    betas[["gamma3"]] * basis[[4]] +
    betas[["gamma4"]] * basis[[5]] +
    betas[["hormon"]] * x[["hormon"]]
  # Return the log cumulative hazard at time t
  res
}

Next, we will show how to use the simsurv function to simulate event times under the flexible parametric model. To demonstrate this, we will again generate the event times as part of a simulation study.
The objective of the simulation study will be to assess the bias and coverage of the estimated log hazard ratio for hormone therapy. This will be achieved by:

• generating 100 simulated datasets (ideally it should be more than 100 datasets, but we don't want the vignette to take forever to build!), each containing \(N = 200\) individuals. The simulated event times will be generated under our flexible parametric model (with the "true" parameter values taken from fitting a model to the German breast cancer data);
• fitting both a Weibull model and a flexible parametric model to each simulated dataset;
• calculating the mean bias (across the 100 simulated datasets) in the log hazard ratio for hormone therapy under the Weibull model and the flexible parametric models.

# Fit the model to the brcancer dataset to obtain the "true"
# parameter values that will be used in our simulation study
true_mod <- flexsurv::flexsurvspline(Surv(rectime, censrec) ~ hormon,
                                     data = brcancer, k = 3)

# Define a function to generate one simulated dataset, fit
# our two models (Weibull and flexible) to the simulated data
# and then return the bias in the estimated effect of hormone
# therapy under each fitted model
sim_run <- function(true_mod) {
  # Create a data frame with the subject IDs and treatment covariate
  cov <- data.frame(id = 1:200, hormon = rbinom(200, 1, 0.5))
  # Simulate the event times
  dat <- simsurv(betas = true_mod$coefficients, # "true" parameter values
                 x = cov,                       # covariate data for 200 individuals
                 knots = true_mod$knots,        # knot locations for splines
                 logcumhazard = logcumhaz,      # definition of log cum hazard
                 maxt = NULL,                   # no right-censoring
                 interval = c(1E-8, 100000))    # interval for root finding
  # Merge the simulated event times onto covariate data frame
  dat <- merge(cov, dat)
  # Fit a Weibull proportional hazards model
  weib_mod <- flexsurv::flexsurvspline(Surv(eventtime, status) ~ hormon,
                                       data = dat, k = 0)
  # Fit a flexible parametric proportional hazards model
  flex_mod <- flexsurv::flexsurvspline(Surv(eventtime, status) ~ hormon,
                                       data = dat, k = 3)
  # Obtain estimates, standard errors and 95% CI limits for hormone effect
  true_loghr <- true_mod$coefficients[["hormon"]]
  weib_loghr <- weib_mod$coefficients[["hormon"]]
  flex_loghr <- flex_mod$coefficients[["hormon"]]
  # Return bias and coverage indicator for hormone effect
  c(weib_bias = weib_loghr - true_loghr,
    flex_bias = flex_loghr - true_loghr)
}

# Set a seed for the simulations
# Perform the simulation study using 100 replicates
rowMeans(replicate(100, sim_run(true_mod = true_mod)))

##    weib_bias    flex_bias
## -0.029244642 -0.008417815

Example 3: Simulating under a Weibull model with time-dependent effects

This short example shows how to simulate data under a standard Weibull survival model that incorporates a time-dependent effect (i.e. non-proportional hazards). For the time-dependent effect we will include a single binary covariate (e.g. a treatment indicator) with a protective effect (i.e. a negative log hazard ratio), but we will allow the effect of the covariate to diminish over time. The data generating model will be

\[ h_i(t) = \gamma \lambda (t ^{\gamma - 1}) \exp(\beta_0 X_i + \beta_1 X_i \times \log(t)) \]

where \(X_i\) is the binary treatment indicator for individual \(i\), \(\lambda\) and \(\gamma\) are the scale and shape parameters for the Weibull baseline hazard, \(\beta_0\) is the log hazard ratio for treatment when \(t = 1\) (i.e. when \(\log(t) = 0\)), and \(\beta_1\) quantifies the amount by which the log hazard ratio for treatment changes for each one unit increase in \(\log(t)\). Here we are assuming the time-dependent effect is induced by interacting the log hazard ratio with log time, but we could have used some other function of time (for example linear time, \(t\), or time squared, \(t^2\), if we had wanted to).
We will simulate data for \(N = 5000\) individuals under this model, with a maximum follow up time of five years, and using the following "true" parameter values for the data generating model:

• \(\beta_0 = -0.5\)
• \(\beta_1 = 0.15\)
• \(\lambda = 0.1\)
• \(\gamma = 1.5\)

covs <- data.frame(id = 1:5000, trt = rbinom(5000, 1, 0.5))
simdat <- simsurv(dist = "weibull", lambdas = 0.1, gammas = 1.5,
                  betas = c(trt = -0.5), x = covs,
                  tde = c(trt = 0.15), tdefunction = "log", maxt = 5)
simdat <- merge(simdat, covs)

##   id eventtime status trt
## 1  1  2.091009      1   1
## 2  2  5.000000      0   1
## 3  3  5.000000      0   1
## 4  4  5.000000      0   1
## 5  5  5.000000      0   1
## 6  6  4.930287      1   0

Then let us fit a flexible parametric model with two internal knots (i.e. 3 degrees of freedom) for the baseline hazard, and a time-dependent hazard ratio for the treatment effect. For the time-dependent hazard ratio we will use an interaction with log time (the same as used in the data generating model); this can be easily achieved using the stpm2 function from the rstpm2 package (Clements and Liu (2017)) and specifying the tvc option. Note that the rstpm2 and flexsurv packages can both be used to fit the Royston and Parmar flexible parametric survival model; however, they differ slightly in their post-estimation functionality and other possible extensions. Here, we use the rstpm2 package because it allows us to easily specify time-dependent effects and then plot the time-dependent hazard ratio after fitting the model (as shown in the code below).

The model with the time-dependent effect for treatment can be estimated using the following code. And for comparison we can fit the corresponding model, but without the time-dependent effect for treatment (i.e.
assuming proportional hazards instead).

Now, we can plot the time-dependent hazard ratio and the time-fixed hazard ratio on the same plot region using the following code:

plot(mod_tvc, newdata = data.frame(trt = 0), type = "hr", var = "trt",
     ylim = c(0,1), ci = TRUE, rug = FALSE,
     main = "Time dependent hazard ratio",
     ylab = "Hazard ratio", xlab = "Time")
plot(mod_ph, newdata = data.frame(trt = 0), type = "hr", var = "trt",
     ylim = c(0,1), add = TRUE, ci = FALSE, lty = 2)

From the plot we can see the diminishing effect of treatment under the model with the time-dependent hazard ratio; as time increases, the hazard ratio approaches a value of 1. Moreover, note that the hazard ratio is approximately equal to 0.6 (i.e. \(\exp(-0.5)\)) when \(t = 1\), which is what we specified in the data generating model.

Example 4: Simulating under a joint model for longitudinal and survival data

This example shows how the simsurv package can be used to simulate event times under a shared parameter joint model for longitudinal and survival data. We will simulate event times according to the following model formulation for the longitudinal submodel

\[ Y_i(t) \sim N(\mu_i(t), \sigma_y^2) \]
\[ \mu_i(t) = \beta_{0i} + \beta_{1i} t + \beta_2 x_{1i} + \beta_3 x_{2i} \]
\[ \beta_{0i} = \beta_{00} + b_{0i} \]
\[ \beta_{1i} = \beta_{10} + b_{1i} \]
\[ (b_{0i}, b_{1i})^T \sim N(0, \Sigma) \]

and the event submodel

\[ h_i(t) = \delta (t^{\delta-1}) \exp (\gamma_0 + \gamma_1 x_{1i} + \gamma_2 x_{2i} + \alpha \mu_i(t)) \]

where \(x_{1i}\) is an indicator variable for a binary covariate, \(x_{2i}\) is a continuous covariate, \(b_{0i}\) and \(b_{1i}\) are individual-level parameters (i.e. random effects) for the intercept and slope for individual \(i\), the \(\beta\) and \(\gamma\) terms are population-level parameters (i.e. fixed effects), and \(\delta\) is the shape parameter for the Weibull baseline hazard.

This specification allows for an individual-specific linear trajectory for the longitudinal submodel, a Weibull baseline hazard in the event submodel, a current value association structure, and the effects of a binary and a continuous covariate in both the longitudinal and event submodels. To simulate from this model using simsurv, we need to first explicitly define the hazard function. The code defining a function that returns the hazard for this joint model is:

# First we define the hazard function to pass to simsurv
# (NB this is a Weibull proportional hazards regression submodel
# from a joint longitudinal and survival model with a "current
# value" association structure)
haz <- function(t, x, betas, ...) {
  betas[["delta"]] * (t ^ (betas[["delta"]] - 1)) * exp(
    betas[["gamma_0"]] +
    betas[["gamma_1"]] * x[["x1"]] +
    betas[["gamma_2"]] * x[["x2"]] +
    betas[["alpha"]] * (
      betas[["beta_0i"]] +
      betas[["beta_1i"]] * t +
      betas[["beta_2"]] * x[["x1"]] +
      betas[["beta_3"]] * x[["x2"]]
    )
  )
}

The next step is to define the "true" parameter values and covariate data for each individual. This is achieved by specifying two data frames: one for the parameter values, and one for the covariate data. Each row of the data frame will correspond to a different individual.
The R code to achieve this is:

# Then we construct data frames with the true parameter
# values and the covariate data for each individual

set.seed(5454) # set seed before simulating data

N <- 200 # number of individuals

# Population (fixed effect) parameters
betas <- data.frame(
  delta   = rep(2, N),
  gamma_0 = rep(-11.9, N),
  gamma_1 = rep(0.6, N),
  gamma_2 = rep(0.08, N),
  alpha   = rep(0.03, N),
  beta_0  = rep(90, N),
  beta_1  = rep(2.5, N),
  beta_2  = rep(-1.5, N),
  beta_3  = rep(1, N)
)

# Individual-specific (random effect) parameters
b_corrmat <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
b_sds     <- c(20, 3)
b_means   <- rep(0, 2)
b_z       <- MASS::mvrnorm(n = N, mu = b_means, Sigma = b_corrmat)
b         <- sapply(1:length(b_sds), FUN = function(x) b_sds[x] * b_z[,x])
betas$beta_0i <- betas$beta_0 + b[,1]
betas$beta_1i <- betas$beta_1 + b[,2]

# Covariate data
covdat <- data.frame(
  x1 = stats::rbinom(N, 1, 0.45), # a binary covariate
  x2 = stats::rnorm(N, 44, 8.5)   # a continuous covariate
)

The final step is to then generate the simulated event times using a call to the simsurv function. The only arguments that need to be specified are the user-defined hazard function, the true parameter values, and the covariate data. In this example we will also specify a maximum follow up time of ten units (for example, ten years, after which individuals will be censored if they have not yet experienced the event).
The code to generate the simulated event times is

```r
# Set seed for simulations

# Then simulate the survival times based on the user-defined
# hazard function, covariate data, and true parameter values
times <- simsurv(hazard = haz, x = covdat, betas = betas, maxt = 10)
```

We can then examine the first few rows of the resulting data frame, to see the simulated event times and event indicator:

```
##   id  eventtime status
## 1  1  4.0795945      1
## 2  2 10.0000000      0
## 3  3  4.1868793      1
## 4  4  0.2630766      1
## 5  5  7.5213303      1
## 6  6  4.0806461      1
```

Of course, we have only simulated the event times here; we haven't simulated any observed values for the longitudinal outcome. Moreover, although the simsurv package can be used for simulating joint longitudinal and time-to-event data, it did take a bit of work and several lines of code to achieve. Therefore, it is worth noting that the simjm package (https://github.com/sambrilleman/simjm), which acts as a wrapper for simsurv, is designed specifically for this purpose. It can make the process a lot easier, since it shields the user from much of the work described in this example. Instead, the user can simulate joint longitudinal and time-to-event data using one function call to simjm::simjm, and a number of optional arguments are available to alter the exact specification of the shared parameter joint model.
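Under the hood, event times like these are generated by inversion: draw \(u \sim \mathrm{Uniform}(0,1)\), then solve \(S(t) = u\), i.e. \(H(t) = -\log u\), where the cumulative hazard \(H\) is obtained by numerical quadrature and the root is found iteratively (simsurv uses adaptive quadrature with Brent's method). The Python sketch below reproduces the idea for a plain Weibull proportional hazards model — the baseline of the joint model above with the time-varying association term dropped — using illustrative values `DELTA = 2` and `ETA = -2` (not the vignette's parameters), so the result can be checked against the closed-form inverse.

```python
# A sketch of the inversion approach: draw u ~ Uniform(0, 1), then solve
# H(t) = -log(u), where H(t) is a numerical integral of the hazard.
# Hazard here: h(t) = delta * t^(delta - 1) * exp(eta), with illustrative
# parameter values (NOT the vignette's), chosen so the inverse has a
# closed form we can compare against.
import math

DELTA, ETA = 2.0, -2.0

def hazard(t):
    return DELTA * t ** (DELTA - 1) * math.exp(ETA)

def cum_hazard(t, n=1000):
    # Trapezoidal approximation of H(t) = integral of h(s) ds over [0, t]
    if t <= 0:
        return 0.0
    step = t / n
    total = 0.5 * (hazard(0.0) + hazard(t))
    for i in range(1, n):
        total += hazard(i * step)
    return total * step

def simulate_time(u, maxt=10.0):
    # Solve H(t) = -log(u) by bisection; censor at maxt (status 0) if the
    # event has not occurred by then, mirroring simsurv's maxt argument
    target = -math.log(u)
    if cum_hazard(maxt) < target:
        return maxt, 0
    lo, hi = 0.0, maxt
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cum_hazard(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), 1

u = 0.3
t, status = simulate_time(u)
# For this Weibull hazard, H(t) = t^delta * exp(eta) inverts in closed form:
exact = (-math.log(u) * math.exp(-ETA)) ** (1.0 / DELTA)
```

The crude trapezoid-plus-bisection scheme stands in for the adaptive quadrature and Brent root-finding used in practice, but the principle is the same; for this linear hazard the two agree to high precision.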
Near rings without zero divisors

Near rings without zero divisors, and a dual structure, near codomains, are studied. It is shown that a near ring is a near field if and only if it is an integral near ring, a near codomain, and has a non-zero distributive element. If the additive group (N, +) of a near integral domain N is cohopfian, then (N, +) possesses a fixed point free automorphism which is either torsion free or of prime order. This generalizes a well-known theorem of Ligh for finite near integral domains. A result of Ganesan [1] on the non-zero divisors in a finite ring is generalized to near rings.
Mathematics: Modeling Our World, Course 4 (MMOW) (Student Edition) (Print) - COMAP

Chapter 1: Functions in Modeling
Chapter 2: The Exponential and Logarithmic Functions
Chapter 3: Polynomial Models
Chapter 4: Coordinate Systems and Vectors
Chapter 5: Matrices
Chapter 6: Analytic Geometry
Chapter 7: Counting and the Binomial Theorem
Chapter 8: Modeling Change with Discrete Dynamical Systems

Mathematics: Modeling Our World is an integrated core curriculum for high school based on the premise that students learn best when they are actively involved in the process. In this program students do not first learn mathematics and then apply what they've learned. Rather, important questions about the real world come first. Students analyze situations and apply the mathematical concepts needed to solve problems; contextual questions drive the mathematics. In each unit, students build, test, and present models that describe a real-world situation or problem, such as deciding where to build a fire station. Mathematical modeling is a central focus throughout the curriculum. Each course covers the mathematical content found in the NCTM Standards.

Each of the first three courses of Mathematics: Modeling Our World contains seven or eight units. Units are divided into four to seven lessons; each may take several days to complete. Each lesson contains a Lesson Opener, which provides the context for the lesson; Activities, which students work on in pairs or small groups using hands-on mathematical investigation; and Individual Work, items that review, reinforce, extend, practice, and foreshadow concepts developed in the lesson.

Course 4 comprises eight chapters and is intended to be a bridge between Courses 1, 2, and 3 and collegiate mathematics. The student text contains eight chapters divided into three to six lessons.
Each lesson contains an activity designed for group work, expository readings, and exercises. Each chapter ends with a set of review exercises.

Assessment is an integral part of Mathematics: Modeling Our World. Both Activities and Individual Work offer embedded opportunities to assess student progress. The Teacher's Resources provide Assessment Problems for use with each unit/chapter. The units/chapters of Mathematics: Modeling Our World begin with a real situation or problem to be solved during the course of the unit. In Courses 1, 2, and 3, a short video segment may be used to introduce the theme or problem.

Students use both graphing calculators and computers extensively throughout the curriculum to assist in carrying out computations of real problems and to enhance concept development. While it is strongly recommended that students use computers with this curriculum, material is provided to teach the lessons without computers as well. However, use of the graphing calculator is essential throughout the program.

Student materials for Mathematics: Modeling Our World are available in four hardcover texts, one for each course. Teacher materials include, for Courses 1, 2, and 3, an Annotated Teacher's Edition, a Solutions Manual, and Teacher's Resources that include additional teaching suggestions, background readings, reproducible handouts, assessment problems, supplemental activities, and transparencies. Other materials include a video with segments for each unit and a CD-ROM with calculator and computer programs written specifically for Mathematics: Modeling Our World.

Course 4 Material
- 1401 Student Edition, 0-7167-4115-6
- 1403 Teaching Resources (Binder), 0-7167-4114-8
- 1406 CD-ROM, 0-7167-4213-7

Also available as a classroom set:
- 1407 Course/Class Set. Includes: Student Edition, 0-7167-4115-6 (Qty. 25); Teaching Resources (Binder), 0-7167-4114-8 (Qty. 1)

Author: Various
Copyright Year: ©2000 by COMAP, Inc.
ISBN: 0-7167-4115-6
Product Number: 1401
Primary Level: High School
Application Areas: Modeling
Pricing Options: 1-24 copies, $125.00 each; 25 or more, $99.00 each
Format: 531-page casebound textbook
Format Options: CD-ROM, USB, Web portal
You can now display beautifully rendered math formulas in your presentations...

e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}

This option is available through a new "Math" block type. Math is written as TeX and rendered in real-time as you write it.

e^{ix} = \cos x + i\sin x

\int \frac{dx}{1+ax} = \frac{1}{a}\ln(1+ax) + C

Formula appearance can easily be configured. Math can be typed inline, inside of a text block.

1+\frac{e^{-2\pi}}{1+\frac{e^{-4\pi}}{1+\frac{e^{-6\pi}}{1+\frac{e^{-8\pi}}{1+\cdots}}}}

Can't wait to see what you do with it!

Math rendering is made possible thanks to the KaTeX open source project.

We are planning to add more types of blocks to the Slides editor going forward. Would you be interested in building your own? Let us know.
distributions Archives

The coefficient of variation (CV) is a relative measure of variability that indicates the size of a standard deviation in relation to its mean. It is a standardized, unitless measure that allows you to compare variability between disparate groups and characteristics. It is also known as the relative standard deviation (RSD). In this post, you will learn about the coefficient of variation, how to calculate it, when it is particularly useful, and when to avoid it.
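As a quick illustration (with made-up numbers), the CV is unitless, so series measured on very different scales can be compared directly, and rescaling the data does not change it:

```python
# CV = standard deviation / mean: a unitless, scale-free measure of spread.
# The height and weight values below are illustrative, not real data.
import statistics

def cv(values):
    # Sample standard deviation relative to the mean
    # (often reported as a percentage)
    return statistics.stdev(values) / statistics.mean(values)

heights_cm = [170, 175, 165, 180, 172]
weights_kg = [70.0, 72.5, 68.0, 75.0, 71.0]

# Rescaling the data (e.g. cm -> mm) leaves the CV unchanged:
assert abs(cv([h * 10 for h in heights_cm]) - cv(heights_cm)) < 1e-12
```

This scale invariance is exactly what makes the CV suitable for comparing variability across disparate groups.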
STATS 400: Data Distiller Statistics Functions | Adobe Data Distiller Guide

You need to ingest the CSV file below using the following tutorial:

To demonstrate the use of statistical functions in a marketing domain dataset, let's generate a dataset representing customer transactions and campaign performance. The dataset includes information about customer purchases, campaign engagement, and customer demographics. Then, I'll provide SQL examples for each statistical function along with a suitable use case. When you ingest this dataset, make sure you name it marketing_campaign_data.

The dataset is related to a marketing campaign and contains the following columns:

- customer_id: A unique identifier for each customer.
- campaign_id: The identifier for the marketing campaign the customer participated in.
- purchase_amount: The amount of money the customer spent during the campaign.
- engagement_score: A score indicating the level of customer engagement in the campaign.
- age: The age of the customer.
- clv (Customer Lifetime Value): An estimated value of the customer's future spending.

Average (avg)

"Average" typically refers to the mean, which is the sum of all values in a dataset divided by the number of values. It represents a measure of central tendency, indicating the central point of a dataset. The mean provides a summary of the data by finding a single value that represents the overall level of all observations. To calculate the mean, you add up all the data points and divide by the total number of points. For example, if you have a dataset of five numbers: 4, 8, 6, 5, and 7, the mean would be (4+8+6+5+7)/5 = 30/5 = 6. The mean is useful for understanding the overall trend of numerical data, but it can be sensitive to outliers, which are values significantly higher or lower than the others. Unlike the median (middle value) or the mode (most frequent value), the mean takes into account all data points when summarizing the dataset.
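The mean's sensitivity to outliers is easy to see on the five-number example above plus one extreme value, checked here in Python:

```python
# The mean uses every data point, so one extreme value drags it away from
# the bulk of the data, while the median barely moves.
import statistics

data = [4, 8, 6, 5, 7]
assert statistics.mean(data) == 6          # (4 + 8 + 6 + 5 + 7) / 5
assert statistics.median(data) == 6

with_outlier = data + [100]
assert abs(statistics.mean(with_outlier) - 130 / 6) < 1e-9  # jumps to ~21.7
assert statistics.median(with_outlier) == 6.5               # barely changes
```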
Let us calculate the average purchase amount to assess the overall customer spend.

```sql
SELECT avg(purchase_amount) AS avg_purchase_amount FROM marketing_campaign_data;
```

Sum (sum)

Let us calculate the total customer lifetime value (CLV) for all customers engaged in a specific campaign.

```sql
SELECT campaign_id, sum(clv) AS total_clv FROM marketing_campaign_data GROUP BY campaign_id;
```

Min/Max (min, max)

Let us identify the minimum and maximum customer engagement scores for a campaign to gauge campaign effectiveness.

```sql
SELECT campaign_id,
       min(engagement_score) AS min_engagement,
       max(engagement_score) AS max_engagement
FROM marketing_campaign_data
GROUP BY campaign_id;
```

Standard Deviation (stddev/stddev_pop/stddev_samp)

In statistics, "standard deviation" is a measure of the dispersion or spread of a set of values around the mean (average). It indicates how much individual data points deviate, on average, from the mean. A low standard deviation means that the data points tend to be close to the mean, while a high standard deviation indicates that the data points are spread out over a wider range. Standard deviation is calculated as the square root of the variance, which is the average of the squared deviations from the mean. It is commonly used in various fields to assess the variability or consistency of data. Unlike the mean (central value) and the median (middle value), the standard deviation focuses on the extent of variation or dispersion in the dataset.

Note that the stddev function is an alias for stddev_samp. It calculates the sample standard deviation, using N - 1 as the divisor (where N is the total number of data points). This adjustment is known as Bessel's correction, and it accounts for the bias in estimating the population standard deviation from a sample. stddev_pop computes the population standard deviation: it uses N as the divisor, treating the data as the entire population.
The stddev/stddev_samp statistic is computed as

\[ s = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}} \]

Let us measure the variability in customer age to assess the diversity of your customer base:

```sql
SELECT stddev(age) AS age_stddev FROM marketing_campaign_data;
```

The sample standard deviation is particularly useful when you need to construct a confidence interval for the mean of a dataset. For that you combine it with other statistical elements:

- Sample Mean (\(\bar{x}\)): The average of the sample data.
- Standard Error of the Mean (SE): Calculated as \( SE = s / \sqrt{N} \).
- Critical Value (z-score or t-score): Depends on the desired confidence level (e.g., 1.96 for 95% confidence if using the normal distribution).

The confidence interval is then calculated as

\[ \bar{x} \pm z \cdot \frac{s}{\sqrt{N}} \]

This gives the range in which the true population mean is likely to fall with the specified level of confidence.

Population Standard Deviation (stddev_pop)

Whether a dataset is considered a population or a sample depends on the context and the scope of the analysis. When the dataset includes all possible observations relevant to the study, it is considered a population. For example, if you have the entire customer base of a company and want to analyze their spending habits, that dataset would be treated as the population. In this case, the population standard deviation (stddev_pop) is used because the data represents the entire group, and no adjustments are necessary. On the other hand, a dataset is considered a sample when it is a subset of the population, meant to represent a larger group. For instance, if you survey 1,000 customers out of a larger group of 100,000 to understand general customer preferences, this dataset would be considered a sample. In such cases, the sample standard deviation (stddev_samp) is used because an adjustment is needed to account for the fact that the data is only a subset. This adjustment, known as Bessel's correction, compensates for potential bias when estimating the population characteristics from the sample.
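The two divisors, and the confidence-interval construction, can be checked directly in Python; the ages below are illustrative, not values from the dataset:

```python
# stddev_samp divides by N - 1 (Bessel's correction); stddev_pop divides
# by N. The illustrative ages below let us verify both against the
# standard library.
import math
import statistics

ages = [23, 31, 44, 52, 60, 38, 47, 29]
n = len(ages)
mean = sum(ages) / n
ss = sum((x - mean) ** 2 for x in ages)    # sum of squared deviations

samp = math.sqrt(ss / (n - 1))   # what stddev / stddev_samp computes
pop = math.sqrt(ss / n)          # what stddev_pop computes

assert abs(samp - statistics.stdev(ages)) < 1e-12
assert abs(pop - statistics.pstdev(ages)) < 1e-12
assert samp > pop                # the N - 1 divisor always gives the larger value

# 95% confidence interval for the mean, using the sample standard deviation:
se = samp / math.sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)
```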
If the data was gathered through a survey, experiment, or sampling process, it is generally treated as a sample. Additionally, if the goal of the analysis is to make inferences about a larger group beyond the dataset itself, it should be considered a sample. Even if the dataset is large, it may still be a sample if it does not cover all possible observations. Conversely, small datasets can be populations if they include every relevant case. In practice, data is most often treated as a sample, as it is rare to have data for the entire population.

The population standard deviation is computed as

\[ \sigma = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N}} \]

```sql
SELECT stddev_pop(age) AS age_stddev_pop FROM marketing_campaign_data;
```

For age, the results are similar to the sample version because the dataset has enough data points, but you can see differences in the least significant digits.

Variance (variance/var_pop/var_samp)

The same principles apply to variance as they do for standard deviation, since variance is simply the square of the standard deviation. variance is the same as var_samp; the formulas and assumptions remain the same as previously explained. In most cases, you will be using variance (or var_samp). Our use case will be to determine the variance in customer engagement scores to see how consistently customers interact with campaigns.

```sql
SELECT variance(engagement_score) AS engagement_variance FROM marketing_campaign_data;
```

Median (median)

"Median" refers to the middle value of a dataset when the numbers are arranged in ascending or descending order. It represents the point at which half of the data falls below and half falls above. If the dataset has an odd number of observations, the median is the middle value. If the dataset has an even number of observations, the median is the average of the two middle values.
The median is particularly useful for numerical data, especially when the data is skewed or contains outliers, as it is less affected by extreme values than the mean (average). In contrast to the mode (most frequent value) and the mean, the median provides a measure of central tendency that indicates the midpoint of the dataset. Let us calculate the median purchase amount to understand the central spending tendency of customers.

```sql
SELECT percentile_approx(purchase_amount, 0.50) AS median_purchase_amount FROM marketing_campaign_data;
```

Mode (mod)

"Mode" typically refers to the value that appears most frequently in a dataset. It represents the data point or category that occurs with the highest frequency. The mode can be used for both numerical and categorical data. For example, in a dataset of people's favorite ice cream flavors, the mode would be the flavor that the largest number of people prefer. In a numerical dataset, it would be the number that appears most often. In contrast to the mean (average) and median (middle value), the mode focuses on the most common value in the dataset.

Note that the SQL function mod() is unrelated to the statistical mode: it computes the modulo (the remainder after division). Here it is used to distribute customers evenly into 3 marketing groups (a deterministic round-robin split) for campaign analysis.

```sql
-- Assign customers to 3 marketing groups based on their customer_id
SELECT
  customer_id,
  mod(customer_id, 3) AS marketing_group
FROM marketing_campaign_data;
```

Correlation (corr)

Correlation measures the strength and direction of a linear relationship between two variables. It is expressed as a correlation coefficient, denoted by \(r\), which ranges from -1 to 1. A correlation coefficient close to 1 indicates a strong positive linear relationship, meaning that as one variable increases, the other tends to increase as well. Conversely, a correlation coefficient close to -1 suggests a strong negative linear relationship, where an increase in one variable corresponds to a decrease in the other.
When the correlation coefficient is close to 0, it indicates little to no linear relationship between the variables; changes in one variable do not reliably predict changes in the other. Correlation does not imply causation: even if two variables are correlated, it does not necessarily mean that one variable causes the other to change. Correlation is useful for identifying relationships in data, and it is commonly used in fields like finance, psychology, and social sciences to uncover patterns and make predictions based on observed trends. Let us check if there is a correlation between customer age and their engagement score with campaigns.

```sql
SELECT corr(age, engagement_score) AS age_engagement_correlation FROM marketing_campaign_data;
```

To interpret the results: values of \(r\) near ±1 indicate a strong linear relationship, while values near 0 indicate little or none. For our use case, the result of approximately \(r = 0.0067\) falls into the "no correlation" category, indicating that there is essentially no linear relationship between age and engagement score in our dataset.

Correlation is more commonly used than covariance because it standardizes the relationship between variables, making comparisons easier. However, covariance is a key component in the calculation of correlation and provides valuable directional insight into how two variables move together.

Covariance (covar_pop/covar_samp)

Covariance measures the degree to which two variables change together. It indicates the direction of the linear relationship between the variables. If the covariance is positive, it means that as one variable increases, the other tends to increase as well, indicating a positive relationship. Conversely, a negative covariance suggests that as one variable increases, the other tends to decrease, indicating an inverse relationship. The magnitude of the covariance value indicates the strength of the relationship; however, unlike correlation, it does not provide a standardized measure.
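That scale-dependence, and the standardization that turns covariance into a correlation, can be verified numerically with illustrative data:

```python
# Covariance is scale-dependent; dividing it by the two standard deviations
# standardizes it into the unitless correlation coefficient in [-1, 1].
import math

def cov_samp(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def corr(xs, ys):
    return cov_samp(xs, ys) / (
        math.sqrt(cov_samp(xs, xs)) * math.sqrt(cov_samp(ys, ys))
    )

age = [25, 30, 35, 40, 45]                    # illustrative data, roughly
clv = [200.0, 260.0, 310.0, 380.0, 430.0]     # linear in age

r = corr(age, clv)
assert 0.99 < r <= 1.0            # strong positive linear relationship

# Rescaling a variable changes the covariance but NOT the correlation:
clv_cents = [v * 100 for v in clv]
assert cov_samp(age, clv_cents) == 100 * cov_samp(age, clv)
assert abs(corr(age, clv_cents) - r) < 1e-12
```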
This means that the actual value of covariance can be difficult to interpret because it depends on the scale of the variables. Covariance is used in various fields, such as finance, where it helps in understanding how different assets move together, which is useful for portfolio diversification. While it indicates the direction of a relationship, it does not measure the strength or causality between the variables. Covariance becomes a correlation when it is standardized: the correlation coefficient is essentially a scaled version of covariance, which adjusts for the variability (standard deviation) of each variable, making it a unitless measure. This allows for a direct comparison of relationships regardless of the original scales of the variables. By dividing the covariance by the product of the standard deviations of the X and Y variables, you normalize the value, bringing it into the range of -1 to 1.

Let us compute the covariance between purchase amount and engagement score to see if higher engagement leads to higher spending:

```sql
SELECT covar_samp(purchase_amount, engagement_score) AS purchase_engagement_covariance FROM marketing_campaign_data;
```

Just as with the standard deviation functions, there are population and sample versions. The population covariance subtracts the mean from each value of both X and Y and divides by N:

\[ \mathrm{cov_{pop}}(X, Y) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{N} \]

The sample covariance divides by N - 1 instead:

\[ \mathrm{cov_{samp}}(X, Y) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{N - 1} \]

Let us calculate the population covariance between customer age and lifetime value (CLV) to understand if older customers tend to have higher value.

```sql
SELECT covar_pop(age, clv) AS age_clv_covariance FROM marketing_campaign_data;
```

The magnitude of covariance is influenced by the units of the variables, so the absolute value is not directly indicative of the strength of the relationship. Unlike correlation, covariance is not standardized, meaning it is not constrained within a fixed range (such as -1 to 1), making direct comparisons across datasets less meaningful without additional context.

Skewness (skewness)

Skewness measures the asymmetry of a dataset's distribution.
It indicates whether the data points are spread more towards one side of the mean, resulting in a non-symmetric shape. Skewness can be positive, negative, or zero, depending on the direction of the asymmetry:

- Positive Skewness (Right-Skewed): When skewness is greater than zero, the distribution has a long tail on the right side. This means that there are more values concentrated on the left, with a few larger values stretching the distribution to the right.
- Negative Skewness (Left-Skewed): When skewness is less than zero, the distribution has a long tail on the left side. In this case, more values are concentrated on the right, with a few smaller values stretching the distribution to the left.
- Zero Skewness (Symmetrical Distribution): When skewness is approximately zero, the distribution is symmetric, with data points evenly distributed on both sides of the mean. A perfectly symmetric distribution, such as a normal distribution, has zero skewness.

Skewness helps to identify the extent and direction of deviation from a normal distribution, and it is useful for understanding the nature of the data, particularly in fields like finance, economics, and quality control. Let us determine if the distribution of purchase amounts is skewed towards lower or higher values.

```sql
SELECT skewness(purchase_amount) AS skewness_purchase FROM marketing_campaign_data;
```

The result of the skewness calculation for purchase_amount is approximately -0.00015. This value is very close to zero, which indicates that the distribution of purchase_amount is nearly symmetric.

Kurtosis (kurtosis)

Kurtosis measures the "tailedness" or the sharpness of the peak of a dataset's distribution. It indicates how much of the data is concentrated in the tails and the peak compared to a normal distribution. Kurtosis helps to understand the distribution's shape, particularly the presence of outliers.
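Both moments can be checked with a direct method-of-moments computation (SQL engines may apply small-sample adjustments, so compare signs and rough magnitudes rather than exact digits; the small samples below are contrived to show each case):

```python
# Method-of-moments skewness m3 / m2^(3/2) and excess kurtosis
# m4 / m2^2 - 3, checked on tiny contrived samples.
def moments(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m2, m3, m4

def skewness(xs):
    m2, m3, _ = moments(xs)
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    m2, _, m4 = moments(xs)
    return m4 / m2 ** 2 - 3

symmetric = [1, 2, 3, 4, 5]
right_skewed = [1, 1, 1, 2, 10]     # long tail to the right
left_skewed = [1, 9, 10, 10, 10]    # long tail to the left

assert abs(skewness(symmetric)) < 1e-12
assert skewness(right_skewed) > 0
assert skewness(left_skewed) < 0
# Evenly spread data has negative excess kurtosis (platykurtic):
assert excess_kurtosis(symmetric) < 0
```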
Let us assess the "peakedness" of customer engagement scores to understand if most scores are concentrated around the mean.

```sql
SELECT kurtosis(engagement_score) AS kurtosis_engagement FROM marketing_campaign_data;
```

The result of the kurtosis calculation for engagement_score is approximately -1.1989. This value is below zero (the function reports excess kurtosis, for which a normal distribution scores 0), indicating that the distribution is platykurtic. The kurtosis value of -1.1989 suggests that the engagement_score distribution has fewer extreme values (outliers) than a normal distribution. The data points are more spread out across the range, with less concentration around the peak. In a normal distribution, the data is symmetrically spread, with most values clustering around the mean, and the frequency of extreme values decreases as you move away from the mean. When a distribution has no significant excess in outliers, it means that the occurrence of data points far from the center is what you would expect based on a normal distribution, with no additional concentration of extreme values in the tails.

Count (count)

Let us count the number of customers engaged in each marketing campaign to understand campaign reach.

```sql
SELECT campaign_id, count(customer_id) AS customer_count FROM marketing_campaign_data GROUP BY campaign_id;
```

Count If (count_if)

Let us count how many customers have spent more than $200 in each campaign to identify high spenders.

```sql
SELECT campaign_id, count_if(purchase_amount > 200) AS high_spenders_count FROM marketing_campaign_data GROUP BY campaign_id;
```

Approximate Count Distinct (approx_count_distinct)

The approx_count_distinct function offers significant advantages over an exact distinct count, especially when working with large datasets. It employs algorithms like HyperLogLog to estimate the number of distinct values, providing a high degree of accuracy while being much faster and more efficient than an exact count distinct.
This speed is achieved because approx_count_distinct does not need to store and sort all unique values, making it particularly useful in big data environments where datasets may be too large to fit into memory. Additionally, the function consumes less memory by using probabilistic methods, enabling distinct counting on massive datasets without overwhelming system resources. As a result, approx_count_distinct scales well with increasing data size, making it an ideal choice for distributed computing platforms where performance and scalability are critical. Let us estimate the number of unique customers engaged with a specific marketing campaign.

```sql
SELECT campaign_id, approx_count_distinct(customer_id) AS unique_customer_count FROM marketing_campaign_data GROUP BY campaign_id;
```

Generate Random Number from a Uniform Distribution (rand/random)

The rand() function is a mathematical function used to generate a random floating-point number between 0 (inclusive) and 1 (exclusive) from a uniform distribution. Each time rand() is called, it produces a different pseudo-random number, simulating randomness. However, because it is based on an algorithm rather than true randomness, the sequence of numbers generated is actually deterministic if the initial starting point (seed) is known.

Suppose you want to randomly assign customers to different marketing test groups for A/B testing.

```sql
-- Assign customers to random groups for A/B testing
SELECT
  customer_id,
  rand() AS random_value,
  CASE
    WHEN rand() < 0.5 THEN 'Group A'
    ELSE 'Group B'
  END AS test_group
FROM marketing_campaign_data;
```

In this example, customers are assigned randomly to Group A or Group B based on the random value generated by rand().
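One caveat about the query above: each call to rand() is typically evaluated as an independent draw, so the displayed random_value need not be the value the comparison against 0.5 actually used. Drawing once per row and reusing it (in SQL, for example via a subquery that materializes the rand() column first) removes the ambiguity; this Python sketch of the assignment does the same:

```python
# Draw one random value per customer and derive BOTH the displayed value
# and the group label from that single draw, so they always agree.
import random

random.seed(12345)                       # seed for reproducibility

def assign_groups(customer_ids):
    rows = []
    for cid in customer_ids:
        r = random.random()              # one draw per customer, reused below
        group = "Group A" if r < 0.5 else "Group B"
        rows.append((cid, r, group))
    return rows

rows = assign_groups(range(1, 1001))
# Every displayed value is consistent with its group label:
assert all((r < 0.5) == (g == "Group A") for _, r, g in rows)
# And with 1000 customers the split is close to 50/50:
assert 400 < sum(g == "Group A" for _, _, g in rows) < 600
```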
If you want to use a seed for predictability, try this:

```sql
-- Assign customers to random groups for A/B testing,
-- with a seed for reproducibility
SELECT
  customer_id,
  rand(12345) AS random_value,   -- using a seed value of 12345
  CASE
    WHEN rand(12345) < 0.5 THEN 'Group A'
    ELSE 'Group B'
  END AS test_group
FROM marketing_campaign_data;
```

random() is the same as rand(). Both functions generate random numbers uniformly distributed between 0 (inclusive) and 1 (exclusive). They are interchangeable and serve the same purpose for creating random values in this range.

Generate Random Number from a Normal/Gaussian Distribution (randn)

The randn() function generates random numbers following a normal (Gaussian) distribution, with a mean of 0 and a standard deviation of 1. Unlike rand(), the values produced by randn() are not limited to a specific range and can be any real number, though most values will fall within three standard deviations of the mean. This function is particularly useful for modeling data that follows a bell-curve shape, where most observations cluster around the central value, such as natural phenomena, measurement errors, or financial returns.

Let us simulate customer engagement scores or create noise in the data to make the model more robust for training.

```sql
-- Simulate random variations in engagement scores (normal distribution noise)
SELECT
  customer_id,
  engagement_score + randn() * 5 AS engagement_score_with_noise
FROM marketing_campaign_data;
```

In this case, the randn() function adds normally distributed noise to the customer engagement scores, simulating potential fluctuations in real-world data. If you want to use a seed for predictability:

```sql
-- Simulate random variations in engagement scores (normal distribution
-- noise) with a seed for reproducibility
SELECT
  customer_id,
  engagement_score + randn(12345) * 5 AS engagement_score_with_noise  -- seed 12345
FROM marketing_campaign_data;
```

Ranking (rank)

Let us rank customers by their purchase amount within each campaign to identify top spenders.
```sql
SELECT customer_id, campaign_id, purchase_amount,
       rank() OVER (PARTITION BY campaign_id ORDER BY purchase_amount DESC) AS rank
FROM marketing_campaign_data;
```

The query retrieves data from the marketing_campaign_data table, selecting the customer_id, campaign_id, and purchase_amount columns, along with a calculated rank. The rank() function assigns a ranking to each row within each campaign_id group (using PARTITION BY campaign_id). The rows are ordered by purchase_amount in descending order (ORDER BY purchase_amount DESC), meaning the highest purchase_amount within each campaign gets a rank of 1, the second highest gets a rank of 2, and so on. This approach allows for ranking customers based on their purchase amounts within each specific campaign, enabling comparisons and analysis of customer spending behavior across different marketing campaigns.

First Rank (first)

Find the first customer by engagement score in each campaign to track early adopters.

```sql
SELECT campaign_id, first(customer_id) AS first_engaged_customer FROM marketing_campaign_data GROUP BY campaign_id;
```

Last Rank (last)

Identify the last customer to make a purchase in each campaign to track lagging engagement.

```sql
SELECT campaign_id, last(customer_id) AS last_purchase_customer FROM marketing_campaign_data GROUP BY campaign_id;
```

Note that first() and last() simply take the first and last value encountered within each group, so without an explicit ordering of the input rows their results are not guaranteed to be deterministic.

Percent Rank (percent_rank)

Calculate the percent rank of customers based on their purchase amount within each campaign to categorize customer spending.

```sql
SELECT customer_id, campaign_id, purchase_amount,
       percent_rank() OVER (PARTITION BY campaign_id ORDER BY purchase_amount) AS purchase_percent_rank
FROM marketing_campaign_data;
```

Percentile (percentile or percentile_approx)

A percentile is a measure that indicates the value below which a given percentage of observations in a dataset falls. For example, the 25th percentile is the value below which 25% of the data points lie, while the 90th percentile is the value below which 90% of the data points fall.
Percentiles help in understanding the distribution of data by dividing it into 100 equal parts. Percentiles are commonly used in data analysis to assess the relative standing of individual observations within a dataset. They are particularly useful for identifying outliers, comparing different data sets, or summarizing large amounts of data. In educational testing, for example, if a student's score is in the 85th percentile, it means they scored higher than 85% of the other students. Percentiles provide a way to interpret data in terms of rank and position rather than exact values.

Both percentile and percentile_approx compute approximate percentiles, which provides a significant performance advantage in a query, especially when working with large datasets. Unlike exact percentile calculations, both produce estimates using algorithms that avoid the need to sort all the data. This approach results in faster execution and lower memory usage, making it highly suitable for big data environments where datasets can be massive. These functions also scale efficiently, allowing them to handle very large datasets seamlessly. Although they provide an approximate value rather than an exact percentile, the trade-off is often worthwhile for the speed and resource efficiency they offer.

Let us calculate the 90th percentile of customer engagement scores to identify top-performing customers who are highly engaged with a marketing campaign.

-- Calculate the 90th percentile of engagement scores
SELECT percentile(engagement_score, 0.90) AS p90_engagement_score
FROM marketing_campaign_data;

Percentile Approximation (percentile_approx)

Let us calculate the approximate 90th percentile of customer CLV to understand high-value customer thresholds.
SELECT percentile_approx(engagement_score, 0.90) AS p90_clv
FROM marketing_campaign_data;

Continuous Percentile (percentile_cont)

A continuous percentile is a measure used to determine the value below which a certain percentage of the data falls, based on a continuous interpolation of the data points. In cases where the specified percentile does not correspond exactly to a data point in the dataset, the continuous percentile calculates an interpolated value between the two nearest data points. This provides a more precise estimate of the percentile, especially when dealing with small datasets or when the data distribution is not uniform.

For example, if the 75th percentile falls between two data points, the continuous percentile will estimate a value that represents a weighted average between these points, rather than just picking the closest one. This approach gives a more accurate representation of the distribution, as it takes into account the relative positions of data points rather than simply using discrete ranks. Continuous percentiles are often used in statistical analysis to better understand the distribution of data, especially in situations where the exact percentile may lie between observed values.

The continuous percentile function calculates the exact percentile value by interpolating between the two nearest data points if the specified percentile falls between them. It gives a precise answer by determining a value that may not be in the original dataset but represents a point within the ordered range. This function is used when an exact, interpolated percentile value is needed.

Let us calculate the 75th percentile of Customer Lifetime Value (CLV) to understand the top 25% most valuable customers.
-- Calculate the continuous 75th percentile of CLV
SELECT percentile_cont(0.75) WITHIN GROUP (ORDER BY clv) AS p75_clv
FROM marketing_campaign_data;

Discrete Percentile (percentile_disc)

A discrete percentile is a measure used to determine the value below which a specified percentage of the data falls, based on actual data points in the dataset. In contrast to a continuous percentile, which interpolates between data points, a discrete percentile selects the closest actual data value that corresponds to the given percentile rank. For example, if you want to find the 75th percentile in a discrete approach, the function will choose the value at or just above the rank where 75% of the data points lie, without performing any interpolation. This means that the output will always be one of the actual values from the dataset, making it a straightforward representation of the distribution based on the observed data. Discrete percentiles are useful when the goal is to work with specific data values rather than estimated positions, such as in ranking scenarios or when dealing with ordinal data where interpolation might not be meaningful.

The discrete percentile function calculates the exact percentile value based on the actual data points, without any interpolation. It selects the closest actual data value corresponding to the specified percentile, ensuring that the result is one of the observed values in the dataset. This function is suitable for cases where only actual data values are meaningful, such as ordinal data.

Let us calculate the 90th percentile of engagement scores to find the actual score that separates the top 10% of most engaged customers.

-- Calculate the discrete 90th percentile of engagement scores
SELECT percentile_disc(0.90) WITHIN GROUP (ORDER BY engagement_score) AS p90_engagement_score
FROM marketing_campaign_data;

Numeric Histograms (histogram_numeric)

Create a histogram of customer purchase amounts to analyze spending patterns.
-- Create a histogram for purchase amounts (divided into 5 buckets)
SELECT to_json(histogram_numeric(purchase_amount, 5)) AS purchase_histogram
FROM marketing_campaign_data;

This function returns the distribution of customer purchases across 5 buckets, which can be used to create visualizations or perform further analysis. In the histogram data returned by the query, the x and y values represent the following:

x (Bucket Range): The midpoint or representative value of each bucket in the histogram. In this case, the purchase amounts have been divided into five buckets, so each x value represents the center of a range of purchase amounts.

y (Frequency): The number of occurrences (or count) of purchase amounts that fall within each corresponding bucket. This tells you how many purchase transactions fall within the range represented by the x value.

So, each data point in the JSON array indicates how many purchases (y) are within a specific range of amounts centered around x. Together, these values create a histogram showing the distribution of purchase amounts across five intervals. In this algorithm, the buckets are determined based on the distribution of the data, not just by evenly dividing the range of values. This means that if certain ranges of purchase amounts have more data points, the bucket widths may be adjusted to capture the distribution more accurately, resulting in non-equidistant x values.
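The difference between the continuous and discrete percentile semantics described earlier can be sketched in a few lines of Python. This is an illustration of the general SQL semantics, not of Data Distiller's internal implementation, and the sample scores are hypothetical:

```python
import math

def percentile_cont(values, p):
    """SQL-style percentile_cont: linear interpolation between closest ranks."""
    xs = sorted(values)
    r = p * (len(xs) - 1)          # fractional 0-based rank of the percentile
    lo, hi = math.floor(r), math.ceil(r)
    frac = r - lo
    return xs[lo] + frac * (xs[hi] - xs[lo])

def percentile_disc(values, p):
    """SQL-style percentile_disc: smallest value whose cumulative fraction >= p."""
    xs = sorted(values)
    k = max(1, math.ceil(p * len(xs)))  # 1-based rank
    return xs[k - 1]

scores = [10, 20, 30, 40, 50]
print(percentile_cont(scores, 0.90))  # 46.0, an interpolated value not in the data
print(percentile_disc(scores, 0.90))  # 50, always an observed value
```

The continuous version returns 46.0, a value that does not appear in the data, while the discrete version returns 50, an actual observation, which is exactly the distinction the two SQL functions make.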
The relationship between the number of decibels \(\beta\) and the intensity of a sound \(I\) (in watts per square centimeter) is given by \(\beta=10 \log _{10}\left(\frac{I}{10^{-16}}\right)\). Find the rate of change in the number of decibels when the intensity is \(10^{-4}\) watt per square centimeter.

Short Answer

The rate of change in the number of decibels at an intensity of \(10^{-4}\) watt per square centimeter is \(\frac{10^{5}}{\ln(10)} \approx 4.34 \times 10^{4}\) decibels per watt per square centimeter.

Step by step solution

Differentiate the Function

We use the chain rule of differentiation. According to the chain rule, if \(y = f(u)\), then \(\frac{dy}{dx} = f'(u)\cdot\frac{du}{dx}\). The derivative of a base-10 logarithm is \(\frac{d}{dx} \log_{10} x = \frac{1}{x \ln(10)}\). However, our function isn't just \(\log_{10}(I)\), but \(10\log_{10}\left(\frac{I}{10^{-16}}\right)\). So, we denote \(u = \frac{I}{10^{-16}} = 10^{16}I\) and differentiate \(\beta\) with respect to \(u\) to get \(\frac{d\beta}{du} = \frac{10}{u \ln(10)}\). Then, we differentiate \(u\) with respect to \(I\) to get \(\frac{du}{dI} = 10^{16}\). So, the derivative of \(\beta\) with respect to \(I\) is \(\frac{d\beta}{dI} = \frac{10}{u \ln(10)} \cdot 10^{16}\)

Substitute the Value of \(u\)

Substituting \(u = 10^{16}I\) into the expression for \(\frac{d\beta}{dI}\), the factors of \(10^{16}\) cancel: \(\frac{d\beta}{dI} = \frac{10}{10^{16}I \ln(10)} \cdot 10^{16} = \frac{10}{I \ln(10)}\)

Find the Rate of Change at the Given Point

Substituting \(I = 10^{-4}\) into the expression for \(\frac{d\beta}{dI}\), we get \(\frac{d\beta}{dI} = \frac{10}{10^{-4} \ln(10)} = \frac{10^{5}}{\ln(10)} \approx 4.34 \times 10^{4}\) decibels per watt per square centimeter.

Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Chain Rule of Differentiation

The chain rule of differentiation is a fundamental tool in calculus for finding the derivative of composite functions.
In essence, it allows us to differentiate a function that is composed of other functions. It provides a way to 'unravel' these composite functions, working from the outside in. For instance, if we have a composite function where \(y = f(g(x))\), and we're interested in finding its derivative, the chain rule tells us that the derivative of \(y\) with respect to \(x\) is the product of the derivative of \(f\) with respect to \(g(x)\) and the derivative of \(g\) with respect to \(x\), or symbolically written as \(\frac{dy}{dx} = f'(g(x))\cdot g'(x)\). It's like peeling an onion, where each layer represents a function and we differentiate layer by layer.

In the context of our exercise that involves the decibels intensity relationship, the chain rule is aptly used to differentiate the logarithmic function of intensity \(I\). By recognizing the inner function and the outer function in the relationship and by calculating the derivatives of each, we can properly describe how changes in intensity affect the number of decibels. Understanding the chain rule is crucial when dealing with complex functions in a variety of fields, including physics, engineering, and economics.

Natural Logarithm Derivative

The derivative of the natural logarithm \(\ln(x)\) is a derivative that frequently appears in calculus. Specifically, the derivative of \(\ln(x)\) with respect to \(x\) is \(\frac{1}{x}\). This can be generalized to logarithms with any base, using the change of base formula, so the derivative of a logarithm with base \(a\) takes the form \(\frac{1}{x \ln(a)}\). In our decibel intensity example, this generalized form applies because we are working with log base 10, not the natural log. As a result, the derivative of \(\log_{10}(I)\) is calculated to be \(\frac{1}{I \ln(10)}\), recognizing that \(\ln(10)\) is the natural logarithm of the base we're converting from.
This knowledge allows us to understand how infinitesimal changes in intensity \(I\) affect its corresponding logarithmic value, which relates to the decibel scale. It is pertinent in areas such as acoustics, where such relationships model how the human ear perceives changes in sound intensity.

Decibels Intensity Relationship

Decibels (dB) are a logarithmic unit that measure the intensity of sound. The relationship between the intensity of a sound and the number of decibels is given by the equation \(\beta = 10 \log_{10} (\frac{I}{I_0})\), where \(I\) is the sound intensity in watts per square centimeter, and \(I_0\) is the reference intensity, typically taken as \(10^{-16}\) watts per square centimeter. This formula shows that decibels represent a ratio and are therefore dimensionless. The reference intensity is the generally accepted threshold of human hearing. Sound intensity above this level implies greater decibels and hence, louder sounds. By taking the logarithm of the ratio of the sound intensity to the reference intensity, we obtain a number that is much more manageable and corresponds well with our perception of sound.

In application, understanding the decibels intensity relationship is not just academic; it is essential for designing soundproofing, audio equipment, and in assessing hearing safety in noisy environments. Finding the derivative of the decibels with respect to intensity, as in our exercise, is transformative for understanding how small changes in intensity affect the perceived loudness of a sound.
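A quick numerical sanity check (a sketch, not part of the original textbook solution) is to compare a central finite difference of \(\beta(I)\) at \(I = 10^{-4}\) with the analytic derivative \(\frac{10}{I\ln(10)}\):

```python
import math

def beta(I):
    # Decibel level for intensity I in W/cm^2, with reference intensity 1e-16
    return 10 * math.log10(I / 1e-16)

I = 1e-4
h = 1e-10  # small step for the central difference
numeric = (beta(I + h) - beta(I - h)) / (2 * h)
analytic = 10 / (I * math.log(10))  # equals 10**5 / ln(10) at I = 1e-4

print(round(analytic, 2))                         # 43429.45
print(abs(numeric - analytic) / analytic < 1e-6)  # True
```

The two values agree to well within the tolerance, confirming that \(\frac{d\beta}{dI} = \frac{10}{I\ln(10)}\) evaluates to roughly \(4.34 \times 10^{4}\) decibels per watt per square centimeter at the given intensity.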
ATR and How Top Traders Size their Positions — Raposa

The most important aspect of trading is managing your risk. It's not sexy or exciting, but it's absolutely critical to staying in the game. Read one of Jack Schwager's Market Wizards books, where he interviews extraordinary traders. Every last one of them mentions a big loss that taught them to control their risk. Read just about anything by Nassim Nicholas Taleb in his Incerto series. You'll see that the idea of avoiding catastrophic losses — "blowing up," as he's fond of putting it — is central to his philosophy.

In short, if you don't control your risk, you're destined to go bust and hang up trading for good. Say "bye bye" to dreams of sitting on a beach sipping Mai Tais (or your favorite umbrella-adorned beverage) and living off of your trading riches. So how do you control your risk?

Position Sizing 101

There are multiple ways to reduce your risk, but one of the most basic is completely ignored by your average retail investor: position sizing. Whether you call it money management, risk management, or something else, the idea is simple — don't bet too big. You can get complicated with your methods, using machine learning techniques or complex optimization algorithms to find how much you should place on each trade, but there's a tried-and-true technique traders go back to time and time again.

Calculating ATR

The Average True Range (ATR) is a classic position sizing metric used by traders of all varieties. It is designed to estimate the "true" volatility of an instrument so you know how much you can expect it to move on a day-to-day basis, in a way that the standard deviation (i.e. price volatility) doesn't. ATR does this by taking the differences between the high, the low, and the previous close of an instrument, taking the largest value, and then computing a rolling average over a given period of time.
In psuedo-code, we’d write: TR[t] = max(abs(High[t]-Low[t]), abs(Close[t-1] - Low[t]), abs(High[t] - Close[t-1])) ATR[t] = mean(TR[t-P:]) First, we calculate the true range, then we take the moving average of our true range value for the average true range. If we want to write this mathematically, we have: $$TR_t = \textrm{max} \big( \left| High_t - Low_t \right|, \left| High_t - Close_{t-1} \right|, \left| Close_{t-1} - Low_t \right| \big)$$ $$ATR_t = \frac{1}{P} \sum_{t=1}^P TR_{t-P}$$ where P is the number of periods we use to smooth, and t indicates the time when we make our calculation. It’s that simple. So what’s the benefit of ATR over typical volatility measurements? We call it “true range” because that’s what it’s actually getting at, how much a stock moves over the course of a day. Typical volatility measurements just look at the price from close to close, so you can miss out on how much change occurred during the day. For example, let’s say you have 5 Open-High-Low-Close (OHLC) bars that look like this: Calculating the standard deviation yields a significantly lower value than ATR. This won’t always be the case, but is reliable as a general rule of thumb. Proponents of ATR argue that this is a better (or at least, more relevant) measure of the volatility of your security. Two Methods to Size your Positions Wait, wasn’t this article supposed to be about controlling your risk? How do you actually trade with this? Frequently, traders will risk a certain number of ATRs on a given trade. This is done one of two ways: 1. Use ATR to set a stop loss based on a pre-determined percentage of capital (maximum loss). This leads to larger position sizes. • Shares = floor(% * capital / (N * ATR)) • Stop = Price — N * ATR 2. Divide a pre-determined percentage of capital (maximum loss) by ATR to get the number of shares to purchase. This leads to smaller position sizes. 
• Shares = floor(% * capital / (N * ATR * Price))

Let's look at an example using each of these approaches. For the first, we've got a $10,000 account and we only want to risk 2% of capital on any trade. We're going to size our positions by taking 2ATR (i.e. N = 2) and have a stock currently trading at $10 with an ATR of 2. We're going to buy 50 shares, which will cost us $500 (or 5% of our capital):

• Shares = floor(0.02 * 10,000 / (2 * 2)) = 50 shares -> $500 purchase

Now, we use the 2ATR stop loss to set our risk to be $4 below our position, so we're going to lose a maximum of $200 ($4 stop x 50 shares, assuming our stop holds). With very expensive instruments, this can grow to a much larger proportion of your capital, which will limit the number of positions that you can hold. For example, instead of a $10 stock, let's say we have a $100 stock; now (keeping all other parameters the same) our model says we need to buy $5,000 — which is half of our capital and will clearly limit the number of positions we can have simultaneously.

The second way is less common, but still applicable. With the same parameters as used above, we're going to calculate our position size as follows:

• Shares = floor(0.02 * 10,000 / (2 * 2 * 10)) = 5 shares -> $50 purchase

This $50 outlay is quite small: 0.5% of your capital rather than the 2% target. In essence, we're taking some instrument and scaling it by its volatility as measured by ATR. Every increase in ATR (or your multiplier, N) is going to reduce your position size, so more volatile positions become smaller. If you have a portfolio of positions like this, each one will have the same volatility-scaled risk associated with it (also, many traders will still implement an ATR-based stop loss as we did in the first case). This approach allows you to take on a wide variety of positions, which is exactly what some strategies call for. Other systems are going to need more concentrated positions.
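Both sizing rules and the worked numbers above can be reproduced with a short Python sketch. The function and variable names are illustrative, not from the article:

```python
import math

def shares_stop_based(capital, risk_pct, n, atr):
    """Method 1: risk a fixed % of capital against an N*ATR stop (larger positions)."""
    return math.floor(risk_pct * capital / (n * atr))

def shares_volatility_scaled(capital, risk_pct, n, atr, price):
    """Method 2: divide the risk budget by N*ATR*price (smaller positions)."""
    return math.floor(risk_pct * capital / (n * atr * price))

capital, risk_pct, n, atr, price = 10_000, 0.02, 2, 2, 10

m1 = shares_stop_based(capital, risk_pct, n, atr)
m2 = shares_volatility_scaled(capital, risk_pct, n, atr, price)
stop = price - n * atr  # stop-loss level used by method 1

print(m1, m1 * price)   # 50 shares, $500 outlay
print(m2, m2 * price)   # 5 shares, $50 outlay
print(stop)             # stop at $6, i.e. $4 below the entry price
```

Running this reproduces the 50-share and 5-share positions from the example, which makes it easy to experiment with other values of N, ATR, or the risk percentage.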
Again, for some systems, this is completely acceptable and preferable to the first approach. The only way you're going to know how to use it is by testing it.

Why the difference?

The difference between these two approaches stems from their definition of risk. The first approach looks at risk as the difference between your purchase price and your stop loss. In normal situations (i.e. outside of a market crash and with highly liquid instruments), your stop is going to be fulfilled at the set price, which is going to cap your losses at this level. The second uses a more common risk definition: any capital in a non-cash instrument (e.g. stocks, bonds, crypto, commodities, etc.) is considered to be "risk." Even if you're trading with a stop loss, you could lose everything as these instruments go to zero in a crash you can't get out of. This broader definition of risk leads the second method to be much more conservative.

What Should You Trade?

There is no "best" approach. Many systems use the first method, others use the second method, and others just use ATR to set a stop loss while investing a flat percentage into each trade! There are lots of ways to go about this, and the only way to know what is right for you and your system is to test it.

Testing can be hard. You have to set up your parameters, check for bugs, get good data, and make sure your model is free from common biases. Just getting to this point can be hard and time-consuming! At Raposa, we make this easy with a simple, no-code interface that allows you to select your stocks, parameters, position sizing, and other relevant parameters when you design your trading strategy. Get up and running in seconds with a professional backtest to test your ideas before deploying them to make money for you in the market. Sign up for our free demo here to learn more.
Quantum Chemistry/Example 15 - Wikibooks, open books for an open world

Write an example question showing the determination of the bond length of CO using microwave spectroscopy

When a photon is absorbed by a polar diatomic molecule, such as carbon monoxide, the molecule can be excited rotationally. The rotational energy levels are quantized, and the resulting absorption lines are evenly spaced. The spacing between adjacent rotational absorption lines is twice the rotational constant ${\displaystyle ({\tilde {\beta }})}$, which is related to the moment of inertia by the following equation:

${\displaystyle {\tilde {\beta }}=\left({\frac {h}{8\pi ^{2}c{\text{I}}}}\right)}$^[1]

h = Planck's constant = 6.626 ᛫ 10^−34 J ᛫ s
c = speed of light = 2.998 ᛫ 10^8 m ᛫ s^-1
I = moment of inertia

The moment of inertia ${\displaystyle ({\text{I}})}$ measures how a molecule's mass is distributed about its axis of rotation. It can be calculated as the sum of the products of the masses of the component atoms and their distances from the axis of rotation squared:

${\displaystyle {\text{I}}=\sum _{i}m_{i}\cdot r_{i}^{2}}$^[2]

Working it out for a heteronuclear diatomic molecule (and using the center-of-mass condition ${\displaystyle m_{a}\cdot r_{a}=m_{b}\cdot r_{b}}$ introduced below):

${\displaystyle {\text{I}}=\left(m_{a}\cdot r_{a}^{2}\right)+\left(m_{b}\cdot r_{b}^{2}\right)}$

${\displaystyle {\text{I}}=r_{a}\cdot r_{b}\left(m_{a}+m_{b}\right)}$

The distances from each atom to the center of mass ${\displaystyle \left(r_{a}{\text{ and }}r_{b}\right)}$ cannot easily be measured; however, by setting the origin at the center of mass, expressions for these two values can be derived that use the bond length ${\displaystyle \left(r_{e}\right)}$ as a variable. At the center of mass, ${\displaystyle m_{a}\cdot r_{a}=m_{b}\cdot r_{b}}$, and since ${\displaystyle r_{a}+r_{b}=r_{e}}$:

${\displaystyle m_{a}\cdot r_{a}=m_{b}\cdot \left(r_{e}-r_{a}\right)}$

${\displaystyle r_{a}={\frac {m_{b}\cdot r_{e}}{m_{a}+m_{b}}}}$

${\displaystyle r_{b}={\frac {m_{a}\cdot r_{e}}{m_{a}+m_{b}}}}$

Substituting these equations into the moment of inertia equation:

${\displaystyle {\text{I}}={\biggl (}{\frac {m_{b}\cdot r_{e}}{m_{a}+m_{b}}}{\biggr )}\cdot {\biggl (}{\frac {m_{a}\cdot r_{e}}{m_{a}+m_{b}}}{\biggr )}\cdot \left(m_{a}+m_{b}\right)}$

${\displaystyle {\text{I}}={\biggl (}{\frac {m_{a}\cdot m_{b}\cdot r_{e}^{2}}{m_{a}+m_{b}}}{\biggr )}}$

This equation can be simplified further if we imagine the rigid rotor as a single particle rotating around a fixed point at a distance of one bond length. The mass of this particle is the reduced mass ${\displaystyle \left(\mu \right)}$ of the two atoms that make up the diatomic molecule:

${\displaystyle \mu =\left({\frac {m_{1}\cdot m_{2}}{m_{1}+m_{2}}}\right)}$^[3]

Simplifying the previous moment of inertia equation, we get:

${\displaystyle {\text{I}}=\mu r_{e}^{2}}$

From here we have everything we need to determine the bond length of a polar diatomic molecule such as carbon monoxide. First, we solve for the moment of inertia using the rotational constant:

${\displaystyle {\tilde {\beta }}=\left({\frac {h}{8\pi ^{2}c{\text{I}}}}\right)}$

${\displaystyle {\text{I}}=\left({\frac {h}{8\pi ^{2}c{\tilde {\beta }}}}\right)}$

As explained earlier, the rotational constant can be determined by measuring the spacing between adjacent rotational absorption lines and halving it. In the case of ${\displaystyle {\ce {^{12}C^{16}O}}}$ the rotational constant is ${\displaystyle 193.1281}$ m^-1 ^[4].
Plugging this value in, we can determine the moment of inertia:

${\displaystyle {\text{I}}=\left({\frac {6.626\cdot 10^{-34}}{8\pi ^{2}\cdot \left(2.998\cdot 10^{8}\right)\cdot \left(193.1281\right)}}\right)}$

${\displaystyle {\text{I}}=1.44939\cdot 10^{-46}}$ kg ᛫ m^2

Now that we know the moment of inertia, we can rearrange the equation we derived earlier in order to determine the bond length:

${\displaystyle {\text{I}}=\mu r_{e}^{2}}$

${\displaystyle r_{e}={\sqrt {\frac {\text{I}}{\mu }}}}$

${\displaystyle r_{e}={\sqrt {\frac {1.44939\cdot 10^{-46}}{\mu }}}}$

The exact atomic mass of ${\displaystyle {\ce {^{12}C}}}$ is 12.000 amu (exactly 12 by definition) and that of ${\displaystyle {\ce {^{16}O}}}$ is 15.9949 amu ^[5]. As such, the reduced mass is calculated to be:

${\displaystyle \mu =\left({\frac {m_{1}\cdot m_{2}}{m_{1}+m_{2}}}\right)}$

${\displaystyle \mu =\left({\frac {12.000\cdot 15.9949}{12.000+15.9949}}\right)}$

${\displaystyle \mu =6.8562}$ amu

${\displaystyle \mu =6.8562}$ amu ᛫ ${\displaystyle 1.66054\times 10^{-27}\left({\frac {kg}{amu}}\right)}$

${\displaystyle \mu =1.1385\times 10^{-26}kg}$

Plugging the reduced mass back into our equation, we can finally solve for the bond length of a carbon monoxide molecule:

${\displaystyle r_{e}={\sqrt {\frac {1.44939\cdot 10^{-46}}{1.1385\times 10^{-26}}}}}$

${\displaystyle r_{e}={\sqrt {1.2731\cdot 10^{-20}}}}$

${\displaystyle r_{e}=1.1283\cdot 10^{-10}m}$

${\displaystyle r_{e}=1.1283\mathrm {\AA} }$
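The whole calculation can be checked with a few lines of Python. This is a verification sketch using CODATA-style constants rather than the rounded values in the worked solution:

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s
amu = 1.66053907e-27      # kilograms per atomic mass unit

B = 193.1281              # rotational constant of 12C16O, m^-1
m_C, m_O = 12.0, 15.9949  # isotopic masses of 12C and 16O, amu

I = h / (8 * math.pi**2 * c * B)      # moment of inertia, kg*m^2
mu = (m_C * m_O) / (m_C + m_O) * amu  # reduced mass, kg
r_e = math.sqrt(I / mu)               # equilibrium bond length, m

print(round(r_e * 1e10, 4))           # 1.1283 angstroms
```

The result agrees with the hand calculation above to four significant figures.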
On the union of fat wedges and separating a collection of segments by a line

We call a line l a separator for a set S of objects in the plane if l avoids all the objects and partitions S into two nonempty subsets, one consisting of objects lying above l and the other of objects lying below l. In this paper we present an O(n log n)-time algorithm for finding a separator line for a set of n segments, provided the ratio between the diameter of the set of segments and the length of the smallest segment is bounded. The general case is an 'n^2-hard' problem, in the sense defined in [10] (see also [8]). Our algorithm is based on the recent results of [15], concerning the union of 'fat' triangles, but we also include an analysis which improves the bounds obtained in [15].

ASJC Scopus subject areas

• Computer Science Applications
• Geometry and Topology
• Control and Optimization
• Computational Theory and Mathematics
• Computational Mathematics
RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers (Ex 13.4) Exercise 13.4 - Free PDF Download

RD Class 11 Solutions PDF from Vedantu

Free PDF download of RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4, solved by Expert Mathematics Teachers on Vedantu.com. All Chapter 13 - Complex Numbers Ex 13.4 Questions with Solutions for RD Sharma Class 11 Math help you revise the complete syllabus and score more marks. Register for online coaching for IIT JEE (Mains & Advanced) and other engineering entrance exams.

Chapter 13, Complex Numbers, of Class 11 Math is a significant chapter from the examination point of view. This chapter helps students learn the basics of complex numbers and gain a deeper insight into the types of complex numbers.

FAQs on RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers (Ex 13.4) Exercise 13.4 - Free PDF

1. Where can I get RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4?

Students of Class 11 can click on the blue button on the website that reads "Download PDF", sign in with their Gmail account, and start studying right away. If students are looking for effective strategies to score well in the Class 11 math examination, there are many promising ways to study beyond merely memorizing every word of the Class 11 math textbook, and practicing with the study materials provided by Vedantu is an excellent one. Students can access RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4 anywhere on the Vedantu app or website. You can also get the RD Sharma solutions for all chapters of Class 11 here.

2. Can I really benefit from RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4?

Yes. Studying the RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4 will help students of Class 11 score better in the ICSE Class 11 annual examination.
That's because students will be prepared for the important questions that are most likely to come up in the examination. Solving these questions helps students focus on the important topics and stop wasting time trying to memorize unimportant ones. Also, practicing the RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4 on a regular basis will help students score well in their annual examination by building the confidence they need when appearing for it.

3. Are Vedantu's RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4 up to date?

Yes, Vedantu's RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4 are up to date. We, at Vedantu, do not believe in compromising any student's future for the sake of saving time or money. Our team of subject experts takes extra effort to make sure students always get the highest possible quality from these study materials. That's why, at Vedantu, we conduct research and update our database regularly without fail.

4. Why should I study the RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4?

Studying the RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4 provided by Vedantu is incredibly beneficial. Students get an opportunity to assess their performance and build a strategy to score good marks in the annual examination. It furthermore helps students discover their strengths and shortcomings, and gives them a chance to work on these before appearing for the annual exam. This boosts a student's morale, creates a sense of assurance, and positively influences their performance in the annual examination.

5. How do I start preparing for the Class 11 math annual examination?
Students can start preparing for the Class 11 math annual examination by studying each chapter thoroughly. They can then revise and make notes of the important topics that they think are likely to appear in the examination. Finally, students can download the provided RD Sharma Class 11 Solutions Chapter 13 - Complex Numbers Exercise 13.4. Doing all of this will prepare them for the annual examination in the best way.
numpy.partition(a, kth, axis=-1, kind='introselect', order=None)[source]#

Return a partitioned copy of an array.

Creates a copy of the array and partially sorts it in such a way that the value of the element in k-th position is in the position it would be in a sorted array. In the output array, all elements smaller than the k-th element are located to the left of this element and all equal or greater are located to its right. The ordering of the elements in the two partitions on either side of the k-th element in the output array is undefined.

Parameters:

a : array_like
    Array to be sorted.

kth : int or sequence of ints
    Element index to partition by. The k-th value of the element will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th it will partition all elements indexed by k-th of them into their sorted position at once.

    Deprecated since version 1.22.0: Passing booleans as index is deprecated.

axis : int or None, optional
    Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis.

kind : {'introselect'}, optional
    Selection algorithm. Default is 'introselect'.

order : str or list of str, optional
    When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string. Not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns:

partitioned_array : ndarray
    Array of the same type and shape as a.

See also

ndarray.partition
    Method to sort an array in-place.
argpartition
    Indirect partition.
sort
    Full sorting.

Notes

The various selection algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The available algorithms have the following properties:

kind            speed   worst case   work space   stable
'introselect'   1       O(n)         0            no

All the partition algorithms make temporary copies of the data when partitioning along any but the last axis. Consequently, partitioning along the last axis is faster and uses less space than partitioning along any other axis.

The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts.

The sort order of np.nan is bigger than np.inf.

Examples

>>> import numpy as np
>>> a = np.array([7, 1, 7, 7, 1, 5, 7, 2, 3, 2, 6, 2, 3, 0])
>>> p = np.partition(a, 4)
>>> p
array([0, 1, 2, 1, 2, 5, 2, 3, 3, 6, 7, 7, 7, 7])  # may vary

p[4] is 2; all elements in p[:4] are less than or equal to p[4], and all elements in p[5:] are greater than or equal to p[4]. The partition is:

[0, 1, 2, 1], [2], [5, 2, 3, 3, 6, 7, 7, 7, 7]

The next example shows the use of multiple values passed to kth.

>>> p2 = np.partition(a, (4, 8))
>>> p2
array([0, 1, 2, 1, 2, 3, 3, 2, 5, 6, 7, 7, 7, 7])

p2[4] is 2 and p2[8] is 5. All elements in p2[:4] are less than or equal to p2[4], all elements in p2[5:8] are greater than or equal to p2[4] and less than or equal to p2[8], and all elements in p2[9:] are greater than or equal to p2[8]. The partition is:

[0, 1, 2, 1], [2], [3, 3, 2], [5], [6, 7, 7, 7, 7]
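A common use of this function is an O(n) average-time top-k selection: partition at k - 1, then sort only the k selected entries. A small illustrative helper (`k_smallest` is a hypothetical name for this sketch, not part of NumPy):

```python
import numpy as np

def k_smallest(a, k):
    """Return the k smallest values of `a` in ascending order.

    np.partition does the O(n) selection; only the k selected
    entries are then fully sorted."""
    part = np.partition(a, k - 1)   # first k entries are the k smallest, unordered
    return np.sort(part[:k])

a = np.array([7, 1, 7, 7, 1, 5, 7, 2, 3, 2, 6, 2, 3, 0])
print(k_smallest(a, 4))  # [0 1 1 2]
```

This is cheaper than sorting the whole array when k is much smaller than the array size, since the full sort is restricted to the k selected values.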
HHPVProblem

The Brown-Freedman-Halbeisen-Hungerbühler-Pirillo-Varricchio problem asks: is there an infinite word over a finite subset of the non-negative integers containing no two consecutive blocks of the same length and the same sum? The question was apparently first raised by Brown and Freedman in a 1987 paper, then independently by Pirillo and Varricchio in a 1994 paper, and by Halbeisen and Hungerbühler in 2000. It follows from results of Dekking that such a word exists avoiding four consecutive blocks. Recent results of Cassaigne, Currie, Schaeffer, and Shallit (2011) show that such a word exists avoiding three consecutive blocks.

-- - 13 Jul 2011
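The forbidden pattern is straightforward to test by brute force on finite prefixes: two adjacent blocks of equal length k whose entries have equal sums. A minimal checker (an illustration of the definition, not taken from the cited papers):

```python
# Check whether a finite word (sequence of non-negative integers) contains
# two consecutive blocks of the same length and the same sum.
def has_equal_adjacent_blocks(w):
    n = len(w)
    for k in range(1, n // 2 + 1):          # block length
        for i in range(0, n - 2 * k + 1):   # start of the first block
            if sum(w[i:i + k]) == sum(w[i + k:i + 2 * k]):
                return True
    return False

print(has_equal_adjacent_blocks([0, 1, 0, 1]))  # True: blocks (0,1) and (0,1)
print(has_equal_adjacent_blocks([0, 1, 3]))     # False
```

Checking every prefix of a candidate infinite word with such a routine is the obvious experimental approach, though of course it can only ever refute, never prove, that an infinite word avoids the pattern.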
The first application of the DMRG, by Steven R. White and Reinhard Noack, was a toy model: to find the spectrum of a spin 0 particle in a 1D box. This model had been proposed by Kenneth G. Wilson as a test for any new renormalization group method, because they all happened to fail with this simple problem. The DMRG overcame the problems of previous renormalization group methods by connecting two blocks with the two sites in the middle rather than just adding a single site to a block at each step, as well as by using the density matrix to identify the most important states to be kept at the end of each step. After succeeding with the toy model, the DMRG method was tried with success on the quantum Heisenberg model.

The main problem of quantum many-body physics is the fact that the Hilbert space grows exponentially with size. In other words, if one considers a lattice with some Hilbert space of dimension ${\displaystyle d}$ on each site of the lattice, then the total Hilbert space would have dimension ${\displaystyle d^{N}}$, where ${\displaystyle N}$ is the number of sites on the lattice. For example, a spin-1/2 chain of length L has 2^L degrees of freedom. The DMRG is an iterative, variational method that reduces effective degrees of freedom to those most important for a target state. The state one is most often interested in is the ground state.

After a warmup cycle, the method splits the system into two subsystems, or blocks, which need not have equal sizes, and two sites in between. A set of representative states has been chosen for the block during the warmup. This set of left block + two sites + right block is known as the superblock. Now a candidate for the ground state of the superblock, which is a reduced version of the full system, may be found. It may have a rather poor accuracy, but the method is iterative and improves with the steps below.

Decomposition of the system into left and right blocks, according to DMRG.
The candidate ground state that has been found is projected into the Hilbert subspace for each block using a density matrix, hence the name. Thus, the relevant states for each block are updated.

Now one of the blocks grows at the expense of the other and the procedure is repeated. When the growing block reaches maximum size, the other starts to grow in its place. Each time we return to the original (equal sizes) situation, we say that a sweep has been completed. Normally, a few sweeps are enough to get a precision of a part in 10^10 for a 1D lattice.

The DMRG sweep.

Implementation guide

A practical implementation of the DMRG algorithm is a lengthy work. A few of the main computational tricks are these:

• Since the size of the renormalized Hamiltonian is usually on the order of a few thousand to tens of thousands while the sought eigenstate is just the ground state, the ground state for the superblock is obtained via an iterative algorithm such as the Lanczos algorithm of matrix diagonalization. Another choice is the Arnoldi method, especially when dealing with non-Hermitian matrices.

• The Lanczos algorithm usually starts with the best guess of the solution. If no guess is available a random vector is chosen. In DMRG, the ground state obtained in a certain DMRG step, suitably transformed, is a reasonable guess and thus works significantly better than a random starting vector at the next DMRG step.

• In systems with symmetries, we may have conserved quantum numbers, such as total spin in a Heisenberg model. It is convenient to find the ground state within each of the sectors into which the Hilbert space is divided.

The DMRG has been successfully applied to get the low energy properties of spin chains: Ising model in a transverse field, Heisenberg model, etc., fermionic systems, such as the Hubbard model, problems with impurities such as the Kondo effect, boson systems, and the physics of quantum dots joined with quantum wires.
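The iterative ground-state search described above can be illustrated with SciPy's sparse symmetric eigensolver, `scipy.sparse.linalg.eigsh`, which wraps ARPACK's implicitly restarted Lanczos method. A sketch on a random sparse symmetric matrix standing in for the (renormalized) Hamiltonian; the `v0` argument plays the role of the previous step's transformed ground state (here just a random guess):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# A random sparse symmetric matrix stands in for the superblock Hamiltonian.
n = 500
A = sparse_random(n, n, density=0.01, random_state=0)
H = (A + A.T) / 2                    # symmetrize, like a real Hamiltonian

# Lowest eigenpair via ARPACK's implicitly restarted Lanczos.  v0 is the
# starting guess; in DMRG it would be the previous step's ground state.
v0 = np.random.default_rng(0).normal(size=n)
E0, psi = eigsh(H, k=1, which='SA', v0=v0)
print(E0[0])
```

Only the lowest eigenpair is requested (`k=1`, `which='SA'` for smallest algebraic), which is far cheaper than a full diagonalization when the matrix dimension is in the thousands.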
It has also been extended to work on tree graphs, and has found applications in the study of dendrimers. For 2D systems with one of the dimensions much larger than the other DMRG is also accurate, and has proved useful in the study of ladders. The method has been extended to study equilibrium statistical physics in 2D, and to analyze non-equilibrium phenomena in 1D. The DMRG has also been applied to the field of quantum chemistry to study strongly correlated systems.

Example: Quantum Heisenberg model

Let us consider an "infinite" DMRG algorithm for the ${\displaystyle S=1}$ antiferromagnetic quantum Heisenberg chain. The recipe can be applied for every translationally invariant one-dimensional lattice. DMRG is a renormalization-group technique because it offers an efficient truncation of the Hilbert space of one-dimensional quantum systems.

Starting point

To simulate an infinite chain, start with four sites. The first is the block site, the last the universe-block site, and the remaining two are the added sites; the right one is added to the universe-block site and the other to the block site. The Hilbert space for the single site is ${\displaystyle {\mathfrak {H}}}$ with the base ${\displaystyle \{|S,S_{z}\rangle \}\equiv \{|1,1\rangle ,|1,0\rangle ,|1,-1\rangle \}}$. With this base the spin operators are ${\displaystyle S_{x}}$, ${\displaystyle S_{y}}$ and ${\displaystyle S_{z}}$ for the single site.
For every block, the two blocks and the two sites, there is its own Hilbert space ${\displaystyle {\mathfrak {H}}_{b}}$, its base ${\displaystyle \{|w_{i}\rangle \}}$ (${\displaystyle i:1\dots \dim({\mathfrak {H}}_{b})}$) and its own operators ${\displaystyle O_{b}:{\mathfrak {H}}_{b}\rightarrow {\mathfrak {H}}_{b}}$, where

• block: ${\displaystyle {\mathfrak {H}}_{B}}$, ${\displaystyle \{|u_{i}\rangle \}}$, ${\displaystyle H_{B}}$, ${\displaystyle S_{x_{B}}}$, ${\displaystyle S_{y_{B}}}$, ${\displaystyle S_{z_{B}}}$
• left-site: ${\displaystyle {\mathfrak {H}}_{l}}$, ${\displaystyle \{|t_{i}\rangle \}}$, ${\displaystyle S_{x_{l}}}$, ${\displaystyle S_{y_{l}}}$, ${\displaystyle S_{z_{l}}}$
• right-site: ${\displaystyle {\mathfrak {H}}_{r}}$, ${\displaystyle \{|s_{i}\rangle \}}$, ${\displaystyle S_{x_{r}}}$, ${\displaystyle S_{y_{r}}}$, ${\displaystyle S_{z_{r}}}$
• universe: ${\displaystyle {\mathfrak {H}}_{U}}$, ${\displaystyle \{|r_{i}\rangle \}}$, ${\displaystyle H_{U}}$, ${\displaystyle S_{x_{U}}}$, ${\displaystyle S_{y_{U}}}$, ${\displaystyle S_{z_{U}}}$

At the starting point all four Hilbert spaces are equivalent to ${\displaystyle {\mathfrak {H}}}$, all spin operators are equivalent to ${\displaystyle S_{x}}$, ${\displaystyle S_{y}}$ and ${\displaystyle S_{z}}$, and ${\displaystyle H_{B}=H_{U}=0}$. In the following iterations, this is only true for the left and right sites.

Step 1: Form the Hamiltonian matrix for the superblock

The ingredients are the four block operators and the four universe-block operators, which at the first iteration are ${\displaystyle 3\times 3}$ matrices, the three left-site spin operators and the three right-site spin operators, which are always ${\displaystyle 3\times 3}$ matrices. The Hamiltonian matrix of the superblock (the chain), which at the first iteration has only four sites, is formed by these operators.
In the Heisenberg antiferromagnetic S=1 model the Hamiltonian is:

${\displaystyle \mathbf {H} _{SB}=-J\sum _{\langle i,j\rangle }\mathbf {S} _{x_{i}}\mathbf {S} _{x_{j}}+\mathbf {S} _{y_{i}}\mathbf {S} _{y_{j}}+\mathbf {S} _{z_{i}}\mathbf {S} _{z_{j}}}$

These operators live in the superblock state space ${\displaystyle {\mathfrak {H}}_{SB}={\mathfrak {H}}_{B}\otimes {\mathfrak {H}}_{l}\otimes {\mathfrak {H}}_{r}\otimes {\mathfrak {H}}_{U}}$, the base is ${\displaystyle \{|f\rangle =|u\rangle \otimes |t\rangle \otimes |s\rangle \otimes |r\rangle \}}$. For example (convention):

${\displaystyle |1000\dots 0\rangle \equiv |f_{1}\rangle =|u_{1},t_{1},s_{1},r_{1}\rangle \equiv |100,100,100,100\rangle }$

${\displaystyle |0100\dots 0\rangle \equiv |f_{2}\rangle =|u_{1},t_{1},s_{1},r_{2}\rangle \equiv |100,100,100,010\rangle }$

The Hamiltonian in the DMRG form is (we set ${\displaystyle J=-1}$):

${\displaystyle \mathbf {H} _{SB}=\mathbf {H} _{B}+\mathbf {H} _{U}+\sum _{\langle i,j\rangle }\mathbf {S} _{x_{i}}\mathbf {S} _{x_{j}}+\mathbf {S} _{y_{i}}\mathbf {S} _{y_{j}}+\mathbf {S} _{z_{i}}\mathbf {S} _{z_{j}}}$

The operators are ${\displaystyle (d*3*3*d)\times (d*3*3*d)}$ matrices, ${\displaystyle d=\dim({\mathfrak {H}}_{B})\equiv \dim({\mathfrak {H}}_{U})}$. For example:

${\displaystyle \langle f|\mathbf {H} _{B}|f'\rangle \equiv \langle u,t,s,r|H_{B}\otimes \mathbb {I} \otimes \mathbb {I} \otimes \mathbb {I} |u',t',s',r'\rangle }$

${\displaystyle \mathbf {S} _{x_{B}}\mathbf {S} _{x_{l}}=S_{x_{B}}\mathbb {I} \otimes \mathbb {I} S_{x_{l}}\otimes \mathbb {I} \mathbb {I} \otimes \mathbb {I} \mathbb {I} =S_{x_{B}}\otimes S_{x_{l}}\otimes \mathbb {I} \otimes \mathbb {I} }$

Step 2: Diagonalize the superblock Hamiltonian

At this point you must choose the eigenstate of the Hamiltonian for which some observable is calculated; this is the target state.
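The construction of the first-iteration superblock Hamiltonian and its diagonalization can be sketched with Kronecker products. This is a minimal NumPy illustration, not a full DMRG code: at the first iteration d = 3, so the superblock matrix is 81 × 81 and full diagonalization replaces Lanczos; with J = -1 the Hamiltonian reduces to the sum of S·S bond terms as in the text:

```python
import numpy as np

# Spin-1 operators in the basis {|1,1>, |1,0>, |1,-1>} used in the text.
Sp = np.sqrt(2) * np.array([[0, 1, 0],
                            [0, 0, 1],
                            [0, 0, 0]], dtype=complex)
Sx = (Sp + Sp.conj().T) / 2
Sy = (Sp - Sp.conj().T) / 2j
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
I3 = np.eye(3, dtype=complex)

def site_op(op, pos, n=4):
    """Embed a single-site operator at position pos of an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == pos else I3)
    return out

# Superblock Hamiltonian for four S=1 sites with J = -1:
# H_SB = sum over nearest neighbours of Sx Sx + Sy Sy + Sz Sz.
H_SB = sum(site_op(S, i) @ site_op(S, i + 1)
           for i in range(3) for S in (Sx, Sy, Sz))

E0 = np.linalg.eigvalsh(H_SB)[0]   # full diagonalization stands in for Lanczos
print(H_SB.shape)                  # (81, 81) = (3*3*3*3) x (3*3*3*3)
```

In a real DMRG run the block dimension d grows (up to the truncation m), so the superblock matrix becomes too large for dense diagonalization and an iterative eigensolver is required.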
At the beginning you can choose the ground state and use some advanced iterative algorithm to find it, such as the Lanczos method mentioned above. This step is the most time-consuming part of the algorithm.

If ${\displaystyle |\Psi \rangle =\sum \Psi _{i,j,k,w}|u_{i},t_{j},s_{k},r_{w}\rangle }$ is the target state, expectation values of various operators can be measured at this point using ${\displaystyle |\Psi \rangle }$.

Step 3: Reduced density matrix

Form the reduced density matrix ${\displaystyle \rho }$ for the composite system of the first two blocks, the block and the left-site. By definition it is the ${\displaystyle (d*3)\times (d*3)}$ matrix:

${\displaystyle \rho _{i,j;i',j'}\equiv \sum _{k,w}\Psi _{i,j,k,w}\Psi _{i',j',k,w}^{*}}$

Diagonalize ${\displaystyle \rho }$ and form the ${\displaystyle m\times (d*3)}$ matrix ${\displaystyle T}$, whose rows are the ${\displaystyle m}$ eigenvectors associated with the ${\displaystyle m}$ largest eigenvalues ${\displaystyle e_{\alpha }}$ of ${\displaystyle \rho }$. So ${\displaystyle T}$ is formed by the most significant eigenstates of the reduced density matrix. You choose ${\displaystyle m}$ by looking at the parameter ${\displaystyle P_{m}\equiv \sum _{\alpha =1}^{m}e_{\alpha }}$, requiring ${\displaystyle 1-P_{m}\cong 0}$.
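In tensor form, the reduced density matrix traces out the right-site and universe-block indices of the target state, and T keeps the m dominant eigenvectors. An illustrative sketch with a random normalized Ψ (the shapes d and m are arbitrary choices for the example, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random normalized target state Psi_{i,j,k,w}: indices run over the block
# (dimension d), left site (3), right site (3) and universe block (d).
d, m = 5, 4                          # arbitrary illustrative values
psi = rng.normal(size=(d, 3, 3, d))
psi /= np.linalg.norm(psi)

# rho_{ij;i'j'} = sum_{k,w} Psi_{i,j,k,w} Psi*_{i',j',k,w}
rho = np.einsum('ijkw,pqkw->ijpq', psi, psi.conj()).reshape(d * 3, d * 3)

# T: rows are the m eigenvectors of rho with the largest eigenvalues e_alpha.
evals, evecs = np.linalg.eigh(rho)   # ascending eigenvalue order
T = evecs[:, ::-1][:, :m].T          # shape (m, d*3)
P_m = evals[::-1][:m].sum()          # weight kept; one wants 1 - P_m close to 0
print(rho.shape, T.shape)
```

Note that Tr ρ = ⟨Ψ|Ψ⟩ = 1, so 1 - P_m is exactly the weight discarded by keeping only m states.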
Step 4: New block and universe-block operators

Form the ${\displaystyle (d*3)\times (d*3)}$ matrix representations of the operators for the system composed of the block and left-site, and for the system composed of the right-site and universe-block, for example:

${\displaystyle H_{B-l}=H_{B}\otimes \mathbb {I} +S_{x_{B}}\otimes S_{x_{l}}+S_{y_{B}}\otimes S_{y_{l}}+S_{z_{B}}\otimes S_{z_{l}}}$

${\displaystyle S_{x_{B-l}}=\mathbb {I} \otimes S_{x_{l}}}$

${\displaystyle H_{r-U}=\mathbb {I} \otimes H_{U}+S_{x_{r}}\otimes S_{x_{U}}+S_{y_{r}}\otimes S_{y_{U}}+S_{z_{r}}\otimes S_{z_{U}}}$

${\displaystyle S_{x_{r-U}}=S_{x_{r}}\otimes \mathbb {I} }$

Now, form the ${\displaystyle m\times m}$ matrix representations of the new block and universe-block operators by changing basis with the transformation ${\displaystyle T}$, for example:

${\displaystyle {\begin{matrix}&H_{B}=TH_{B-l}T^{\dagger }&S_{x_{B}}=TS_{x_{B-l}}T^{\dagger }\end{matrix}}}$

At this point the iteration is ended and the algorithm goes back to step 1. The algorithm stops successfully when the observable converges to some value.

Matrix product ansatz

The success of the DMRG for 1D systems is related to the fact that it is a variational method within the space of matrix product states (MPS). These are states of the form

${\displaystyle |\Psi \rangle =\sum _{s_{1}\cdots s_{N}}\operatorname {Tr} (A^{s_{1}}\cdots A^{s_{N}})|s_{1}\cdots s_{N}\rangle }$

where ${\displaystyle s_{1}\cdots s_{N}}$ are the values of, e.g., the z-component of the spin in a spin chain, and the ${\displaystyle A^{s_{i}}}$ are matrices of arbitrary dimension m. As m → ∞, the representation becomes exact. This theory was presented by S. Rommer and S. Ostlund in [1].
In quantum chemistry applications, ${\displaystyle s_{i}}$ stands for the four possibilities of the projection of the spin quantum number of the two electrons that can occupy a single orbital, thus ${\displaystyle s_{i}=|00\rangle ,|10\rangle ,|01\rangle ,|11\rangle }$, where the first (second) entry of these kets corresponds to the spin-up (down) electron. In quantum chemistry, ${\displaystyle A^{s_{1}}}$ (for a given ${\displaystyle s_{1}}$) and ${\displaystyle A^{s_{N}}}$ (for a given ${\displaystyle s_{N}}$) are traditionally chosen to be row and column matrices, respectively. This way, the result of ${\displaystyle A^{s_{1}}\ldots A^{s_{N}}}$ is a scalar value and the trace operation is unnecessary. ${\displaystyle N}$ is the number of sites (the orbitals, basically) used in the simulation.

The matrices in the MPS ansatz are not unique; one can, for instance, insert ${\displaystyle B^{-1}B}$ in the middle of ${\displaystyle A^{s_{i}}A^{s_{i+1}}}$, then define ${\displaystyle {\tilde {A}}^{s_{i}}=A^{s_{i}}B^{-1}}$ and ${\displaystyle {\tilde {A}}^{s_{i+1}}=BA^{s_{i+1}}}$, and the state will stay unchanged. Such gauge freedom is employed to transform the matrices into a canonical form. Three types of canonical form exist: (1) left-normalized form, when ${\displaystyle \sum _{s_{i}}\left({\tilde {A}}^{s_{i}}\right)^{\dagger }{\tilde {A}}^{s_{i}}=I}$ for all ${\displaystyle i}$; (2) right-normalized form, when ${\displaystyle \sum _{s_{i}}{\tilde {A}}^{s_{i}}\left({\tilde {A}}^{s_{i}}\right)^{\dagger }=I}$ for all ${\displaystyle i}$; and (3) mixed-canonical form, when both left- and right-normalized matrices exist among the ${\displaystyle N}$ matrices in the above MPS ansatz.

The goal of the DMRG calculation is then to solve for the elements of each of the ${\displaystyle A^{s_{i}}}$ matrices. The so-called one-site and two-site algorithms have been devised for this purpose.
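The left-normalization condition can be imposed in practice with a QR decomposition, absorbing the remainder R into the next site's tensor. A toy sketch for a single random MPS tensor (the physical dimension 2 and bond dimension m = 3 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# One random MPS tensor A[s] with physical dimension 2 and bond dimension m.
m = 3
A = rng.normal(size=(2, m, m))       # index order: (physical s, left, right)

# Left-normalize: fuse (s, left) into one row index, QR-factorize, keep Q.
M = A.reshape(2 * m, m)
Q, R = np.linalg.qr(M)               # M = Q R with Q^dagger Q = I
A_left = Q.reshape(2, m, m)          # R would be absorbed into the next site

# Verify the left-normalization condition  sum_s A[s]^dagger A[s] = I.
gauge = sum(A_left[s].conj().T @ A_left[s] for s in range(2))
print(np.allclose(gauge, np.eye(m)))  # True
```

Sweeping this operation from left to right (and its mirror from right to left) is how the mixed-canonical form around a chosen site is built.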
In the one-site algorithm, only one matrix (one site) is solved for at a time. Two-site just means that two matrices are first contracted (multiplied) into a single matrix, and then its elements are solved. The two-site algorithm is proposed because the one-site algorithm is much more prone to getting trapped at a local minimum. Having the MPS in one of the above canonical forms has the advantage of making the computation more favorable - it leads to an ordinary eigenvalue problem. Without canonicalization, one would be dealing with a generalized eigenvalue problem.

In 2004 the time-evolving block decimation method was developed to implement real-time evolution of matrix product states. The idea is based on the classical simulation of a quantum computer. Subsequently, a new method was devised to compute real-time evolution within the DMRG formalism - see the paper by A. Feiguin and S. R. White [2]. In recent years, some proposals to extend the method to 2D and 3D have been put forward, extending the definition of the matrix product states. See this paper by F. Verstraete and I. Cirac [3].

Further reading

• The original paper, by S. R. White, [4] or [5]
• A textbook on DMRG and its origins: https://www.springer.com/gp/book/9783540661290
• A broad review, by Karen Hallberg, [6].
• Two reviews by Ulrich Schollwöck, one discussing the original formulation [7], and another in terms of matrix product states [8]
• The Ph.D. thesis of Javier Rodríguez Laguna [9].
• An introduction to DMRG and its time-dependent extension [10].
• A list of DMRG e-prints on arxiv.org [11].
• A review article on DMRG for ab initio quantum chemistry [12].
• An introduction video on DMRG for ab initio quantum chemistry [13].
• White, Steven R.; Huse, David A. (1993-08-01). "Numerical renormalization-group study of low-lying eigenstates of the antiferromagnetic S=1 Heisenberg chain". Physical Review B. 48 (6). American Physical Society (APS): 3844–3852. Bibcode:1993PhRvB..48.3844W.
doi:10.1103/physrevb.48.3844. ISSN 0163-1829. PMID 10008834.
• The Matrix Product Toolkit: A free GPL set of tools for manipulating finite and infinite matrix product states written in C++ [14]
• Uni10: a library implementing numerous tensor network algorithms (DMRG, TEBD, MERA, PEPS ...) in C++
• Powder with Power: a free distribution of time-dependent DMRG code written in Fortran [15] Archived 2017-12-04 at the Wayback Machine
• The ALPS Project: a free distribution of time-independent DMRG code and Quantum Monte Carlo codes written in C++ [16]
• DMRG++: a free implementation of DMRG written in C++ [17]
• The ITensor (Intelligent Tensor) Library: a free library for performing tensor and matrix-product state based DMRG calculations written in C++ [18]
• OpenMPS: an open source DMRG implementation based on Matrix Product States written in Python/Fortran2003. [19]
• Snake DMRG program: open source DMRG, tDMRG and finite temperature DMRG program written in C++ [20]
• CheMPS2: open source (GPL) spin-adapted DMRG code for ab initio quantum chemistry written in C++ [21]
• Block: open source DMRG framework for quantum chemistry and model Hamiltonians. Supports SU(2) and general non-Abelian symmetries. Written in C++.
• Block2: An efficient parallel implementation of DMRG, dynamical DMRG, tdDMRG, and finite temperature DMRG for quantum chemistry and models. Written in Python/C++.

1. ^ Nakatani, Naoki (2018), "Matrix Product States and Density Matrix Renormalization Group Algorithm", Reference Module in Chemistry, Molecular Sciences and Chemical Engineering, Elsevier, doi:10.1016/b978-0-12-409547-2.11473-8, ISBN 978-0-12-409547-2, retrieved 2021-04-21
Increasing superconducting critical temperature by enhancing electron-phonon coupling - Mapping Ignorance

Topology has been at the forefront of condensed matter physics for the past two decades, influencing our understanding of quantum materials and phenomena. More recently, it has become clear that a more general concept, that of quantum geometry, manifests itself in a series of quantum phenomena involving flat electronic bands. In condensed matter physics, the band structure of materials describes the energy levels available to electrons in a crystal lattice. Quantum geometry influences the band structure by affecting the spatial extent and shape of electron wavefunctions within the lattice.

Nontrivial quantum geometry — expressing change in wavefunctions under infinitesimal change in the Hamiltonian parameters such as momentum — appears naturally in multi-band systems. If a band is topologically nontrivial, the quantum metric is bounded from below by the topological invariant of the band. However, even if the band is topologically trivial but has Wannier states that are not fully localized on the atoms (such as in the obstructed atomic limits), the quantum geometry — usually described up to now by the Fubini-Study metric (FSM) — can be bounded from below. For flat electronic bands — whose flatness comes from quantum interference effects — it has been shown that the quantum geometry is directly related to superfluid weight and other phenomena (such as fractional Chern insulators), mostly within contrived special models. Hence, flat bands, previously thought to be detrimental to superconductivity, actually have superfluid weight bounded from below if topological. Experimental investigations of these predictions are ongoing in systems such as magic-angle twisted bilayer graphene.
Up to now, works on quantum geometry have either not included realistic interactions or treated the interaction strength as a tuning parameter. It is therefore unknown how quantum geometry affects the strength of realistic interactions. One of the main and most important interactions in solids is the electron-phonon coupling, which is crucial for superconductivity and other quantum phases. Since, for phonon-mediated superconductors, a large electron-phonon coupling constant, which quantifies the strength of the coupling, typically leads to a high superconducting transition temperature, it is natural to ask how this constant is related to the electron band geometry, which is bounded by topology. Such a relation, if revealed, may help in the search for new superconductors, given the large number of known topological materials.

Now, a team of researchers computes ^1 the contribution of electron band geometry and topology to the bulk electron-phonon coupling constant. The team built their model using the Gaussian approximation. This method simplifies complex interactions (such as those between electrons and phonons) by approximating the distributions of variables like energies as Gaussian (or normal) distributions. The scientists found that the overlap was affected by the quantum geometry of the electronic wavefunctions, thus affecting electron hopping. Electron hopping is a phenomenon in crystal lattices where electrons move from one site to another. For hopping to occur effectively, the wavefunctions of electrons at neighbouring sites must overlap, allowing electrons to tunnel through the potential barriers between sites. The researchers quantified this by computing the electron-phonon coupling constant within the Gaussian approximation. To test their theory, they applied it to two materials, graphene and magnesium diboride (MgB[2]).
The researchers chose to test their theory on graphene and MgB[2] because both materials have superconducting properties driven by electron-phonon coupling. They found that for both materials the coupling was strongly influenced by geometric contributions: specifically, the geometric contributions were measured to be 50% and 90% for graphene and MgB[2], respectively. They also found the existence of a lower bound for the contributions due to quantum geometry. This work suggests that increasing the superconducting critical temperature, which is the temperature below which superconductivity is observed, can be done by enhancing electron-phonon coupling.

Author: César Tomé López is a science writer and the editor of Mapping Ignorance

Disclaimer: Parts of this article may have been copied verbatim or almost verbatim from the referenced research paper/s.

1. Yu, J., Ciccarino, C.J., Bianco, R., Errea, I., Narang, P. & Bernevig, B.A. (2024) Non-trivial quantum geometry and the strength of electron–phonon coupling. Nat. Phys. doi: 10.1038/s41567-024-02486-0 ↩
How an Electronic Brain Works, June 1951 Radio-Electronics

This is another example of a multi-part article of which I happen to have only one (now two) of the installments - Part 9. As is often the case, each article is pretty much stand-alone and does not require that you have already seen the previous sections. In 1951, computers were still mostly analog; digital circuits were just beginning to receive serious research attention thanks to the recent advent of solid-state devices. Boolean algebra, truth tables, and combinational logic were just beginning to be taught in engineering courses. ENIAC (Electronic Numerical Integrator and Computer), first used in 1945 at the end of World War II, was the world's first general-purpose digital computer, and its active elements were vacuum tubes - about 20,000 of them (along with 7,200 crystal diodes). As you might expect, there was a lot of excitement in the electronics, scientific, and finance worlds about digital computers that would be inexpensive enough for individual corporations and institutions to afford. The time saved and errors avoided more than justified the cost. Magazines like Popular Electronics and Radio-Electronics were filled with articles to teach interested readers about the new technology. See Part IV - Long Division with Relays and Part IX - Some Electronic Circuits for Computers.

Part IX - Some electronic circuits for computers and how they are used for adding and subtracting

By Edmund C. Berkeley and Robert A. Jensen

In the previous article we began the discussion of an electric brain to be built around electronic tubes instead of relays.
We discussed the storage of information in the form of the state of a flip-flop, or pulses circulating in a delay line, or magnetized spots on a magnetic surface, or charges on the screen of an electrostatic storage tube. But how do we compute? As soon as we have arranged to read, write, and erase information at electronic speeds, we need to consider how to compute with electronic elements.

For computing purposes, a unit of information is represented as a pulse, either a rise and fall of an otherwise constant voltage, or else a fall followed by a rise. We will call the first kind a positive pulse or a 1, the second kind a negative pulse or a -1, and the absence of a pulse a 0. See Fig. 1. In a computer, the pulses are usually of a standard duration, and may be for example 1/5 of a microsecond long and spaced 4/5 of a microsecond apart. In this case the pulse repetition rate would be 1 megacycle per second. In some computers, 1 and -1 pulses are both treated as the presence of information, the binary digit 1, the logical truth value 1, or "yes"; while 0 is treated as the absence of information, the binary digit 0, the logical truth value 0, or "no."

Phase Inverter

The first computing element we need to consider is a phase inverter. In computer work, a phase inverter changes a positive pulse to a negative one, or a negative pulse to a positive one, that is, "inverts" the pulse. See Fig. 2. In this figure, and in Figs. 3 to 8, part a is the circuit diagram; b is its block diagram representation, which we use for convenience; and c is a function table that indicates what the circuit does. Any grid-controlled electronic tube can act as a phase inverter.

Logical AND Circuit

The next computing element we need to consider is called a "logical AND circuit." This is one of the meanings of the electronic term "gate." See Fig. 3. In this circuit, a pulse appears on the output line if, and only if, two pulses come in simultaneously on two input lines.
A tube with two grids, normally cut off with either one or no pulses, is one of the forms which a logical AND circuit can take. The reason for the word "and" is that we have a pulse on output line C if and only if we have a pulse on input line A and on input line B. This (with emphasis on the idea "both") is the regular meaning for "and" in logic. This type of circuit may take many forms with and without electronic tubes.

Logical OR Circuit

Another computing element is called a "logical OR circuit," sometimes called a "buffer." It allows a pulse on the output line if a pulse comes in on either one or both of the two input lines. See Fig. 4. A tube with two grids, which is normally conducting, is one of the forms which a logical OR circuit may have, although there are others. The reason for the word "or" is that a pulse is on output line C if a pulse is on input line A or if a pulse is on input line B, or both. This nonexclusive meaning of the word "or" is its regular meaning in logic.

Logical EXCEPT Circuit

Another computing element is called a "logical EXCEPT circuit," or inhibitory gate. In this a pulse is allowed out on the output line if a pulse comes in on a specified one of the two input lines except if a pulse comes in at the same time on the other input line. See Fig. 5. The circuit shown in Fig. 5 will act as a logical EXCEPT circuit. Its constants are chosen so that when A is not pulsed, whether or not B is pulsed, still there is no output on line C. If A is pulsed and B is pulsed, the two pulses coinciding in time and of opposite phase eliminate the pulse on line C. If A is pulsed and B is not pulsed, then the pulse goes on through. Other circuits besides that shown in Fig. 5 are of course possible.

Electrical Delay Lines

The computing section of an electronic computer also uses an electric delay line of very short delay, such as one pulse time, or a few pulse times. A circuit that does this appears in Fig. 6.
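For readers following along today, the function tables of the AND, OR, and EXCEPT circuits described above can be reproduced in a few lines of software. The Python sketch below is a modern illustration, not anything from the 1951 article: each pulse is modeled as a 1 (present) or 0 (absent), and each circuit as a function of its two input lines.

```python
# Each computing element modeled as a function of two input lines A and B.
# A pulse is 1, the absence of a pulse is 0.

def and_circuit(a, b):
    # Pulse on output line C only if pulses arrive on A AND B together.
    return a & b

def or_circuit(a, b):
    # Pulse on C if a pulse arrives on A or on B, or both (nonexclusive "or").
    return a | b

def except_circuit(a, b):
    # Pulse from A passes to C EXCEPT when B is pulsed at the same time.
    return a & (1 - b)

# Print the function table, as in part c of Figs. 3 to 5.
print("A B | AND OR EXCEPT")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", and_circuit(a, b), or_circuit(a, b), except_circuit(a, b))
```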
These are different from the long sonic delay lines such as the mercury tanks described in the previous article, because the purpose of the short delay line is not storage but computation. Short delay lines are important because pulses sent into the various parts of an electronic computer must arrive at the various points just when they are needed. For example, in the Bureau of Standards Eastern Automatic Computer, delay times are figured to hundredths of microseconds and pulses are timed to be safely within the planned intervals.

Now how do we take these various computing elements and begin to do computing with them? The first thing is to assemble these elements so that we can add two binary digits. Suppose there are two input lines A and B, and either one may bring in a binary digit that may be 1 or 0. Suppose that we have two output lines, one of them S, that will give us the sum without carry, and the other C, that will give us the carry. The function that we want to express is the result of adding two binary digits: A + B = C, S, where 0 + 0 = 00, 0 + 1 = 01, 1 + 0 = 01, and 1 + 1 = 10. See Fig. 7.

To make a half-adder circuit, one logical AND circuit, one logical OR circuit, and one logical EXCEPT circuit, combined as shown in Fig. 7-a, are sufficient. But we are not finished, because a previous addition may have given a carry that has to be taken into account. The circuit which will perform complete binary addition is called an adder. See Fig. 8.

Now let us trace through the adder circuit with some numbers and see what actually happens in the sequences of pulses on the several lines in the circuit. The digit 1 will represent a pulse (assumed to be positive or negative as the circuit requires), and the digit 0 will mean absence of a pulse at the proper time. At the same time the digits 1 and 0 will represent information that we desire to compute with.
Suppose we write a binary number (or more generally any set of binary digits) in the ordinary way (with the smallest-ranking digit at the right) on any circuit line where the pulses are traveling from left to right. Then the binary number will be attended to as a pattern of pulses by the circuit in just the sequence from right to left that we ordinarily deal with in arithmetic. At the same time the number will show the sequence of pulses in the order that they are handled in the circuit.

As an example of using the adder, let us add 101 (one 4, no 2 and one 1 in binary, or 5 in decimal) and 1011 (one 8, no 4, one 2, and one 1 in binary, or 11 in decimal). We write the two numbers on the input lines A and B (see Fig. 9) and now we set out to see what happens.

At the first pulse-time, the pulse (the 1) on the A line and the 1 (another pulse) on the B line go into half-adder No. 1, and give rise to no pulse on the S line (sum without carry) and a pulse on the C line (carry). The 0 on the S1 line goes into the second half-adder without delay; but the 1 on the C1 line goes into the one-pulse delay and so it is held back one pulse-time. As a result, at the first pulse-time, 0 and 0 go into the second half-adder; and so its output is 0 for the first digit of the true sum, and 0 for the carry. The 0 for the carry circles round the loop and comes up to the entrance of the one-pulse delay.

At the second pulse-time, 0 and 1 go into the first half-adder, and give rise to a 1 on the S1 line and a 0 on the C1 line. The 1 on the S1 line goes into the second half-adder without delay. Now the delayed previous carry (with no conflict from the absence of pulse that came around the loop) issues from the one-pulse delay. So 1 and 1 now enter half-adder No. 2, and from it issues a 0 on the sum line S2 and a 1 on the carry line C2, which circulates around the loop and enters the one-pulse delay so it will be ready for the next pulse-time.
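The trace above can be double-checked with a short program. This is a present-day sketch, not part of the original article: the two half-adders are functions, the one-pulse delay is the `carry` value held over to the next pulse-time, and bits enter least-significant digit first, just as the pulses do on lines A and B.

```python
def half_adder(a, b):
    # Returns (sum without carry, carry) - the S and C lines of Fig. 7.
    return a ^ b, a & b

def serial_add(a_bits, b_bits):
    # a_bits, b_bits: pulse trains with the least-significant digit first.
    out, carry = [], 0
    for t in range(max(len(a_bits), len(b_bits)) + 1):
        a = a_bits[t] if t < len(a_bits) else 0
        b = b_bits[t] if t < len(b_bits) else 0
        s1, c1 = half_adder(a, b)       # half-adder No. 1
        s2, c2 = half_adder(s1, carry)  # half-adder No. 2
        carry = c1 | c2                 # carry loops back through the one-pulse delay
        out.append(s2)
    return out

# 101 (five) plus 1011 (eleven), written least-significant digit first:
print(serial_add([1, 0, 1], [1, 1, 0, 1]))  # -> [0, 0, 0, 0, 1], i.e. 10000 binary = 16
```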
At the third, fourth, and fifth pulse-times, each of the proper operations takes place similarly, and so we get out of the second half-adder exactly the sum that we desire.

Now how do we manage to subtract? A circuit that will subtract is shown in Fig. 10, using the constituents of an adder and a logical EXCEPT circuit. The word "minuend" means "the number to be diminished." The word "subtrahend" means "the number to be subtracted." Let us test this circuit by subtracting five from eleven, or in binary subtracting 101 from 1011. The pulses appear in succession on each of the lines in the diagram, as shown. By following through the circuit, remembering what each stage does, we see that exactly the right answer, 0110 or six, appears on the output line marked "difference."

Acknowledgement is made to Henry W. Schrimpf for a number of the circuits and ideas in this article. In the next article we shall take up the multiplication and division of binary numbers using electronic circuits and begin the discussion of the control of an electronic computer. (continued next month)

Posted June 2, 2020
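Fig. 10 itself is not reproduced here, but the arithmetic it performs - serial binary subtraction, with a borrow held back one pulse-time - can be sketched in the same style as the article's adder. This Python illustration is ours, not a rendering of the actual circuit:

```python
def serial_subtract(minuend, subtrahend):
    # Bits are least-significant digit first; the minuend is assumed to be
    # at least as large as the subtrahend, as in the example in the text.
    out, borrow = [], 0
    for t in range(len(minuend)):
        m = minuend[t]
        s = subtrahend[t] if t < len(subtrahend) else 0
        out.append(m ^ s ^ borrow)  # difference digit for this pulse-time
        # Borrow into the next pulse-time (the delayed line of the circuit):
        borrow = ((1 - m) & (s | borrow)) | (s & borrow)
    return out

# Eleven minus five: 1011 - 101, written least-significant digit first.
print(serial_subtract([1, 1, 0, 1], [1, 0, 1]))  # -> [0, 1, 1, 0], i.e. 0110 binary = 6
```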
Introduction to 3D parametric modeling - Part II — Steemit

Hello everyone! In the first part of this article - Introduction to 3D parametric modeling - Part I, which I posted yesterday - you will find a description and definition of 3D parametric modeling; I explained how it works and its advantages, and I also mentioned its "basic components": features and parameters. Today I will go into detail about these basic components and I'll try to clearly answer the following questions:

1. What are the features and why are they so important?
2. What are the parameters and what are they used for?

What are parameters and features?

Let's start with features: they are the basic elements of parametric modeling, as they are all the possible operations available in the modeling environment, from the four main operations (extrude, revolve, sweep and loft), to holes, fillets, chamfers, shells and so on. Each solid body and/or surface is created through a series of subsequent modeling operations or "features"; that's why 3D parametric modeling is also called "feature-based modeling". Once we use a feature, the software stores it in chronological order, creating a kind of timeline that tells us the story of our model through the sequence of its features.

Some features require a sketch (that is, a 2D/3D drawing), while others are applied directly to objects. The first kind are called "sketched features"; examples include extrude, revolve, sweep and loft. The second kind are called "applied features", because they don't require any drawing to be applied; among them are fillets, chamfers and shells.

Each feature comes with specific attributes (a diameter, a height, a length, an angle...) which are, indeed, numbers! They are the mathematics hidden behind each object and operation, the basic elements of each feature and, in a nutshell, what we call parameters.
The most common parameters are dimensions, but they are not just numbers; some parameters may have boolean (yes/no) or text values. They are all stored in a list that can be easily opened for viewing and editing. Parameters are bidirectionally connected to geometry: they both depend on it and affect it. This means that whenever a change to a solid or a surface is required, we can make it either by directly manipulating its geometry or by changing its parameter values. This is a great advantage of parametric modeling and leads to high control and flexibility: everything is easy to change without wasting time (sometimes modeling tools are not even required - it's enough to type new values for some parameters in the list!). In software like Autodesk Inventor we can also rename, comment and mark important parameters in the list as "keys" so there is no risk of losing them among the others for future editing.

One last (but not least) consideration: when we model an object using the 3D parametric modeling method, the result is not just an ordinary object, it is an entire class of objects that is easy to modify; from the first one we can create infinite combinations of parameters, which means infinite potential objects of the same kind. That's why parametric modeling is such a great and powerful tool!

I know, that's a lot of theory! But it is extremely important to understand the basics of parametric modeling and its great potential before starting with 3D modeling. Thanks everyone for reading this post, stay tuned for the next one: it will be about the four basic 3D modeling operations.

Note: all images in this article were created by the author; the word cloud at the beginning was created with a free online tool (Wordclouds.com).
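To make the idea of "an entire class of objects" concrete outside of any CAD package, here is a tiny Python sketch of my own - an analogy, not Autodesk Inventor's actual API: a box "feature" is driven entirely by a named parameter list, and editing a value in that list regenerates the geometry with no remodeling.

```python
class ParametricBox:
    """A minimal 'feature' whose geometry is driven by named parameters."""

    def __init__(self, width, depth, height):
        # The parameter list: every dimension lives here, not in the geometry.
        self.params = {"width": width, "depth": depth, "height": height}

    def set_param(self, name, value):
        # Editing the list is enough; no modeling tools are required.
        self.params[name] = value

    def volume(self):
        # Derived geometry is recomputed from the current parameter values.
        p = self.params
        return p["width"] * p["depth"] * p["height"]

box = ParametricBox(10, 20, 5)
print(box.volume())         # -> 1000
box.set_param("height", 8)  # one edit -> a new member of the same class of objects
print(box.volume())         # -> 1600
```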
GreeneMath.com | Ace your next Math Test!

Lesson Objectives
• Learn about complex numbers
• Learn how to perform operations with complex numbers
• Learn how to rationalize imaginary denominators

How to Perform Operations with Complex Numbers

How to Add and Subtract Complex Numbers

Up to this point, all numbers that we have worked with were real numbers. A complex number is formed using the imaginary unit i along with our real numbers. If we let "a" and "b" be any two real numbers then: a + bi is a complex number. The "a" is the real part, whereas the "bi" is the imaginary part. We can perform operations with complex numbers using the same tools we used with real numbers. Let's look at an example that involves addition and subtraction.

Example 1: Simplify each. $$\require{cancel}(1 + 2i) + (-6 + 9i) - (-9 + 2i)$$

We will first change the subtraction operation into addition of the opposite. $$(1 + 2i) + (-6 + 9i) + (9 - 2i)$$

Now we can use our commutative, associative, and distributive properties to find the sum. We want to add the real parts and imaginary parts separately. $$(1 - 6 + 9) + (2i + 9i - 2i)=4 + (2 + 9 - 2)i=4 + 9i$$

How to Multiply and Divide Complex Numbers

We may also run into multiplication and division with complex numbers. Let's look at an example of multiplication.

Example 2: Simplify each. $$(3 + 8i)(-7 - 3i)$$

For this problem, we can use FOIL:
F » 3 • -7 = -21
O » 3 • -3i = -9i
I » 8i • -7 = -56i
L » 8i • -3i = -24i²

If we simplify: -21 - 9i - 56i - 24i² = -21 - 65i + 24 = 3 - 65i

Note: Remember that i² is (-1). $$(3 + 8i)(-7 - 3i)=3 - 65i$$

Rationalizing Imaginary Denominators

When we start dividing with complex numbers, we will run into problems where we have i in the denominator. Since i by definition is the square root of (-1), it represents a radical and can't be in the denominator of a simplified radical expression. Let's look at an example.

Example 3: Simplify each. $$\frac{1}{4i}$$

Remember that i represents the square root of (-1).
$$\frac{1}{4 \cdot \sqrt{-1}}$$

We can multiply both the numerator and denominator by the square root of (-1), or i. We know that i² is (-1): $$\frac{1}{4i}\cdot \frac{i}{i}=\frac{i}{-1 \cdot 4}=-\frac{i}{4}$$

Another scenario that will occur is trying to rationalize a complex denominator with two terms. Recall that when we faced this problem with real numbers, we multiplied the numerator and denominator of the fraction by the conjugate of the denominator. We will use the same technique here. When we multiply complex conjugates together, the result is a real number: $$(a + bi)(a - bi) = a^{2} + b^{2}$$

Why is there a plus instead of a minus? This comes from the i² that occurs; it changes the sign from a minus into a plus. Let's look at an example.

Example 4: Simplify each. $$\frac{10 + 8i}{2 + 6i}$$

We can simplify by multiplying both the numerator and the denominator by the complex conjugate of the denominator. $$\frac{10 + 8i}{2 + 6i}\cdot \frac{2 - 6i}{2 - 6i}=\frac{68 - 44i}{40}$$

We can factor out a 4 from the numerator and denominator and cancel. $$\frac{\cancel{4}(17 - 11i)}{\cancel{4}\cdot 10}=\frac{17 - 11i}{10}$$

We can write this in standard form (a + bi) by splitting up the fraction. $$\frac{17 - 11i}{10}=\frac{17}{10}- \frac{11i}{10}$$

Skills Check:

Example #1: Simplify each. $$(-4 - 6i) + (-8 + 3i)$$

Example #2: Simplify each. $$(-2 - 5i)(-3 - 4i)$$

Example #3: Simplify each. $$\frac{3 + i}{2 + 7i}$$

Watch the Step by Step Video Lesson | Take the Practice Test
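If you would like to check these results with a computer, Python's built-in complex type evaluates the same operations directly (Python writes the imaginary unit as j rather than i). This is a verification aid added here, not part of the lesson:

```python
# Example 1: (1 + 2i) + (-6 + 9i) - (-9 + 2i)
print((1 + 2j) + (-6 + 9j) - (-9 + 2j))  # -> (4+9j)

# Example 2: (3 + 8i)(-7 - 3i)
print((3 + 8j) * (-7 - 3j))              # -> (3-65j)

# Example 3: 1/(4i)
print(1 / 4j)                            # -> -0.25j

# Example 4: (10 + 8i)/(2 + 6i); expect 17/10 - (11/10)i
print((10 + 8j) / (2 + 6j))              # close to 1.7 - 1.1i, up to float rounding
```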
Pafnuty Chebyshev - Wikipedia

Pafnuty Chebyshev

Born: 16 May 1821^[1]
Died: 8 December 1894 (aged 73)^[1]
Nationality: Russian
Other names: Chebysheff, Chebyshov, Tschebyscheff, Tschebycheff
Alma mater: Moscow University
Known for: Work on probability, statistics, mechanics, analytical geometry and number theory
Awards: Demidov Prize (1849)
Fields: Mathematician
Institutions: St. Petersburg University
Academic advisors: Nikolai Brashman
Notable students: Dmitry Grave, Aleksandr Korkin, Aleksandr Lyapunov, Andrey Markov, Vladimir Andreevich Markov, Konstantin Posse, Yegor Ivanovich Zolotarev

Pafnuty Lvovich Chebyshev (Russian: Пафну́тий Льво́вич Чебышёв, IPA: [pɐfˈnutʲɪj ˈlʲvovʲɪtɕ tɕɪbɨˈʂof]) (16 May [O.S. 4 May] 1821 – 8 December [O.S. 26 November] 1894)^[2] was a Russian mathematician who is considered to be the founding father of Russian mathematics. Chebyshev is known for his fundamental contributions to the fields of probability, statistics, mechanics, and number theory. A number of important mathematical concepts are named after him, including the Chebyshev inequality (which can be used to prove the weak law of large numbers), the Bertrand–Chebyshev theorem, Chebyshev polynomials, Chebyshev linkage, and Chebyshev bias.

The surname Chebyshev has been transliterated in several different ways, like Tchebichef, Tchebychev, Tchebycheff, Tschebyschev, Tschebyschef, Tschebyscheff, Čebyčev, Čebyšev, Chebysheff, Chebychov, Chebyshov (according to native Russian speakers, this one provides the closest pronunciation in English to the correct pronunciation in old Russian), and Chebychev, a mixture between English and French transliterations considered erroneous. It is one of the most well-known data-retrieval nightmares in mathematical literature. Currently, the English transliteration Chebyshev has gained widespread acceptance, except by the French, who prefer Tchebychev. The correct transliteration according to ISO 9 is Čebyšëv.
The American Mathematical Society adopted the transcription Chebyshev in its Mathematical Reviews.^[3] His first name comes from the Greek Paphnutius (Παφνούτιος), which in turn takes its origin in the Coptic Paphnuty (Ⲡⲁⲫⲛⲟⲩϯ), meaning "that who belongs to God" or simply "the man of God". One of nine children,^[4] Chebyshev was born in the village of Okatovo in the district of Borovsk, province of Kaluga. His father, Lev Pavlovich, was a Russian nobleman and wealthy landowner. Pafnuty Lvovich was first educated at home by his mother Agrafena Ivanovna Pozniakova (in reading and writing) and by his cousin Avdotya Kvintillianovna Sukhareva (in French and arithmetic). Chebyshev mentioned that his music teacher also played an important role in his education, for she "raised his mind to exactness and analysis". Trendelenburg's gait affected Chebyshev's adolescence and development. From childhood, he limped and walked with a stick and so his parents abandoned the idea of his becoming an officer in the family tradition. His disability prevented his playing many children's games and he devoted himself instead to mathematics. In 1832, the family moved to Moscow, mainly to attend to the education of their eldest sons (Pafnuty and Pavel, who would become lawyers). Education continued at home and his parents engaged teachers of excellent reputation, including (for mathematics and physics) the senior Moscow University teacher Platon Pogorelsky, who had taught, among others, the future writer Ivan Turgenev. In summer 1837, Chebyshev passed the registration examinations and, in September of that year, began his mathematical studies at the second philosophical department of Moscow University. His teachers included N.D. Brashman, N.E. Zernov and D.M. Perevoshchikov of whom it seems clear that Brashman had the greatest influence on Chebyshev. Brashman instructed him in practical mechanics and probably showed him the work of French engineer J.V. Poncelet. 
In 1841 Chebyshev was awarded the silver medal for his work "calculation of the roots of equations," which he had finished in 1838. In this, Chebyshev derived an approximating algorithm for the solution of algebraic equations of n^th degree based on Newton's method. In the same year, he finished his studies as "most outstanding candidate."

In 1841, Chebyshev's financial situation changed drastically. There was famine in Russia, and his parents were forced to leave Moscow. Although they could no longer support their son, he decided to continue his mathematical studies and prepared for the master examinations, which lasted six months. Chebyshev passed the final examination in October 1843 and, in 1846, defended his master thesis "An Essay on the Elementary Analysis of the Theory of Probability." His biographer Prudnikov suggests that Chebyshev was directed to this subject after learning of recently published books on probability theory or on the revenue of the Russian insurance industry.

In 1847, Chebyshev promoted his thesis pro venia legendi "On integration with the help of logarithms" at St Petersburg University and thus obtained the right to teach there as a lecturer. At that time some of Leonhard Euler's works were rediscovered by P. N. Fuss and were being edited by Viktor Bunyakovsky, who encouraged Chebyshev to study them. This would come to influence Chebyshev's work. In 1848, he submitted his work The Theory of Congruences for a doctorate, which he defended in May 1849.^[1] He was elected an extraordinary professor at St Petersburg University in 1850, ordinary professor in 1860 and, after 25 years of lectureship, he became merited professor in 1872. In 1882 he left the university and devoted his life to research.

During his lectureship at the university (1852–1858), Chebyshev also taught practical mechanics at the Alexander Lyceum in Tsarskoe Selo (now Pushkin), a southern suburb of St Petersburg.
His scientific achievements were the reason for his election as junior academician (adjunkt) in 1856. Later, he became an extraordinary (1856) and in 1858 an ordinary member of the Imperial Academy of Sciences. In the same year he became an honorary member of Moscow University. He accepted other honorary appointments and was decorated several times. In 1856, Chebyshev became a member of the scientific committee of the ministry of national education. In 1859, he became an ordinary member of the ordnance department of the academy with the adoption of the headship of the commission for mathematical questions according to ordnance and experiments related to ballistics. The Paris academy elected him corresponding member in 1860 and full foreign member in 1874. In 1893, he was elected honorable member of the St. Petersburg Mathematical Society, which had been founded three years earlier.

Chebyshev died in St Petersburg on 26 November 1894.

Chebyshev is known for his work in the fields of probability, statistics, mechanics, and number theory.

The Chebyshev inequality states that if X is a random variable with standard deviation σ > 0, then the probability that the outcome of X is no less than aσ away from its mean is no more than 1/a²:

Pr(|X − E(X)| ≥ aσ) ≤ 1/a².

The Chebyshev inequality is used to prove the weak law of large numbers.

The Bertrand–Chebyshev theorem (1845, 1852) states that for any n > 3, there exists a prime number p such that n < p < 2n. This is a consequence of the Chebyshev inequalities for the number π(n) of prime numbers less than n, which state that π(n) is of the order of n/log(n).
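As a concrete check, both sides of the inequality can be computed exactly for a simple discrete distribution. The short Python sketch below uses a fair six-sided die; it is an illustration added here, not part of the encyclopedia entry:

```python
# Fair die: outcomes 1..6, each with probability 1/6.
outcomes = [1, 2, 3, 4, 5, 6]
mean = sum(outcomes) / 6                                      # 3.5
sigma = (sum((x - mean) ** 2 for x in outcomes) / 6) ** 0.5   # standard deviation

a = 1.2
# Exact probability that the outcome lies at least a*sigma from the mean:
p = sum(1 for x in outcomes if abs(x - mean) >= a * sigma) / 6
print(round(p, 4), "<=", round(1 / a ** 2, 4))  # Chebyshev guarantees p <= 1/a^2
```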
A more precise form is given by the celebrated prime number theorem: the quotient of the two expressions approaches 1.0 as n tends to infinity.

Chebyshev is also known for the Chebyshev polynomials and the Chebyshev bias – the difference between the number of primes that are congruent to 3 (modulo 4) and 1 (modulo 4). Chebyshev was the first person to think systematically in terms of random variables and their moments and expectations.^[5]

Chebyshev on a 2021 stamp of Russia

Chebyshev is considered to be a founding father of Russian mathematics.^[1] Among his well-known students were the mathematicians Dmitry Grave, Aleksandr Korkin, Aleksandr Lyapunov, and Andrei Markov. According to the Mathematics Genealogy Project, Chebyshev has 16,874 mathematical "descendants" as of February 2024.^[6]

The lunar crater Chebyshev and the asteroid 2010 Chebyshev were named to honor his major achievements in the mathematical realm.^[7]

• Tchebychef, P. L. (1899), Markov, Andrey Andreevich; Sonin, N. (eds.), Oeuvres, vol. I, New York: Commissionaires de l'Académie impériale des sciences, MR 0147353. Reprinted by Chelsea, 1962.
• Tchebychef, P. L. (1907), Markov, Andrey Andreevich; Sonin, N. (eds.), Oeuvres, vol. II, New York: Commissionaires de l'Académie impériale des sciences, MR 0147353. Reprinted by Chelsea, 1962.
• Butzer (1999), "P. L. Chebyshev (1821–1894): A Guide to his Life and Work", Journal of Approximation Theory, 96: 111–138, doi:10.1006/jath.1998.3289
Assigned zero constant value might be used in a division by zero

An attempt to do a division or modulo operation using zero as the divisor causes a runtime error. Division by zero defects often occur due to ineffective error handling or race conditions, and typically cause abnormal program termination. Before a value is used as the divisor of a division or modulo operation in C/C++ code, it must be checked to confirm that it is not equal to zero.

The DBZ checkers look for instances in which a zero constant value is used as the divisor of a division or modulo operation. The DBZ.GENERAL checker flags situations in which a variable that has been assigned a zero constant value locally or as the result of a function call might subsequently be used explicitly or passed to a function that might use it as a divisor of a division or modulo operation without checking it for the zero value.

Vulnerability and risk

Integer division by zero usually results in the failure of the process or an exception. It can also result in success of the operation, but give an erroneous answer. Floating-point division by zero is more subtle. It depends on the implementation of the compiler. If the compiler is following the IEEE floating-point standard (IEEE 754), then floating-point division by zero has a well-defined result. However, the C and C++ standards do not enforce compliance with IEEE 754. Thus, floating-point division by zero has undefined behavior in C and C++ and might result in the failure of the process or an exception.

Division by zero issues typically occur due to ineffective exception handling. To avoid this vulnerability, check for a zero value before using it as the divisor of a division or modulo operation.
Vulnerable code example

    int compute_mean(int array[], size_t size)
    {
        int sum = 0;
        for (size_t i = 0; i < size; ++i) {
            sum += array[i];
        }
        return sum / size;
    }

    void use_mean()
    {
        int size = 0;
        int mean = compute_mean(0, size);
    }

Klocwork produces an issue report at line 13 indicating that the value of the variable 'size' is 0 at the call of 'compute_mean' and it is used as the divisor of the division operation on line 7. A division by zero can produce unexpected and unintended results.

Fixed code example

    int compute_mean(int array[], size_t size)
    {
        if (size == 0) {
            return 0; // or exceptional case.
        }
        int sum = 0;
        for (size_t i = 0; i < size; ++i) {
            sum += array[i];
        }
        return sum / size;
    }

    void use_mean()
    {
        int size = 0;
        int mean = compute_mean(0, size);
    }

The problem from the previous snippet is fixed: the input variable 'size' is checked for the exceptional case of the zero constant value in line 3, which prevents the division from happening in this specific case.

This checker can be extended through the Klocwork knowledge base. See Tuning C/C++ analysis for more information.

This checker does not deal with a global variable assigned to zero outside of the currently analyzed function. This means that defects involving global variables may not be detected by this checker. For example:

    static int myzero = 0;

    int do_dbz()
    {
        return 23 / myzero;
    }

This checker does not deal with abstract symbolic expressions evaluating to zero. This means that defects involving this kind of reasoning may not be detected by this checker. For example:

    void do_dbz_func(int x, int y)
    {
        int z = 23;
        z /= x - y;  // Here, x == y will give 0 for x - y. Divide by zero will not be detected.
    }

    void do_dbz(int x)
    {
        do_dbz_func(x, x);  // Divide by zero will happen here since x - x = 0. Not detected by this checker.
    }
Punto Banco - 1

All gambling games in casinos favour the casino. That means that in the long run, a casino should always make money from the players. Punto Banco, the preferred game of James Bond, is one such game and makes an ideal choice for modelling. We are going to investigate this game and try to predict how well a casino will do over the long term by getting the computer to play thousands of hands for us. If we can model Punto Banco, there's nothing stopping us looking at all kinds of other games, such as roulette.

How do you play Punto Banco?

We can't model anything unless we are crystal clear how it works. Punto Banco is a game of chance. There are only two players, and they are called the Bank and the Player. Gamblers like Phil Ivey have to decide which one of these two players they think will win, or if they think a draw will happen. The game is very mechanical. There are no choices to be made by people as cards are dealt to a strict formula, as we will see. It's also worth noting that these rules do vary slightly from casino to casino, but these are the rules we will follow.

* The aim of the game is to get a total of 9.
* A casino will typically shuffle six packs of cards and put them in a dealing device called a 'shoe'.
* The Player and the Bank both get two cards each (Player first, then the Bank, then the Player again and the last card goes to the Bank), which are dealt face up, so everyone can see the cards.
* You add up the cards for both the Player and the Bank to get their totals.
* 10 and picture cards count as 0. An ace counts as 1.
* You only count the last digit of the total.

3 + 5 = 8
Jack + 4 = 4
5 + 7 = 2 (The total is 12, but remember, you only count the last digit.)
King + Jack = 0
10 + 6 = 6 (The total is 16, but remember, you only count the last digit.)
6 + 9 = 5 (The total is 15, but remember, you only count the last digit.)
Queen + 9 = 9 (A total of 9 is the top hand in Punto Banco and is also called a 'natural'.)
2 + 6 = 8 (This is the second-best hand and is also called a 'natural'.)

* Next, a third card might be dealt to the Player. This is done strictly in accordance with the following table.

    Player has a total of:    Action
    0, 1, 2, 3, 4, 5          Player must take another card.
    6, 7, 8, 9                Player must stick with the cards they have.

* Next, a third card might be dealt to the Bank. This is done strictly in accordance with the following table.

    Bank has a total of:    Action
    0, 1, 2                 Bank takes another card regardless of what the Player has.
    3                       Bank must take another card if the Player has any total except 8.
    4                       Bank must take another card only if the Player has a total of 2, 3, 4, 5, 6 or 7.
    5                       Bank must take another card only if the Player has a total of 4, 5, 6 or 7.
    6                       Bank must take another card only if the Player has a total of 6 or 7.
    7, 8, 9                 The Bank must stick with what they have.

The winner of a hand is the one whose final total is closest to 9. If both hands have the same total then it is a draw. After a hand, the gamblers who bet on the Bank, the Player, or a draw are paid:

* If you bet on the Player winning, you get your bet doubled. So if you bet £100 on the Player winning and they won, you would get £200 back.
* If you bet on the Bank winning, you get your bet back plus you win 95% of what you bet. So if you bet £100 on the Bank winning and they won, you would get £100 + £95 = £195 back.
* If you bet on a draw, you get your bet back plus you win 8 times what you bet. So if you bet £100 on a draw, you would get £900 back.

It might seem complicated to start with, but it's very mechanical and after a few games and following the tables above, it soon becomes clear how to play. If you are still a little unclear, watch some YouTube videos of the game being played, but as mentioned before, remember that the rules can change very slightly from casino to casino, although the main thrust of the game doesn't.
What we are going to do

We are going to model Punto Banco. Then we are going to see if we can win against the bank, how well the bank will do, how long it might take the bank to win all of our money, and what we might change to put the odds more in our favour.

Get some packs of cards, paper, pen and calculator. Get into groups and play Punto Banco! One person should be the banker (the person who deals the cards). Make sure you fully understand the rules and how the betting works, referring to the tables and rules above if you need to. You could all start with an imaginary £1000 and place bets. The banker should start with £10000. See if you can beat the bank like James Bond!
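Once you are comfortable playing by hand, the drawing tables above translate almost line-for-line into code. Here is a minimal Python sketch of a single hand. One simplifying assumption: it draws from an infinite shuffled shoe (random ranks with replacement) rather than dealing six packs without replacement.

```python
import random

def draw_card():
    # Ranks 1 (ace) to 13; 10s and picture cards count as 0 in Punto Banco.
    rank = random.randint(1, 13)
    return rank if rank < 10 else 0

def total(cards):
    # Only the last digit of the sum counts.
    return sum(cards) % 10

def play_hand():
    """Play one hand; returns 'Player', 'Bank' or 'Draw'."""
    player = [draw_card(), draw_card()]
    bank = [draw_card(), draw_card()]
    p, b = total(player), total(bank)
    # Player's table: draw on 0-5, stick on 6-9.
    if p <= 5:
        player.append(draw_card())
        p = total(player)
    # Bank's table, as given on this page (based on the Player's total).
    if (b <= 2
            or (b == 3 and p != 8)
            or (b == 4 and 2 <= p <= 7)
            or (b == 5 and 4 <= p <= 7)
            or (b == 6 and 6 <= p <= 7)):
        bank.append(draw_card())
        b = total(bank)
    if p > b:
        return "Player"
    if b > p:
        return "Bank"
    return "Draw"

# Let the computer play thousands of hands and count the outcomes.
results = [play_hand() for _ in range(10000)]
print({who: results.count(who) for who in ("Player", "Bank", "Draw")})
```

A more faithful model would deal from a six-pack shoe without replacement, and then track each gambler's bankroll using the payout rules, but this sketch is enough to start estimating how often each outcome occurs.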
Assignment-07-C Epicycloid Evolute-sehenry

Working on this was a bit tricky, but once I understood the concepts and structure behind making these shapes, it became easier. The hardest part for me was using the equations from the website and implementing them in my code. I had to browse through a few posts by other students to get a rough idea of what to do. From the start I wanted to do an epicycloid evolute because it looked really appealing and looked similar to a flower.

//Seth Henry
//Tuesdays at 10:30
//Assignment: Project 7 Composition with Curves (Epicycloid Evolute)

//Global Variables
var nPoints = 400;
var conX;
var scaleA;
var n = 10;

function setup() {
    createCanvas(400, 400);
}

function draw() {
    background(100, 50, mouseX); // changes background color based on the mouse position
    fill(mouseX, 100, mouseY);   // changes the epicycloid color based on the mouse position

    var a = 150.0; // radius a
    var b = 50.0;  // radius b
    conX = constrain(mouseX, 0, width);             // constrain around mouseX
    var angle = map(conX, 0, width, 0, 6 * TWO_PI); // rotate based on the constraint (conX)
    scaleA = map(conX, 0, width, 0, 3);

    translate(width / 2, height / 2); // draw everything from the centre of the canvas

    // Epicycloid Outer
    push();
    rotate(angle);         // rotate clockwise
    scale(scaleA, scaleA); // change the size of the epicycloid outer portion
    for (var i = 0; i < 200; i++) {
        var theta = map(i, 0, nPoints, 0, 4 * TWO_PI);
        var x = (a / (a + 2 * b)) * (a + b) * cos(theta) + b * cos(((a + b) / b) * theta); // x of epicycloid
        var y = (a / (a + 2 * b)) * (a + b) * sin(theta) + b * sin(((a + b) / b) * theta); // y of epicycloid
        point(x, y);
    }
    pop();

    // Epicycloid Inner (no scaling, rotates the opposite way of the outer epicycloid)
    push();
    rotate(-angle);
    for (var i = 0; i < 200; i++) {
        var theta = map(i, 0, nPoints, 0, 4 * TWO_PI);
        var x = (a / (a + 2 * b)) * (a + b) * cos(theta) + b * cos(((a + b) / b) * theta);
        var y = (a / (a + 2 * b)) * (a + b) * sin(theta) + b * sin(((a + b) / b) * theta);
        point(x, y);
    }
    pop();

    // The evolute portion of the flower
    push();
    rotate(angle); // rotate in the same direction as the outer epicycloid
    for (var i = 0; i < 200; i++) {
        var theta = map(i, 0, nPoints, 0, 5 * TWO_PI);
        var petalX = a * (((n - 1) * cos(theta) + cos((n - 1) * theta)) / n); // x petal of evolute
        var petalY = a * (((n - 1) * sin(theta) + sin((n - 1) * theta)) / n); // y petal of evolute
        rect(petalX - 5, petalY - 5, 30, 30); // draws the inside petals
    }
    pop();
}
Solving Equations Using Multiplication And Division Worksheet Pdf

Math, and multiplication in particular, forms the cornerstone of countless academic disciplines and real-world applications. Yet, for many students, mastering multiplication can present a challenge. To address this difficulty, teachers and parents have embraced a practical tool: worksheets on solving equations using multiplication and division.

Intro to Solving Equations Using Multiplication And Division Worksheets

A one-step equation is an algebraic equation you can solve in only one step. Once you've solved it, you've found the value of the variable that makes the equation true. To solve one-step equations, we apply the inverse (opposite) of whatever operation is being performed on the variable, so we get the variable by itself. Printable sets such as Kuta Software's Infinite Algebra 1 "One-Step Equations" worksheets provide plenty of these equations to practise on.

Significance of Multiplication Practice

Understanding multiplication is essential, laying a strong foundation for advanced mathematical concepts. These worksheets offer structured and targeted practice, promoting a deeper understanding of this fundamental operation.
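As a quick, self-contained illustration of the inverse-operation idea described above (my own example, not taken from any particular worksheet): a multiplication on the variable is undone by division, and a division is undone by multiplication.

```python
# Hypothetical helpers, for illustration only: solve one-step equations of the
# form a*x = b (undo with division) or x/a = b (undo with multiplication).

def solve_times(a, b):
    """Solve a*x = b by dividing both sides by a."""
    return b / a

def solve_divide(a, b):
    """Solve x/a = b by multiplying both sides by a."""
    return b * a

# 3x = 12  ->  divide both sides by 3  ->  x = 4
print(solve_times(3, 12))
# x/5 = 4  ->  multiply both sides by 5  ->  x = 20
print(solve_divide(5, 4))
```

Checking the answer by substituting it back into the original equation is a good habit to build into any worksheet session.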
Evolution of Solving Equations Worksheets

Teaching guides emphasize mastering multiplication and division before diving into equation-solving strategies. The big goal is to teach students to be flexible thinkers when it comes to solving an equation: they should be able to manipulate the numbers in different ways in order to solve a problem. Relevant standards include:

* 3.OA.A.3: Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities.
* 3.OA.A.4: Determine the unknown whole number in a multiplication or division equation relating three whole numbers.
* 3.OA.B.5: Apply properties of operations as strategies to multiply and divide.

From conventional pen-and-paper exercises to interactive digital formats, these worksheets have evolved to suit diverse learning styles.

Types of Worksheets

Fundamental Multiplication Sheets: easy exercises concentrating on multiplication tables, helping students develop a strong math base.

Word Problem Worksheets: real-life scenarios integrated into problems, strengthening critical reasoning and application skills.

Timed Multiplication Drills: tests designed to improve speed and accuracy, aiding quick mental math.
Benefits of Using These Worksheets

Improved Mathematical Skills: consistent practice sharpens multiplication proficiency, enhancing overall math ability.

Enhanced Problem-Solving: word problems in worksheets develop logical reasoning and strategy application.

Self-Paced Learning: worksheets suit individual learning speeds, fostering a comfortable and flexible learning environment.

How to Develop Engaging Worksheets

Incorporating Visuals and Colors: vibrant visuals capture attention, making worksheets visually appealing and engaging.

Including Real-Life Scenarios: connecting multiplication to everyday situations adds relevance and practicality to exercises.

Tailoring Worksheets to Different Ability Levels: customizing worksheets for varying proficiency levels ensures inclusive learning.

Interactive and Online Multiplication Resources

Digital Multiplication Tools and Games: technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable.

Interactive Websites and Apps: online platforms provide diverse and accessible multiplication practice, supplementing traditional worksheets.
Personalizing Worksheets for Various Learning Styles

Visual Learners: visual aids and diagrams aid comprehension for students inclined toward visual learning.

Auditory Learners: verbal multiplication problems or mnemonics cater to students who grasp concepts through listening.

Kinesthetic Learners: hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.

Tips for Effective Implementation

Consistency in Practice: regular practice reinforces multiplication skills, promoting retention and fluency.

Balancing Repetition and Variety: a mix of repeated exercises and varied problem formats maintains interest and comprehension.

Providing Constructive Feedback: feedback helps identify areas for improvement, encouraging continued progress.

Challenges in Multiplication Practice and Solutions

Motivation and Engagement: dull drills can lead to disinterest; innovative approaches can reignite motivation.

Overcoming Fear of Math: negative perceptions of mathematics can hinder progress; creating a positive learning environment is essential.

Impact on Academic Performance

Research shows a positive connection between regular worksheet use and improved math performance. These worksheets are versatile tools, promoting mathematical proficiency while accommodating diverse learning styles. From basic drills to interactive online resources, they not only boost multiplication skills but also promote critical reasoning and problem-solving abilities.
A related idea worth reviewing alongside these worksheets: the commutative law (or commutative property) states that you can change the order of the numbers in an arithmetic problem and still get the same result. It works within addition or within multiplication, but not when addition and multiplication are mixed in the same expression. For example, 3 + 5 = 5 + 3 and 9 × 5 = 5 × 9. A fun activity that you can use in the classroom is to brainstorm examples of each.

FAQs (Frequently Asked Questions)

Are these worksheets suitable for all age groups?
Yes, worksheets can be customized to different ages and ability levels, making them versatile for different learners.

How often should students practice using them?
Consistent practice is vital. Regular sessions, ideally a couple of times a week, can produce significant improvement.

Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning techniques for comprehensive skill development.

Are there online platforms offering free worksheets of this kind?
Yes, many educational websites offer free access to a wide range of them.

How can parents support their children's multiplication practice at home?
Encouraging regular practice, providing help, and creating a positive learning environment are beneficial steps.
Hi Fellow CNC Enthusiasts

I've just upgraded my Multicam router engraver (8ft x 4ft) with Mach3 and am delighted with the result! Am now trying to figure out "wrapping" using my 4th axis and would like to connect up with other guys using a similar setup. My current frustration is trying to draw a wrap in VCarve Pro - seems to be an issue getting the Z set right - have figured that the cutting depth is homed at material centre (Y), but I'm not getting a 360 degree wrap in the 3D view. I'm having to set material thickness at half, so I only see half the material in the 3D preview......Grrrrrr! Also trying to figure out calibration of the 4th axis - logic tells me it would have to work on degrees rather than a lineal measurement, but I can't see how to do it. Have been in the CNC game for over 10 years and can see great benefit in sharing ideas etc. I operate a 3D signage manufacturing business here - shopfront signs, light boxes, engraved timber signs and so on. Look forward to connecting up with similar 3D chomping operators around the world......Cheers!
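For anyone puzzling over the same thing, here's my rough back-of-envelope reasoning on the degrees question (my own sketch, not from the Mach3 or VCarve documentation, and the motor/gearing numbers are made up): to wrap a flat Y distance onto a cylinder, the linear distance maps to an angle through the circumference, and the rotary axis calibration becomes steps per degree rather than steps per mm.

```python
import math

# Illustrative numbers only: a hypothetical rotary-axis setup.
STEPS_PER_REV_MOTOR = 200 * 8  # full steps per rev * microstepping (assumed)
GEAR_RATIO = 4.0               # rotary-table reduction (assumed)

def steps_per_degree():
    """Calibration for the A axis in steps per degree, not per mm."""
    return STEPS_PER_REV_MOTOR * GEAR_RATIO / 360.0

def wrap_y_to_degrees(y_mm, diameter_mm):
    """Map a flat Y move (mm) onto rotation of a cylinder of given diameter."""
    circumference = math.pi * diameter_mm
    return (y_mm / circumference) * 360.0

print(steps_per_degree())         # steps/degree for the assumed setup
print(wrap_y_to_degrees(50, 60))  # a 50 mm flat move wrapped onto a 60 mm cylinder
```

So a move of one full circumference should come out as exactly 360 degrees - a handy sanity check when dialling in the A axis.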
Using Weights

One of the advantages of using a Green's functions approach to interpolation is that we can easily weight the data to give each point more or less influence over the results. This is a good way to not let data points with large uncertainties bias the interpolation or the data decimation.

# The weights vary a lot so it's better to plot them using a logarithmic color scale
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import numpy as np
import verde as vd

We'll use some sample GPS vertical ground velocity data which has variable uncertainties associated with each data point. The data are loaded as a pandas.DataFrame:

data = vd.datasets.fetch_california_gps()
print(data.head())

    latitude   longitude     height  ... std_north  std_east  std_up
0  34.116409  242.906804  762.11978  ...    0.0002   0.00037  0.00053
1  34.116409  242.906804  762.10883  ...    0.0002   0.00037  0.00053
2  34.116409  242.906805  762.09364  ...    0.0002   0.00037  0.00053
3  34.116409  242.906805  762.09073  ...    0.0002   0.00037  0.00053
4  34.116409  242.906805  762.07699  ...    0.0002   0.00037  0.00053

[5 rows x 9 columns]

Let's plot our data using Cartopy to see what the vertical velocities and their uncertainties look like. We'll make a function for this so we can reuse it later on.
def plot_data(coordinates, velocity, weights, title_data, title_weights):
    "Make two maps of our data, one with the data and one with the weights/uncertainty"
    fig, axes = plt.subplots(
        1, 2, figsize=(9.5, 7), subplot_kw=dict(projection=ccrs.Mercator())
    )
    crs = ccrs.PlateCarree()
    ax = axes[0]
    ax.set_title(title_data)
    maxabs = vd.maxabs(velocity)
    pc = ax.scatter(
        *coordinates, c=velocity, s=30, cmap="seismic", vmin=-maxabs, vmax=maxabs,
        transform=crs,
    )
    plt.colorbar(pc, ax=ax, orientation="horizontal", pad=0.05).set_label("m/yr")
    ax = axes[1]
    ax.set_title(title_weights)
    pc = ax.scatter(
        *coordinates, c=weights, s=30, cmap="magma", transform=crs, norm=LogNorm()
    )
    plt.colorbar(pc, ax=ax, orientation="horizontal", pad=0.05)
    plt.show()

# Plot the data and the uncertainties
plot_data(
    (data.longitude, data.latitude),
    data.velocity_up,
    data.std_up,
    "Vertical GPS velocity",
    "Uncertainty (m/yr)",
)

Weights in data decimation

BlockReduce can't output weights for each data point because it doesn't know which reduction operation it's using. If you want to do a weighted interpolation, like verde.Spline, BlockReduce won't propagate the weights to the interpolation function. If your data are relatively smooth, you can use verde.BlockMean instead to decimate data and produce weights. It can calculate different kinds of weights, depending on configuration options and what you give it as input. Let's explore all of the possibilities.

mean = vd.BlockMean(spacing=15 / 60)
print(mean)

BlockMean(adjust='spacing', center_coordinates=False, region=None, spacing=0.25, uncertainty=False)

Option 1: No input weights

In this case, we'll get a standard mean and the output weights will be 1 over the variance of the data in each block:

\[\bar{d} = \dfrac{\sum\limits_{i=1}^N d_i}{N} \: , \qquad \sigma^2 = \dfrac{\sum\limits_{i=1}^N (d_i - \bar{d})^2}{N} \: , \qquad w = \dfrac{1}{\sigma^2}\]

in which \(N\) is the number of data points in the block, \(d_i\) are the data values in the block, and the output values for the block are the mean data \(\bar{d}\) and the weight \(w\).

Notice that data points that are more uncertain don't necessarily have smaller weights.
Instead, the blocks that contain data with sharper variations end up having smaller weights, like the data points in the south.

coordinates, velocity, weights = mean.filter(
    coordinates=(data.longitude, data.latitude), data=data.velocity_up
)
plot_data(
    coordinates,
    velocity,
    weights,
    "Mean vertical GPS velocity",
    "Weights based on data variance",
)

Option 3: Input weights are 1 over the data uncertainty squared

If input weights are 1 over the data uncertainty squared, we can use uncertainty propagation to calculate the uncertainty of the weighted mean and use it to define our output weights. Use option uncertainty=True to tell BlockMean to calculate weights based on the propagated uncertainty of the data. The output weights will be 1 over the propagated uncertainty squared. In this case, the input weights must not be normalized. This is preferable if you know the uncertainty of the data.

\[w_i = \dfrac{1}{\sigma_i^2} \: , \qquad \sigma_{\bar{d}^*}^2 = \dfrac{1}{\sum\limits_{i=1}^N w_i} \: , \qquad w = \dfrac{1}{\sigma_{\bar{d}^*}^2}\]

in which \(\sigma_i\) are the input data uncertainties in the block and \(\sigma_{\bar{d}^*}\) is the propagated uncertainty of the weighted mean in the block.

Notice that in this case the output weights reflect the input data uncertainties. Less weight is given to the data points that had larger uncertainties from the start.

# Configure BlockMean to assume that the input weights are 1/uncertainty**2
mean = vd.BlockMean(spacing=15 / 60, uncertainty=True)
coordinates, velocity, weights = mean.filter(
    coordinates=(data.longitude, data.latitude),
    data=data.velocity_up,
    weights=data.weights,
)
plot_data(
    coordinates,
    velocity,
    weights,
    "Weighted mean vertical GPS velocity",
    "Weights based on data uncertainty",
)

Interpolation with weights

The Green's functions based interpolation classes in Verde, like Spline, can take input weights if you want to give less importance to some data points. In our case, the points with larger uncertainties shouldn't have the same influence in our gridded solution as the points with lower uncertainties.
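Before moving on, the Option 3 formulas above can be checked numerically with a standalone toy example (my own made-up numbers, independent of the GPS data):

```python
import numpy as np

# Three data values in one hypothetical block, with their uncertainties.
data_block = np.array([1.0, 1.2, 0.9])
sigma = np.array([0.1, 0.5, 0.1])

# Input weights: 1 over the uncertainty squared (not normalized).
w_in = 1 / sigma**2

# Weighted mean of the block.
weighted_mean = np.sum(w_in * data_block) / np.sum(w_in)

# Propagated variance of the weighted mean and the block's output weight.
variance_mean = 1 / np.sum(w_in)
w_out = 1 / variance_mean  # equals the sum of the input weights

print(weighted_mean, w_out)
```

The uncertain middle point (sigma = 0.5) barely pulls the mean away from the two precise measurements, which is exactly the behaviour we want from the weighted block mean.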
Let’s setup a projection to grid our geographic data using the Cartesian spline gridder. import pyproj projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean()) proj_coords = projection(data.longitude.values, data.latitude.values) region = vd.get_region(coordinates) spacing = 5 / 60 Now we can grid our data using a weighted spline. We’ll use the block mean results with uncertainty based weights. Note that the weighted spline solution will only work on a non-exact interpolation. So we’ll need to use some damping regularization or not use the data locations for the point forces. Here, we’ll apply a bit of damping. spline = vd.Chain( # Convert the spacing to meters because Spline is a Cartesian gridder ("mean", vd.BlockMean(spacing=spacing * 111e3, uncertainty=True)), ("spline", vd.Spline(damping=1e-10)), ).fit(proj_coords, data.velocity_up, data.weights) grid = spline.grid( dims=["latitude", "longitude"], Calculate an unweighted spline as well for comparison. spline_unweighted = vd.Chain( ("mean", vd.BlockReduce(np.mean, spacing=spacing * 111e3)), ("spline", vd.Spline()), ).fit(proj_coords, data.velocity_up) grid_unweighted = spline_unweighted.grid( dims=["latitude", "longitude"], Finally, plot the weighted and unweighted grids side by side. fig, axes = plt.subplots( 1, 2, figsize=(9.5, 7), subplot_kw=dict(projection=ccrs.Mercator()) crs = ccrs.PlateCarree() ax = axes[0] ax.set_title("Spline interpolation with weights") maxabs = vd.maxabs(data.velocity_up) pc = ax.pcolormesh( plt.colorbar(pc, ax=ax, orientation="horizontal", pad=0.05).set_label("m/yr") ax.plot(data.longitude, data.latitude, ".k", markersize=0.1, transform=crs) ax = axes[1] ax.set_title("Spline interpolation without weights") pc = ax.pcolormesh( plt.colorbar(pc, ax=ax, orientation="horizontal", pad=0.05).set_label("m/yr") ax.plot(data.longitude, data.latitude, ".k", markersize=0.1, transform=crs) Total running time of the script: ( 0 minutes 10.288 seconds)
Cite as: Evripidis Bampis, Bruno Escoffier, Themis Gouleakis, Niklas Hahn, Kostas Lakis, Golnoosh Shahkarami, and Michalis Xefteris. Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)

@InProceedings{bampis_et_al:LIPIcs.ESA.2023.12,
  author =    {Bampis, Evripidis and Escoffier, Bruno and Gouleakis, Themis and Hahn, Niklas and Lakis, Kostas and Shahkarami, Golnoosh and Xefteris, Michalis},
  title =     {{Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else}},
  booktitle = {31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =     {12:1--12:17},
  series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =      {978-3-95977-295-2},
  ISSN =      {1868-8969},
  year =      {2023},
  volume =    {274},
  editor =    {G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =   {Dagstuhl, Germany},
  URL =       {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.12},
  URN =       {urn:nbn:de:0030-drops-186659},
  doi =       {10.4230/LIPIcs.ESA.2023.12},
  annote =    {Keywords: TSP, Online algorithms, Learning-augmented algorithms, Algorithms with predictions, Competitive analysis}
}
Temperature and PV Performance Optimization

Effect of Temperature on the Module's Behavior

In regard to the temperature, when all other parameters are constant, the higher the temperature, the lower the voltage. This is considered a power loss. On the other hand, if the temperature decreases with respect to the original conditions, the PV output shows an increase in voltage and power.

Figure 2.9 is a graph showing the relationship between the PV module voltage and current at different cell temperature values. The figure illustrates that as temperature increases, the voltage, on the horizontal axis, decreases. Similarly, the relationship between the PV module voltage and power at different temperature values is shown in Figure 2.10. We can see that the power decreases as temperature increases, as illustrated by the lower power peaks on the curves in Figure 2.10.

[Figures 2.9 and 2.10: I-V and P-V curves at different temperatures. Credit: Mohamed Amer Chaaban]

Why do we see an increase in current when the temperature increases, as shown in Figure 2.9?

ANSWER: The small increase in current with temperature can be explained by the fact that carrier concentration and mobility increase in the semiconductor with temperature. In addition, the drop in voltage level can be explained from the basic diode equation. While the temperature affects various terms in the equation, the net effect of temperature is that it decreases the Voc linearly. However, if we check the power values on the P-V curves, we can see that a slight increase in current due to increased temperature doesn't increase the power that much. The drop in open-circuit voltage with temperature is mainly related to the increase in the leakage current of the photodiode "I0" in the dark; I0 depends strongly on the temperature.

The magnitude of voltage reduction varies inversely with Voc.
This means that cells with higher Voc are less affected by temperature than cells with lower Voc: a c-Si based solar cell, with a Voc of 0.65 V, is more affected than an a-Si cell with a Voc of 0.85 V.

If the temperature of the PV module is increased by 10°C, how will the output be affected? The PV module manufacturers specify the temperature coefficients in the datasheets.

Temperature Coefficient

A temperature coefficient is defined as the rate of change of a parameter with respect to the change in temperature. It can be a current, voltage, or power temperature coefficient. For example, the temperature coefficient of voltage is the rate of change of the voltage with temperature change. Similarly, the temperature coefficient of power is the rate of change of the output power with temperature change.

A typical datasheet of a commercial PV module specifies temperature coefficients for the power, Voc, and Isc under STC conditions. (Note: we will discuss the STC test conditions in the next topic.) Temperature coefficients are usually provided by the manufacturers and can be given in terms of voltage change per degree (V/°C) or as a percentage per degree change (%/°C). The unit can also be given per cell (not per module!). In the latter case, we need to adjust for the number of cells in series per module to account for the total coefficient.

Given these coefficients, how do we calculate the PV output with respect to the temperature change?

Effect of Temperature Change on Module's Parameters

In order for us to understand the numerical temperature effects on a module, we need to define these two simple equations:

V = Vstc + Vt-coeff × (T − 25°C)
P = Pstc + Pt-coeff × (T − 25°C)
The terms Vstc and Pstc refer to the voltage and power taken at STC, while the temperature coefficients of the voltage and power are represented by Vt-coeff and Pt-coeff, respectively. It should be noted that the reference temperature taken for this calculation is the STC temperature (25°C), as it appears in the equations.

Let's take an example. If the maximum power output of a PV module under STC is 240 W, and the temperature coefficient of power is -2 W/°C, then the module's power output at a temperature of 30°C can be calculated as

$\text{P}=240\text{ W}+\left(-2\text{ W}{/}^{\circ }\text{C}\right)×{\left(30-25\right)}^{\circ }\text{C}=230\text{ W}$

As you can see, the sign of the temperature coefficient determines whether the parameter increases or decreases with temperature.

In the previous example, when we said that the temperature was 30°C, did we mean the PV module's temperature or the ambient temperature? Are they equal? The simple answer is that the module temperature, or cell temperature, can be quite different from the ambient temperature. Several factors impact the heat flow in and out of the modules.

What are the factors that impact the heat flow in and out of the module?

ANSWER: One major factor is the cell encapsulation and framing, which increase the operating temperature of the PV module. The operating temperature of a module will be a result of the heat exchange between the PV module and the environment. This heat exchange depends on several factors such as ambient temperature, wind speed, heat transfer coefficients between the module and the environment, and the thermal conductivity of the module's body.

Then, how do we estimate the module temperature based on the ambient temperature if we have to account for so many factors?
Researchers developed a model, available in the literature, that gives a reasonable estimate of the module temperature as a function of the ambient temperature. This model is sometimes called the NOCT model, due to its use of the Nominal Operating Cell Temperature. The NOCT is a parameter defined for a particular PV module: it is the temperature attained by the PV cell under an irradiance of 800 W/m², with a nominal wind speed of 1 m/s and an ambient temperature of 20°C.

T_cell = T_ambient + [(NOCT − 20°C) / 800 W/m²] × G

Here, G is the irradiance at the instant when the ambient temperature is T_ambient, and the model gives the corresponding cell temperature as T_cell. As can be seen from this equation, the cell temperature is not only a function of the ambient temperature, but also of the irradiance. This makes things interesting, because if we consider the irradiance and temperature changes over a calendar year, we would see an effect of both irradiance and temperature across the seasons.

Is it better to operate PV modules during the summer season or during the winter season to increase production?

ANSWER: According to what we have just learned, PV modules perform better when the temperature is cooler. In summer, although the sun is shining more, the module performs worse due to the temperature effects that bring down the PV output at a high cell temperature. In winter, the detrimental temperature effects are far less, although the irradiance levels also fall severely. This means that the best ambient conditions for your PV module would be a cold day with a clear sky.
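The two relationships on this page, the NOCT estimate of cell temperature and the power temperature coefficient, chain together naturally. Here is a short illustrative Python sketch; the NOCT value, the coefficient, and the weather numbers are example figures, not taken from any particular datasheet:

```python
def cell_temperature(t_ambient, irradiance, noct=45.0):
    """NOCT model: estimate cell temperature (degC) from ambient temperature
    (degC) and irradiance (W/m^2). NOCT is defined at 800 W/m^2, 20 degC ambient."""
    return t_ambient + (noct - 20.0) / 800.0 * irradiance

def power_output(t_cell, p_stc=240.0, p_coeff=-2.0):
    """Temperature-corrected power (W) using the power temperature
    coefficient (W/degC) relative to the STC temperature of 25 degC."""
    return p_stc + p_coeff * (t_cell - 25.0)

# A hot summer moment: 35 degC ambient, 1000 W/m^2.
t_summer = cell_temperature(35.0, 1000.0)   # 66.25 degC
# A cold, clear winter moment: 0 degC ambient, 600 W/m^2.
t_winter = cell_temperature(0.0, 600.0)     # 18.75 degC

print(power_output(t_summer))  # well below the 240 W rating
print(power_output(t_winter))  # above the rating, thanks to the cold
```

Note how the winter case comes out above the STC rating: with the cell below 25°C, the negative power coefficient works in our favour, which is the "cold day with a clear sky" conclusion in numbers.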

The difference between the yield expected at rated efficiency and the actual yield under temperature effects is captured by the module's ideality factor, which is nothing but the ratio of the actual yield, taking the temperature effects into account, to the yield expected at rated efficiency.
When the ideality factor of a module is 80%, the module has lost 20% of its annual energy yield due to temperature effects.
If the module ideality factor is 100%, the module's output does not change with temperature, and that is almost impossible.

Ideality Factor and PV Module Type

As a result, we can see that the temperature effect on the module output is a function of the PV technology and the manufacturing process, which collectively decide the temperature coefficients of the PV module. The temperature effect is also a function of the ambient conditions.
For the same technology, there could be a deviation in the temperature coefficients due to differences in manufacturing processes and other design modifications.

The a-Si technology shows very low temperature coefficients due to its high open-circuit voltage.
This means it responds better under high temperatures.
 However, its efficiency is far lower compared to some of the best c-Si technologies.

According to the IEA PV roadmap 2014, c-Si modules have the highest market share compared to other PV technologies such as a-Si: c-Si technologies dominate the global PV market with a share of 90%. In other words, nine out of every ten PV installations use c-Si modules. If we look closer at the c-Si market, we can see that polycrystalline silicon is the most commonly used technology, according to an EPRI study, with a market share of 60%. This is because it offers the best balance of conversion efficiency and economies of scale, making it an affordable solution.

 Monocrystalline modules are more area-efficient, but are not the best economical solution. 
a-Si modules are more affordable, lighter, and sometimes even flexible, but give poorer yield and require more land area.

There is plenty of optimization to be done in order to choose an ideal PV module for your system. The optimum choice will depend on the location, the ambient conditions, and, of course, the budget of the client.
Let Me Count the Ways

In the course of many applications, we sometimes need to tally up counts for different purposes, e.g., investigating the distribution of some data or equalizing gray tones in an image. In version 4 of MATLAB, we introduced sparse matrices. A bit later, we introduced the first release of the Image Processing Toolbox; and for even more information, see Steve's blog. We then had several ways we could count up the number of pixels in an indexed image for each color in the colormap. A few of the ways are:

• Run a for loop over the number of colors and find the elements in the image with that value.
• Use hist to create a histogram with the bins corresponding to the number of colors.
• Use sparse matrices to tally the bins, especially since sparse performs accumulation as it goes.

Counting by Color - Brute Force

Here's code to show how to bin the data by brute force. Load the data.

clown = load('clown');
lcmap = clown.map;
nc = length(lcmap);
X = clown.X;

Set up the output array for collecting the binned information.

count = zeros(1,nc);

Loop over the image elements and augment the colormap entry by 1 for each hit.

M = numel(X);
for ind = 1:M
    count(X(ind)) = count(X(ind)) + 1;
end

Counting by Color - Accumulation Using sparse

Think first about creating an M-by-nc array, mostly zeros, with one 1 per row, in the color column to identify the color.

S = sparse(1:M,X(:),1,M,nc);

We could do this and then sum each column. However, sparse automatically accumulates duplicate elements rather than replacing them, so instead we can create a 1-by-nc sparse array, and then convert it to full when we're done.

counts = full(sparse(1,X(:),1,1,nc));

If you looked at this M-file, you would now see an mlint message for the previous line of code. Here it is.

mlhint = mlint('meanValues');

ACCUMARRAY([r,c],v,[m,n]) is faster than FULL(SPARSE(r,c,v,m,n)).

It suggests to us the next technique to use.
Counting by Color - Accumulation Using accumarray

Following the message as closely as we can (though we can't have the first column of the first input scalar since the second column is a long vector), we try the following code. We can also use the help for accumarray to guide us.

counta = accumarray([ones(M,1) X(:)], 1, [1 nc]);

Let's make sure all the techniques get the same answer.

agreement = isequal(count, counts, counta)

agreement =
     1

Computing Mean Values

There was a thread recently on the MATLAB newsgroup asking how to compute the mean value of portions of an array, the partitions described via arrays of indices. Sounds like an image histogram, only not -- because of computing the mean value instead of the total number. Here's the posted advice.

help accumarray
John D'Errico

While I know that information helped the user that day, I suspect not everyone knew exactly what to do with it. So let's try it here. Here are the details and the original code (except for the sizes).

Original sizes: m = 1245; n = 1200; maxnum = 50;

m = 100;
n = 100;
maxnum = 22;

Create the input matrices.

observed = rand(m,n);
num = 1:maxnum;
arr1 = num(ceil(maxnum*rand(size(observed))));
arr2 = num(ceil(maxnum*rand(size(observed))));

Computing Mean Values - Straight-forward Double Loop

tic
for i = 1:maxnum
    for ii = 1:maxnum
        meanMatrix(i,ii) = mean(observed(arr1==i & arr2==ii));
    end
end
looptime = toc

Computing Mean Values - Using accumarray

There are a whole class of functions I can use with accumarray to calculate different information about these arrays. These functions fall into a class of operations that other languages, e.g., APL, call reduction operations; they consist of functions such as sum, mean, and max that return a scalar value for vector input.

tic
means = accumarray([arr1(:),arr2(:)], observed(:), [maxnum maxnum], @mean, 0);
accumtime = toc

Notice how much faster the accumarray solution is than the loop, even with the output pre-allocated.
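The same accumulation idiom exists outside MATLAB. As a rough cross-check (NumPy here, not part of the original post), np.bincount plays the role of accumarray for both the counting and the grouped-mean problems:

```python
# Sketch in NumPy (not from the original post): bincount accumulates the
# way accumarray does, counting occurrences and summing weights per bin.
import numpy as np

X = np.array([1, 2, 2, 3, 3, 3])           # color indices (1-based, like MATLAB)
counts = np.bincount(X, minlength=4)[1:]   # occurrences of colors 1, 2, 3

values = np.array([1.0, 2.0, 4.0, 3.0, 6.0, 9.0])
sums = np.bincount(X, weights=values, minlength=4)[1:]
means = sums / counts                      # per-group mean, no explicit loop
```

As with accumarray, the loop over elements disappears into one vectorized accumulation pass.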
Ensuring Identical Results

If we compare the results, and not just the times, for the different ways to produce the mean values, we see the results are not identical.

isequal(means, meanMatrix)

ans =
     0

What's going on here? I will show you that the difference depends on the ordering of the elements by doing the same calculation with a different program to produce the mean. In this program, we make sure the data are sorted before we calculate the result.

dbtype sortmean
1     function y = sortmean(x)
2     %SORTMEAN Sort data and find mean.
3     y = mean(sort(x));

Do the calculation with sortmean with loops and with accumarray and compare results.

for i = 1:maxnum
    for ii = 1:maxnum
        meanMatrix(i,ii) = sortmean(observed(arr1==i & arr2==ii));
    end
end
means = accumarray([arr1(:),arr2(:)], observed(:), [maxnum maxnum], @sortmean, 0);

We can see now that if we tightly control the order of the calculations for the mean value, we get the same answers, though the loop version still takes longer.

equalmeans = isequal(means, meanMatrix)

equalmeans =
     1

Once again, there are many ways to reach the same outcome in MATLAB, with different characteristics in terms of readability, performance, etc. Which of the solutions used here would you use, or do you have some other favorite techniques? Post them here.
Compute Velocity & Acceleration from Motor Encoders

Convert a binary motor encoder signal to velocity and acceleration using the derivative functionality in real-time expressions (change in value / change in time).

Time Derivative (change in value / change in time)

=math.timeDerivative(signal, "time unit")

Alternative syntax

=math.delta(signal) / math.timeDelta(signal, "time unit")

A single motor encoder signal changes from 0 to 1 every 1/150".

Filter the encoder signal to just its rising edges. First, highlight the signal being transformed and use the real-time expression:

=$ == 1

Note: The $ shortcut operates on the highlighted signal in lieu of having to type out the entire signal name.

Select "view as event pins" under the right-click context menu to verify only rising edges of the signal are shown in the transformed signal.

Calculate the change in time between rising edges using the following expression (highlight the previously generated signal):

=math.timeDelta($, "sec")

This will generate a new signal that contains the difference in time between each signal value in seconds.

Every rising edge, the position changes 1/150". Complete the velocity calculation (change in position / change in time) by highlighting the previously generated signal and using the expression:

=(1/150) / $

This will generate a new signal that is the velocity of the motor in inches per second (ips). This could be optionally converted to mph using the convert method:

=convert($, "ips", "mph")

Compute acceleration in units of g from velocity (change in velocity / change in time). Highlight the velocity signal (ips), and use the expression:

=math.timeDerivative($, "sec") / 386.088582677165

This will generate a new signal that is the acceleration of the motor in g. You can smooth out the noisy acceleration signal using the smooth function. For example:

=smooth($, 0.9)

Clean up the intermediate calculated signals by highlighting unwanted signals and hitting the DEL key.
Rename the newly generated signals by using the right-click context menu -> Rename.
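A back-of-the-envelope sketch of the same pipeline in plain Python (not Initial State's expression language; the 150-counts-per-inch encoder and the 386.088582677165 in/s² value of 1 g come from the steps above):

```python
# Sketch only: derive velocity (ips) and acceleration (g) from the
# timestamps, in seconds, of the encoder's rising edges.
G_IN_PER_S2 = 386.088582677165  # 1 g expressed in inches per second squared

def velocities_ips(edge_times, counts_per_inch=150):
    """Velocity between consecutive rising edges: (1/150)" per edge / dt."""
    step = 1.0 / counts_per_inch
    return [step / (b - a) for a, b in zip(edge_times, edge_times[1:])]

def accelerations_g(edge_times, counts_per_inch=150):
    """Acceleration in g from the change in velocity between intervals."""
    v = velocities_ips(edge_times, counts_per_inch)
    return [(v2 - v1) / (t2 - t1) / G_IN_PER_S2
            for v1, v2, t1, t2 in zip(v, v[1:], edge_times[1:], edge_times[2:])]
```

Evenly spaced edges give a constant velocity and therefore zero acceleration, which is a handy sanity check for the expressions above.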
How to Use the Excel TRUNC Function

What is the Excel TRUNC Function?

The Excel TRUNC function truncates a number by removing the fractional part of the number based on a num_digits argument.

TRUNC Syntax

TRUNC(number, [num_digits])

number, required, the number to be truncated.
num_digits, optional, a number specifying the precision of the truncation. The default value is 0 (zero).

Usage notes:

• Use the TRUNC function to cut a set number of decimal places without rounding.
• The TRUNC function without the num_digits argument is like the INT function for positive numbers: both return an integer from a number. For negative numbers, the two return different results. The TRUNC function does not round, only cuts digits based on the num_digits argument, while the INT function rounds down to the nearest integer: INT(-8.9) returns -9, but TRUNC(-8.9) returns -8. To get the integer part of a number, whether positive or negative, use the TRUNC function without the num_digits argument.

How to Use the TRUNC Function in Excel

Next, the TRUNC function is used to cut the number 123.456 with positive or negative signs. What are the results when using different num_digits arguments?

TRUNC Function #1

The TRUNC function without the num_digits argument; by default, Excel uses 0. As a result, Excel cuts all the digits behind the decimal separator, leaving only the integer part. The result is 123.

TRUNC Function #2

The TRUNC function with num_digits argument 2. Excel cuts digits behind the decimal separator and leaves two of them. The result is 123.45.

TRUNC Function #3

If the number of digits behind the decimal separator is smaller than the num_digits argument, Excel does not cut any digits. The result is the same as the original number, 123.456.

TRUNC Function #4

If the num_digits argument is negative, Excel cuts all the digits behind the decimal separator and changes digits in front of the decimal separator to 0. The number of digits changed depends on the num_digits argument. The result is 100.
TRUNC Function #5

The TRUNC function with a negative number argument and without the num_digits argument. The result remains the same negative number without digits behind the decimal separator. The result is -123.

TRUNC Function #6

The TRUNC function with a negative number argument and a positive num_digits argument. The result is -123.45.

TRUNC Function #7

The TRUNC function with a negative number argument and a negative num_digits argument. The result is a negative integer with the two digits in front of the decimal separator changed to zero. The result is -100.

Please see the results of all the Excel TRUNC functions above in the picture below.
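As a quick cross-check of these semantics, here is a small Python sketch (my stand-in, not Excel's implementation): TRUNC scales by a power of ten, cuts toward zero, and scales back.

```python
# Sketch of Excel-TRUNC-like behavior. math.trunc cuts toward zero, which
# is why TRUNC(-123.456) gives -123 while Excel's INT (a floor) gives -124.
import math

def trunc(number, num_digits=0):
    factor = 10.0 ** num_digits
    return math.trunc(number * factor) / factor

print(trunc(123.456))      # 123.0
print(trunc(123.456, 2))   # 123.45
print(trunc(123.456, -2))  # 100.0
print(trunc(-123.456))     # -123.0
```

A negative num_digits makes the scale factor less than one, so whole digits left of the decimal separator are zeroed out, matching examples #4 and #7 above.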
The Sneaker Game

Sneaker culture has become ubiquitous in recent years. All around the world, millions of sneakerheads (myself included) go online to cop the newest releases and rarest classics. But it's not always quite as simple as clicking the "Add to Cart" button and making the purchase. Some sneakers see incredibly high demand leading up to a release with very limited supply. Only a few dedicated and/or lucky people will be successful in purchasing these shoes from a shoe store or website. Some may choose to wear their shoes and keep them in their collection for years to come. But many will choose to resell deadstock (brand new) shoes at a profit in order to purchase even rarer ones.

This is where StockX, GOAT, FlightClub, or any other online sneaker marketplace comes in. Resellers need to connect with individuals who want the shoes they have up for sale. These entities offer a platform that puts resellers in direct contact with potential buyers. StockX in particular prides itself on being the stock market analog in the sneaker world. Resellers can list sneakers for sale at whatever price they see fit, and buyers can make whatever bids or offers they would like on any sneaker. StockX's role in the transaction is to make sure that the resellers are selling authentic sneakers to protect buyers from receiving fake or damaged sneakers.

Yeezys and Off-Whites are popular examples of coveted shoes that sneakerheads buy off of one another. The Yeezy line is a collaboration between Adidas and musical artist Kanye West. There are several other sneakers that fall under the Yeezy brand, but this dataset only covers Yeezy Boost 350 models. The Off-White line is a collaboration between Nike and luxury designer Virgil Abloh. Like the Yeezys, this dataset focuses in on a subset of Off-White sneaker models known as "The Ten". This is a set of ten different shoes released by Nike over a period of several months.
The sneakers that carry these brand labels represent some of the most sought after kicks in the world, selling out in stores and online almost instantly upon release. Value Proposition¶ After conducting some research on StockX's website, I found that StockX's revenue stream comes primarily from a 3% payment processing fee and a regressive transaction fee (i.e. the more a reseller sells on StockX, the lower your fee per item is). It is in StockX's best interest to foster sales of shoes with higher sale prices from a revenue standpoint. If reasonably accurate predictions can be made on resale prices as they relate to retail prices, then StockX can make decisions promoting certain sneaker listings.
Advanced Excel Formulas

Excel is an invaluable tool for anyone working with data, but its true power lies in its advanced formulas. While basic formulas are essential for simple calculations, mastering advanced formulas can significantly streamline your work, automate complex tasks, and unlock insights hidden within your data. This comprehensive guide will explore some of the most powerful advanced Excel formulas, equipping you with the knowledge to tackle any data challenge.

Beyond the Basics: Mastering Advanced Excel Formulas

1. VLOOKUP and HLOOKUP

Imagine you have a massive dataset with customer information, and you need to quickly retrieve specific details like their phone numbers based on their customer ID. That's where VLOOKUP comes in. It allows you to search for a specific value in a column (the lookup value) and return a corresponding value from another column in the same row. HLOOKUP functions similarly but searches horizontally instead of vertically.

Let's say you have a table with customer IDs in column A and phone numbers in column B. To find the phone number for customer ID 1234, you can use the following formula:

=VLOOKUP(1234, A1:B10, 2, FALSE)

This formula searches for the value 1234 in column A (A1:B10 represents the table range). It then returns the corresponding value from column B (the second column, indicated by 2). The FALSE argument ensures an exact match is found.

2. INDEX and MATCH

While VLOOKUP and HLOOKUP are powerful, they have limitations when searching across multiple columns. That's where the INDEX and MATCH combo shines. INDEX retrieves a value from a specific cell based on its row and column position within a range. MATCH finds the position of a value within a range.

You have a table with customer information, and you need to find the phone number for customer ID 1234, but it's located in column D.
=INDEX(D1:D10, MATCH(1234, A1:A10, 0))

This formula first uses MATCH to find the row number where customer ID 1234 appears in column A (A1:A10). Then, INDEX retrieves the value from the corresponding row in column D (D1:D10), providing the phone number.

3. SUMIFS, COUNTIFS, and AVERAGEIFS

These functions are incredibly helpful for conditional calculations. SUMIFS calculates the sum of values that meet specific criteria. COUNTIFS counts the number of cells that meet multiple conditions. AVERAGEIFS calculates the average of values that satisfy specified criteria.

You have a sales table with sales figures, product categories, and sales regions. You want to calculate the total sales for "Electronics" products in the "East" region.

=SUMIFS(C1:C10, A1:A10, "Electronics", B1:B10, "East")

This formula sums values in column C (sales figures) where column A (product category) equals "Electronics" and column B (sales region) equals "East."

4. IFS

IFS allows you to test multiple conditions and return a corresponding value based on the first condition that is met. It simplifies complex nested IF statements.

You want to assign grades based on student scores:

=IFS(A1>=90, "A", A1>=80, "B", A1>=70, "C", A1>=60, "D", TRUE, "F")

This formula checks the score in cell A1 and assigns the appropriate grade based on the criteria. If none of the conditions are met, it assigns an "F."

5. OFFSET

OFFSET allows you to select a range of cells relative to a starting point. It is particularly useful for dynamic ranges that change based on your data.

You have a monthly sales report, and you need to calculate the average sales for the last 3 months.

=AVERAGE(OFFSET(A1, COUNTA(A:A)-3, 0, 3, 1))

This formula takes the starting point at A1, then uses COUNTA to determine the number of rows with data. It then offsets down to the last 3 rows (representing the last 3 months) and selects a range of 3 rows and 1 column. The average of these 3 cells (representing the last 3 months' sales) is calculated.

6. AGGREGATE

AGGREGATE performs calculations on a dataset while ignoring errors or hidden rows. This makes it powerful for situations where your data may contain errors or you want to selectively analyze subsets of your data.

You have a table with sales figures, and some cells contain errors. You want to calculate the average sales, excluding errors.

=AGGREGATE(1, 6, B1:B10)

This formula uses AGGREGATE to calculate the average (function number 1) while ignoring error values (option 6) for the range B1:B10.

7. TEXTJOIN

TEXTJOIN combines multiple text strings into a single text string with a specified delimiter. This is useful for concatenating data in a user-friendly format.

You have a table with customer names, addresses, and phone numbers. You want to combine them into a single string for a mailing list.

=TEXTJOIN(", ", TRUE, A1, B1, C1)

This formula combines the contents of cells A1, B1, and C1 into a single string, separated by a comma and space, while ignoring empty cells (TRUE argument).

Beyond Formulas: Leveraging Excel's Functionality

Mastering advanced Excel formulas is just one aspect of leveraging its full potential. Here are some additional tools and techniques to further enhance your data analysis:

1. Data Validation

Data validation helps enforce data quality by restricting the input allowed in specific cells. You can define rules for data types, ranges, and even list choices, ensuring data accuracy.

2. Pivot Tables

Pivot tables are a powerful tool for summarizing and analyzing large datasets. They allow you to quickly group and aggregate data based on your chosen criteria, providing insightful summaries.

3. Power Query

Power Query (formerly known as Get & Transform) is a data transformation tool that lets you import data from various sources, clean and shape it, and then load it into Excel for further analysis.

4. Macros and VBA

For repetitive tasks or advanced automation, macros and VBA (Visual Basic for Applications) allow you to create custom scripts to automate processes, saving time and effort.

5. Conditional Formatting

Conditional formatting applies visual styles to cells based on specific conditions, making it easier to highlight important data and identify trends or outliers.

Conclusion: Elevate Your Excel Proficiency

Advanced Excel formulas are a powerful arsenal for any data-driven professional. By mastering these formulas and exploring the many other functionalities of Excel, you can unlock its full potential, streamline your workflows, and gain deeper insights from your data. Don't be afraid to experiment, explore, and discover the endless possibilities that await you in the world of advanced Excel.
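To see the conditional logic that SUMIFS and IFS encode, here is a small illustration in Python (my sketch, not Excel's actual engine): SUMIFS keeps only the rows where every criterion matches, and IFS returns the value paired with the first true condition.

```python
# Sketch of the row-filtering logic behind SUMIFS and the first-match
# logic behind IFS (Python stand-ins, not Excel's implementation).
def sumifs(sum_range, *criteria):
    """criteria: (range, value) pairs; a row counts only if all pairs match."""
    return sum(v for i, v in enumerate(sum_range)
               if all(rng[i] == want for rng, want in criteria))

def ifs(*pairs):
    """pairs: (condition, value); return the value of the first true condition."""
    for condition, value in pairs:
        if condition:
            return value
    raise ValueError("#N/A")  # Excel's IFS returns #N/A when nothing matches

sales    = [100, 200, 300]
category = ["Electronics", "Toys", "Electronics"]
region   = ["East", "East", "West"]
total = sumifs(sales, (category, "Electronics"), (region, "East"))  # 100

score = 85
grade = ifs((score >= 90, "A"), (score >= 80, "B"), (score >= 70, "C"),
            (score >= 60, "D"), (True, "F"))  # "B"
```

The final (True, "F") pair mirrors the catch-all TRUE condition in the grading formula above.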
When Parallel: Pull, Don't Push

April 30, 2020

This article was discussed on Hacker News.

I’ve noticed a small pattern across a few of my projects where I had vectorized and parallelized some code. The original algorithm had a “push” approach; the optimized version instead took a “pull” approach. In this article I’ll describe what I mean, though it’s mostly just so I can show off some pretty videos, pictures, and demos.

A good place to start is the Abelian sandpile model, which, like many before me, completely captured my attention for awhile. It’s a cellular automaton where each cell is a pile of grains of sand — a sandpile. At each step, any sandpile with four or more grains of sand spills one grain into each of its four 4-connected neighbors, regardless of the number of grains in those neighboring cells. Cells at the edge spill their grains into oblivion, and those grains no longer exist. With excess sand falling over the edge, the model eventually hits a stable state where all piles have three or fewer grains. However, until it reaches stability, all sorts of interesting patterns ripple through the cellular automaton. In certain cases, the final pattern itself is beautiful and interesting.

Numberphile has a great video describing how to form a group over recurrent configurations (also). In short, for any given grid size, there’s a stable identity configuration that, when “added” to any other element in the group, will stabilize back to that element. The identity configuration is a fractal itself, and has been a focus of study on its own. Computing the identity configuration is really just about running the simulation to completion a couple times from certain starting configurations. Here’s an animation of the process for computing the 64x64 identity configuration:
But I wanted to see one that’s even bigger, damnit! So, skipping to the end, I eventually computed this 10000x10000 identity configuration:

This took 10 days to compute using my optimized implementation. I picked an algorithm described in a code golf challenge:

f(ones(n)*6 - f(ones(n)*6))

Where f() is the function that runs the simulation to a stable state. I used OpenMP to parallelize across cores, and SIMD to parallelize within a thread. Each thread operates on 32 sandpiles at a time. To compute the identity sandpile, each sandpile only needs 3 bits of state, so this could potentially be increased to 85 sandpiles at a time on the same hardware. The output format is my old mainstay, Netpbm, including the video output.

Sandpile push and pull

So, what do I mean about pushing and pulling? The naive approach to simulating sandpiles looks like this:

for each i in sandpiles {
    if input[i] < 4 {
        output[i] = input[i]
    } else {
        output[i] = input[i] - 4
        for each j in neighbors {
            output[j] = output[j] + 1
        }
    }
}

As the algorithm examines each cell, it pushes results into neighboring cells. If we’re using concurrency, that means multiple threads of execution may be mutating the same cell, which requires synchronization — locks, atomics, etc. That much synchronization is the death knell of performance. The threads will spend all their time contending for the same resources, even if it’s just false sharing.

The solution is to pull grains from neighbors:

for each i in sandpiles {
    if input[i] < 4 {
        output[i] = input[i]
    } else {
        output[i] = input[i] - 4
    }
    for each j in neighbors {
        if input[j] >= 4 {
            output[i] = output[i] + 1
        }
    }
}

Note that the neighbor loop now runs for every cell, not just the spilling ones, since any cell can gain grains. Each thread only modifies one cell — the cell it’s in charge of updating — so no synchronization is necessary. It’s shader-friendly and should sound familiar if you’ve seen my WebGL implementation of Conway’s Game of Life. It’s essentially the same algorithm.
If you chase down the various Abelian sandpile references online, you’ll eventually come across a 2017 paper by Cameron Fish about running sandpile simulations on GPUs. He cites my WebGL Game of Life article, bringing everything full circle. We had spoken by email at the time, and he shared his interactive simulation with me.

Vectorizing this algorithm is straightforward: load multiple piles at once, one per SIMD channel, and use masks to implement the branches. In my code I’ve also unrolled the loop. To avoid bounds checking in the SIMD code, I pad the state data structure with zeros so that the edge cells have static neighbors and are no longer special.

WebGL Fire

Back in the old days, one of the cool graphics tricks was fire animations. It was so easy to implement on limited hardware. In fact, the most obvious way to compute it was directly in the framebuffer, such as in the VGA buffer, with no outside state. There’s a heat source at the bottom of the screen, and the algorithm runs from the bottom up, propagating that heat upwards randomly. Here’s the algorithm using traditional screen coordinates (top-left corner origin):

func rand(min, max)  // random integer in [min, max]

for each x, y from bottom {
    buf[y-1][x+rand(-1, 1)] = buf[y][x] - rand(0, 1)
}

As a push algorithm it works fine with a single thread, but it doesn’t translate well to modern video hardware. So convert it to a pull algorithm!

for each x, y {
    sx = x + rand(-1, 1)
    sy = y + rand(1, 2)
    output[y][x] = input[sy][sx] - rand(0, 1)
}

Cells pull the fire upward from the bottom. Though this time there’s a catch: this algorithm will have subtly different results.

• In the original, there’s a single state buffer, and so a flame could propagate upwards multiple times in a single pass. I’ve compensated here by allowing flames to propagate further at once.
• In the original, a flame only propagates to one other cell. In this version, two cells might pull from the same flame, cloning it.
In the end it’s hard to tell the difference, so this works out: source code and instructions. There’s still potential contention in that rand() function, but this can be resolved with a hash function that takes x and y as inputs.
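For readers who want to play with the pull idea without the SIMD machinery, here is an illustrative NumPy version of one pull-style sandpile step (my sketch, not the article's C code): every cell reads its neighbors and writes only itself.

```python
# Sketch of a pull-style Abelian sandpile update (integer grid assumed).
# No cell writes to another cell, so steps parallelize with no locks.
import numpy as np

def sandpile_step(grid):
    spill = grid >= 4                       # piles that topple this step
    out = np.where(spill, grid - 4, grid)   # each toppling pile loses 4 grains
    padded = np.pad(spill, 1)               # zero padding: edges spill into oblivion
    # pull one grain from each 4-connected neighbor that toppled
    out += (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:]).astype(grid.dtype)
    return out

def stabilize(grid):
    while (grid >= 4).any():
        grid = sandpile_step(grid)
    return grid
```

The zero padding plays the same role as the static zero neighbors in the SIMD version: edge cells need no special-case bounds checks, and grains pulled off the boundary simply vanish.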
Axis Of Rotation – Bently Bearings

In the history of machine tools, spindles have been very good relative to other bearings and structures on the machine. So quality professionals have developed a cache of tools – like ball bars, grid encoders and displacement lasers – to help them characterize and understand the geometry of the structural loop. But as machine tools have improved in their capability and precision, and the demands of part geometry and surface finish have become more critical, errors in spindles have become a larger percentage of the total error.

Once you have done all you can to improve the stiffness, damping, geometry and thermal stability of your machine, and the machine operator is using the best machining practices, the next frontier is to study the errors in the spindles. The ultimate roundness and surface finish that may be achieved by a precision metal-cutting machine tool is determined by the performance of its spindles. By characterizing and routinely checking spindles, part quality can be predicted and controlled. The focus of this article is to summarize the groundwork already established for using spindle metrology to deterministically improve manufacturing processes. As a bonus, and something the pioneers of spindle metrology would all be quick to point out, spindle testing, which is now defined in the standards as "tool to work," is also an excellent diagnostic tool for other error sources in the machine.

The standards which most specifically refer to the quality of precision spindles are ISO 230-7, "Geometric Accuracy of Axes of Rotation," and ASME B89.3.4-1985, "Axes of Rotation, Methods for Specifying and Testing." These standards are based on the concept of an "axis of rotation," which is a line segment about which rotation occurs. Spindle "run out" is an often-used term, but it is not consistent with standards for describing spindle precision. Surfaces have run out; axes have error motions.
There are three basic spatial error motions in spindles: radial, axial and tilt (Figure 1). We will see later that spindle error motions are also characterized by frequency and by sensitive direction.

Figure 1 – The three primary unwanted motions of a spindle are: Tilt Motion, Axial Motion, and Radial Motion. There are also two secondary motions: Face Motion, which combines axial and tilt motion; and Radial Motion, combining radial and tilt error. The term "Angular Motion" used in the illustration above was changed to "Tilt Motion" in 1970 to avoid confusion with angular displacement of a rotary axis. (From American Machinist Magazine, December 4, 1967)

How The Testing Is Done

Over the last 50 years, Axis of Rotation Metrology has developed into a standard for characterizing spindles and understanding the capabilities of a machine. The measurement technique involves high-bandwidth, non-contact capacitive sensors with nanometer-level resolution. The sensors reference precision pins or spheres that are mounted to the rotating spindle as targets. So with a three-probe system (two probes arranged radially at 90° from each other, and one axially centered on the axis for X, Y and Z), a point may be referenced in 3-D space. A five-probe system (Figure 2) with two spherical targets may reference two points, and so tilt errors of the spindle may also be known.

Figure 2 – A two-sphere target is chucked in a lathe spindle, and the probe nest with five probes is attached to the tool turret in order to measure the errors described in Figure 1. Turning a part would be an example of a fixed sensitive direction. (Photography courtesy of Dr. Eric R. Marsh).

Figure 3 – The spindle for a CNC Vertical Machining Center with variable speed, AC motor and angular contact, ball bearing spindle. The five probes are configured in a stiff nest for measuring motions of a two-sphere target rotating in the spindle.
The probes will measure the total of "bearing error motion," "structural error motion," "target eccentricity" and "target non-roundness." Boring a hole with this spindle would be an example of a rotating sensitive direction. (Photography courtesy of Dr. Eric R. Marsh)

The signals from these five probes each represent a fire hose of information about the spindle's performance, recording spatial errors and clocking them by frequency. Data can be taken at over 100,000 rpm. The signals may be viewed on an oscilloscope, run through a Fast Fourier Transform (FFT), and/or processed by software used to conceptualize the results. (See Figure 3.) Because the spherical artifact is difficult to perfectly align with the axis of rotation, it will have an eccentricity. This eccentricity may be used as a tachometer to phase the data streams with the rotation of the spindle (when an encoder is not conveniently available) and represents an error that is easily subtracted out (as a roundness measuring machine would). This technique allows for phasing a linear data stream (as shown in Figure 7) into polar plots (as seen in Figures 4, 5 and 6). What we find in the data, after subtraction of eccentricity, is that some errors are asynchronous; that is, different every time around. If we average out the asynchronous data we have what is called Average Error per ASME B89.3.4-1985, now called Synchronous error by ISO 230-7, and defined as the portion of the total error motion that occurs at integer multiples of the rotational frequency. (See Figures 4, 5 and 6.) We will see that asynchronous errors are a main determinant of work-piece surface finish, and synchronous errors are the main determinant of work-piece geometry.

Figure 4 – This illustration shows polar plots generated from the individual signals of each of the five probes shown in Figure 3. The blue band thickness indicates the amount of Asynchronous error motion, and the black line shows the Synchronous or average error motion.
Notice that tilt error motion causes both the Synchronous and Asynchronous radial error motion plots to double in their amplitude at the outboard spherical target. (Illustration courtesy of Dr. Eric R. Marsh, from his book "Precision Spindle Metrology," published by DEStech Publications, 2008.)

Synchronous (Average) Error Motions

After subtracting out the eccentricity mentioned above, and averaging the asynchronous motions, we have what's called "Synchronous Error Motion." In most cases this would be considered the spindle error motion. In cases of very high precision spindles, where the 25 or 50 nanometer errors in the target become a significant percentage of the total error motion, another measurement procedure called "Donaldson Reversal" may be employed to subtract out the errors that are in the target.

(Illustration panels: Model A – Asynchronous Face Motion; Model B – Fundamental Face Motion; Model C – Residual Synchronous Face Motion.)

Synchronous (average) error motions are a predictor of how round a hole can be bored or turned with the spindle. So, for example, the synchronous errors of a spindle used to bore bearing seats for other spindles will be the determinant of the roundness of the bearing seats. The roundness of that bearing seat will affect the precision of the manufactured spindle. So the use of spindles with very low synchronous error motion for boring bearing seats could dramatically improve the precision of the spindles being manufactured. Synchronous (average) errors are clearly illustrated in plots of air and hydrostatic spindles, as the error repeats every time around. (See Figure 5.) This is, in large part, due to the fact that there is just one rotating element. Error motions in air bearing spindles are often less than 25 nanometers (one millionth of an inch). In such a case, Donaldson Reversal would be a necessary methodology, as the error motion of the spindle is likely less than the non-roundness of the target.
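The arithmetic behind Donaldson Reversal is compact enough to show directly. The sketch below uses synthetic data with made-up error amplitudes (not measurements from any real spindle): in the normal setup the probe reads the sum of artifact form error and spindle error; after rotating both the artifact and the probe 180° about the axis, it reads their difference, so half-sum and half-difference separate the two.

```python
import math

# Donaldson Reversal: normal setup reads T1(theta) = A(theta) + S(theta),
# where A is the artifact's out-of-roundness and S is the spindle's
# synchronous radial error motion.  After rotating BOTH artifact and probe
# 180 degrees about the spindle axis, the probe reads T2 = A - S.

N = 360
theta = [2 * math.pi * k / N for k in range(N)]

# Synthetic ground truth in nanometres (illustrative values only).
A_true = [25 * math.cos(2 * t) for t in theta]        # 2-lobe artifact error
S_true = [10 * math.cos(3 * t + 0.4) for t in theta]  # 3-lobe spindle error

T1 = [a + s for a, s in zip(A_true, S_true)]  # normal measurement
T2 = [a - s for a, s in zip(A_true, S_true)]  # reversed measurement

A = [(t1 + t2) / 2 for t1, t2 in zip(T1, T2)]  # recovered artifact error
S = [(t1 - t2) / 2 for t1, t2 in zip(T1, T2)]  # recovered spindle error
```

Note that the separation is exact in principle, which is why reversal lets a spindle be measured to better precision than the artifact it references.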
For a lathe, unbalance, which results in vibrations that you may think would affect surface finish, actually does not. Unbalance causes a once-per-revolution disturbance that is repeatable; it is a synchronous error. If you have an unbalanced part in a lathe spindle and you bore a hole in it, the hole may have good surface finish and roundness, but when you slow the spindle down the hole will be eccentric. With capacitance probes it is possible to measure the eccentricity (run out) of the turned surface at different speeds. With perfect balance, the radial load on the spindle does not change with speed, and so there would be no run out at any speed. In a different situation, the unbalance of a grinding wheel spindle will have a surface finish effect determined by the relative speed of the spindle and work piece. Fundamental synchronous and asynchronous axial motion is measured on the axis of rotation. It is very important when flatness or form at the center of the part matters, as would be the case in manufacturing optics. The face motions are measured at some noted distance from the center, and would be the same as the axial motion in the absence of tilt motion. Because they are both measured on the face, both are axial motions, and so they cause flatness errors when facing parts. However, fundamental face motions will create a part that has the property of circular flatness; that is, the overall surface is not flat, but provides a "flat sealing surface" at any given radius. If you were to put the part on a roundness checker and measure flatness at some radial distance from the center, you could adjust the part into reading flat at that radius, but the flat surface will not be perfectly square to the axis it was turned on, or to the reference surface the part was chucked on. This may be acceptable for a sealing surface, but it would not be a good thing for a bearing surface.
The utility of measuring error motions in spindles became evident as hydraulic systems were being applied to military aircraft. Leaky hydraulic fittings were a maintenance headache and a hazard. A study by the military to improve the effectiveness of taper seals revealed that residual face motions of the spindles making tapered seals would compromise the geometry of the tapers produced and their ability to seal. Residual synchronous face motions are the motions left over after the fundamental (once-around) errors are subtracted; they are still synchronous, but are at other integer multiples of the rotation frequency. Residual face motions result in parts that are not flat at all, not even circularly flat. Controlling the property of residual face motion is critical whether you are making seals or bearing races and, because tilt is involved, this is especially true with large diameters.

Figure 5 (Left) & Figure 6 (Right) – The difference between fluid film spindles and rolling element spindles is intuitively obvious when compared in error-motion polar plots. Notice that the air bearing spindle has Synchronous error of only 10 nm, and its Asynchronous error is a small fraction of that. The comparably-sized ball bearing spindle has over 100 nm of Synchronous error, and the Asynchronous error is a large (7X) multiple of the Synchronous error. This is why an air bearing spindle diamond turning or fly cutting machine can produce a mirror surface finish, while the rolling-element spindle lathe or milling machine cannot. (Results courtesy of Dr. Eric R. Marsh, from his book "Precision Spindle Metrology," published by DEStech Publications, 2008.)

Asynchronous Error Motions

Asynchronous error motions are a predictor of surface finish, and are best illustrated in error motion plots of rolling-element bearings.
Because rolling-element bearings have "constituent elements" (rollers, inner race and cages) that are not perfect and have different rotational frequencies, the error motions of the spindle appear random. They are not actually random; the deterministic viewpoint is that they can be predicted. In rolling-element spindles, these asynchronous motions are generally much larger displacements than the synchronous motions. (See Figures 4, 5 and 6.) Over many revolutions, the polar plot develops into a fuzzy band, and the thickness of this band represents the asynchronous error motion of the spindle. This is likely to be 100 nanometers in the very best rolling-element spindles; 1,000 nanometers (1 micron) in good spindles; and 10 microns or more in 500 mm or larger diameter bearings. The linear plot (see Figure 7) shows how the different elements, with their different rotational frequencies, produce what appears to be non-repeatable motion in the summation of their signals (top). The portion of the signal that is asynchronous is the determinant of surface-finish capability. The error motions do repeat after many revolutions, though, and can be predicted to some degree. An analogy can be made with the sun, which rises differently each day but is, after many rotations, repeatable. Figure 7 also shows that the error-motion signal may be decomposed into frequency components, and that doing so can result in a better understanding of the factors causing the observed motion. For rolling element bearings, characteristic frequency equations can predict the error motions encountered during rotation as a function of the geometry of the bearing (diameter of races, number of rolling elements, diameter of rolling elements, etc.). The Fast Fourier Transform (FFT) is the most common method for separating the frequency components of a signal. An example of this can be seen in Figure 8 for a single-row, deep-groove bearing.
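The characteristic-frequency equations referred to above follow from bearing kinematics alone. A sketch using the standard textbook formulas for cage, ball-pass and ball-spin frequencies follows; the bearing dimensions in the example are hypothetical, chosen only for illustration and not taken from Figure 8.

```python
import math

def bearing_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Classic kinematic frequencies of a rolling-element bearing, in Hz.

    shaft_hz    shaft rotation frequency (inner race rotating, outer fixed)
    n_balls     number of rolling elements
    ball_d      rolling-element diameter
    pitch_d     pitch diameter (same units as ball_d)
    contact_deg contact angle in degrees
    """
    r = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    ftf  = shaft_hz / 2 * (1 - r)                    # cage (fundamental train)
    bpfo = n_balls * shaft_hz / 2 * (1 - r)          # ball pass, outer race
    bpfi = n_balls * shaft_hz / 2 * (1 + r)          # ball pass, inner race
    bsf  = pitch_d * shaft_hz / (2 * ball_d) * (1 - r * r)  # ball spin
    return {"cage": ftf, "outer": bpfo, "inner": bpfi, "ball": bsf}

# Hypothetical deep-groove bearing: 9 balls of 7.94 mm on a 39 mm pitch
# circle, spindle running at 30 rev/s (1800 rpm).
f = bearing_frequencies(30.0, 9, 7.94, 39.0)
```

A useful sanity check on any such calculation: the outer- and inner-race ball-pass frequencies always sum to the number of balls times the shaft frequency.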
Note that the cage, ball pass and outer race frequency spikes are identified in the chart. Rolling element bearings can have nanometer (one thousandth of a micron) repeatability when turning through angles of less than 360 degrees and then returning to the starting position. This is because, in less than one revolution, the constituent elements do not precess; that is, all of the components come back to their original locations with respect to each other. If the error motions of the spindle are plotted for 10 revolutions and then reversed, the error motions will retrace exactly, showing that the motion of the roller-bearing spindle is deterministic. So the Gaussian distribution shown in Figure 8 across the asynchronous error band, indicating random motion, is not strictly correct, but it is of utility, illustrating at a glance the distribution of the errors in the band.

Figure 7 – The signals above represent isolated frequencies from a single signal. The 1 cycle/rev is fundamental; 5 and 8 cycles/rev are integer multiples, and so these are all Synchronous errors. 2.7, 6.5 and 9.1 cycles per revolution are not integer multiples and so are Asynchronous errors. The errors, all acting at the same time, result in a summation that appears to be random. When the signal is phased to the rotation of the spindle being measured, polar plots as in Figure 6 may be generated. (Results courtesy of Wolfgang Holzhauer Ph.D., from his "Tutorial on Axis of Rotation," Annual Meeting of the American Society for Precision Engineering (ASPE), November 1, 1999)

Figure 8 – The above illustrates equations for calculating errors based on the relative size and number of constituent elements: a typical error motion polar plot, top left; and an FFT plot showing frequency spikes of critical elements, top right. (Results courtesy of Dr. Eric R. Marsh, from his book "Precision Spindle Metrology," published by DEStech Publications, 2008.)

So, what is important if you are selling bearings for spindles?
First, talk about low error motions – not "Non-Repeatable Run Out" (NRR or Run Out) – to show you are familiar with Axis of Rotation Metrology. Remember, error analysis data encompasses all spindle errors, including structural errors, both thermal and from external vibrations. Don't let your bearings get blamed for structural vibration from someone's coolant pump. Structural error motion can be measured quickly and simply by indicating from the tool to the spindle headstock; this is called stationary-point run out. Spindle drive systems can influence and print through into the error motion plot, too. Error at the frequency of the motor poles is a dead giveaway that this is the case. Thermal drift of the spindle axis as it warms up will be the biggest error, but remember that it affects position, not roundness or surface finish, and is caused by heat, not bearing precision. If you are manufacturing bearings or spindles, take care that the error motions of your spindles are not limiting the quality of the bearings you manufacture. The synchronous error motions of your work-holding spindles will determine the roundness or flatness of the parts you manufacture. Residual face error motion of work spindles can cause flatness errors in races for large roller bearings. These errors can also change the intended profile of a spherical race. Surface finish is dependent on the asynchronous error motion of both the work spindle and the grinding wheel spindle. Dramatic surface finish improvements can be made by characterizing and improving spindles using Axis of Rotation Metrology.
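The synchronous/asynchronous split that runs through this article is, at bottom, revolution-averaging: the synchronous error is the average of the phased probe signal over many revolutions, and the asynchronous error is whatever is left. A minimal sketch on synthetic data (the amplitudes and the 2.7 cycles/rev component are invented for illustration, echoing Figure 7):

```python
import math

REVS, N = 50, 256  # revolutions captured, samples per revolution

def probe_signal(rev, k):
    """Synthetic radial probe reading: synchronous 3- and 5-lobe errors plus
    a non-integer 2.7 cycles/rev component that never repeats rev to rev."""
    t = 2 * math.pi * k / N
    sync = 100 * math.cos(3 * t) + 40 * math.cos(5 * t + 1.0)
    nonsync = 30 * math.cos(2.7 * (t + 2 * math.pi * rev))
    return sync + nonsync

data = [[probe_signal(r, k) for k in range(N)] for r in range(REVS)]

# Synchronous (average) error: mean over revolutions at each spindle angle.
sync_est = [sum(data[r][k] for r in range(REVS)) / REVS for k in range(N)]

# Asynchronous error: the residual band left after subtracting the
# synchronous part; its width is what limits surface finish.
resid = [[data[r][k] - sync_est[k] for k in range(N)] for r in range(REVS)]
async_band = max(max(row) for row in resid) - min(min(row) for row in resid)
```

With enough revolutions the non-integer component averages toward zero, leaving the 3- and 5-lobe synchronous profile, while the residual band width approaches twice the non-synchronous amplitude.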
Avogadro Constant

If you think atoms are small, you're right. They're so tiny that a single hydrogen atom weighs only 1.66 × 10⁻²⁴ grams. That's why it's difficult to do chemical calculations using individual atoms. To make it easier, we use units called moles. Moles are based on a number called the Avogadro constant. In this article, we'll explain what the Avogadro constant is and how it relates to moles in physical chemistry. We'll begin by defining what a mole is and what the Avogadro constant means. Then, we'll show you how to use the Avogadro constant in equations. You'll learn about molar mass and how to calculate the number of atoms in a substance, as well as the mass of one atom. By the end of this article, you'll have a better understanding of the Avogadro constant and how it's used in chemistry.

The mole and Avogadro's constant

When you go to the supermarket, you know exactly how many items you need to buy. For example, a dozen eggs means you need twelve eggs, two pints of milk is 1136.5 millilitres and a baker's dozen is thirteen bread rolls. But in chemistry, there's another way to measure quantities: moles. A mole is a unit used to represent 6.02214076 × 10²³ particles; this number is known as the Avogadro constant. An entity can be an atom, electron, ion, or molecule. So, if you have one mole of hydrogen atoms, you have 6.02214076 × 10²³ hydrogen atoms. If you have two moles of oxygen molecules, you have 1.20442815 × 10²⁴ oxygen molecules. And if you have 9.853 moles of methane molecules, you have 5.93361529 × 10²⁴ methane molecules. Think of a mole as just another quantity, like a pair means two or half a dozen means six. A mole means 6.02214076 × 10²³ particles. Now you know how to measure chemical quantities using moles!

Avogadro's constant definition

Avogadro's constant, equal to 6.02214076 × 10²³, is the number of entities in a mole of any substance.
It is named after Amedeo Avogadro, an 18th- and 19th-century scientist from the Kingdom of Sardinia, who is most famous for his theory about the volume of gases, known as Avogadro's law. This law states that two samples of the same volume of any ideal gases contain an equal number of molecules, provided they are kept at the same temperature and pressure. The Avogadro constant was first estimated in 1865 by Josef Loschmidt, but the term Avogadro's constant was only invented in 1909 by the physicist Jean Perrin, who named it in Avogadro's honour.

Avogadro's constant equations

Now that we know about moles and Avogadro's constant, we can look at some of the equations linking them. First of all, we'll explore the relationship between moles, mass numbers, and Avogadro's constant.

Moles, molar mass, and Avogadro's constant

You might be looking at Avogadro's constant and thinking that it is a fairly odd number. Where did it come from? Scientists must have chosen it for some particular reason - they didn't just pick a random value out of the blue! In fact, Avogadro's constant, which we know is just the number of entities in a mole, is exactly equal to the number of carbon atoms in 12.0 g of carbon-12. This means that one mole of carbon-12 atoms has a mass of exactly 12.0 g. You might notice something: carbon-12 atoms have a relative atomic mass of 12.0, and 12.0 is also the mass in grams of one mole of these atoms. This leads us on to our next important point: the mass of one mole of any substance is equal to its relative atomic mass, or relative molecular mass, in grams. We can also call the mass of one mole of a substance its molar mass. Molar mass is the mass of one mole of a substance. It is measured in g mol⁻¹. Similarly, molar volume is the volume occupied by one mole of a gas. It is measured in dm³ mol⁻¹. Confused about the difference between relative atomic mass, relative molecular mass and molar mass?
We'd recommend you check out "Relative Atomic Mass" for a more in-depth look at the first two terms, but here's an overview of the differences:

- Relative atomic mass measures the average mass of one atom of an element, compared to 1/12th of the mass of a carbon-12 atom. It is unitless.
- Relative molecular mass measures the average mass of one molecule of a species, also compared to 1/12th of the mass of a carbon-12 atom. Once again, it is unitless.
- Molar mass is the mass of one mole of a substance, whether it be an element or a molecule. It is measured in g mol⁻¹.

The relative atomic/molecular mass and molar mass of a species are the same numerically. For example, the relative atomic mass of carbon-12 is exactly 12, whilst the molar mass - the mass of one mole of carbon-12 atoms - is 12 g mol⁻¹. So, to find molar mass, you take a substance's relative atomic mass or relative molecular mass, and add g mol⁻¹ to the end. Take methane, CH4. It has a relative molecular mass of 12.0 + 4(1.0) = 16.0. Therefore, methane has a molar mass of 16.0 g mol⁻¹. Or, in other words, 6.022 × 10²³ molecules of methane have a mass of 16.0 g. Notice how in this example, we multiplied the relative molecular mass of methane, 16.0, by the number of moles, 1, to find its mass? This leads us to a useful bit of maths. There's a handy equation we can use to relate molar mass, number of moles, and mass:

number of moles = mass ÷ molar mass

Remember - molar mass and relative atomic or molecular mass are the same numerically. Therefore, you might also see this equation written as:

number of moles = mass ÷ relative atomic (or molecular) mass

Have a go at the following question. Let's say that we have 34.5 g of sodium, Na. How many moles of Na do we have? To calculate the number of moles of our sample of sodium, we need to know its mass and its molar mass, which is the same numerically as its relative atomic mass. Well, sodium has a relative atomic mass of 23.0. To find the number of moles, we divide mass by relative atomic mass:

number of moles = 34.5 ÷ 23.0 = 1.5

We therefore have 1.5 moles of sodium. Here's another example.
A reaction yields 2.4 moles of water, H2O. What is the mass of this water in grams? In this example, we know the number of moles of water produced. We can also work out its relative molecular mass: 2(1.0) + 1(16.0) = 18.0. This is the same numerically as its molar mass. We can use these values to find mass by rearranging the equation we used above:

mass = number of moles × molar mass

Plugging our values into the equation, we get the following:

mass = 2.4 × 18.0 = 43.2 g

Moles, number of particles, and Avogadro's constant

Let's now look at the relationship between the number of moles, number of particles, and Avogadro's constant. We briefly met this when we first introduced you to moles up above, but we'll explore it further here. We know that one mole of any substance contains 6.022 × 10²³ entities. This is just Avogadro's constant. Two moles of a substance would therefore contain twice as many entities: 2 × 6.022 × 10²³ = 1.2044 × 10²⁴. From this, we can deduce the following equation:

number of particles = number of moles × Avogadro's constant

Sometimes, you might have to use a combination of this equation and the equation linking moles, mass, and relative atomic or relative molecular mass, in order to answer a question. Let's have a go. Find the number of oxygen molecules present in 88.0 g of oxygen, O2. What information do we know? Well, we know the mass of oxygen, and we can work out its relative molecular mass: 2 × 16.0 = 32.0. We can use these values to find the number of moles:

number of moles = 88.0 ÷ 32.0 = 2.75

We can now use the number of moles and Avogadro's constant to find the number of molecules:

number of molecules = 2.75 × 6.022 × 10²³ = 1.656 × 10²⁴

Relative atomic mass, the mass of one particle, and Avogadro's constant

Do you remember at the beginning, when we quoted the mass of a single hydrogen atom as 1.66 × 10⁻²⁴ grams? Now let's learn how we worked that value out. Remember: one mole of a substance - or to be precise, 6.022 × 10²³ of its entities - has a mass equal to its relative atomic or relative molecular mass. As we learned, 6.022 × 10²³ atoms of carbon have a mass of 12.0 g. If we divide this mass by the number of carbon atoms, we can find the mass of one atom.
Here's the equation:

mass of one atom = molar mass ÷ Avogadro's constant

Take hydrogen. One mole of hydrogen atoms has a molar mass numerically equal to its relative atomic mass, 1.0. If we sub that value into the equation, we get the following:

mass of one hydrogen atom = 1.0 ÷ (6.022 × 10²³) = 1.66 × 10⁻²⁴ g

That's it! We hope you've now got a good understanding of moles, Avogadro's constant, and how to use these values in equations.

Avogadro Constant - Key takeaways

Molar mass is a very important concept in chemistry, as it allows us to relate the mass of a substance to the number of moles of that substance. This is very useful in many different applications, such as in stoichiometry calculations, where we need to know the amount of reactants and products involved in a chemical reaction. Additionally, the concept of a mole allows chemists to work with very large numbers of molecules or atoms in a more manageable way. Instead of dealing with individual particles, we can use the mole as a unit to represent a specific number of entities. This makes it easier to work with and compare different substances, as we can simply compare their molar masses or other properties. Overall, the concepts of a mole and molar mass are fundamental to many areas of chemistry, and understanding them is crucial for anyone studying or working in this field.

Avogadro Constant

What is Avogadro's constant?

Avogadro's constant is a quantity used in chemistry to represent the number of particles in a mole. It has a value of 6.02214076 × 10²³, meaning that a mole of any substance contains exactly 6.02214076 × 10²³ entities.

How do you calculate the number of atoms using Avogadro's constant?

To calculate the number of atoms in a substance, multiply the number of moles by Avogadro's constant. For example, 1.5 moles of carbon atoms contain 1.5 × 6.022 × 10²³ = 9.033 × 10²³ atoms.

How do you work out moles using Avogadro's constant?

Avogadro's constant is equal to the number of entities in one mole of a substance. This means that if you know the number of entities, you can calculate the number of moles.
This would equal the number of entities divided by Avogadro's constant, which is 6.022 × 10²³. You can also work out the number of moles using a substance's relative atomic or relative molecular mass, and its mass in grams. Here, the number of moles equals mass divided by relative atomic or molecular mass.

What is the numerical value of Avogadro's constant?

Avogadro's constant equals 6.02214076 × 10²³, although we often shorten it to 6.022 × 10²³.
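The worked examples above can be gathered into one short script. The function names below are just illustrative, not from any particular library:

```python
N_A = 6.02214076e23  # Avogadro's constant, entities per mole

def moles_from_mass(mass_g, molar_mass):
    """number of moles = mass (g) / molar mass (g mol^-1)."""
    return mass_g / molar_mass

def particles_from_moles(moles):
    """number of particles = number of moles * Avogadro's constant."""
    return moles * N_A

def mass_of_one_entity(molar_mass):
    """mass of a single atom or molecule, in grams."""
    return molar_mass / N_A

# 34.5 g of sodium (Ar = 23.0) -> 1.5 mol
na_moles = moles_from_mass(34.5, 23.0)

# 88.0 g of O2 (Mr = 32.0) -> 2.75 mol -> about 1.656e24 molecules
o2_molecules = particles_from_moles(moles_from_mass(88.0, 32.0))

# one hydrogen atom (Ar = 1.0) -> about 1.66e-24 g
h_atom_mass = mass_of_one_entity(1.0)
```

Each function is a direct transcription of one of the three equations in the article, so the script reproduces the sodium, oxygen and hydrogen answers worked out above.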
Math::Complex - complex numbers and associated mathematical functions

    use Math::Complex;

    $z = Math::Complex->make(5, 6);
    $t = 4 - 3*i + $z;
    $j = cplxe(1, 2*pi/3);

This package lets you create and manipulate complex numbers. By default, Perl limits itself to real numbers, but an extra use statement brings full complex support, along with a full set of mathematical functions typically associated with and/or extended to complex numbers. If you wonder what complex numbers are, they were invented to be able to solve the following equation:

    x*x = -1

and by definition, the solution is noted i (engineers use j instead since i usually denotes an intensity, but the name does not matter). The number i is a pure imaginary number. The arithmetics with pure imaginary numbers works just like you would expect it with real numbers... you just have to remember that i*i = -1, so you have:

    5i + 7i = i * (5 + 7) = 12i
    4i - 3i = i * (4 - 3) = i
    4i * 2i = -8
    6i / 2i = 3
    1 / i = -i

Complex numbers are numbers that have both a real part and an imaginary part, and are usually noted:

    a + bi

where a is the real part and b is the imaginary part. The arithmetic with complex numbers is straightforward. You have to keep track of the real and the imaginary parts, but otherwise the rules used for real numbers just apply:

    (4 + 3i) + (5 - 2i) = (4 + 5) + i(3 - 2) = 9 + i
    (2 + i) * (4 - i) = 2*4 + 4i - 2i - i*i = 8 + 2i + 1 = 9 + 2i

A graphical representation of complex numbers is possible in a plane (also called the complex plane, but it's really a 2D plane). The number z = a + bi is the point whose coordinates are (a, b). Actually, it would be the vector originating from (0, 0) to (a, b). It follows that the addition of two complex numbers is a vectorial addition. Since there is a bijection between a point in the 2D plane and a complex number (i.e.
the mapping is unique and reciprocal), a complex number can also be uniquely identified with polar coordinates:

    [rho, theta]

where rho is the distance to the origin, and theta the angle between the vector and the x axis. There is a notation for this using the exponential form, which is:

    rho * exp(i * theta)

where i is the famous imaginary number introduced above. Conversion between this form and the cartesian form a + bi is immediate:

    a = rho * cos(theta)
    b = rho * sin(theta)

which is also expressed by this formula:

    z = rho * exp(i * theta) = rho * (cos theta + i * sin theta)

In other words, it's the projection of the vector onto the x and y axes. Mathematicians call rho the norm or modulus and theta the argument of the complex number. The norm of z is marked here as abs(z). The polar notation (also known as the trigonometric representation) is much more handy for performing multiplications and divisions of complex numbers, whilst the cartesian notation is better suited for additions and subtractions. Real numbers are on the x axis, and therefore y or theta is zero or pi. All the common operations that can be performed on a real number have been defined to work on complex numbers as well, and are merely extensions of the operations defined on real numbers. This means they keep their natural meaning when there is no imaginary part, provided the number is within their definition set. For instance, the sqrt routine which computes the square root of its argument is only defined for non-negative real numbers and yields a non-negative real number (it is an application from R+ to R+). If we allow it to return a complex number, then it can be extended to negative real numbers to become an application from R to C (the set of complex numbers):

    sqrt(x) = x >= 0 ?
    sqrt(x) : sqrt(-x)*i

It can also be extended to be an application from C to C, whilst its restriction to R behaves as defined above, by using the following definition:

    sqrt(z = [r,t]) = sqrt(r) * exp(i * t/2)

Indeed, a negative real number can be noted [x,pi] (the modulus x is always non-negative, so [x,pi] is really -x, a negative number) and the above definition states that

    sqrt([x,pi]) = sqrt(x) * exp(i*pi/2) = [sqrt(x),pi/2] = sqrt(x)*i

which is exactly what we had defined for negative real numbers above. The sqrt returns only one of the solutions: if you want both, use the root function. All the common mathematical functions defined on real numbers that are extended to complex numbers share that same property of working as usual when the imaginary part is zero (otherwise, it would not be called an extension, would it?). A new operation possible on a complex number that is the identity for real numbers is called the conjugate, and is noted with a horizontal bar above the number, or ~z here:

    z = a + bi
    ~z = a - bi

Simple... Now look:

    z * ~z = (a + bi) * (a - bi) = a*a + b*b

We saw that the norm of z was noted abs(z) and was defined as the distance to the origin, also known as:

    rho = abs(z) = sqrt(a*a + b*b)

so that:

    z * ~z = abs(z) ** 2

If z is a pure real number (i.e. b == 0), then the above yields:

    a * a = abs(a) ** 2

which is true (abs has the regular meaning for real number, i.e. stands for the absolute value). This example explains why the norm of z is noted abs(z): it extends the abs function to complex numbers, yet is the regular abs we know when the complex number actually has no imaginary part... This justifies a posteriori our use of the abs notation for the norm.
Given the following notations:

    z1 = a + bi = r1 * exp(i * t1)
    z2 = c + di = r2 * exp(i * t2)
    z  = <any complex or real number>

the following (overloaded) operations are supported on complex numbers:

    z1 + z2 = (a + c) + i(b + d)
    z1 - z2 = (a - c) + i(b - d)
    z1 * z2 = (r1 * r2) * exp(i * (t1 + t2))
    z1 / z2 = (r1 / r2) * exp(i * (t1 - t2))
    z1 ** z2 = exp(z2 * log z1)
    ~z = a - bi
    abs(z) = r1 = sqrt(a*a + b*b)
    sqrt(z) = sqrt(r1) * exp(i * t/2)
    exp(z) = exp(a) * exp(i * b)
    log(z) = log(r1) + i*t
    sin(z) = 1/2i (exp(i * z) - exp(-i * z))
    cos(z) = 1/2 (exp(i * z) + exp(-i * z))
    atan2(y, x) = atan(y / x) # Minding the right quadrant, note the order.

The definition used for complex arguments of atan2() is

    -i log((x + iy)/sqrt(x*x+y*y))

Note that atan2(0, 0) is not well-defined. The following extra operations are supported on both real and complex numbers:

    Re(z) = a
    Im(z) = b
    arg(z) = t
    abs(z) = r
    cbrt(z) = z ** (1/3)
    log10(z) = log(z) / log(10)
    logn(z, n) = log(z) / log(n)
    tan(z) = sin(z) / cos(z)
    csc(z) = 1 / sin(z)
    sec(z) = 1 / cos(z)
    cot(z) = 1 / tan(z)
    asin(z) = -i * log(i*z + sqrt(1-z*z))
    acos(z) = -i * log(z + i*sqrt(1-z*z))
    atan(z) = i/2 * log((i+z) / (i-z))
    acsc(z) = asin(1 / z)
    asec(z) = acos(1 / z)
    acot(z) = atan(1 / z) = -i/2 * log((i+z) / (z-i))
    sinh(z) = 1/2 (exp(z) - exp(-z))
    cosh(z) = 1/2 (exp(z) + exp(-z))
    tanh(z) = sinh(z) / cosh(z) = (exp(z) - exp(-z)) / (exp(z) + exp(-z))
    csch(z) = 1 / sinh(z)
    sech(z) = 1 / cosh(z)
    coth(z) = 1 / tanh(z)
    asinh(z) = log(z + sqrt(z*z+1))
    acosh(z) = log(z + sqrt(z*z-1))
    atanh(z) = 1/2 * log((1+z) / (1-z))
    acsch(z) = asinh(1 / z)
    asech(z) = acosh(1 / z)
    acoth(z) = atanh(1 / z) = 1/2 * log((1+z) / (z-1))

arg, abs, log, csc, cot, acsc, acot, csch, coth, acsch, and acoth have aliases rho, theta, ln, cosec, cotan, acosec, acotan, cosech, cotanh, acosech, and acotanh, respectively. Re, Im, arg, abs, rho, and theta can be used also as mutators. The cbrt returns only one of the solutions: if you want all three, use the root function.
The root function is available to compute all the n roots of some complex, where n is a strictly positive integer. There are exactly n such roots, returned as a list. Getting the number mathematicians call j such that:

    1 + j + j*j = 0;

is a simple matter of writing:

    $j = (root(1, 3))[1];

The kth root for z = [r,t] is given by:

    (root(z, n))[k] = r**(1/n) * exp(i * (t + 2*k*pi)/n)

You can return the kth root directly by root(z, n, k), indexing starting from zero and ending at n - 1. The spaceship numeric comparison operator, <=>, is also defined. In order to ensure its restriction to real numbers conforms to what you would expect, the comparison is run on the real part of the complex number first, and imaginary parts are compared only when the real parts match. To create a complex number, use either:

    $z = Math::Complex->make(3, 4);
    $z = cplx(3, 4);

if you know the cartesian form of the number, or

    $z = 3 + 4*i;

if you like. To create a number using the polar form, use either:

    $z = Math::Complex->emake(5, pi/3);
    $x = cplxe(5, pi/3);

instead. The first argument is the modulus, the second is the angle (in radians, the full circle is 2*pi). (Mnemonic: e is used as a notation for complex numbers in the polar form). It is possible to write:

    $x = cplxe(-3, pi/4);

but that will be silently converted into [3,-3pi/4], since the modulus must be non-negative (it represents the distance to the origin in the complex plane). It is also possible to have a complex number as either argument of make, emake, cplx, and cplxe: the appropriate component of the argument will be used.

    $z1 = cplx(-2, 1);
    $z2 = cplx($z1, 4);

The new, make, emake, cplx, and cplxe will also understand a single (string) argument in cartesian or exponential form, in which case the appropriate cartesian and exponential components will be parsed from the string and used to create new complex numbers. The imaginary component and the theta, respectively, will default to zero.
The new, make, emake, cplx, and cplxe will also understand the case of no arguments: this means plain zero or (0, 0).

When printed, a complex number is usually shown under its cartesian style a+bi, but there are legitimate cases where the polar style [r,t] is more appropriate. The process of converting the complex number into a string that can be displayed is known as stringification.

By calling the class method Math::Complex::display_format and supplying either "polar" or "cartesian" as an argument, you override the default display style, which is "cartesian". Not supplying any argument returns the current settings.

This default can be overridden on a per-number basis by calling the display_format method instead. As before, not supplying any argument returns the current display style for this number. Otherwise whatever you specify will be the new display style for this particular number. For instance:

    use Math::Complex;

    Math::Complex::display_format('polar');
    $j = (root(1, 3))[1];
    print "j = $j\n";          # Prints "j = [1,2pi/3]"
    $j->display_format('cartesian');
    print "j = $j\n";          # Prints "j = -0.5+0.866025403784439i"

The polar style attempts to emphasize arguments like k*pi/n (where n is a positive integer and k an integer within [-9, +9]); this is called polar pretty-printing. For the reverse of stringifying, see the make and emake.

#CHANGED IN PERL 5.6

The display_format class method and the corresponding display_format object method can now be called using a parameter hash instead of just one parameter.

The old display format style, which can have values "cartesian" or "polar", can be changed using the "style" parameter.

    $j->display_format(style => "polar");

The one-parameter calling convention also still works.

There are two new display parameters. The first one is "format", which is a sprintf()-style format string to be used for both numeric parts of the complex number(s). The default format is somewhat system-dependent but most often it corresponds to "%.15g". You can revert to the default by setting the format to undef.
    # the $j from the above example

    $j->display_format('format' => '%.5f');
    print "j = $j\n";          # Prints "j = -0.50000+0.86603i"

    $j->display_format('format' => undef);
    print "j = $j\n";          # Prints "j = -0.5+0.866025403784439i"

Notice that this also affects the return values of the display_format methods: in list context the whole parameter hash will be returned, as opposed to only the style parameter value. This is a potential incompatibility with earlier versions if you have been calling the display_format method in list context.

The second new display parameter is "polar_pretty_print", which can be set to true or false, the default being true. See the previous section for what this means.

Thanks to overloading, the handling of arithmetic with complex numbers is simple and almost transparent. Here are some examples:

    use Math::Complex;

    $j = cplxe(1, 2*pi/3);  # $j ** 3 == 1
    print "j = $j, j**3 = ", $j ** 3, "\n";
    print "1 + j + j**2 = ", 1 + $j + $j**2, "\n";

    $z = -16 + 0*i;         # Force it to be a complex
    print "sqrt($z) = ", sqrt($z), "\n";

    $k = exp(i * 2*pi/3);
    print "$j - $k = ", $j - $k, "\n";

    $z->Re(3);              # Re, Im, arg, abs,
    $j->arg(2);             # (the last two aka rho, theta)
                            # can be used also as mutators.

The constant pi and some handy multiples of it (pi2, pi4, and pip2 (pi/2) and pip4 (pi/4)) are also available if separately exported:

    use Math::Complex ':pi';
    $third_of_circle = pi2 / 3;

The floating point infinity can be exported as a subroutine Inf():

    use Math::Complex qw(Inf sinh);
    my $AlsoInf = Inf() + 42;
    my $AnotherInf = sinh(1e42);
    print "$AlsoInf is $AnotherInf\n" if $AlsoInf == $AnotherInf;

Note that the stringified form of infinity varies between platforms: it can be, for example, any of

    inf
    infinity
    Inf
    Infinity

or it can be something else. Also note that in some platforms trying to use the infinity in arithmetic operations may result in Perl crashing because using an infinity causes SIGFPE or its moral equivalent to be sent.
The way to ignore this is

    local $SIG{FPE} = sub { };

The division (/) and the following functions

    log ln log10 logn
    tan sec csc cot
    atan asec acsc acot
    tanh sech csch coth
    atanh asech acsch acoth

cannot be computed for all arguments because that would mean dividing by zero or taking logarithm of zero. These situations cause fatal runtime errors looking like this

    cot(0): Division by zero.
    (Because in the definition of cot(0), the divisor sin(0) is 0)
    Died at ...

    atanh(-1): Logarithm of zero.
    Died at...

For the csc, cot, asec, acsc, acot, csch, coth, asech, acsch, the argument cannot be 0 (zero). For the logarithmic functions and the atanh, acoth, the argument cannot be 1 (one). For the atanh, acoth, the argument cannot be -1 (minus one). For the atan, acot, the argument cannot be i (the imaginary unit). For the atan, acot, the argument cannot be -i (the negative imaginary unit). For the tan, sec, tanh, the argument cannot be pi/2 + k * pi, where k is any integer. atan2(0, 0) is undefined, and if complex arguments are used for atan2(), a division by zero will happen if z1**2+z2**2 == 0.

Note that because we are operating on approximations of real numbers, these errors can happen when merely `too close' to the singularities listed above.

The make and emake accept both real and complex arguments. When they cannot recognize the arguments they will die with error messages like the following

    Math::Complex::make: Cannot take real part of ...
    Math::Complex::make: Cannot take imaginary part of ...
    Math::Complex::emake: Cannot take rho of ...
    Math::Complex::emake: Cannot take theta of ...

Saying use Math::Complex; exports many mathematical routines in the caller environment and even overrides some (sqrt, log, atan2). This is construed as a feature by the Authors, actually... ;-)

All routines expect to be given real or complex numbers.
Don't attempt to use BigFloat, since Perl currently has no rule to disambiguate a '+' operation (for instance) between two overloaded entities.

In Cray UNICOS there is some strange numerical instability that results in root(), cos(), sin(), cosh(), sinh() losing accuracy fast. Beware. The bug may be in the UNICOS math libs, in the UNICOS C compiler, or in Math::Complex. Whatever it is, it does not manifest itself anywhere else where Perl runs.

#SEE ALSO

Math::Trig

#AUTHORS

Daniel S. Lewart <lewart!at!uiuc.edu>, Jarkko Hietaniemi <jhi!at!iki.fi>, Raphael Manfredi <Raphael_Manfredi!at!pobox.com>, Zefram <zefram@fysh.org>

#LICENSE

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
September 14, 2015 By Divine Nwachukwu

In statistics, the determination of the sample size to be used for research has generated a lot of controversy. Sample size and power estimation is an important concern for researchers undertaking research projects, but more often than not it has been misunderstood or ignored completely.

What is sample size? According to inferential and descriptive statistics, the sample size of any population is a subset or a smaller unit of the main population on which a research study is to be carried out. For the outcome of any research about a population to be generalized, the sample size has to be able to explain the characteristics of the main population under study.

In order to enhance the precision of our survey or research, some software packages have been developed to assist researchers at all levels in determining the right sample size to be used when studying any population. Sample size calculation, when done manually, can be achieved, but it becomes nearly impossible when confronted with a large population. Manually calculating the sample size of a large population can be hectic, and conclusions reached on a sample size calculated manually cannot be confidently generalized across the population under study. But with the help of these software packages, researchers can determine the following:

i) The power of the test
ii) The sample size suitable for your experiment/survey

within little or no time, and the interesting part is that generalizations reached on these sample sizes can confidently be generalized across the population.

1. THE RESEARCHCLUE SAMPLE SIZE CALCULATOR: This is a sleek innovation brought about by researchclue developers to eliminate the stress of calculating sample size the manual way using the Taro Yamane sample size formula. The software is 100% accurate, reliable and fast!
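Taro Yamane's formula gives the sample size n for a known population size N at margin of error e as n = N / (1 + N·e²). A small sketch of that calculation (a generic illustration, not the researchclue tool itself):

```python
import math

def yamane_sample_size(population, margin_of_error=0.05):
    """Taro Yamane's formula: n = N / (1 + N * e**2)."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)  # round up to a whole respondent

# For a population of 2000 at a 5% margin of error:
print(yamane_sample_size(2000))  # 2000 / (1 + 2000*0.0025) = 333.3... -> 334
```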
2. POWER AND SAMPLE SIZE (PASS) SOFTWARE: As the name implies, the Power and Sample Size (PASS) software helps to calculate the power and sample size of a research study or survey. Researchers sometimes make the mistake of rejecting a true null hypothesis (Ho). The power of the test needs to be ascertained as a guide to knowing whether the research is making the right decision, i.e. rejecting a null hypothesis that is actually false and accepting its alternative, or vice-versa. This unique software helps to determine whether the research is judged "powerful" by placing it on a probability scale of 0 to 1. A value close to 1 indicates a very powerful study, i.e. one that rejects the hypothesis that should be rejected and accepts the one that should be accepted. A value close to 0 indicates a weak study or survey whose generalizations or recommendations may be false. For your research or survey to be powerful, this software should be called upon in every research project that is embarked upon.

Also, before any research is carried out, the exact sample size should be determined. Under- or over-estimating the sample size of a population may yield non-powerful or weak results. This software helps to ascertain the actual sample size to be used and hence makes the test powerful.

This software can be gotten at http://ncss.com/software/pass

The steps to complete a sample size or power calculation in PASS software are as follows:

1. Select and load a procedure using the PASS home window. This can be done using the navigation tree or by entering a search keyword. Click on a procedure button in the main display area to load the procedure.
2. Input the analysis parameters on the procedure window.
3. Click the calculate button on the procedure window to perform the calculation and view the results in the output window.
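Statistical power can also be approximated directly. The sketch below uses a generic normal-approximation formula for a two-sided one-sample z-test (an illustration of the concept, not PASS itself) to show how power depends on effect size, sample size, and significance level:

```python
import math
from statistics import NormalDist

def z_test_power(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test.

    effect_size is the true mean shift in standard-deviation units
    (Cohen's d). Power ~ Phi(d * sqrt(n) - z_{1 - alpha/2}).
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(effect_size * math.sqrt(n) - z_crit)

# Detecting a half-standard-deviation shift with n = 32 observations:
print(round(z_test_power(0.5, 32), 3))
```

As expected, power grows toward 1 as n increases and shrinks toward alpha as the effect size goes to zero.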
3. THE RAOSOFT SAMPLE SIZE CALCULATOR: The Raosoft sample size calculator is basically a piece of software that calculates or generates the sample size for a research study or survey. Estimating a sample size for a survey, project or research study can be confusing and frustrating; as a result, the Raosoft sample size calculator offers both sample size and confidence interval calculation to minimize the frustrations encountered during research. This software takes into account the margin of error, the confidence level and the response distribution. It also shows what the margin of error would be for various sample sizes.

This software can be gotten at www.raosoft.com/samplesize.html

4. PS: POWER AND SAMPLE SIZE CALCULATION: This is a unique piece of software used to calculate the power and the sample size of any survey. It is mostly used in domains such as biology, biostatistics, the social sciences and agriculture. This software can be used for studies with dichotomous, continuous or survival response measures. It is a handy tool meant for students, professionals and researchers with at least minimal statistical knowledge. A cardinal advantage of the PS: Power and Sample Size calculator is that it can be used with various study designs like t-tests, regression, design and analysis of experiments, analysis of variance, correlation and survival tests. Also, this software can generate graphs that make it less stressful to analyze the relationship between the power of the survey, the sample size and the detectable alternative hypothesis.

This software can be gotten at powerandsamplesize.com

5. THE SURVEY SYSTEM SURVEY SOFTWARE: This has features which enable a researcher to conduct sophisticated surveys and produce professional reports. The Survey System can record any type of answer and even records a respondent's actual voice with the help of its Voice Capture survey module.
This software helps to guide respondents through the correct questions, helps to modify questions to yield more accurate data, and checks the validity of answers given. It is mostly useful to researchers involved in field surveys or research. This survey software also helps researchers create unique surveys and helps to distribute researchers' questionnaires adequately online to gather adequate data. It is the most complete software package available for working with survey questionnaires.

This software can be gotten at www.surveysystem.com

6. POWER AND PRECISION: Power and Precision was created by Biostat Inc. The software helps to ascertain the power of a test; that is, it helps researchers in making decisions, i.e. rejecting a false null hypothesis. It basically helps to increase the precision of a research study or survey. The more precise the survey or research is, the more professional the work will be. Researchers often make the mistake of accepting a false null hypothesis, which at the end of the day can lead to false recommendations. In general, the Power and Precision software, as the name depicts, helps to increase the precision/power of a survey.

This software can be gotten at www.power-analysis.com
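The margin-of-error approach used by tools like the Raosoft calculator is based on the standard formula for estimating a proportion, n = z²·p(1−p)/e², with a finite-population correction. A rough sketch of that calculation (a generic illustration under textbook assumptions, not Raosoft's actual code):

```python
import math

def required_sample_size(margin_of_error=0.05, confidence_z=1.96,
                         response_dist=0.5, population=None):
    """Sample size to estimate a proportion: n = z^2 * p(1-p) / e^2,
    with an optional finite-population correction."""
    n = (confidence_z ** 2) * response_dist * (1 - response_dist) \
        / margin_of_error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

# 95% confidence, 5% margin of error, worst-case 50% response distribution:
print(required_sample_size())                 # infinite population
print(required_sample_size(population=2000))  # corrected for N = 2000
```

Using p = 0.5 is the conservative (worst-case) choice, which is why it is the default response distribution in most calculators.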
RGAN | CRAN/E

Generative Adversarial Nets (GAN) in R

CRAN Package

An easy way to get started with Generative Adversarial Nets (GAN) in R. The GAN algorithm was initially described by Goodfellow et al. 2014. A GAN can be used to learn the joint distribution of complex data by comparison. A GAN consists of two neural networks, a Generator and a Discriminator, where the two neural networks play an adversarial minimax game. Built-in GAN models make the training of GANs in R possible in one line and make it easy to experiment with different design choices (e.g. different network architectures, value functions, optimizers). The built-in GAN models work with tabular data (e.g. to produce synthetic data) and image data. Methods to post-process the output of GAN models to enhance the quality of samples are available.

• Version: 0.1.1
• R version: unknown
• Needs compilation? No
• Last release: 03/29/2022

Download statistics provided by cranlogs.
The SBHS Math Department presents a wide variety of courses at both college preparatory and honors/AP levels. We are proud to say our students have become teachers, engineers, program developers and researchers (just to name a few). Our department goal is for all students to see success in math, no matter the level! If you have questions about math classes, please ask your current teacher. We would love to help you plan out your high school classes or look at how you can use math in future careers.

Math Department: Mr. Campos, Mrs. Grady, Mrs. Horan, Ms. Rodriguez, Mr. Yingling
Math Department Chairperson: Mrs. Kristina Horan

Computer Tech
Corequisite(s): Must be enrolled in CP Algebra 1, CP Geometry, or CP Algebra 2
Prerequisite(s): none

Computer Tech is a one-semester elective course that covers two main topics: typing and coding. This course builds foundational skills in typing efficiently, which is essential for success in our digital world. Students develop proper technique with a keyboard. Goals include reading a prompt and typing it without looking down at the keyboard, and increasing their words-per-minute typing speed throughout the semester. The coding portion is designed to be an introductory course for students with no background in coding. Students create sequences of code to complete mazes, draw images, and solve puzzles. Coding assignments are self-paced and provide individualized feedback to help students debug their code step by step. The semester ends with a timed typing test with a specific words-per-minute typing speed goal and a final coding project that is an accumulation of the skills developed through the semester.

Algebra 1 CP
Algebra 1 CP develops a foundation of mathematical vocabulary with the operations (addition, subtraction, multiplication, division) and the properties of equality. This course builds the following skills regarding linear equations, functions, and inequalities:

Algebra 1 CP also introduces properties of exponents and quadratic equations.
Students completing this course will be eligible for Geometry CP (recommended) or Geometry Honors (A's both semesters and teacher approval required).

Algebra 1 Honors
Prerequisite(s): Successful completion of HSPT and middle school math, with teacher recommendation.

Algebra 1 Honors develops fluent mathematical vocabulary with the operations (addition, subtraction, multiplication, division) and the properties of equality. This course builds a deep understanding of the following skills regarding linear equations, functions, and inequalities:

Algebra 1 Honors also explores the properties of nonlinear equations regarding exponential and quadratic functions.

Students completing this course will be eligible for Geometry Honors (A's/B's both semesters and teacher approval) or Geometry CP.

Geometry CP
Prerequisite(s): Successful completion of Algebra 1 CP or Algebra 1 Honors

Geometry CP builds an understanding of geometric concepts through a wide knowledge of theorems, postulates, and definitions. This course extends the analysis of geometric relationships in the following categories:

CP Geometry also explores transformations of 2D figures in a coordinate plane.

Students completing this course will be eligible for Algebra 2 CP (recommended).

Geometry Honors
Prerequisite(s): Successful completion of Algebra 1 CP or Algebra 1 Honors (A's/B's both semesters and teacher approval required)

Geometry Honors builds an understanding of geometric concepts through a wide knowledge of theorems, postulates, and definitions. This course extends the analysis of geometric relationships in the following categories:

Geometry Honors also explores right triangle trigonometry and special right triangles in preparation for Algebra 2/Trig.

Students completing this course will be eligible for Algebra 2/Trig Honors (recommended; A's and B's both semesters and teacher approval required) or Algebra 2 CP.

Algebra 2 CP
Students completing this course will build on their Algebra 1 knowledge of linear, quadratic, and exponential functions.
Algebra 2 CP extends the analysis of graphs, function notation, tables, and equation solving for the following families of functions:

• Polynomials
• Radicals / Rational
• Exponential and Logarithmic
• Rational

In addition, students will be introduced to sequences and series, conic sections, and probability and statistics.

Students completing this course are eligible for Precalculus, Statistics, and Financial Algebra.

Algebra 2/Trig Honors
Prerequisite(s): Successful completion of Geometry Honors with an A both semesters and approval from the math department. Placement test required. A Texas Instruments graphing calculator (TI-84, TI-84 Plus, TI-84 Plus CE) is required for this course (neither CAS-equipped calculators nor Casio devices are permitted).

This course is designed to cover all topics of CP Algebra 2 in one semester, plus one semester of Trigonometry. This course covers the following topics:

Students completing this course will be eligible for a variety of courses. Students with A's both semesters and teacher approval will be eligible for AP Calculus. Students with B's both semesters and teacher approval are also eligible for Precalculus, Statistics, and Financial Algebra. Note: this course is UC-approved for an additional honors point.

Precalculus
Prerequisite(s): Successful completion of Algebra 2 CP or Algebra 2/Trig Honors. A Texas Instruments graphing calculator (TI-84, TI-84 Plus, TI-84 Plus CE) is required for this course (neither CAS-equipped calculators nor Casio devices are permitted).

Precalculus builds an understanding of algebraic and trigonometric concepts. This course covers the following topics:

Students completing this course will be eligible for Calculus CP (B's both semesters and teacher approval) or AP Calculus (A's both semesters and teacher approval required). This course is not required to take Statistics or Financial Algebra.

Financial Algebra
Financial Algebra applies advanced algebra skills in real-world contexts to help students make informed decisions regarding personal and professional finance.
This course covers the following topics:

Statistics
Prerequisite(s): Seniors or Juniors who have successfully completed Algebra 2

This elective course is an introduction to statistics and will include descriptive statistics, inferential statistics, and experimental design. Students will learn how to use technologies, including spreadsheets, to complete analyses, and then learn to interpret and apply them to solve applications. Students completing this course will be equipped to utilize data analysis in their daily lives.

Calculus CP
Prerequisite(s): Seniors who have successfully completed Precalculus or Algebra 2/Trig Honors with recommendation. A Texas Instruments graphing calculator (TI-84, TI-84 Plus, TI-84 Plus CE) is required for this course (neither CAS-equipped calculators nor Casio devices are permitted).

A strong foundation in both algebra and arithmetic prepares students to apply their knowledge of algebra and geometry in this course. Students learn the fundamentals of calculus (limits, derivatives and integrals) algebraically, graphically and numerically. This course covers the following broad topics:

AP Calculus AB
Prerequisite(s): Completion of Algebra 2/Trig Honors with A's both semesters or Precalculus with A's both semesters and teacher approval. Placement test and summer assignment required. A Texas Instruments graphing calculator (TI-84, TI-84 Plus, TI-84 Plus CE) is required for this course (neither CAS-equipped calculators nor Casio devices are permitted).

A strong foundation in both algebra and arithmetic prepares students to apply their knowledge of algebra and geometry in this course. Students learn the fundamentals of calculus (limits, derivatives and integrals) algebraically, graphically and numerically. This course is equivalent to a one-semester Calculus 1 course at the college level and is taught as such. It covers the following broad topics:

Students enrolled in AP Calculus are required to prepare for and take the AP Calculus AB examination in May of each school year.
Students completing this course may be eligible to enroll in Calculus 2 at the college level, pending current specific college policies.
Similar to prisms, we can consider the nets of pyramids and cones to determine their surface area and lateral surface area. Flatten the cone using the sliders and then straighten the arc and dissect the sector created by the lateral face of the cone. Use the sliders to explore what happens. 1. The formula for the lateral surface area of a cone is given by LA=\pi r h_s, where r is the radius of the base of the cone and h_s is the slant height of the cone. How does this formula relate to the area of the rearranged lateral face on the cone? The surface area of a pyramid or cone is the sum of the area of the base and the area of the lateral face. The formula can be written generally as: For a cone we must find the area of the circular base and add that to the area of the sector that creates the lateral face. \text{Area of the sector}=\pi l^{2} \cdot \frac{r}{l}=\pi rl We can then combine this area with the area of the circular base to get a formula for the surface area of a cone given by: \begin{aligned} \text{Surface area of right cone} &= \text{Area of base} + \text{Area of sector} \\ SA &= \pi r^2 + \pi r l \end{aligned} where r is the cone's base radius and l is the slant height.
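The final formula is straightforward to compute with. A small sketch (generic helper name) for the surface area of a right cone, using the base and sector areas derived above:

```python
import math

def cone_surface_area(r, l):
    """SA = pi*r^2 (base) + pi*r*l (lateral sector), l = slant height."""
    base = math.pi * r ** 2
    lateral = math.pi * r * l  # sector area: pi*l^2 * (r/l) = pi*r*l
    return base + lateral

# A cone with base radius 3 and slant height 5:
sa = cone_surface_area(3, 5)
print(round(sa, 2))  # pi*9 + pi*15 = 24*pi
```

If only the perpendicular height h is known, the slant height follows from Pythagoras: l = sqrt(r² + h²).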
A multiindex is simply a length \(D\) vector of nonnegative integers \(\mathbf{p}=[p_1,p_2,\dots,p_D]\). Multiindices are commonly employed for defining multivariate polynomial expansions and other function parameterizations. In these cases, sets of multiindices define the form of the expansion.

Example: Multivariate Polynomial

A multivariate polynomial \(\Phi_{\mathbf{p}}(\mathbf{x}) : \mathbb{R}^D\rightarrow \mathbb{R}\) can be defined as the product of \(D\) univariate polynomials. Using monomials for example,

\[\Phi_{\mathbf{p}}(\mathbf{x}) = \prod_{i=1}^D x_i^{p_i}\]

A multivariate polynomial expansion can then be written succinctly as

\[f(\mathbf{x}) = \sum_{\mathbf{p}\in\mathcal{S}} c_{\mathbf{p}} \Phi_{\mathbf{p}}(\mathbf{x})\]

where \(\mathcal{S}\) is a set of multiindices and \(c_{\mathbf{p}}\) are scalar coefficients.

Example: Wavelets

Multivariate polynomials are constructed from a tensor product of one-dimensional functions, and each one-dimensional function depends on a single integer: the degree of the one-dimensional polynomial. This is a common way to define multivariate functions from indexed families of one-dimensional basis functions. In a general setting, however, the one-dimensional family does not need to be indexed by a single integer. Families of one-dimensional functions indexed with multiple integers can also be "tensorized" into multivariate functions. Wavelets are a prime example of this. A one-dimensional wavelet basis contains functions of the form

\[\psi_{j,k}(x) = 2^{j/2}\psi(2^jx - k)\]

where \(j\) and \(k\) are integers and \(\psi :\mathbb{R}\rightarrow \mathbb{R}\) is an orthogonal wavelet. Unlike polynomials, two integers are required to index the one-dimensional family.
Nevertheless, a multivariate wavelet basis can be defined through the tensor product of components in this family:

\[\Psi_{\mathbf{p}}(\mathbf{x}) = \prod_{i=1}^{D/2} \psi_{p_{2i-1},p_{2i}}(x_i)\]

where \(\Psi_{\mathbf{p}} : \mathbb{R}^{D/2}\rightarrow \mathbb{R}\) is a multivariate wavelet basis function in \(D/2\) dimensions and \(\mathbf{p}\) is a multiindex with \(D\) components: consecutive pairs of components of \(\mathbf{p}\) supply the scale and shift indices of each one-dimensional wavelet.
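The monomial example can be made concrete in a few lines. A sketch (generic helper names, using plain Python tuples rather than any particular multiindex library) evaluating \(\Phi_{\mathbf{p}}\) and a small expansion over a multiindex set:

```python
from math import prod

def monomial(p, x):
    """Phi_p(x) = prod_i x_i**p_i for a multiindex p and a point x."""
    assert len(p) == len(x)
    return prod(xi ** pi for xi, pi in zip(x, p))

def expansion(multiindex_set, coeffs, x):
    """f(x) = sum over multiindices p in S of c_p * Phi_p(x)."""
    return sum(c * monomial(p, x) for p, c in zip(multiindex_set, coeffs))

# Total-degree-1 multiindex set in D = 2: {(0,0), (1,0), (0,1)}
S = [(0, 0), (1, 0), (0, 1)]
c = [1.0, 2.0, 3.0]  # f(x, y) = 1 + 2x + 3y
print(expansion(S, c, (1.0, 1.0)))  # 1 + 2 + 3 = 6.0
```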
The output voltage of a transresistance amplifier has been found to increase by 20% when the 1 kΩ load resistance is disconnected. Find the value of the amplifier output resistance.

For the amplifier in Figure 4(b), you are given that: r_s = 1 kΩ, R_1 = 64 kΩ, R_2 = 10 kΩ, R_C = 6 kΩ, R_E = 2 kΩ, R_L = 2 kΩ, V_CC = +20 V and β = 100.

Draw the d.c. equivalent circuit and calculate the quiescent current I_CQ (assuming V_BE = 0.7 V).

Draw the a.c. equivalent circuit (assuming X_C = 0 at signal frequency).

Calculate the input resistance from the source, R_i, and the output resistance from the load, R_o (using Table 4b).

Calculate the output voltage at the load, V_L (using Table 4b).
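For the first part: modeling the amplifier output as a source behind an output resistance Ro driving RL, disconnecting the load raises the output from V_oc·RL/(RL + Ro) to V_oc. A 20% rise therefore means 1 + Ro/RL = 1.2, i.e. Ro = 0.2·RL. A quick sketch of that arithmetic (generic helper name):

```python
def output_resistance(R_load, fractional_increase):
    """V_open / V_loaded = (R_load + R_out) / R_load = 1 + increase,
    so R_out = increase * R_load."""
    return fractional_increase * R_load

Ro = output_resistance(R_load=1000.0, fractional_increase=0.20)
print(Ro)  # 200.0 ohms
```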
Class 7 Chapter 4: Rational Numbers

Important Concepts and Formulas

1. A number of the form p/q, where p, q ∈ Z and q ≠ 0, is called a rational number.
2. A rational number p/q is said to be in standard form if q is positive and the integers p and q are co-primes.
3. By multiplying or dividing the numerator and denominator of a rational number by the same non-zero integer, we can obtain another rational number equivalent to the given rational number.
4. If both the numerator and the denominator of a rational number are positive or negative, then it is called a positive rational number. For example, 4/5, 8/15, -3/-5 are positive rational numbers.
5. A rational number is called negative if one of its numerator or denominator is negative. For example, -4/9, 5/-7, -11/16 are negative rational numbers.
6. Every rational number can be represented in decimal form.
7. When the denominator of a rational number has factors 2 or 5 (or both) only, then the decimal representation of the rational number will be terminating.
8. On a number line, a rational number is greater than every rational number on its left.
9. To compare two negative rational numbers, we compare them ignoring their negative signs and then reverse the order.
10. While adding rational numbers with the same denominators, we add the numerators, keeping the denominator the same.
11. While subtracting a rational number, we add the additive inverse of the rational number that is being subtracted from the other rational number.
12. The product of a rational number with its reciprocal is always 1.
13. To divide one rational number by another non-zero rational number, we multiply the rational number by the reciprocal of the divisor.
15. The reciprocal of 1 is 1, the reciprocal of –1 is –1, and zero has no reciprocal.
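Several of these rules can be checked directly with Python's `fractions.Fraction` type (a sketch for illustration, not part of the chapter):

```python
from fractions import Fraction

# Rule 2: Fraction reduces to standard form (positive denominator, co-primes).
assert Fraction(4, -6) == Fraction(-2, 3)

# Rule 12: the product of a rational number with its reciprocal is 1.
x = Fraction(-7, 11)
assert x * (1 / x) == 1

# Rule 13: dividing by a rational equals multiplying by its reciprocal.
a, b = Fraction(3, 4), Fraction(5, 8)
assert a / b == a * (1 / b)

# Rule 7: 7/40 terminates, since 40 = 2^3 * 5 has only factors 2 and 5.
print(float(Fraction(7, 40)))  # 0.175
```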
All the Digits

This multiplication uses each of the digits 0 - 9 once and once only. Using the information given, can you replace the stars in the calculation with figures?

All the Digits printable sheet

This represents the multiplication of a 4-digit number by 3. The whole calculation uses each of the digits 0 - 9 once and once only.

The 4-digit number contains three consecutive numbers, which are not in order. The third digit is the sum of two of the consecutive numbers.

The first, third and fifth digits of the five-digit product are three consecutive numbers, again not in order. The second and fourth digits are also consecutive numbers.

Can you replace the stars in the calculation with digits?

This activity originally featured in the hands-on Brain Buster Maths Boxes, developed by members of the NRICH Team and produced by BEAM. These resources are out of print but can still be found on

Getting Started

Use counters or scraps of paper with the digits $0 - 9$ written on them.

Make a list of $3$ consecutive numbers $0 - 9$, remembering that $3$ has already been accounted for.

What could the ones digit of the product be if the multiplication is by $3$?

Which consecutive numbers could be in the four-digit number? Which other digit could appear in the four-digit number?

Student Solutions

Well done to everybody who worked out the correct solution to this problem. We received a lot of very good solutions to this activity and unfortunately we don't have space to include all of them.

Samson and Ollie from ACPS found the solution 5694 × 3 = 17082, and they described how to multiply those numbers together to check the solution in this video:

Thank you for sharing this with us - it was very interesting to watch how you checked the calculation step-by-step.

Lots of children started by working out that the 4-digit number must include the digits 4, 5, and 6. Hugh from Sha Tin Junior School 5.3 in Hong Kong sent in this solution:

Explain the solution ???? x 3 = ?????
The first thing I noticed was that 3 was already used, and 0, 1, 2 cannot be in the first 4 digits, because if you multiply them by 3 the result will still be too small to reach 5 digits, so they must be the consecutive numbers at the bottom. Then I realised that the third digit is a sum of 2 consecutive numbers, so it ruled out the numbers 7, 8, 9 because if you add them together they will equal a two-digit number. So the only numbers left are 4, 5, 6, but 5 + 6 and 4 + 6 equal a two-digit number, so only 4 and 5 will work because 4 + 5 equals 9, so the number will be 5694. Then all you have to do is multiply it. To check it you can look if the first, third and fifth digits are consecutive numbers which are not in order.

5694 x 3

Then you will get the number 17082, because 1, 0, 2 are consecutive numbers!

Imeth from Pristine Private School, Dubai in the UAE had some similar ideas:

For the four digits, the only possible 3 consecutive numbers which don't have three and can make a sum that is neither 3 nor one of the others, is 4, 5, 6. The third digit of the 4-digit number is 9. For the five-digit number, 5 does not work, as 5 * 3 = 15, which has a 5 in the last digit. But 4 and 6 do work, and give 2 and 8 respectively. But both carry a 1, and 1 + 3 * 9 = 28, with an 8. So the last two digits of the two numbers, for the 4-digit and 5-digit number respectively, are 9 4 and 8 2. For the first 2 digits of the 4-digit number, they can be 6 5 or 5 6. Using trial and error, only 5 6 works, and the numbers are: 5694 and 17082.

These are all good ideas, Hugh and Imeth!

Quite a few children used trial and error to narrow down the final order of the digits. Katherine and Charmaine from Citipointe Christian College in Australia sent in this explanation of their solution:

We know that the first digit of the four-digit number cannot be 1, 2 or 3, because if 3 was multiplied by these numbers, it would not give a two-digit number.
Next we know that the three consecutive numbers in the four-digit number must be 4, 5 and 6, because if we add the two smallest numbers, it will give nine, allowing us to fulfil the rule of having two of the consecutive numbers added up to become the third digit of the four-digit number. The next possibility of consecutive numbers would be 5, 6 and 7; however, the two smallest numbers would add up to 11, which is a two-digit number, therefore it would not work as the third digit of the four-digit number. So the three consecutive numbers are 4, 5, 6 in the four-digit number. This concludes that 9 is the third digit of the four-digit number.

After that, we can eliminate 5 from the last digit of the four-digit number, as 5 multiplied by 3 is 15, and we will use the number 5 twice. Thus, 5 is placed in either the first or second digit of the four-digit number.

By knowing that 4, 5, 6 and 9 are for the four-digit number, the remaining numbers are 0, 1, 2, 7, 8. To figure out the product of this equation, we will use the facts stated on the website: that the first, third, and fifth digits are three consecutive numbers, and the second and fourth digits are also consecutive numbers. This allows us to find out that the first, third and fifth digits are 0, 1, 2 (in any order) and the second and fourth digits are seven and eight (in any order).

Now we do trial and error with the information found out by us above.

Trial and error:
5496 x 3 = 16488 WRONG!
4596 x 3 = 13788 WRONG!
5694 x 3 = 17082 CORRECT!

Jake from Illogan School, Cornwall in the UK sent in this description of how they worked out the answer:

1) I wrote a number line and crossed out 3.
2) The three consecutive numbers in the 4-digit number cannot be 0, 1 and 2 if the third digit of this number is the sum of two of them, and each digit is only used once (0 + 1 = 1, 1 + 2 = 3).
3) For the third digit to be a single number under 10 and the sum of two consecutive numbers above three, it must be nine: the sum of 4 plus 5.
4) The three consecutive numbers in the 4-digit number must be 4, 5 and 6.
5) The 5-digit number must contain the 5 numbers left over - 0, 1, 2, 7 and 8. Only 7 or 8 can go in positions 2 and 4 of the 5-digit number.
6) I used trial and error to find the ones digit of the 4-digit number. Only 4, 5 or 6 could go there. If 5 went there, the ones digit of the answer would be 5 (5 x 3 = 15) - not possible as I can only use each digit once and only 0, 1 or 2 can go in this spot. If 6 was the ones digit of the 4-digit number, the ones digit of the answer would be 8 (6 x 3 = 18), but I know 8 can't go there. So the ones digit of the 4-digit number must be 4, making the ones digit of the answer 2 (4 x 3 = 12). I know that the tens digit of the 4-digit number is 9. 9 x 3 = 27, plus the one carried over from the 12 = 28. So 8 is the fourth digit of the answer.
7) This makes 7 the second digit of the answer as it can only be 8 or 7.
8) I carried on using trial and error to put the 5 and 6 in place in the 4-digit number so that the answer was correct and made up of only the numbers that I had left.
9) I ended up with 5694 x 3 = 17082.

Well done to all of you for narrowing down the options and for being really descriptive about the steps you took to get there. I wonder if there is a way to work out that the first number is 5694 without using trial and error?

Liora from Yavneh Primary in the UK sent in this list of the steps they took to solve the problem:

1. The units of the 5-digit number can't be 0 or 5, because they don't appear in the 3 times table if you use each number once and can't go up to 10 x 3.
2. The first digit of the 5-digit number can only be 1 or 2, because otherwise it would be too large to be the answer to 3 x a 4-digit number.
3. Therefore the consecutive numbers used at the bottom are 0, 1, 2 (the three is already used).
4. The three consecutive numbers used in the four-digit number are 4, 5, 6, so that two of them sum to a single digit number (4 + 5 = 9).
The tens digit of the 4-digit number is therefore 9.
5. We know where the 9 is. 3 x 9 = 27. The tens digit of the 5-digit number would be a 7, or 8 (if a 1 is carried over). However we now know that a 2 is carried over to the 100s column. Three options are available (we already know the number at the bottom has to be a 0, 1, 2):
4 x 3 = 12 (add the carried over 2): 14 - not a possible solution
5 x 3 = 15 (add the carried over 2): 17 - not a possible solution
6 x 3 = 18 (add the carried over 2): 20 - correct solution
Therefore the 100s digit of the 4-digit number is a 6.
6. The units digit of the 4-digit number is either a 4 or 5 (we've used the 6). The unit at the bottom is either a 0, 1, 2. Therefore it has to be 4.
7. Place the remaining 5 and 7 so the multiplication is correct: 5694 x 3 = 17082.

You've thought really hard about this, Liora. I like the way you carefully considered what would be 'carried over' at each point and how that would affect the digits that could go in the different places in the 5-digit number.

Thank you as well to the following children who also sent in explanations of how they solved this problem: Dhruv from The Glasgow Academy in the UK; AE from ACPS; Jack from Cramlington Village Primary School; Emily, Yoonha, Bak Ying, Ryan, Rithisak from the Canadian International School of Phnom Penh in Cambodia; Fares from Pristine Private School in Dubai, UAE; Cecilia, Grace, Cynthia, Aarna, Freddy, Gloria, Elisha, Jerry, Ethan, Mira, Romeo and David from Citipointe Christian College in Australia; Zac in the UK; Eloise and Ben from Holy Trinity Pewley Down School in Guildford, UK; Group 1 at the Hamilton Academy; Chloe C, Ruby, Youyou, Sean, Jayden, Kathy and Vincent from Sha Tin Junior School 5.3 in Hong Kong; Shaunak from Ganit Manthan, Vicharvatika, India; and Hannah from

Teachers' Resources

Why do this problem?

This problem requires learners to think about place value and the way that standard column multiplication works.
Although the problem can be done by trial and improvement, it is solved more efficiently if worked through systematically.

Possible approach

You could start by showing the problem to the whole group and discussing what is required to do it. Do they understand what consecutive numbers are? Are they confident about the meaning of 'sum' and 'product'?

After this introduction the group could work in pairs on the problem so that they are able to talk through their ideas with a partner. This sheet is intended for rough working and the solution, and this sheet gives the blank calculation and digit cards to cut out.

Give the children time to make a start and then, after a suitable length of time, bring the group back together to talk about how they are getting on so far. This is a good opportunity to share some initial insights. For example, some pairs may have worked out which digits must be in the four-digit number, even if they don't know the order yet. Some may have started in a different way, for example by looking for the digit which could go in the units column of the four-digit number.

Draw attention to those pairs that have adopted a system in their working which means they are trying numbers in an ordered way. This means that they are guaranteed not to leave out any possibilities. You could then leave learners to continue with the problem.

At the end, the whole class could discuss the steps in their reasoning and how they reached a full solution. Did they use all the information in the question right from the start? Which parts were most helpful and why?

Key questions

What could the ones digit of the product be if the multiplication is by $3$?
Which consecutive numbers could be in the four-digit number?
Which other digit could appear in the four-digit number?

Possible extension

Challenge those pupils who finish quickly to prove to you that there is only one solution. How many solutions would there be if the clues about consecutive numbers did not hold?
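The extension question — proving that the solution is unique — can also be settled by an exhaustive search. Below is a minimal sketch in Python; it is not part of the original NRICH activity, and the function names are our own:

```python
from itertools import combinations

def consecutive(values):
    """True if the values can be ordered as a run of consecutive integers."""
    s = sorted(values)
    return all(b == a + 1 for a, b in zip(s, s[1:]))

def pandigital_pairs():
    """All 4-digit numbers n such that n, the multiplier 3 and the
    5-digit product 3*n together use each digit 0-9 exactly once."""
    pairs = []
    for n in range(3334, 10000):  # 3*n needs five digits, so n >= 3334
        p = 3 * n
        if sorted(str(n) + "3" + str(p)) == list("0123456789"):
            pairs.append((n, p))
    return pairs

def satisfies_clues(n, p):
    """Check the consecutive-number clues from the problem statement."""
    nd = [int(c) for c in str(n)]
    pd = [int(c) for c in str(p)]
    # three digits of n are consecutive, and n's third digit is the sum
    # of two of those consecutive digits
    ok_n = any(
        consecutive(trio)
        and any(nd[2] == a + b for a, b in combinations(trio, 2))
        for trio in combinations(nd, 3)
    )
    # 1st, 3rd and 5th digits of the product are consecutive (in some
    # order), and the 2nd and 4th digits are consecutive as well
    return ok_n and consecutive((pd[0], pd[2], pd[4])) and abs(pd[1] - pd[3]) == 1

answers = [(n, p) for n, p in pandigital_pairs() if satisfies_clues(n, p)]
print(answers)  # [(5694, 17082)] - the solution is unique
```

Dropping the `satisfies_clues` filter and printing `pandigital_pairs()` answers the second extension question: it lists every calculation that still works once the consecutive-number clues are removed.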
Possible support

Suggest working with digit cards and possibly a mini-whiteboard. This sheet gives the blank calculation and digit cards to cut out.
{"url":"https://nrich.maths.org/problems/all-digits","timestamp":"2024-11-13T03:19:32Z","content_type":"text/html","content_length":"53214","record_id":"<urn:uuid:3ddd520e-d33c-4085-a9c8-aa1c8d448e21>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00799.warc.gz"}
Jul 2008 Thursday July 31 2008 Time Replies Subject 11:28PM 1 clustering and data-mining... 10:41PM 0 multiple comparison 9:29PM 2 Help with hazard plots 9:22PM 1 anisotropy in vgm model. HELP!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 9:21PM 2 dput vs unclass to see what a factor really is composed of 9:19PM 2 allocMatrix limits 9:18PM 2 sort rows of matrix by rows of another matrix 8:40PM 3 cutting out numbers from vectors 8:30PM 1 strip names lattice graphics 8:20PM 1 Randomly drawing observations from factors. 8:07PM 4 instal tar.gz package on windows 8:05PM 0 vertical bpplot 7:54PM 1 bwplot with Date object 7:51PM 1 'system' command through Rserve 7:36PM 1 rollapply() to portions of a matrix 6:47PM 2 How to output R image to a file? 6:17PM 0 anisotrophy in fit.variogram problem 5:24PM 5 Random number generation 5:11PM 0 hwriter - Writing R objects in HTML format 4:08PM 0 Sperman Correlation with rcorr (Hmisc) 4:05PM 3 Code to calculate internal rate of return 1:53PM 0 Identifying Points That Belongs to Components of Mixture Models 11:31AM 1 nls weights warning message 11:11AM 4 Identifying common prefixes from a vector of words, and delete those prefixes 10:20AM 0 OT:Selling Data Mining Cloud-A Concept 9:31AM 0 random effects mixed model, different regressors 9:29AM 1 combinations with replications 9:10AM 1 add string 4:46AM 2 stats question 4:38AM 1 High resolution graphics from R Windows Vista 4:17AM 0 Generic plot function for GLM objects 4:12AM 1 Rearranging Data 4:02AM 1 multiple separators in scan() functions 3:37AM 1 predict rpart: new data has new level 2:40AM 2 S 3 generic method consistency warning please help 2:36AM 1 colnames from read.table 1:29AM 0 High resolution graphics from Windows Vista 12:53AM 0 curva PI 12:29AM 0 Is conditional evaluation of R code chunks possible in Sweave ? 
Wednesday July 30 2008 Time Replies Subject 11:19PM 1 dotFirst option 10:43PM 1 question about krige code in R 10:14PM 1 barplot for a categorial variable 9:48PM 2 Sampling two exponentials 7:52PM 1 Converting to subscripts and superscripts 7:04PM 1 Setting fixed size for segement plot using stars() (axes size vs print size) 6:27PM 2 john chambers R book 6:27PM 2 System exit codes 6:18PM 3 Random subset 6:12PM 5 History pruning 5:54PM 1 setting editor environment variable EDITOR either when configuring R for installation or in .Rprofile 5:22PM 1 Variable name in a String 5:04PM 1 best quad core configuration? 4:56PM 1 Hello, 4:47PM 1 bug in 'margins' behavior in reshape - cast 4:45PM 0 OT: Data mining on demand .Using cloud computers. 4:30PM 0 FOURIER TRANSFORM HELP 4:09PM 0 Axes in perspective plot - persp() 3:18PM 0 Percent Change After Each Interval 2:56PM 1 Rprintf will not build in my C++ compiler 1:54PM 1 model mix problem. FALSE CONVERGENCE 1:48PM 1 odds ratios in multiway tables (stratified) 1:41PM 1 Unexpected line type in lattice plot key on pdf device 1:39PM 2 R -Legality Question about R's Open Source GNU GPL License 1:11PM 1 adding lines to multiple plot 12:36PM 0 Connecting R to PostgreSQL on Windows. 12:18PM 0 "dens" function 11:59AM 1 Re creating Procrustes Plot in Lattice 11:14AM 2 FFT - (STATS) - is this correct? 11:03AM 1 Mixed effects model where nested factor is not the repeated across treatments lme??? 10:41AM 0 adding lines to multiple plots 10:37AM 0 png() does not generate a 24 bit PNG file 10:34AM 0 glm error: factor levels 9:54AM 2 Bizarre - R crashes on merge 9:50AM 1 function to transform response of a formula 9:38AM 1 read XML 9:31AM 0 R GUI help 9:24AM 2 problem with read.table() 8:53AM 6 Need help 8:44AM 1 need some help 8:28AM 2 Eclipse/Statet: How to set breakpoints 8:21AM 0 Successfully connected GNU R to PostgreSQL on windows XP. 7:49AM 1 problem with nested loops 7:44AM 1 ~ . 
help 4:58AM 2 Repeated Measure ANOVA-Old question 12:32AM 2 how to get the position of a vector-value Tuesday July 29 2008 Time Replies Subject 10:41PM 1 What's the best way to operate R from within Excel? 10:20PM 1 Is there a better way to check if an element of a list exists than using match on names? 10:15PM 1 rolling regression between adjacent columns 10:04PM 3 placing a text in the corner of a plot 9:58PM 1 correlation between matrices - both with some NAs 7:40PM 0 optimize simultaneously two binomials inequalities using nlm 7:23PM 2 R command history -- can it be like Matlab's? 6:12PM 3 finding a faster way to do an iterative computation 6:12PM 0 Flexible semivariogram in R (HELP) 5:45PM 1 List names help 5:30PM 0 Major Cloud Computing Testbed Announced at University of Illinois 4:56PM 2 accessing list elements 4:55PM 3 try question 4:38PM 4 Graphics function question 4:36PM 1 ls() and memory question 4:28PM 3 table questions 4:27PM 0 Question regarding statisticians 4:12PM 1 more environment questions 4:11PM 0 Bootstraping GAMs for confidence intervales calculation 4:00PM 1 tensor product of equi-spaced B-splines in the unit square 3:37PM 1 Removing script file 3:36PM 0 subscript out of bounds help 3:28PM 2 biplot_group_colours_and_point_symbols 3:26PM 0 subscript out of bounds error. 2:51PM 2 Most often pairs of chars across grouping variable 2:09PM 2 About clustering techniques 2:03PM 1 Howto Draw Bimodal Gamma Curve with User Supplied Parameters 1:01PM 1 combining zoo series with an overlapping index? 11:55AM 2 'for' loop, two variables 11:42AM 1 optim fails when using arima 11:38AM 0 stringdot ? 11:34AM 2 Panel of pie charts 11:14AM 0 mac install no font found problem in quartz display 10:32AM 0 max 10:10AM 2 FW: Installing BRugs 9:51AM 1 Bug in sd() and var() in handling vectors of NA (R version 2.7.1)? 
9:24AM 1 keywords 8:49AM 1 Problem reading a particular file with read.spss() 7:37AM 1 How to set the parameters in Trellis Graphics (by Lattice package) 5:00AM 1 list 4:23AM 2 Is there anyway to clip the bottom of a barplot? 4:14AM 2 Help interpreting density(). 2:04AM 1 environment question 12:58AM 0 "wave model" semivariogram Monday July 28 2008 Time Replies Subject 10:42PM 1 Is there a way to avoid loading dependendent packages? 10:41PM 2 Rf_error crashes entire program. 8:24PM 1 loop does not work 7:36PM 1 How to unsubscribe? 7:15PM 3 Is there way to multiple plots on gap.plot? 6:34PM 1 Negative Binomial Regression 6:32PM 7 Legality Question about R's Open Source GNU GPL License 6:10PM 3 Fill in NA values in vector with previous character/factor 5:36PM 3 Case statements in R 5:07PM 0 Question about the SSOAP package 4:57PM 1 Overlay of simple plots 4:53PM 5 rollapply() opertation on ts matrix 4:14PM 1 multiv 3:59PM 1 RStem with portuguese language 3:47PM 0 Bootstrapping 2:54PM 2 writing the plots 2:44PM 4 RODBC to query an Oracle table 1:32PM 2 how to add notes to the graph? 1:15PM 3 speeding up loop and dealing wtih memory problems 12:38PM 0 Rv: Chi-square parameter estimation 12:28PM 0 randomSurvivalForest 3.5.0 now available 12:14PM 1 Converting from char to POSIX: 10:19AM 2 axis.break on Date-x-axis in lattice xyplot 9:43AM 0 Help with yaImpute 8:54AM 0 I can not see help.start() in Ubuntu. 6:08AM 1 How to set directory Rscript runs in/Sweave output directory 5:21AM 1 Are there any packages that can process images other than pixelmap (i.e. pnm)? 4:20AM 2 Help with a loop 3:26AM 2 read.table question 2:06AM 1 Mixed model question. 
12:32AM 2 Converting english words to numeric equivalents 12:12AM 1 Interpolating a line and then summing there values for a diurnal oxygen curve (zoo object) Sunday July 27 2008 Time Replies Subject 10:29PM 0 Rolling regression - o/p selected coefficients 10:20PM 4 product of successive rows 8:32PM 1 help with durbin.watson 7:39PM 1 64-bit R on Mac OS X 10.5.4 7:36PM 0 Fitting a Bivariate Poisson log-normal distribution 7:10PM 2 Colors in Sweave 6:51PM 2 Link functions in SEM 6:29PM 2 equivalent R functions for Numerical Recipes fitxy and fitexy ? 2:24PM 0 competing risk model with time dependent covariates under R or Splus 11:55AM 4 Object-oriented programming in R for Java programmers? 11:01AM 0 re sponse surface analysis 7:45AM 1 Color of box frame in Legend (Was: Matrix barplot) 7:39AM 1 A easy way to write formula 7:34AM 1 Retain plot? 7:05AM 1 Complex to polar? 5:00AM 1 Lattice wireframe: How to avoid drawing lines around polygons when using shade=TRUE 4:39AM 1 Transfer Function Modeling Saturday July 26 2008 Time Replies Subject 11:07PM 1 Coarsening the Resolution of a Dataset 9:42PM 2 Spatial Sample 9:26PM 4 Data length mismatch. 9:12PM 2 Sample 8:26PM 2 GLS "no terms component" error 4:00PM 1 Can't get the correct order from melt.data.frame of reshape library. 3:37PM 2 Beginning lm 3:30PM 1 Marking Interval Length 3:24PM 0 .Rprofile in RFrameworks? 3:17PM 1 issues with gap.plot function 3:16PM 1 Add a Vector to a Matrix 2:19PM 1 Remove non-numerical columns from data frame 12:43PM 0 competing risk model with time dependent covariates 11:47AM 2 response surface analysis 11:40AM 1 S-PLUS code in R 9:59AM 0 Help to Text Windoes in Menu with TCL-TK 9:08AM 0 gam() of package "mgcv" and anova() 4:52AM 1 error in test 4:06AM 1 Simple vector question. 
3:30AM 1 Help with matrix 12:07AM 1 64-bit R on Mac OS X 10.4.5 12:00AM 4 parametric bootstrap Friday July 25 2008 Time Replies Subject 10:43PM 0 fit.dist gnlm question, NaN and Inf results 9:54PM 1 Thin frame line around R pdf output in LaTeX 9:32PM 1 Matrix from List 9:18PM 0 Error in vector("double", length) 9:06PM 3 Numerical question 8:28PM 1 How to pass function argument of same name to internal call? 8:20PM 0 R-pmg (poor man gui) question 8:01PM 2 graphing regression coefficients and standard errors 7:50PM 0 Pairs() function: selective changing of ylim; use of key 7:39PM 0 s-plus in R... simpler code 7:31PM 1 Selecting the first measurement only from a longitudinal sequence 7:23PM 1 Percentile Estimation From Kernel Density Estimate 7:01PM 1 create multi windows plot at the same time 6:49PM 0 need help in parametric bootstrap 6:24PM 1 transcript a matlab code in R 6:13PM 1 Minor Bug in Documentation of merge() 4:13PM 1 plot zoo custom panel help 3:52PM 1 Interactive plot using playwith() and abline 2:49PM 3 melting a list: basic question 2:41PM 0 Package np version 0.20-0 released to CRAN 2:27PM 1 Saving a workspace with "extras"? 2:27PM 1 Insert rows into a pre-existing data frame 2:03PM 1 Chi-square parameter estimation 1:53PM 0 "Multiple" Regressions indexed by goodness of fit 1:40PM 1 S4 class and Package NAMESPACE [NC] 1:23PM 1 Scatterplot matrix one column vs. the rest of the dataset 1:08PM 1 Plink bed files 1:07PM 3 Maximization under constraits 12:51PM 1 extracting Pr>ltl from robcov/ols (Design) 12:49PM 0 Errror in running R2.7.1 version to R2.6.2 version 12:40PM 1 pca 12:13PM 4 Matrix barplot 11:03AM 1 Write lower half of distance matrix only 10:33AM 1 Building a data frame with missing data 10:29AM 1 Tutorial on rgl Graphics 10:26AM 1 cannot allocate vector of size... 
9:36AM 0 Three contrast matrices on the same linear model 9:34AM 0 glht after lmer with "$S4class-" and "missing model.matrix-" errors with DATA 9:27AM 1 glht after lmer with "$S4class-" and "missing model.matrix-" errors 9:07AM 3 Help with rep 9:02AM 0 discrete event simulation - "factory" with rework 8:54AM 2 Package Hmisc, functions summary.formula() and latex(), options pdig, pctdig, eps and prmsd 6:34AM 2 problem in choosing cran mirror ! 6:19AM 2 Fit a 3-Dimensional Line to Data Points 3:26AM 1 Installation error for RCurl in Redhat enterrpise 5 3:19AM 3 Bug in gap.plot 3:02AM 0 nlminb--lower bound for parameters are dependent on each others 2:03AM 1 Relying on R to generate html tables with hyperlinks and pictures ? 1:13AM 1 Create an R package 1:07AM 2 Rolling range and regression calculations 12:46AM 2 How to preserve the numeric format and digits ? Thursday July 24 2008 Time Replies Subject 11:35PM 2 simple random number generation 10:40PM 0 ANNOUNCE: rapache 1.1.0 release 10:25PM 0 OT: course in research administration 10:09PM 1 Installing R packages in Textmate 10:07PM 3 Integrating R and Textmate 9:58PM 1 still more dumb questions 8:29PM 3 Should this PDF render correctly without font embedding? 8:25PM 0 SPR experiment: using lmer, transforming data, collinearity, and using a covariable 8:19PM 1 Cairo graphics 8:13PM 1 ggplot2 help 7:19PM 2 factor question 6:04PM 0 Problem with GLS dwtest function 5:42PM 1 R-package install 5:37PM 1 Parallel Processing and Linear Regression 5:13PM 1 Formatting Syntax when Merging 3:31PM 2 Help with which() 3:30PM 2 What is wrong with this contrast matrix? 2:55PM 1 [Fwd: Re: Coefficients of Logistic Regression from bootstrap - how to get them?] 2:06PM 1 Error - unable to load shared library 2:00PM 5 How to delete duplicate cases? 1:57PM 4 Dividing by 0 1:47PM 1 how know the status of particular process 1:36PM 0 Multinomial logistic regression with non-numerical data. 
12:59PM 3 incrementing for loop by 2 12:09PM 2 NAs - NAs are not allowed in subscripted assignments 12:02PM 1 Problem with scatterplot3d example 11:07AM 0 Problems with Rmpi and LAM 7.1.4 11:01AM 4 Just 2 more questions - for now! 10:55AM 0 Bootstraping GAMs: confidence intervals 10:38AM 0 unable to load a library 10:30AM 4 Is there an equivalent * operator? 10:04AM 1 How to get rule number in arules 9:54AM 1 Coconut benchmark for R? 8:59AM 1 time zone - best way to shift hours 7:23AM 1 ggplot question 6:23AM 1 convert a vector of words into a matrix 3:18AM 2 Can R fill in missing values? 1:29AM 3 how to export ".xls" file with colorful cells? Wednesday July 23 2008 Time Replies Subject 11:02PM 1 Aggregating zoo object with NAs in multiple column 10:51PM 2 path and R CMD check 10:03PM 0 Simulating multilevel responses 9:53PM 3 how can I write code to detect whether the machine is Windows or Linux? 9:49PM 1 R2WinBUGS problem 8:55PM 2 Weighted variance function? 8:33PM 4 Using PrettyR to produce LaTeX output 7:45PM 0 pvclust 7:41PM 2 truncated normal 7:39PM 3 Constrained coefficients in lm (correction) 7:35PM 0 Constrained coefficients in lm 7:24PM 0 Fw: Using if, else statements 7:12PM 2 Point-biserial correlation 7:03PM 1 Sample on dataframe 6:49PM 1 Questions on weighted least squares 6:47PM 1 Q re iterating a process and appending results to output file 6:29PM 5 Histogram 5:43PM 3 sum each row and output results 4:23PM 2 Flip Matrix form file? 4:20PM 2 shQuote and cat 4:05PM 2 Warning message in if else statement 3:44PM 6 Using if, else statements 3:12PM 2 Using RODBC to use SQL queries 2:06PM 1 Calling LISP programs in R 2:03PM 8 sequential sum of a vector... 1:49PM 0 How to control the memory? 1:23PM 6 Convert list of lists <--> data frame 1:14PM 1 [Fwd: Re: Coefficients of Logistic Regression from bootstrap - how to get them?] 1:07PM 3 Quantitative analysis of non-standard scatter plots. 12:55PM 1 Time series reliability questions 12:54PM 18 Simple... but... 
10:39AM 3 maximum likelihood method to fit a model 9:56AM 6 par() function does not work 8:21AM 1 Combining 4th column from 30 files 7:06AM 1 mle2(): logarithm of negative pdfs 4:07AM 0 rpart node number 3:43AM 2 Can't Load Text Files 2:02AM 1 select significant variables 1:57AM 3 average replicate probe values 1:54AM 1 an extremely simple question 1:12AM 0 deblur microscope image stacks 12:22AM 1 rpart confidence intervals? Tuesday July 22 2008 Time Replies Subject 11:05PM 2 Plotting Multiple lines on one plot 10:40PM 2 F test 9:49PM 1 TextMate and R 8:31PM 2 How to.... 8:28PM 1 Reading the data from specific columns 8:22PM 2 Cannot re-start R after bus error 8:18PM 2 Decoding subscripts/superscripts from CSVs 7:52PM 1 help with simulate AR(1) data 7:24PM 1 .asp (not really an R question) but related 7:12PM 0 help on R benchmarks 7:04PM 1 rollmean and stl 6:59PM 1 How to simulate heteroscedasticity (correlation) 6:51PM 4 Function Error 5:59PM 2 Help with rearranging data 5:48PM 2 Editor for Mac 5:16PM 0 2 Courses*** R/Splus Advanced Programming : August 14-15 in Seattle ! by XLSolutions Corp 4:40PM 1 tklistbox and extracting selection to R 4:25PM 2 S4 : setGeneric for classical methods 4:24PM 1 data transformation 1:57PM 2 Does R have SQL interface in windows? 1:50PM 2 saving plot both as jpg and pdf 1:50PM 1 normalised/transformed regressions 1:47PM 5 How to filter a data frame? 1:34PM 0 Ignore previous emails 1:24PM 4 Opening files from R terminal - appologies 1:18PM 0 loop for multiple regressions 1:02PM 3 Error in installing packages 11:58AM 2 randomForest Tutorial 9:44AM 2 rpart$where and predict.rpart 8:42AM 1 scatter plot using ggplot 8:12AM 1 Urgent 8:00AM 1 determing font type in expression 7:59AM 2 Table orderd by frequencies 6:26AM 1 xyplot help 4:44AM 1 Accessing individual variables from summary of lm 1:42AM 1 Lattice: How to draw curves from given formulae Monday July 21 2008 Time Replies Subject 11:24PM 4 how to speed up this for loop? 
9:55PM 1 Large number of dummy variables 9:21PM 0 how to draw multiples curses (for given formulae) in lattice 9:17PM 1 y-axis number format on plot, barplot etc. 9:00PM 0 Table of tables? 8:56PM 1 Using integrate 7:35PM 0 optimize function help!! 6:49PM 3 vector help 6:31PM 1 Control parameter of the optim( ): parscale 5:39PM 3 Editor fpr Mac OS 5:02PM 2 Output Nicely formatted tables from R 3:49PM 1 Parameter names in nls 3:39PM 1 Mclust - which cluster is each observation in? 3:22PM 1 dev2bitmap error, 'gs' cannot be found 3:20PM 0 xyplot: distance between axis and axis-label gets wrong 3:12PM 2 Creation of png=problems 3:04PM 5 Coefficients of Logistic Regression from bootstrap - how to get them? 2:56PM 1 portfolio optimization problem - use R 2:54PM 2 avoid loop with three-dimensional array 2:00PM 1 alternate usage of soil.texture (plotrix) 1:31PM 1 Subsetting data by date 1:12PM 1 Howto Restart A Function with Try-Error Catch 12:52PM 1 Cross correlation significance test 12:13PM 2 Getting plot axes where they should be! 11:35AM 2 Time Series - Long Memory Estimation 11:15AM 0 help with integrate function 10:32AM 1 On creating grouped data set. 9:11AM 1 fama-macbeth 9:10AM 2 sampling from a list of values while excluding one 9:03AM 0 A question on the quandratic programming 7:52AM 1 CART Analysis 7:27AM 0 SVM: Graphical representation 6:09AM 2 CART and CHAID 2:35AM 1 RODBC - problems using odbcDriverConnect without DSN 1:03AM 3 Lattice Version of grconvertX or variant on panel.text? Sunday July 20 2008 Time Replies Subject 11:16PM 0 coin package (conditional inference / permutation): parameter teststat 11:16PM 1 Sum efficiently from large matrix according to re-occuring levels of factor? 
8:33PM 3 enumerate subsets 7:24PM 2 Erro: cannot allocate vector of size 216.0 Mb 7:03PM 0 Off topic: SS formulae for 3-way repeated measure anova (for when aov() fails) 5:27PM 3 asp and ylim 4:48PM 2 Erro updating HTML package descriptions in packages.html 4:44PM 2 Conditionally Updating Lattice Plots 4:34PM 2 fill in area between 2 lines with a color 4:12PM 2 Indicator Kriging? 1:42PM 4 Access to values of function arguments 1:36PM 0 How do I install Joomla! 1.5? 12:44PM 4 drawing segments through points with pch=1 11:32AM 3 Order of columns(variables) in dataframe 11:19AM 1 Error in edit(name,file,title,editor) 10:41AM 2 R interprets symbol as q() 7:45AM 2 problem with read.table 2:46AM 1 confusion matrix in randomForest Saturday July 19 2008 Time Replies Subject 10:10PM 2 Non-linearly constrained optimisation 9:21PM 1 estimating volume from xyz points 8:02PM 1 wroung groupedData despite reading Bates and Pinheiro 3 times 6:30PM 1 replicate matrix blocks different numbers of times into new matrix 6:16PM 1 Sweave add code \201 4:04PM 2 extracting colnames to label plots in a function 3:47PM 1 principal factor analysis 3:14PM 0 fixed effect significance with lmer() vs. t-test 2:07PM 1 Resshufling algorithm (Thiago Souza) 1:10PM 1 Discretize continous variables.... 10:29AM 2 Title for graph with multiple plots 7:35AM 3 R version 2.7.1 (warnings) 6:09AM 3 Graphics not working for R in ubuntu 12:20AM 2 How to solve systems of nonlinear equations in R? Friday July 18 2008 Time Replies Subject 11:13PM 0 Retrieving data from a tcl /tk function 11:00PM 1 problem with putting text in outer margins (mtext outer=TRUE) 10:21PM 2 with lapply() how can you retrieve the name of the object 10:17PM 0 spreading the risk 9:26PM 1 system time - windows specific problem 7:51PM 1 Calculating Betweenness - Efficiency problem 7:11PM 1 Functions similar to step() and all.effects() that work for lme() objects? 
6:42PM 3 using which to identify a range of values 6:24PM 3 "Spreading risk" in a matrix 5:52PM 0 copy number estimation through oligo package? 5:24PM 0 Polynomial Approximation with Exponential Kernel 4:44PM 3 How to cut data elements included in a text line 4:37PM 1 par("din") vs dev.size() 4:08PM 1 cbind help 3:51PM 1 re ad and check table 3:46PM 0 dataedit 3:10PM 1 manipulate a matrix2 3:04PM 2 Landscape mode for pdf 2:03PM 1 creating names for lists 2:01PM 0 A neural network problem---neuralnet package 1:37PM 2 source code functions 12:53PM 1 only "T" becomes logical colum with read.table 12:37PM 3 Change font-face in title 12:21PM 2 generate repeats of a vector's elements 12:11PM 5 Reading SPSS .por files 11:45AM 2 name returned by lapply 11:41AM 0 Correction: RStat version 2.7.*1* now available 10:50AM 0 RStat version 2.7.2 now available 10:47AM 1 finding "chuncks" of data that have no NA s 10:15AM 1 t-test for multiple variables 9:27AM 0 Equation sequencing 7:36AM 2 column wise paste of data.frames 7:24AM 1 function "eigen" AND Minitab 7:18AM 0 Installation of garchOxFit 1:47AM 2 matrix multiplication question 12:36AM 0 How to convert/map jacktest results to dataframe or file Thursday July 17 2008 Time Replies Subject 11:09PM 3 Problem with TLC/TK on Ubuntu 9:31PM 1 combining lists of pairs 9:01PM 0 Help with data layout - Thanks 8:57PM 1 smooth.spline 8:54PM 2 nested calls, variable scope 8:42PM 4 Matching Up Values 6:54PM 1 ubuntu and rgl package vs suse ?? static plots 5:47PM 3 Display variables when running a script 5:40PM 4 REvolution computing 4:22PM 0 Can mvtnorm calculate a sequence of probabilities? 
4:08PM 0 keeping seperate row.names 3:57PM 3 Hiding information about functions in newly developed packages 3:50PM 4 help with data layout 3:32PM 1 Newbie's question about lm 3:29PM 2 Sampling distribution (PDF & CDF) of correlation 2:25PM 3 Colours in R 2:14PM 3 Graph 2:06PM 1 Comparing differences in AUC from 2 different models 1:58PM 1 errors using step function 1:44PM 0 Re : Re : float and double precision with C code 1:25PM 0 Re : float and double precision with C code 12:47PM 1 float and double precision with C code 12:47PM 0 how to split the string 12:26PM 1 histogram plot default 12:26PM 2 spliting a string 11:37AM 2 fastICA 9:49AM 0 model.tables standard errors 9:47AM 5 calculate differences - strange outcome 9:33AM 0 How to compute loglikelihood of Lognormal distribution 9:13AM 3 Histogram with two colors depending on condition 5:41AM 2 Fw: how i can install Rgraphviz in R2.7.1 2:27AM 2 Location of HTML help files 1:11AM 0 create a function from a model 12:17AM 1 plot(linear model) without lines Wednesday July 16 2008 Time Replies Subject 11:45PM 2 Stratified random sample 10:26PM 3 Function to verify existence of an R object 9:27PM 2 spectral decomposition for near-singular pd matrices 8:17PM 2 Labelling curves on graphs 8:14PM 1 RSQLite maximum table size 7:09PM 1 help with bivariate density plot question 6:51PM 2 gstat problem with lidar data 4:47PM 0 Help with rpart.predict? 
4:43PM 0 Confidence bands for model estimates using ns() spline basis 4:08PM 4 Likelihood ratio test between glm and glmer fits 4:06PM 1 Problem with mpi.close.Rslaves() 3:57PM 0 Forecasting enrollments with fuzzy time series 3:56PM 1 Help regarding arules package 3:51PM 1 Output design question 1:10PM 1 NAMESPACE vs internal.Rd 12:29PM 1 R-source code of a function 11:41AM 1 date to decimal date conversion 10:48AM 2 barchart with bars attached to y=0-line 8:21AM 2 Group level frequencies 7:28AM 2 Howto view function's source code of an installed package 5:40AM 0 Simulate data with binary outcome 5:32AM 1 negative P-values with shapiro.test 4:03AM 0 Off-topic - Data needed 3:55AM 2 How to extract component number of RMSEP in RMSEP plot 1:48AM 1 Help Updating and Installing R Packages 12:40AM 1 R logo Tuesday July 15 2008 Time Replies Subject 11:58PM 0 implementation of Prentice method in cch() 11:51PM 2 New Zealand Maps 9:51PM 1 What package to use to create a GUI 9:35PM 2 Row Sum, exclude positive values 9:00PM 3 Melt (reshape) question 8:37PM 1 Filtering output 8:20PM 2 sem & testing multiple hypotheses with BIC 7:51PM 0 generalized spatial autoregressive lag model 7:50PM 2 POSIXct extract time 7:34PM 0 how to get confidential interval for classification accuracy using 10-fold validation in svm package 7:21PM 1 code reduction (if anyone feels like it) 6:53PM 0 INSEED statistics workshop on Statistical Models and Practices in Epidemiology (using R) 6:51PM 0 INSEED Statistics Workshop on Bayesian statistics using OpenBUGS and R 5:52PM 0 Help with maptools sunriset() 5:12PM 1 aov error with large data set 5:11PM 4 Iterations 4:52PM 1 sunrise sunset calculations 4:33PM 0 Calculating Stream Metabolism 3:27PM 5 counting number of "G" in "TCGGGGGACAATCGGTAACCCGTCT" 3:09PM 2 extracting elements from print object of Manova() 2:19PM 0 DOE for logistic models 2:08PM 2 Problem installing R on openSUSE 10.3 1:42PM 3 Font quality in base graphics 1:07PM 1 manipulating 
(extracting) data from distance matrices 1:00PM 0 creating axis of the plot before data are plotted -- solved 12:40PM 3 playwith package crashes on Mac 12:16PM 4 Mapping data onto score 12:16PM 1 Mapping data onto 1-10 score 12:02PM 2 Regression problem 11:41AM 0 linear and non linear regression in 3D ! 11:05AM 2 Layers in graphs 10:59AM 1 tryCatch - return from function to main script 9:41AM 0 Odd behavior of remove command 8:33AM 1 Supressing printing from a function: ecdf 7:56AM 1 Proxy again, this time on a Mac 6:57AM 2 enscript states file for R scripts? 4:34AM 2 meaning of tests presented in anova(ols(...)) {Design package} 4:33AM 0 [Ask]Robust Buckley-James like estimator 1:36AM 0 r^2 for SSasymp? Monday July 14 2008 Time Replies Subject 11:07PM 2 long data frame selection error 10:57PM 2 position of a specific character 10:24PM 2 modeling binary response variables 10:16PM 2 question about a small "for" loop 10:10PM 0 [Fwd: Re: Insurance review statistical methods] 9:51PM 0 rgl.snapshot on linux produces colored lines only 9:47PM 2 Insurance review statistical methods 9:44PM 0 Extract and plot array from data 9:40PM 2 Convert data set to data frame 9:29PM 3 Data Manipulations and SQL 9:26PM 1 Function to create variables with prefix 8:47PM 3 statistics question about a statement in julian faraway's "extending the linear model with R" text 6:50PM 1 Analysis of poorly replicated array data 6:34PM 2 .First and .Rprofile won't run on startup 6:09PM 2 Backslash in sub pattern? 
5:56PM 1 R Commander question 4:49PM 1 dll problem 3:49PM 1 Help with an error message 3:46PM 0 .Last() and namespace 3:21PM 1 creating axis of the plot before data are plotted 3:03PM 1 macros in R 3:00PM 5 A question about using function plot 2:39PM 0 swap_tail macro in pnorm.c 2:39PM 1 applying complex functions by groups 2:23PM 0 ampersand char in the path causing error message when running rscript.bat 1:38PM 0 Can't compile in HPUX 11.31 on IA64 12:37PM 2 aggregate months to years 11:50AM 1 options() question for displaying numbers in the GUI 11:18AM 1 How to load stats4 package 10:41AM 1 Tissue specific genes by ANOVA? 10:20AM 2 how to correlate nominal variables? 10:18AM 0 "Reasonable doubt" - was "Re: shapiro wilk normality test" 10:16AM 0 Frequency-Table on Group Level - Multi Level Analysis 10:07AM 0 nlme, lme( ) convergence and selection of effects 9:04AM 1 eval.wih.vis 8:45AM 1 rm(l*) 8:33AM 3 Loop problem 7:50AM 0 Question regarding lmer vs glmmPQL vs glmm.admb model on a negative binomial distributed dependent variable 7:03AM 1 How to extract folowing data? 6:04AM 1 source code for R-dev packages 5:41AM 0 Using Internals via their compiled sources 1:57AM 1 Off topic: Tcl/Tk outside R. 1:03AM 1 Computing row means for sets of 2 columns 12:34AM 0 Moran's I- Ordinal Logistic Regression 12:24AM 0 Moran's I test- Ordinal Logistic Regression Model Sunday July 13 2008 Time Replies Subject 10:31PM 2 any way to set defaults for par? 9:40PM 1 stem and leaf plot: how to edit the stem-values 8:05PM 1 (no subject) 7:19PM 0 rm anova 4:47PM 3 initialize a factor vector Saturday July 12 2008 Time Replies Subject 10:09PM 0 A (not so) Short Introduction to S4 8:23PM 1 How to build a package which loads Rgraphviz (if installed)... 
7:50PM 1 Installing RWinEdt 7:46PM 2 Excel Trend Function 7:22PM 0 R-outlet: Journal of Statistical Software 5:28PM 0 Reading Multi-value data fields for descriptive analysis (resend) 4:37PM 1 Reading Multi-value data fields for descriptive analysis 3:30PM 5 shapiro wilk normality test 3:23PM 4 problem 2:12PM 2 Quick plotmath question 1:23PM 0 Visualization of multiple alignments 11:23AM 1 a warning message from lmer 7:56AM 1 Error of exp() in a dll from Fortran 7:48AM 1 Help with arima.sim 7:32AM 1 Assoociative array? 6:57AM 1 Another packaging question 3:32AM 3 Another failed attempt to install an R package in Ubuntu Friday July 11 2008 Time Replies Subject 10:47PM 1 data summarization etc... 10:44PM 3 data summerization etc... 8:38PM 1 Plot multiple datasets on a VCD ternary graph 7:57PM 3 List help 6:25PM 2 plotting granular data 5:57PM 1 Difficultes with grep 5:06PM 1 While loop 4:48PM 2 Help with error in "if then" statement 4:42PM 0 GroupedData for three way randomized block. LME 4:25PM 1 reading in a subset of a large data set 3:51PM 1 Comparing complex numbers 3:50PM 1 More compact form of lm object that can be used for prediction? 3:41PM 1 mpirun question with Rmpi 3:30PM 1 TeachingDemos question: my.symbols() alignment problems in complicated layout 2:53PM 1 Subsetting an array by a vector of dimensions 2:53PM 1 Meaning of Positive LogLikelihood 1:56PM 0 modified ks.test? 1:04PM 2 network 12:37PM 0 Funnel plots 10:35AM 0 Funnel plots in r 9:43AM 0 how to test this multivariate normal distribution? 9:42AM 3 Start preferred RGui 8:22AM 1 Any R package about COALESCENT THEORY/GENEOLOGICAL TREES?? 5:13AM 1 How to output 0.405852702657738 as 0.405853 ? 1:46AM 3 number of effective tests 12:47AM 2 Problems with Package Installation. Thursday July 10 2008 Time Replies Subject 11:00PM 2 princomp loading help 10:07PM 1 R bug in the update of nlme? 10:03PM 1 odfWeave problem in 7.1? 
8:54PM 0 predict.garch problem 8:43PM 1 Interpretation of EXACT Statistical Test in finding the probability as Std. Deviations (SumP) 8:28PM 1 layout is to xyplot as ??? is to qplot 8:07PM 5 rounding 6:51PM 1 Ellipsis arguments for plot.formula 6:39PM 1 search with help.start 6:27PM 1 compiling pnmath on an intel processor running mac OS 10.5 6:21PM 3 Sorting a matrix 5:11PM 0 Rmpi unkown input format error 4:50PM 1 embarrassingly parallel problem - simple loop solution 4:49PM 3 What's the T-Value in fisher.test 4:21PM 0 by() function doesn't work inside another function 3:45PM 1 S4 class questions 3:28PM 0 ppls: version 1.02 including a new data set 3:21PM 0 ace error because of missings? 3:13PM 0 How to check how much memory has been used or reserved ? 2:41PM 2 false discovery rate ! 2:35PM 2 xYplot customizing y-axis scaling 2:01PM 1 Non-normal data issues in PhD software engineering experiment 12:41PM 2 Position in a vector of the last value > n 11:56AM 4 Turn any vector 11:55AM 0 Bootstrap in betareg() 10:44AM 1 quantile regression estimation results 10:22AM 3 Sorting / editing a table 9:44AM 1 problems with rq.fit.sfn 8:54AM 1 Operator overloading 6:15AM 4 Interpolation of data 6:10AM 3 specifying data 6:04AM 2 Finding Values that Occur Most Often in a Vector 4:47AM 2 Lattice: merged strips? 
4:04AM 1 “Check” problem 1:15AM 1 Basic help needed 12:50AM 2 Help on Installing Matrix Package in Linux (Fedora) Wednesday July 9 2008 Time Replies Subject 11:07PM 0 Help navigating documentation for descriptive statistics 10:33PM 1 use variable value as vector name 10:02PM 3 Grid building in R 9:38PM 1 zoo and cex 8:26PM 1 matplot help 8:24PM 1 read.table problem 7:17PM 3 randomly select duplicated entries 7:16PM 2 Read.table - Less rows than original data 6:57PM 1 Sweave figure 6:57PM 2 shifting data in matrix by n rows 6:51PM 0 ftp directory 6:45PM 0 garchFit problem 6:31PM 0 "Rotated Lat-Lon" projection in mapproj 6:26PM 0 question on FARIMA innovations 5:49PM 1 outlining symbol in legend with blackline 5:30PM 3 rbinom for a matrix 5:01PM 1 Question regarding lu in package Matrix 4:50PM 2 rollmean() 4:42PM 2 Port package 4:15PM 2 build matrix with the content of one column of a data frame in function of two factors 3:56PM 0 rJava crashes jvm 3:49PM 2 gsub and "\" 3:28PM 4 Find the closest value in a list or matrix 3:26PM 1 lapply 3:15PM 5 Summary Stats (not summary(x)) 12:34PM 1 netCDF to TIFF 11:00AM 1 childNames for xaxis grob (grid package) 10:59AM 0 RJDBC and OracleDriver: unable to connect 10:36AM 1 plot gam "main effect functions" in one graph 10:20AM 1 version problems of rkward in ubuntu hardy repository 10:09AM 4 Strptime/ date time classes 9:58AM 0 problems using mice() 9:55AM 0 Question concerning Furness iteration & subprogram/modules 9:40AM 7 recursively divide a value to get a sequence 9:34AM 1 "non-sample" standard-deviation 9:33AM 2 Parsing 9:21AM 3 Expression in axis 9:13AM 2 replacing value in column of data frame 8:29AM 1 building experimental paradigm with R as "Brainard/Pelli Psych Toolbox" 7:40AM 1 Help with installing add-on packages 5:21AM 0 Installing R package but got Fortran 90 error 4:06AM 1 Loglikelihood for x factorial? 
2:58AM 2 sorting a data frame by rownames 2:52AM 2 cex.axis for the x axis Tuesday July 8 2008 Time Replies Subject 8:56PM 1 aggregate() function and na.rm = TRUE 8:14PM 1 calculation of entropy in R??? 7:55PM 0 Fwd: Re: extracting index list when using tapply() 6:33PM 2 nls and "plinear" algorithm 6:31PM 6 Automatic placement of Legends 6:23PM 3 extracting index list when using tapply() 5:43PM 1 making zoo objects with zoo without format argument? 5:39PM 1 Plot 5:23PM 2 How I count the all possible samples?? 4:58PM 1 R crash with ATLAS precompiled Rblas.dll on Windows XP Core2 Duo 4:57PM 0 Gage R & R 4:42PM 1 fisher.test 3:58PM 0 How to draw dot plots for 2 vectors seperately 3:36PM 0 simulate data for lme 2:53PM 1 R package 2:25PM 2 time series by calendar week 1:36PM 0 Specific question on LDheatmap 1:32PM 1 about EM algorithm 1:28PM 0 help for xvalid with external data 1:18PM 6 Question: Beginner stuck in a R cycle 12:53PM 3 Can I do regression by steps? 12:45PM 0 workspace platform potabillity 11:57AM 1 Help with eigenvectors 11:43AM 0 Multiple Plots and y Axis Labels 11:22AM 4 Histogram with colors according to factor 11:11AM 4 Manipulate Data (with regular expressions) 11:07AM 0 Change legend in the 'hydrogeo' package 10:36AM 0 forecast & xreg 10:28AM 2 Constrained optimization 9:40AM 2 attributing values to dataframe positions following eval 8:36AM 1 package segmented problem 8:14AM 1 shading an area in a edf 8:11AM 0 what does this warning mean ? 8:00AM 2 How to change labels in a histogram 7:38AM 2 Change in behaviour of sd() 5:59AM 4 Can R do this ? 5:58AM 8 Sum(Random Numbers)=100 4:41AM 1 odd dnorm behaviour (?) 3:59AM 2 list genes w/n a genomic fragment Monday July 7 2008 Time Replies Subject 11:56PM 5 question on lm or glm matrix of coeficients X test data terms 9:07PM 5 Basic Vector and Matrix Operations 7:19PM 3 subset() multiple arguments 6:42PM 1 Interest in a Use R Group in San Francisco area? 6:03PM 2 A shorter version of ".Last.value"? 
5:51PM 2 Colour clusters in a 2d plot 4:06PM 2 Drawing a colour wheel - bug in hcl? 3:18PM 2 Running "all possible subsets" of a GLM (binomial) model 2:45PM 1 Plot depends on conditions 1:38PM 3 A quick question about lm() 12:29PM 2 Sorting a list 9:44AM 0 Howto Initialize Parameter of Gamma Mixture Densities for EM 9:19AM 2 indicating significant differences in boxplots 9:17AM 1 How can I optimize this piece of code ? 9:15AM 2 one-site competition data // curve fitting 8:49AM 1 Change language in Rcmdr 7:48AM 1 GLM, LMER, GEE interpretation 3:24AM 4 Plot Mixtures of Synthetically Generated Gamma Distributions 2:48AM 0 Functional Data Analysis, fda_1.2.4 1:39AM 5 How can i do an automatic upgrade of R 2.5.1 to R 2.7.1 Sunday July 6 2008 Time Replies Subject 11:50PM 4 Method for checking automatically which distribtions fits a data 11:36PM 1 Exception Handling 11:35PM 0 color with tcl/tk? 10:34PM 4 Numbers as Points in Graphs 9:41PM 2 Issue with postscript figures using WinAnsi encoding 9:17PM 2 Regular expressions: bug or misunderstanding? 8:07PM 0 Benchmarking R and R Workloads 7:36PM 1 ts.plot 7:19PM 5 Error in defining function 6:07PM 1 What is my replication unit? Lmer for binary longitudinal data with blocks and two treaments. 4:01PM 2 Hi~problem with the two sample test: ks2Test in the package of fbasics 3:39PM 3 Lots of huge matrices, for-loops, speed 2:37PM 1 Windows Only: winMenuAdd() problem 2:24PM 1 Backgrounds in Multiple Plots made with "fig" 1:32PM 1 Interpreting messages when building packages 11:52AM 1 lattice smooth problem? 
8:59AM 2 looking for alternative of 'if-else' 6:13AM 2 lattice question 5:22AM 1 Different Autocorrelation using R and other softwares 1:44AM 2 Error: cannot use PQL when using lmer 12:51AM 1 interpreting mixed model anova results Saturday July 5 2008 Time Replies Subject 10:14PM 0 Problem solved: extracting values from a "by" function 9:59PM 0 extracting values from a "by" function 7:25PM 1 Random Forest %var(y) 6:55PM 1 SciViews GUI 4:13PM 5 help about random generation of a Normal distribution of several variables 8:17AM 2 Bland-Altman method to measure agreement with repeated measures 8:00AM 0 lattice smooth 7:26AM 3 trying to superimpose a line plot onto a histogram 4:14AM 2 p-value for Nonmetric Multidimentional Scaling? Friday July 4 2008 Time Replies Subject 11:23PM 1 synthax for R CMD INSTALL 10:36PM 2 set type of SS in anova() 8:05PM 1 logistic regreesion and score test 6:42PM 0 RES: initialize a matrix 3:50PM 2 create a zero matrix & fill 3:18PM 1 Calling and running a compiled program inside R 2:38PM 4 Re ad in a file - produce independent vectors 2:22PM 1 initialize a matrix 1:57PM 1 update search path for attached data 1:40PM 1 Test for multiple comparisons: Nonlinear model, autocorrelation? 1:24PM 1 Repeated measures lme or anova 12:42PM 0 test for difference in variance 12:04PM 1 stop without error message 12:02PM 0 passing arguments to a R script 11:59AM 1 Hmisc latex: table column width 7:23AM 1 Problem in installing Biobase 7:08AM 1 kriging problem(?) 6:29AM 3 problem with NA and if 3:54AM 1 by or tapply? Thursday July 3 2008 Time Replies Subject 10:49PM 1 Ternaryplot function 10:48PM 1 'as.Date' conversion of classes POSIX*t (problem/feature)? 
10:04PM 3 apply with a division 9:25PM 1 subset function within a function 9:09PM 2 First attempt to use R 8:46PM 2 L1 nonlinear approximation 8:43PM 1 lines() warning message 8:37PM 1 Installation of packages via GUI, Mac OS X 8:14PM 1 Exporting a Graph that has lines() 7:36PM 0 FW: For loop 6:57PM 3 Recoding a variable 6:19PM 3 Re membering the last time an event occurred within a dataframe 5:59PM 2 Relative Mortality Risk second part 5:54PM 1 R-help Digest, Vol 65, Issue 4 5:33PM 0 Random effects and lme4 5:13PM 0 Biostatistician Positions at Vanderbilt University 3:52PM 0 Reproducibility and GUIs 2:40PM 1 read.table, NA assignment, and sep 2:04PM 1 cross-validation in rpart 1:38PM 1 ggplot2: scaling and tick mark of x-axis 1:35PM 2 How to emulate perl style of reading files ? 1:28PM 1 combining time series [was: (no subject)] 12:41PM 1 *** significance with xtable 12:14PM 0 loop calculation 11:41AM 1 Data size 9:25AM 1 Problem in applying conditional looping 9:15AM 2 PCA on image data 9:12AM 2 as.Date() clarification 8:02AM 2 Matrix set value problem 8:01AM 1 ggplot2 legend for vertical lines 7:52AM 2 latex styling for R regression outputs 7:38AM 0 (no subject) 7:38AM 0 post hoc comparisons on NLME for longitudinal data 7:37AM 1 RODBC Access limit? 6:41AM 1 randomForest.error: length of response must be the same as predictors 3:35AM 1 Otpmial initial centroid in kmeans 3:21AM 1 Processing 10^8 rows and 1^3 columns 2:43AM 1 lm() question 1:40AM 3 problem with lm and predict - no predictions made 1:33AM 1 how to capture matching words in a string ? 1:25AM 2 How do I paste double quotes arround a character string? 
12:47AM 2 Plotting Prediction Surface with persp() 12:41AM 2 what can we do when R gives no response 12:17AM 1 Migrating from S-Plus to R - Exporting Tables Wednesday July 2 2008 Time Replies Subject 11:24PM 0 neural networks 9:29PM 1 Removing or overwriting an XML node 9:17PM 1 set values in data.frame to NA conditional on another data.frame 7:56PM 1 exporting ftable 7:21PM 1 graph woes 7:20PM 0 Goodness of fit test 7:10PM 2 Accessing a field in a data fram 6:55PM 1 flow map lines between point pairs (latitude/longitude) 6:55PM 2 Reading CSV file with unequal record length 6:49PM 2 help appreciated to make a plot 6:02PM 1 heatmap 5:58PM 1 Problem with strucchange package 5:22PM 0 question on dispersion parameter 4:55PM 0 error on predict 4:30PM 5 multiplication question 4:23PM 1 auto.key in xyplot in conjunction with panel.text 4:22PM 1 Sunset in Dortmund 4:13PM 1 Extracting regression coef. and p-values in JRClient 4:04PM 2 get formatted regression output 3:58PM 1 Hmisc latex function with longtable option 3:37PM 1 help on list comparison 3:21PM 0 log plots woes 2:54PM 1 Multiple time series plots 2:52PM 1 Tobit Estimation with Panel Data 2:39PM 1 Insert text in data.frame 2:28PM 1 Usage of rJava (.jcall) with Weka functions, any example? 2:24PM 2 Problem reading from a data frame 1:44PM 1 randomForest training error 1:31PM 3 variable as part of file name 1:30PM 1 survival package test stats 1:25PM 1 Zoo plotting behavior 12:42PM 0 Question about GCL function 12:29PM 1 FW: RES: bug in axis.Date? was (Re: newbie needs help plottingtimeseries) 12:24PM 1 Can't install package Matrix on solaris. 10:53AM 1 how do I read only specific columns using read.csv or other read function 10:28AM 2 Correlation structures in gls with repeated measurements 10:05AM 0 Sweave / Latex per-chapter output 9:52AM 1 conversion of data for use within barchart 9:49AM 0 Combining playwith with par(mfrow... ) i.e., multiple plots. 
9:41AM 2 position legend below x-axis title 8:41AM 2 Optimal lag selection in Granger Causality tests 3:20AM 0 Survival Analysis questions 2:43AM 0 forget my previous question 2:41AM 1 is there an equivalent of prop.table but for counts 2:10AM 1 Install Packages in Windows Vista Tuesday July 1 2008 Time Replies Subject 11:49PM 4 passing a variable at the command line 11:16PM 1 adding a package in Windows 10:38PM 2 Area Under a Curve 10:20PM 2 Graph Order in xyplot 9:25PM 2 plot.zoo labels 7:17PM 3 Change name of a specific column of a data frame 7:15PM 3 plot window 7:11PM 2 Problem with loading library-ks 7:03PM 2 problem with mpiexec and Rmpi 6:28PM 2 Fitting a curve to data 6:15PM 5 WIERD: Basic computing in R 5:49PM 3 Ignore blank columns when reading a row from a table 5:12PM 1 Simulate from a GAM model 5:12PM 1 Orthogonal polynomials and poly 4:54PM 2 Prediction with Bayesian Network? 4:11PM 2 ignore warning messages? 3:49PM 2 how to automatically maximize the graph window (under XP) ? 3:21PM 2 "Invalid object" error in boxplot 3:14PM 1 Plotting Bi-Gamma Distribution 3:03PM 1 select.list() cannot be used non-interactively 2:54PM 0 Query on packages for Emerging Patterns 1:50PM 4 Find classes of each column of data.frame() 1:38PM 5 A regression problem using dummy variables 12:55PM 0 cohort sampling 12:46PM 5 trivial list question 12:40PM 2 if one of 4 conditions is not satisfied 10:20AM 3 dev.off() inside a function & other glitches 9:58AM 3 reshape matrices 9:17AM 2 Set default value 8:57AM 1 Help on Analysis of covariance 8:46AM 1 Installing R into home directory? 8:03AM 2 Are centre coordinates or upper left corners used of x, y for SpatialPixels? 7:40AM 1 extracting elements from a list in vectorized form 3:44AM 1 Messge ``no visible binding''. 2:08AM 2 PCA : Error in eigen(cv, 12:54AM 1 Help in using PCR
Incompatibility of Standard Completeness and Quantum Mechanics

Authors: Carsten Held
Publish Date: 2012/05/10 Volume: 51, Issue: 9, -2984

The completeness of quantum mechanics (QM) is generally interpreted to be, or to entail, the following conditional statement, called standard completeness (SC): if a QM system S is in a pure non-eigenstate of observable A at time t, then S does not have value a_k of A at t, where a_k is any eigenvalue of A. QM itself can be assumed to contain two elements: (i) a formula generating probabilities; (ii) Hamiltonians that can be time-dependent due to a time-dependent external potential. It is shown that, given (i) and (ii), QM and SC are incompatible. Hence SC is not the appropriate interpretation of the completeness of QM.
ThmDex – An index of mathematical definitions, results, and conjectures.

Let $X$ be a D11: Set such that
(i) $I : X \to X$ is a D440: Identity map on $X$.

For all $x, y \in X$ such that $I(x) = I(y)$, we have $x = I(x) = I(y) = y$. The claim follows. $\square$
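Read as a proof that the identity map is injective, the one-line argument above can be checked mechanically. A hedged sketch in Lean 4 (the definition of injectivity is inlined so no library is needed; the statement's phrasing is mine, not ThmDex's):

```lean
-- Injectivity, stated inline rather than via a library definition
def Injective {X Y : Type} (f : X → Y) : Prop :=
  ∀ ⦃x y : X⦄, f x = f y → x = y

-- The identity map on X is injective: from I x = I y we get x = y directly,
-- mirroring the chain x = I(x) = I(y) = y in the text.
theorem id_injective {X : Type} : Injective (fun x : X => x) :=
  fun _ _ h => h
```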
Bias, the unfinished symphony - Biochemia Medica

To the Editor,

I read the paper “Bias estimation for Sigma metric calculation: Arithmetic mean versus quadratic mean” written by Ercan et al. with great interest (1). In the paper, the author criticized the approach of taking the arithmetic mean of the multiple biases to obtain a single bias and proposed a new method (quadratic mean) to estimate the overall bias using external quality assurance services (EQAS) data for sigma metric (SM) calculation (2). The bias issue has been a kind of unfinished symphony and is rarely evaluated correctly. As stated by Galileo Galilei, “the book of nature is written in the language of mathematics”. Mathematics is the map of scientists but can cause chaos if not used properly. I agree with the author that taking the sum of biases may underestimate the overall bias, but the approach proposed by the authors of both papers contains major scientific errors, as briefly summarized below.

First of all, bias cannot be calculated using a single measurement result. Unfortunately, there are many papers in the literature containing this error, and most of the authors use it without checking its scientific background. According to the International Vocabulary of Metrology (VIM), bias is the “average of the replicate indications minus a reference quantity value” (3). From this definition, we can say that two main components are essential to calculate the bias of an instrument correctly. The first one is the reference value and the second one is the average of replicate measurement results (Figure 1). The reference value should be obtained from certified reference materials and reference methods. However, if this is not possible, then a consensus value can be accepted as the target value. In this case, the peer group’s average can be considered the target value. In practice, laboratories usually use the peer group’s average as the target value, and this is correct.
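The two components of a correct bias estimate described above — a target value and the mean of replicate results — are easy to sketch in code. A minimal illustration (the replicate values, the peer-group mean, and the crude significance screen are all hypothetical, not taken from the letter):

```python
from statistics import mean, stdev

# Hypothetical replicate results for one control material (illustrative only)
replicates = [5.2, 5.1, 5.3, 5.2, 5.4]
peer_group_mean = 5.0  # consensus target value from the EQAS peer group

# Bias per the VIM definition: average of replicate indications
# minus the reference quantity value
bias = mean(replicates) - peer_group_mean

# A crude significance screen: compare the bias to the standard error of
# the mean. The letter only says significance "should be evaluated"; a
# one-sample t-test against the target would be the usual formal tool.
sem = stdev(replicates) / len(replicates) ** 0.5
print(f"bias = {bias:.3f}, bias/SEM = {bias / sem:.1f}")
```

In practice a formal test (e.g. a one-sample t-test) would replace the last ratio; if the bias is not significant, it can be neglected, as the letter notes.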
On the other hand, laboratories usually report single measurement results to evaluate the performance of the measurement procedure. Due to random error, using a single measurement result is not adequate to calculate bias, and at least a duplicate measurement is necessary. Additionally, the significance of the bias should be considered in all calculations related to bias (4). Since bias is “the difference between reference quantity and the average of repeated measurements of the measurand”, this difference might not be significant in all situations and therefore, before handling bias for further calculation, its significance should be evaluated. If bias is not significant, then it can be neglected.

The second important point is that there is no universal equation to be used in all different conditions. Therefore, before using an equation, its pros and cons should be considered. The equation used by the author is the simplified version of the equation of the pooled variances. The correct form of this equation (Eq. 1) is given below: If the number of data used to calculate each variance is equal, i.e., n_1 = n_2 = n_3 = … = n_k, then this equation can be simplified as given below: The components of equations 1 and 2 are variances. If the number of data (n) used to calculate each variance is different, then n should be included and, additionally, the variances must be random and homogeneous. Otherwise, using this equation may give incorrect results.

The third point is that even if the bias is calculated correctly, using bias as a linear component in SM calculation is not correct (5). The six sigma theory is based on the normal distribution, and the curve of the normal distribution is bell-shaped. Bias can be treated as a linear component in a uniform distribution but not in a normal distribution (6). Using bias as a linear component in SM calculation (see Eq. 3) usually causes an underestimate of the calculated performance of the process.
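The two displayed equations referred to above (“is given below”) did not survive extraction. As a reconstruction, the standard pooled-variance forms that the surrounding text describes are given here, with s_i² the individual variances and n_i the number of data behind each; the exact typography in the published letter may differ:

```latex
% Eq. 1 -- pooled variance for k groups with possibly unequal sizes n_i
s_p^2 = \frac{\sum_{i=1}^{k} (n_i - 1)\, s_i^2}{\sum_{i=1}^{k} (n_i - 1)}

% Eq. 2 -- simplified form when n_1 = n_2 = \dots = n_k
s_p^2 = \frac{1}{k} \sum_{i=1}^{k} s_i^2
```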
It is unfortunate that the performance of medical instruments calculated using this equation is very low. All these instruments are high-tech instruments, and their actual quality level is higher than the SM calculated using this equation.

The fourth point is that if bias is known, it should be corrected. It does not make sense to include a known bias in the SM calculation when it can be corrected. In routine practice, we do not know whether bias is present or not if the process is not monitored in real time. Therefore, in the six sigma methodology, a 1.5 standard deviation (SD) bias is included in the calculation. However, this 1.5 SD is not included directly as a linear parameter using Eq. 3. When bias is present, the defects per million opportunities (DPMO) corresponding to the SM are calculated using the mathematics of the normal distribution curve (7, 8).

In conclusion, before using an equation, its suitability should be checked and confirmed. It should be noted that using the correct equation in calculations is as important as using the correct reagent in the measurement of the analytes.
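Eq. 3, the sigma-metric formula criticized above, is likewise missing from the extracted text. The form commonly used in laboratory medicine (a reconstruction; TEa is the allowable total error and CV the coefficient of variation, all in percent) is:

```latex
% Eq. 3 -- sigma metric with bias entered as a linear term
\mathrm{SM} = \frac{\mathrm{TEa} - \lvert\mathrm{Bias}\rvert}{\mathrm{CV}}
```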
Subtracting with Regrouping Background Information | BrainPOP Educators Subtracting with Regrouping Background Information for Teachers, Parents and Caregivers This page provides information to support educators and families in teaching K-3 students about subtracting with regrouping. It is designed to complement the Subtracting with Regrouping topic page on BrainPOP Jr. Before exploring this topic, we highly recommend reviewing movies in the Addition and Subtraction unit. Children should be comfortable employing different strategies to solve number sentences. This movie will explore subtracting with regrouping, specifically with equations in which a one-digit number is subtracted from a two-digit number. This can be a tricky topic for some children, so we recommend using base-ten blocks or counters to model number sentences. We encourage you to pause the movie and have children solve the problems posed in the movie. Pose a number story or a number sentence that requires regrouping to solve, such as 16 – 9. Review with children that when they subtract using two-digit numbers, they must subtract the ones place first. Some children may remember that they should subtract a smaller number from a larger number, so how can they subtract 6 – 9? Remind children that when they regroup, they gather ten ones and trade it in for a ten. They can also take a ten and trade it in for ten ones. You may want to use base-ten blocks to demonstrate this concept. Explain that we can borrow a ten and turn it into ten ones in order to solve the number sentence. We can show the number 16 with a tens rod and six ones cubes, and we can also turn the tens rod into ten ones cubes and show the number using sixteen ones cubes. Now we can easily subtract 9 ones from 16 ones. So 16 – 9 = 7. Repeat this activity again to show 12 – 5 or 17 – 8. Challenge children to solve the number sentence 35 – 8. Guide them to look at the numbers in the ones place first. They must regroup in order to solve. 
Use base-ten blocks to model 35. They can borrow a ten and turn it into ten ones so now there are two tens rods and fifteen ones cubes. Then they can easily subtract 8 ones cubes. Guide them to write 7 in the ones place under the answer bar. Then have them subtract the tens place. Since we regrouped a ten, there are two tens left. Guide them to write 2 in the tens place under the answer bar. Therefore, 35 – 8 = 27. Repeat the activity again with other number sentences. Encourage children to check their work after they subtract. They can use addition to check their subtraction. So, for example, to check 35 – 8 = 27, they can use the addition sentence 27 + 8 = 35. Walk children through solving a number sentence without using counters. Pose the number sentence 93 – 6. You may want to write the sentence horizontally and have children rewrite the sentence vertically. Make sure they line up the place value columns correctly. To solve this equation, they must regroup a ten. Show them how to cross out the 9 and write 8 next to it. Then they can add 10 to the 3 in the ones place. They can cross out the 3 and write 13 next to it. What is 13 – 6? Children can employ different strategies (such as making a ten, or using what they know about doubles) to solve 13 – 6 = 7. Have them write 7 in the ones place under the answer bar. Now they can subtract the tens place. They have 8 tens, so guide them to write 8 in the tens place under the answer bar. Therefore, 93 – 6 = 87. They can use 87 + 6 = 93 to check their work. Give children plenty of practice to solve number sentences with regrouping. Encourage them to use different strategies to solve problems and use base-ten blocks or counters. Then transition them out of using counters. Be sure to guide them through regrouping and encourage them to check their work by using the inverse operation.
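For readers who want the borrowing procedure above spelled out step by step, here is a minimal algorithmic sketch; the function name and structure are mine, not part of the BrainPOP material:

```python
def subtract_with_regrouping(a: int, b: int) -> int:
    """Column subtraction with borrowing, digit by digit (requires a >= b >= 0)."""
    digits_a = [int(d) for d in str(a)][::-1]  # ones place first
    digits_b = [int(d) for d in str(b)][::-1]
    digits_b += [0] * (len(digits_a) - len(digits_b))  # pad the shorter number
    result, borrow = [], 0
    for da, db in zip(digits_a, digits_b):
        da -= borrow
        if da < db:          # regroup: borrow a ten from the next column
            da += 10
            borrow = 1
        else:
            borrow = 0
        result.append(da - db)
    return int("".join(str(d) for d in result[::-1]))

# The worked example from the text:
print(subtract_with_regrouping(93, 6))  # 87, checked since 87 + 6 = 93
```

Running it on the other worked examples reproduces 16 – 9 = 7 and 35 – 8 = 27 as well.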
{"url":"https://educators.brainpop.com/lesson-plan/subtracting-with-regrouping-background-information/","timestamp":"2024-11-11T14:58:23Z","content_type":"application/xhtml+xml","content_length":"56850","record_id":"<urn:uuid:ebf09f0e-afc8-4da4-8674-52acbbe0647f>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00095.warc.gz"}
Beyond the Kalamidas Gedankenexperiment, which appears now to be refuted - June 2, 2013

Jack Sarfatti

On Jun 2, 2013, at 7:22 AM, JACK SARFATTI wrote:

Yes, it's always the case that if the time evolution is unitary, signal interference terms cancel out. That is the essence of the no-signal argument. It's what defeated my 1978 attempt using two interferometers on each end of the pair source, which David Kaiser describes in How the Hippies Saved Physics and which was in the first edition of Gary Zukav's Dancing Wu Li Masters. Stapp gave one of the first no-signal proofs in response to my attempt.

I. However, one of the tacit assumptions is that all observables must be Hermitian operators with real eigenvalues and a complete orthogonal basis.

II. Another assumption is that the normalization, once chosen, should not depend on the free will of the experimenter.

Both I & II are violated by Glauber states. The linear unitary dynamics is also violated when the coherent state is a Higgs-Goldstone vacuum/ground-state expectation value order parameter of a non-Hermitian boson second-quantized field operator, where the c-number local nonlinear nonunitary Landau-Ginzburg equation in ordinary space replaces the linear unitary Schrodinger equation in configuration space (or Wigner phase space more generally) as the dominant dynamic. P. W. Anderson called this "More is different."

For example, in my toy model, NORMALIZED so as to rid us of that damn spooky telepathic psychokinetic voodoo magick without magic:

|A,B> = [2(1 + |<w|z>|^2)]^-1/2 [|0>|z> + |1>|w>]

<0|1> = 0 for Alice A

<w|z> =/= 0 for Bob B

Trace over B {|0><0| |A,B><A,B|} = 1/2 etc.

Probability is conserved and Alice receives no signal from Bob, in accord with Abner Shimony's "passion at a distance". However, probability is not conserved on Bob's side! Do the calculation if you don't believe me.

Two more options:

i.
use 1/2^1/2 normalization; then we get an entanglement signal for Alice with violation of probability conservation for Alice, though not for Bob.

ii. Final Rube Goldberg option (suspect): use different normalizations depending on who does the strong von Neumann measurement, Alice or Bob. Now this is a violation of orthodox quantum theory, ladies and gentlemen.

Sent from my iPhone in San Francisco, Russian Hill

Jack Sarfatti

On Jun 2, 2013, at 12:56 AM, nick herbert wrote:

Kalamidas Fans--

I have looked over Martin Suda's two papers entitled 1. Taylor expansion of Output States and 2. Interferometry at the 50/50 BS. My conclusion is that Martin is within one millimeter of a solid refutation of the Kalamidas scheme. Congratulations, Martin, on achieving this result and on paying so much close attention to Kalamidas's arguments. The result, as expected, comes from a very strange direction. In particular, the approximation does not enter into Suda's refutation. Martin accepts all of Kalamidas's approximations and refutes him anyway.

I have not followed the math in detail but I have been able to comprehend the essential points. First, on account of the Martin Suda paradox, either PACS or DFS can be correctly used at this stage of the argument. So Martin derives the Kalamidas result both ways, using PACS (Kalamidas's Way) and then DFS (Howell's Way). Both results are the same.

Then Martin calculates the signal at the 50/50 beam splitter (Alice's receiver) due to Bob's decision to mix his photon with a coherent state |A>. Not surprisingly, Martin discovers lots of interference terms. So Kalamidas is right. However, all of these interference terms just happen to cancel out. So Kalamidas is wrong. Refutation Complete. Martin Suda Wins.

This is a very elegant refutation and if it can be sustained, then Kalamidas's Scheme has definitively entered the Dustbin of History.
And GianCarlo can add it to his upcoming review of refuted FTL schemes.

But before we pass out the medals, there is one feature of the Suda Refutation that needs a bit of justification. Suda's formulation of the Kalamidas Scheme differs in one essential way from Demetrios's original presentation. And it is this difference between the two presentations that spells DOOM FOR DEMETRIOS.

Kalamidas has ONE TERM |1,1> that erases which-way information and Suda has two. Suda's EXTRA TERM is |0,0> and represents the situation where neither of Bob's primary counters fires. Having another term that erases which-way information would seem to be good, in that the Suda term might be expected to increase the strength of the interference term. However--and this is the gist of the Suda refutation--the additional Suda term |0,0> has precisely the right amplitude to EXACTLY CANCEL the effect of the Kalamidas |1,1> term.

Using A (Greek upper-case alpha) to represent "alpha", Martin calculates that the amplitude of the Kalamidas |1,1> term is A. And that the amplitude of the Suda |0,0> term is -A*. And if these amplitudes are correct, the total interference at Alice's detectors completely disappears.

Congratulations, Martin. I hope I have represented your argument correctly.

The only task remaining is to justify the presence (and the amplitude) of the Suda term. Is it really physically reasonable, given the physics of the situation, that so many |0,0> events can be expected to occur in the real world? I leave that subtle question for the experts to decide.

Wonderful work, Martin.

Nick Herbert
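As an aside on the normalization question in Sarfatti's toy model above: the squared norm of |0>|z> + |1>|w> can be computed numerically in a truncated Fock space. The sketch below is my own illustration, not part of the correspondence; it shows that the cross terms drop out because <0|1> = 0, so the squared norm is 2 whatever the overlap <w|z> is:

```python
import numpy as np

# Truncated Fock-space check of the toy state |A,B> ∝ |0>|z> + |1>|w>
# discussed above. Illustrative sketch only; the truncation dimension
# and coherent amplitudes are arbitrary choices, not from the thread.

DIM = 40  # Fock-space truncation for Bob's mode

def coherent(alpha, dim=DIM):
    """Glauber coherent state |alpha> in a truncated number basis."""
    n = np.arange(dim)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))  # log(n!)
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(np.exp(log_fact))

z, w = coherent(1.0), coherent(0.3)
print(round(abs(np.vdot(w, z)), 4))  # ~0.78: <w|z> != 0 for distinct coherent states

# Unnormalized |psi> = |0>_A |z>_B + |1>_A |w>_B on C^2 (x) Fock space.
psi = np.concatenate([z, w])

# Because <0|1> = 0, the cross terms drop out of the norm:
# <psi|psi> = <z|z> + <w|w> = 2, independent of the overlap <w|z>.
print(round(np.vdot(psi, psi).real, 6))  # ~2.0
```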
{"url":"https://stardrive.org/index.php/all-blog-articles/10070--sp-624","timestamp":"2024-11-06T07:25:41Z","content_type":"text/html","content_length":"48819","record_id":"<urn:uuid:bd36f5f3-1bd0-43bc-ae59-9a048b62aa13>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00135.warc.gz"}
seminars - Introduction to spectral network

Date/Venue:
- June 3 (Mon), 17:00-18:15, Building 129, Room 301
- June 5 (Wed), 17:00-18:15, Building 129, Room 309
- June 11 (Tue), 17:00-18:15, Building 129, Room 301
- June 12 (Wed), 17:00-18:15, Building 129, Room 301

Abstract: Spectral networks were introduced in a seminal article by Davide Gaiotto, Gregory W. Moore and Andrew Neitzke published in 2013. These are networks of trajectories on surfaces that naturally arise in the study of various four-dimensional quantum field theories. From a purely geometric point of view, they yield a new map between flat connections over a Riemann surface and flat abelian connections on a spectral covering of the surface. At the same time, these networks of trajectories provide local coordinate systems on the moduli space of flat connections that are valuable in the study of higher Teichmüller spaces.

In the first part of this mini-course, I will review key concepts from geometric group theory, including hyperbolic groups and boundaries at infinity of hyperbolic groups and spaces. Following this, I will discuss the theory of vector bundles and the Riemann-Hilbert correspondence. In the second part, I will define spectral networks explicitly for surfaces with punctures. I will also present and discuss their most prominent applications in geometry: non-abelianization and abelianization, which connect higher Teichmüller spaces of a base surface to abelian character varieties of its ramified cover, the spectral curve.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=date&order_type=desc&l=en&page=4&document_srl=1237019","timestamp":"2024-11-10T11:26:09Z","content_type":"text/html","content_length":"46586","record_id":"<urn:uuid:c9abc68d-06e6-4923-a214-5bc18bd80e30>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00259.warc.gz"}
Bigha [Assam] to Katha [Assam]

1 bigha [assam] in ankanam is equal to 200
1 bigha [assam] in aana is equal to 42.07
1 bigha [assam] in acre is equal to 0.33057822015093
1 bigha [assam] in arpent is equal to 0.39129694026225
1 bigha [assam] in are is equal to 13.38
1 bigha [assam] in barn is equal to 1.337803776e+31
1 bigha [assam] in bigha [west bengal] is equal to 1
1 bigha [assam] in bigha [uttar pradesh] is equal to 0.53333333333333
1 bigha [assam] in bigha [madhya pradesh] is equal to 1.2
1 bigha [assam] in bigha [rajasthan] is equal to 0.52892561983471
1 bigha [assam] in bigha [bihar] is equal to 0.52902277736958
1 bigha [assam] in bigha [gujrat] is equal to 0.82644628099174
1 bigha [assam] in bigha [himachal pradesh] is equal to 1.65
1 bigha [assam] in bigha [nepal] is equal to 0.19753086419753
1 bigha [assam] in biswa [uttar pradesh] is equal to 10.67
1 bigha [assam] in bovate is equal to 0.0222967296
1 bigha [assam] in bunder is equal to 0.1337803776
1 bigha [assam] in caballeria is equal to 0.00297289728
1 bigha [assam] in caballeria [cuba] is equal to 0.0099687315648286
1 bigha [assam] in caballeria [spain] is equal to 0.00334450944
1 bigha [assam] in carreau is equal to 0.10370571906977
1 bigha [assam] in carucate is equal to 0.0027526826666667
1 bigha [assam] in cawnie is equal to 0.24774144
1 bigha [assam] in cent is equal to 33.06
1 bigha [assam] in centiare is equal to 1337.8
1 bigha [assam] in circular foot is equal to 18334.63
1 bigha [assam] in circular inch is equal to 2640187.14
1 bigha [assam] in cong is equal to 1.34
1 bigha [assam] in cover is equal to 0.49585017642698
1 bigha [assam] in cuerda is equal to 0.34040808549618
1 bigha [assam] in chatak is equal to 320
1 bigha [assam] in decimal is equal to 33.06
1 bigha [assam] in dekare is equal to 1.34
1 bigha [assam] in dismil is equal to 33.06
1 bigha [assam] in dhur [tripura] is equal to 4000
1 bigha [assam] in dhur [nepal] is equal to 79.01
1 bigha [assam] in dunam is equal to 1.34
1 bigha [assam] in drone is equal to 0.052083333333333
1 bigha [assam] in fanega is equal to 0.2080565748056
1 bigha [assam] in farthingdale is equal to 1.32
1 bigha [assam] in feddan is equal to 0.32094972830188
1 bigha [assam] in ganda is equal to 16.67
1 bigha [assam] in gaj is equal to 1600
1 bigha [assam] in gajam is equal to 1600
1 bigha [assam] in guntha is equal to 13.22
1 bigha [assam] in ghumaon is equal to 0.33057851239669
1 bigha [assam] in ground is equal to 6
1 bigha [assam] in hacienda is equal to 0.000014930845714286
1 bigha [assam] in hectare is equal to 0.1337803776
1 bigha [assam] in hide is equal to 0.0027526826666667
1 bigha [assam] in hout is equal to 0.94127790644185
1 bigha [assam] in hundred is equal to 0.000027526826666667
1 bigha [assam] in jerib is equal to 0.66176470588235
1 bigha [assam] in jutro is equal to 0.2324593876629
1 bigha [assam] in katha [bangladesh] is equal to 20
1 bigha [assam] in kanal is equal to 2.64
1 bigha [assam] in kani is equal to 0.83333333333333
1 bigha [assam] in kara is equal to 66.67
1 bigha [assam] in kappland is equal to 8.67
1 bigha [assam] in killa is equal to 0.33057851239669
1 bigha [assam] in kranta is equal to 200
1 bigha [assam] in kuli is equal to 100
1 bigha [assam] in kuncham is equal to 3.31
1 bigha [assam] in lecha is equal to 100
1 bigha [assam] in labor is equal to 0.0018662396133532
1 bigha [assam] in legua is equal to 0.000074649584534128
1 bigha [assam] in manzana [argentina] is equal to 0.1337803776
1 bigha [assam] in manzana [costa rica] is equal to 0.19141671665026
1 bigha [assam] in marla is equal to 52.89
1 bigha [assam] in morgen [germany] is equal to 0.5351215104
1 bigha [assam] in morgen [south africa] is equal to 0.15615778872417
1 bigha [assam] in mu is equal to 2.01
1 bigha [assam] in murabba is equal to 0.013223128806037
1 bigha [assam] in mutthi is equal to 106.67
1 bigha [assam] in ngarn is equal to 3.34
1 bigha [assam] in nali is equal to 6.67
1 bigha [assam] in oxgang is equal to 0.0222967296
1 bigha [assam] in paisa is equal to 168.3
1 bigha [assam] in perche is equal to 39.13
1 bigha [assam] in parappu is equal to 5.29
1 bigha [assam] in pyong is equal to 404.66
1 bigha [assam] in rai is equal to 0.83612736
1 bigha [assam] in rood is equal to 1.32
1 bigha [assam] in ropani is equal to 2.63
1 bigha [assam] in satak is equal to 33.06
1 bigha [assam] in section is equal to 0.00051652892561983
1 bigha [assam] in sitio is equal to 0.000074322432
1 bigha [assam] in square is equal to 144
1 bigha [assam] in square angstrom is equal to 1.337803776e+23
1 bigha [assam] in square astronomical units is equal to 5.9778029049145e-20
1 bigha [assam] in square attometer is equal to 1.337803776e+39
1 bigha [assam] in square bicron is equal to 1.337803776e+27
1 bigha [assam] in square centimeter is equal to 13378037.76
1 bigha [assam] in square chain is equal to 3.31
1 bigha [assam] in square cubit is equal to 6400
1 bigha [assam] in square decimeter is equal to 133780.38
1 bigha [assam] in square dekameter is equal to 13.38
1 bigha [assam] in square digit is equal to 3686400
1 bigha [assam] in square exameter is equal to 1.337803776e-33
1 bigha [assam] in square fathom is equal to 400
1 bigha [assam] in square femtometer is equal to 1.337803776e+33
1 bigha [assam] in square fermi is equal to 1.337803776e+33
1 bigha [assam] in square feet is equal to 14400
1 bigha [assam] in square furlong is equal to 0.033057822015093
1 bigha [assam] in square gigameter is equal to 1.337803776e-15
1 bigha [assam] in square hectometer is equal to 0.1337803776
1 bigha [assam] in square inch is equal to 2073600
1 bigha [assam] in square league is equal to 0.000057391873851833
1 bigha [assam] in square light year is equal to 1.4946611226664e-29
1 bigha [assam] in square kilometer is equal to 0.001337803776
1 bigha [assam] in square megameter is equal to 1.337803776e-9
1 bigha [assam] in square meter is equal to 1337.8
1 bigha [assam] in square microinch is equal to 2073598170759500000
1 bigha [assam] in square micrometer is equal to 1337803776000000
1 bigha [assam] in square micromicron is equal to 1.337803776e+27
1 bigha [assam] in square micron is equal to 1337803776000000
1 bigha [assam] in square mil is equal to 2073600000000
1 bigha [assam] in square mile is equal to 0.00051652892561983
1 bigha [assam] in square millimeter is equal to 1337803776
1 bigha [assam] in square nanometer is equal to 1.337803776e+21
1 bigha [assam] in square nautical league is equal to 0.000043337907999757
1 bigha [assam] in square nautical mile is equal to 0.00039004082792438
1 bigha [assam] in square paris foot is equal to 12680.6
1 bigha [assam] in square parsec is equal to 1.4050481584726e-30
1 bigha [assam] in perch is equal to 52.89
1 bigha [assam] in square perche is equal to 26.19
1 bigha [assam] in square petameter is equal to 1.337803776e-27
1 bigha [assam] in square picometer is equal to 1.337803776e+27
1 bigha [assam] in square pole is equal to 52.89
1 bigha [assam] in square rod is equal to 52.89
1 bigha [assam] in square terameter is equal to 1.337803776e-21
1 bigha [assam] in square thou is equal to 2073600000000
1 bigha [assam] in square yard is equal to 1600
1 bigha [assam] in square yoctometer is equal to 1.337803776e+51
1 bigha [assam] in square yottameter is equal to 1.337803776e-45
1 bigha [assam] in stang is equal to 0.49383675747508
1 bigha [assam] in stremma is equal to 1.34
1 bigha [assam] in sarsai is equal to 476.03
1 bigha [assam] in tarea is equal to 2.13
1 bigha [assam] in tatami is equal to 809.37
1 bigha [assam] in tonde land is equal to 0.2425315039884
1 bigha [assam] in tsubo is equal to 404.68
1 bigha [assam] in township is equal to 0.000014348013027384
1 bigha [assam] in tunnland is equal to 0.27100797666316
1 bigha [assam] in vaar is equal to 1600
1 bigha [assam] in virgate is equal to 0.0111483648
1 bigha [assam] in veli is equal to 0.16666666666667
1 bigha [assam] in pari is equal to 0.13223140495868
1 bigha [assam] in sangam is equal to 0.52892561983471
1 bigha [assam] in kottah [bangladesh] is equal to 20
1 bigha [assam] in gunta is equal to 13.22
1 bigha [assam] in point is equal to 33.06
1 bigha [assam] in lourak is equal to 0.26446280991736
1 bigha [assam] in loukhai is equal to 1.06
1 bigha [assam] in loushal is equal to 2.12
1 bigha [assam] in tong is equal to 4.23
1 bigha [assam] in kuzhi is equal to 100
1 bigha [assam] in chadara is equal to 144
1 bigha [assam] in veesam is equal to 1600
1 bigha [assam] in lacham is equal to 5.29
1 bigha [assam] in katha [nepal] is equal to 3.95
1 bigha [assam] in katha [assam] is equal to 5
1 bigha [assam] in katha [bihar] is equal to 10.58
1 bigha [assam] in dhur [bihar] is equal to 211.61
1 bigha [assam] in dhurki is equal to 4232.18
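Because every factor in the table is a linear scaling of one area, any two units can be converted through a common base such as square metres. A small sketch using values from the table (the dictionary and helper names are my own illustration):

```python
# Convert areas via square metres, using factors from the table above:
# 1 bigha [Assam] = 1337.8 sq m and 1 bigha [Assam] = 5 katha [Assam],
# so 1 katha [Assam] = 1337.8 / 5 sq m. Illustrative sketch only.

SQ_M_PER_UNIT = {
    "bigha [assam]": 1337.8,
    "katha [assam]": 1337.8 / 5,
    "square meter": 1.0,
    "square feet": 1337.8 / 14400,
}

def convert(value, src, dst):
    """Convert an area between any two units in the factor table."""
    return value * SQ_M_PER_UNIT[src] / SQ_M_PER_UNIT[dst]

print(round(convert(1, "bigha [assam]", "katha [assam]"), 6))  # 5.0
print(round(convert(2.5, "katha [assam]", "square feet"), 2))
```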
{"url":"https://hextobinary.com/unit/area/from/bighaas/to/kathaas","timestamp":"2024-11-09T18:45:35Z","content_type":"text/html","content_length":"128217","record_id":"<urn:uuid:f432cca8-58b4-4909-bd89-8157423b47aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00582.warc.gz"}
Finding the Inverse of a Transpose Matrix

Question Video: Finding the Inverse of a Transpose Matrix
Mathematics • Third Year of Secondary School

Given 𝐴⁻¹ = [2, 5, 1 and 32, 11, 74 and 32, 12, 18], find (𝐴^(𝑇))⁻¹.

Video Transcript

Given the inverse of 𝐴 is equal to two, five, one, 32, 11, 74, 32, 12, 18, find the inverse of the transpose of 𝐴.

We’ve been given the inverse of matrix 𝐴, and then we’re asked to find the inverse of the transpose of the original matrix. So, what we could do is begin by finding the inverse of the inverse of 𝐴; that will give us the original matrix 𝐴, which we can then transpose and find the inverse of. However, there is a really useful property that we can apply that will save us a lot of time. That is, assuming 𝐴 is an invertible matrix, the transpose of the inverse of 𝐴 is equal to the inverse of the transpose of 𝐴. And this means that we can find the inverse of the transpose of matrix 𝐴 by simply transposing the inverse of 𝐴.

So, how do we transpose a matrix? The elements on the main diagonal remain unchanged, and then we switch the elements across this diagonal. This corresponds essentially to switching the rows and the columns. So, let’s take the elements in our first row, and we’re going to put them in our first column, as shown. Next, we take the elements in our second row and we put them in our second column. That gives us a completed second column of our transpose. We do this one more time, taking the elements in our third row of our first matrix and transposing them into our third column. And when we do, we have the inverse of the transpose of 𝐴.

The transpose of the inverse, which is also the inverse of the transpose, is the three-by-three matrix with elements two, 32, 32, five, 11, 12, one, 74, 18.
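The identity used in the video, (𝐴^T)⁻¹ = (𝐴⁻¹)^T, is easy to verify numerically. A small sketch (my own check, not part of the transcript):

```python
import numpy as np

# Verify (A^T)^-1 = (A^-1)^T on the matrix from the question.
A_inv = np.array([[ 2,  5,  1],
                  [32, 11, 74],
                  [32, 12, 18]], dtype=float)

A = np.linalg.inv(A_inv)      # recover the original matrix A
lhs = np.linalg.inv(A.T)      # inverse of the transpose
rhs = A_inv.T                 # transpose of the given inverse

print(np.allclose(lhs, rhs))  # True
print(rhs)                    # rows 2 32 32 / 5 11 12 / 1 74 18, as in the video
```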
{"url":"https://www.nagwa.com/en/videos/938180276218/","timestamp":"2024-11-06T17:18:51Z","content_type":"text/html","content_length":"249255","record_id":"<urn:uuid:01726838-791c-449e-9a2a-b53f39c63547>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00382.warc.gz"}
Matching quantities 0-10 (concrete, pictorial, abstract - mixed) - Online learning activity Matching quantities 0-10 (concrete, pictorial, abstract - mixed) Early math concepts In this exercise, the child forms a line with several pictures, fingers and blocks and connects them with the word number and finally with the number. To the sound recording of the number, the child matches the number of pictures (hearts, shoes, stars, etc.), then matches the number of fingers or prisms, and finally finds the specific number. The goal is for the child to form an idea of a number in the range 0-10 and to understand the interrelationship between counting, naming and notation. The child should therefore be aware that the number, word and digit of one particular number have multiple forms. Why is this exercise important? The exercise supports the development of early numerical ideas. The aim is for the child to develop an idea of numbers in the range 0-5 and later 0-10 and to understand that the quantity ☻☻☻☻☻ can be represented as / / / /, this corresponds to the word "five" and the notation 5 and of course vice versa i.e. the relationships between quantity, its naming and numerical notation. Who is this exercise suitable for? In general, it belongs to preschool or early school games. In addition to the concepts of number ideas, and rational assumptions, it also develops language skills at the same time. Part of the children solve the task intuitively and naturally, part of the children need to go through these tasks. Methodological recommendations Instructions can be read by you, played from the app or by the child. In the settings, we can customize the exercise: • Combination of pictures, symbols, fingers, prisms, sounds, numbers • Number range 0-5 or 0-10 The child hears the number first and gradually finds the correct pattern for it. 
The child matches the answers by dragging pictures, symbols, fingers, prisms, sounds and numbers in columns so that everything matches the pattern in the left column, depending on the settings we have chosen. The exercise presents a higher level of difficulty because the child has to jump between different ways of thinking about the number, using randomly varied forms of representation.

Tips for similar activities outside the app

For this activity, we can use finger counting in natural situations: for example, physically matching a finger to an object, or prompting the child to remember how many horses were in the paddock. The child has to mentally recall the situation and use fingers as symbols to express the number. It also helps to play memory and quartet games and to match numbers with sounds, quantities and so on.
{"url":"https://www.levebee.com/exercises/detail/id-1271","timestamp":"2024-11-08T19:00:17Z","content_type":"text/html","content_length":"24311","record_id":"<urn:uuid:18f70013-d662-430b-83fd-128c9c3a38c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00700.warc.gz"}
Elementary Mathematics for Teachers Elementary Mathematics for Teachers is a textbook for a semester or two-quarter university course for pre-service teachers. It is also appropriate for courses for practicing teachers. This book focuses exclusively on K-8 mathematics. It develops elementary mathematics at the level of "teacher knowledge". To that end, the text uses five Primary Mathematics Textbooks as a source of problems and to repeatedly illustrate several themes, including: • How the nature of a mathematics topic suggests a particular order for classroom development. • How topics are developed through "teaching sequences" which begin with easy problems and incrementally progress until the topic is mastered. • How the mathematics builds on itself through the grades. Look inside the textbook!
{"url":"http://www.sefton-ash.com/index.php/elementary-mathematics-for-teachers","timestamp":"2024-11-08T01:09:43Z","content_type":"text/html","content_length":"8339","record_id":"<urn:uuid:3fffd15c-a213-40f4-bd19-4be4414e115b>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00298.warc.gz"}
If the derivative of \[\left( {ax - 5} \right){e^{3x}}\] at \[x = 0\] is \[-13\], find the value of a.

Hint: Use the product rule to find derivatives.

Let \[y = \left( {ax - 5} \right){e^{3x}}\]  (1)

As given in the question, the value of the derivative of y with respect to x at \[x = 0\] is \[-13\]. As y is a function of x, we can get the derivative of y easily by using the product rule, which states that if u and v are two functions then
\[ \Rightarrow \left( {\dfrac{{d(uv)}}{{dx}}} \right) = u\dfrac{{dv}}{{dx}} + v\dfrac{{du}}{{dx}}\]

So, here \[u = {e^{3x}}\] and \[v = \left( {ax - 5} \right)\].

Differentiating equation (1) with respect to x, we get (by using the product rule)
\[ \Rightarrow \dfrac{{dy}}{{dx}} = {e^{3x}}\dfrac{{d(ax - 5)}}{{dx}} + (ax - 5)\dfrac{{d({e^{3x}})}}{{dx}}\]
\[ \Rightarrow \dfrac{{dy}}{{dx}} = {e^{3x}}(a) + (ax - 5)3{e^{3x}}\]

Now, putting \[x = 0\] in the above equation, we get
\[ \Rightarrow {\left( {\dfrac{{dy}}{{dx}}} \right)_{x = 0}} = {e^0}(a) + (a(0) - 5)3{e^0}\]

As given in the question, the derivative of the given function y at \[x = 0\] is \[-13\]. So,
\[ \Rightarrow - 13 = a - 15\]
\[ \Rightarrow a = 2\]

Hence, the correct option is E.

Note: Whenever we come up with this type of problem, where we are given a function and the value of the derivative of that function at a given point, we first calculate the derivative of that function at the known point, then equate it with the given value of the derivative at that point to get the required value of the variable.
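The algebra above can be double-checked symbolically; a quick sketch assuming SymPy is available:

```python
import sympy as sp

# Symbolic check of the worked solution above:
# solve d/dx[(a*x - 5) * e^(3x)] = -13 at x = 0 for a.
a, x = sp.symbols('a x')
y = (a * x - 5) * sp.exp(3 * x)
derivative_at_0 = sp.diff(y, x).subs(x, 0)  # simplifies to a - 15
print(sp.solve(sp.Eq(derivative_at_0, -13), a))  # [2]
```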
{"url":"https://www.vedantu.com/question-answer/if-the-derivative-of-left-ax-5-righte3x-at-x-0-class-12-maths-cbse-5ed55e43c86a200e265ae58d","timestamp":"2024-11-06T04:29:10Z","content_type":"text/html","content_length":"179663","record_id":"<urn:uuid:f24c1bc7-3fc8-44ce-8190-8b3bceb99d66>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00677.warc.gz"}
Abstracts for Higher Geometric Structures along the Lower Rhine VIII

Brice le Grignou
Thu, 15/12/2016 - 16:30 - 17:30

In this talk, I will describe the homotopy theory of differential graded unital associative algebras. We already know that they are organized into a model category whose weak equivalences are quasi-isomorphisms. However, the computations of cofibrant resolutions of algebras make this framework unwieldy. I will show that the category of dg unital associative algebras may be embedded into the category of curved coalgebras, whose homotopy theory is equivalent but more manageable. Then, I will generalize this method to the case of dg operads and to the case of algebras over an operad.
{"url":"http://www.mpim-bonn.mpg.de/de/node/6832/abstracts/all","timestamp":"2024-11-08T22:15:11Z","content_type":"application/xhtml+xml","content_length":"40110","record_id":"<urn:uuid:b8a9f1ed-23de-46e2-81f4-f2673587b7b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00161.warc.gz"}
A degree, usually denoted by ° (the degree symbol), is a measurement of plane angle, representing 1/360 of a full rotation. It is not an SI unit, as the SI unit for angles is the radian, but it is mentioned in the SI brochure as an accepted unit. Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians.

The above text is a snippet from Wikipedia: Degree (angle) and as such is available under the Creative Commons Attribution/Share-Alike License.

1. A step on a set of stairs; the rung of a ladder.
2. An individual step, or stage, in any process or scale of values.
3. A stage of rank or privilege; social standing.
4. A ‘step’ in genealogical descent.
5. One's relative state or experience; way, manner.
6. The amount that an entity possesses a certain property; relative intensity, extent.
7. A stage of proficiency or qualification in a course of study, now especially an award bestowed by a university or, in some countries, a college, as a certification of academic achievement. (In the United States, can include secondary schools.)
8. A unit of measurement of angle equal to 1/360 of a circle's circumference.
9. A unit of measurement of temperature on any of several scales, such as Celsius or Fahrenheit.
10. The sum of the exponents of a term; the order of a polynomial.
11. The number of edges that a vertex takes part in; a valency.
12. The curvature of a circular arc, expressed as the angle subtended by a fixed length of arc or chord.

The above text is a snippet from Wiktionary: degree and as such is available under the Creative Commons Attribution/Share-Alike License.
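The π/180 relationship is easy to exercise in code; a minimal sketch (Python's math.radians implements the same conversion):

```python
import math

# Convert degrees to radians using the identity 1 degree = pi/180 rad.
def deg_to_rad(deg):
    return deg * math.pi / 180

print(math.isclose(deg_to_rad(180), math.pi))          # True: half rotation
print(math.isclose(deg_to_rad(90), math.radians(90)))  # True: matches stdlib
```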
{"url":"https://crosswordnexus.com/word/DEGREE","timestamp":"2024-11-06T15:41:59Z","content_type":"application/xhtml+xml","content_length":"11749","record_id":"<urn:uuid:ce2eee73-a233-4cb3-adc5-35266c8d45e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00594.warc.gz"}
135 and 225 (use Euclid's division) find hcf.

Question asked by a Filo student.

Updated On: Dec 19, 2022
Topic: All topics
Subject: Mathematics
Class: Class 12
Answer Type: Video solution (1)
Avg. Video Duration: 4 min
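For reference, Euclid's division algorithm applied to this pair gives 225 = 135 × 1 + 90, then 135 = 90 × 1 + 45, then 90 = 45 × 2 + 0, so the HCF is 45. A short sketch of the algorithm (not part of the original video solution):

```python
# Euclid's division algorithm for the HCF of 135 and 225,
# as asked in the question above. Illustrative sketch only.

def hcf(a, b):
    # Repeatedly apply the division lemma a = bq + r until r = 0.
    while b:
        a, b = b, a % b
    return a

print(hcf(135, 225))  # 45
```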
{"url":"https://askfilo.com/user-question-answers-mathematics/135-and-225-use-euclids-division-find-hcf-33343330353937","timestamp":"2024-11-09T14:37:50Z","content_type":"text/html","content_length":"211007","record_id":"<urn:uuid:4011f161-a165-45bc-918d-b210ab01b080>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00519.warc.gz"}
Parallel And Perpendicular Axis Theorems | AtomsTalk Parallel And Perpendicular Axis Theorems Parallel axis theorem and Perpendicular axis theorem are two very important theorems in the field of rigid body mechanics. They help us generalize results about moments of inertia of objects to a variety of situations. This helps us in many calculations that get simplified in the process. Moment of Inertia Moment of inertia is the equivalent of mass in rotational dynamics. We know that it is relatively harder to move a heavier object than a lighter one. Similarly, it is harder to rotate an object with more moment of inertia than one with a lesser moment of inertia. Visualization of moment of inertia, about an axis. (Source) An interesting fact about the moment of inertia is that it depends not just on the object, but also the axis of rotation. The same object can have different moments of inertia depending on which axis we are rotating it about. To visualize this, consider rotating a uniform rod. It is evidently easier to rotate the rod about its centre, than its ends. This has a direct relation to one of the theorems we will discuss here. Another example is the bicycle wheel. It is easier to rotate it about the centre than the rim. In mathematical language, moment of inertia is defined as follows: r refers to the distances from the axis of rotation dm is the elementary mass I is the moment of inertia Parallel Axis Theorem Parallel axis theorem states the following: Let the moment of inertia about the centre of mass be I. The moment of inertia about another axis parallel to this axis is simply the sum of I and md^2, where d is the distance between the axes and m is the mass of the object. Mathematically, the parallel axis theorem formula is, d is the perpendicular distance between the axes. m is the mass of the object I is the moment of inertia about the centre of mass I’ is the moment of inertia about the new axis Parallel axis theorem, applied. 
Here we use it to calculate the moment of inertia about an axis that touches the sphere. (Source) This can be proved rigorously by calculating the terms for I and I’ from the integral above. Let us take a rod as an example to demonstrate parallel axis theorem. Assume the mass of the rod to be M and length to be L. The moment of inertia about a perpendicular axis through the centre of mass is (1/12)ML^2. So, if we consider rotating it around a parallel axis at the end, d = L/2 (the distance between the centre and the end) I’ = I + md^2 (from parallel axis theorem and I’ is the moment of inertia at end) Putting I, m, d values into above equation we get, I’ = (1/12)ML^2 + M(L/2)^2 => I’ = (1/12)ML^2 + (1/4)ML^2 => I’ = (1/3)ML^2 This is a clear application of the parallel axis theorem that demonstrates its usefulness. Perpendicular Axis Theorem This is applicable only to planar objects, unlike parallel axis theorem. This means it can be used for objects like discs and rings, as well as sheets. The statement of the theorem is as follows: Let the moment of inertia about an axis perpendicular to the planar surface be I[z], and I[x] and I[y] be the moments of inertia about two mutually perpendicular axes in the plane. The two axes should intersect where the first axis cuts the plane. Then, we write the perpendicular axis theorem formula as: This theorem can be extremely useful when we know two moments of inertia of a planar object and want to know the third, provided they all are perpendicular. Perpendicular axis theorem in action. Here, I[x ]and I[y] add up to I[z ]which projects out of the plane. (Source) An example of perpendicular axis theorem in action is when we consider a ring of mass M and radius R. About an axis perpendicular to the ring and passing through its centre, it has a moment of inertia MR^2. Now imagine rotating it about axes through the centre but in the plane. We can make use of symmetry here. 
We can see that any two such perpendicular axes have the same moment of inertia, say I_x. Thus, by the theorem, 2I_x = MR^2, so I_x = 0.5MR^2. Together, the perpendicular axis theorem and the parallel axis theorem can be used to simplify calculations of the moment of inertia. They help us extend our knowledge of the moment of inertia from one axis to other related axes, which makes it easier to study the rotational dynamics of rigid bodies in various situations.
When to use the parallel axis theorem? The parallel axis theorem can be used to find the moment of inertia of a rigid body about any axis parallel to a known axis that passes through the object's centre of gravity.
What is the radius of gyration? The radius of gyration is defined as the distance from the given axis of the point at which the whole mass of the body is assumed to be concentrated.
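The two worked examples above (the rod and the ring) are easy to check numerically. Below is a minimal Python sketch; the function name `parallel_axis` is my own, not from the article.

```python
# Numerical check of the rod and ring examples from the text.

def parallel_axis(i_cm, m, d):
    """Parallel axis theorem: I' = I + m*d^2, where I is the moment
    of inertia about the centre-of-mass axis and d is the distance
    between the two parallel axes."""
    return i_cm + m * d ** 2

# Uniform rod, mass M, length L: I about the centre is (1/12)ML^2,
# and about the end it should come out to (1/3)ML^2.
M, L = 2.0, 3.0
i_centre = M * L ** 2 / 12
i_end = parallel_axis(i_centre, M, L / 2)
assert abs(i_end - M * L ** 2 / 3) < 1e-12

# Ring, mass M, radius R: I_z = MR^2 about the perpendicular axis.
# By symmetry I_x = I_y, so the perpendicular axis theorem
# (I_x + I_y = I_z) gives I_x = 0.5*MR^2.
R = 0.5
i_z = M * R ** 2
i_x = i_z / 2
assert abs(i_x - 0.5 * M * R ** 2) < 1e-12
```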
{"url":"https://atomstalk.com/blogs/parallel-and-perpendicular-axis-theorems/","timestamp":"2024-11-09T06:37:16Z","content_type":"text/html","content_length":"176767","record_id":"<urn:uuid:f35a9eb3-08c9-4c80-9ad5-fc4c52eddb22>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00683.warc.gz"}
Practice Question • Subject 2. Measures of Dispersion
CFA Practice Question
There are 676 practice questions for this topic.
What is the difference between semi-deviation and shortfall risk?
Correct Answer: Both are downside risk measures, and both are most often used to evaluate risky investments. Semi-deviation is an alternative measurement to standard deviation or variance that considers only outcomes below a reference point (such as the mean or a target). Shortfall risk is the probability that the outcome will have a value less than the target return: it is the ratio of the number of observations below the target return to the total number of observations. When the target return is zero, the shortfall risk measure is commonly called the risk of loss.
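As an illustration, here is a minimal Python sketch of the two measures (hypothetical function names; note that conventions for semi-deviation vary, and this version measures dispersion below the mean and divides by the total number of observations):

```python
from math import sqrt

def semi_deviation(returns):
    """Square root of the average squared deviation of the
    below-mean observations (one common convention)."""
    mu = sum(returns) / len(returns)
    downside = [(r - mu) ** 2 for r in returns if r < mu]
    return sqrt(sum(downside) / len(returns))

def shortfall_risk(returns, target=0.0):
    """Fraction of observations below the target return; with
    target = 0 this is the 'risk of loss' mentioned above."""
    below = sum(1 for r in returns if r < target)
    return below / len(returns)

rets = [0.05, -0.02, 0.10, -0.04, 0.03]
print(shortfall_risk(rets))   # 2 of the 5 observations are below 0, so 0.4
```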
{"url":"https://analystnotes.com/cfa_question.php?p=BUJJOM57V","timestamp":"2024-11-10T21:58:20Z","content_type":"text/html","content_length":"19350","record_id":"<urn:uuid:d7b077e3-611f-4f67-9ad4-b0933c7681af>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00293.warc.gz"}
Abstract Algebra Hw10 Problem 9
Problem 9 (+) Let $G$ be a group that is generated by the set $\{a_i \mid i\in I\}$, where $I$ is some indexing set and $a_i\in G$ for all $i\in I$. Let $\phi:G\rightarrow G'$ and $\mu: G\rightarrow G'$ be two homomorphisms from $G$ into a group $G'$ such that $\phi(a_i)=\mu(a_i)$ for all $i\in I$. Prove that $\phi=\mu$. (This shows that any homomorphism of $G$ is determined by its action on the generators of $G$.)
Since $G$ is generated by $\{a_i \mid i\in I\}$, every $x\in G$ can be written as a finite product $x=a_{i_1}^{n_1}a_{i_2}^{n_2}\dots a_{i_k}^{n_k}$ with $i_1,\dots,i_k\in I$ and $n_1,\dots,n_k\in\mathbb Z$ (negative exponents account for inverses of the generators). Because $\phi:G\rightarrow G'$ and $\mu:G\rightarrow G'$ are homomorphisms, and any homomorphism satisfies $\phi(a^n)=[\phi(a)]^n$ for all $n\in\mathbb Z$, we have $\phi(x)=\phi(a_{i_1}^{n_1})\phi(a_{i_2}^{n_2})\dots \phi(a_{i_k}^{n_k})=[\phi(a_{i_1})]^{n_1}[\phi(a_{i_2})]^{n_2}\dots [\phi(a_{i_k})]^{n_k}$ and, likewise, $\mu(x)=[\mu(a_{i_1})]^{n_1}[\mu(a_{i_2})]^{n_2}\dots [\mu(a_{i_k})]^{n_k}$. Because $\phi(a_i)=\mu(a_i)$ for all $i\in I$, these two products agree factor by factor, so $\phi(x)=\mu(x)$. Since $x\in G$ was arbitrary, $\phi=\mu$.
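A concrete illustration of the result, as a small Python sketch (the choice of groups here is mine, not part of the problem): $\mathbb Z_{12}$ is generated by $\{1\}$, so any two homomorphisms into $\mathbb Z_4$ that agree on the generator must agree everywhere.

```python
# Z_12 is generated by {1}, so a homomorphism into Z_4 is determined
# by the image of 1.  phi and mu below both send 1 -> 3 and are
# written differently on purpose; they still agree on every element.

def hom_from_generator_image(image, n=12, m=4):
    """Homomorphism Z_n -> Z_m sending the generator 1 to `image`
    (well-defined here because (n * image) % m == 0)."""
    return lambda x: (x * image) % m

phi = hom_from_generator_image(3)
mu = lambda x: (3 * x) % 4

assert all(phi(x) == mu(x) for x in range(12))
```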
{"url":"http://algebra2014.wikidot.com/hw10-problem-9","timestamp":"2024-11-09T08:10:59Z","content_type":"application/xhtml+xml","content_length":"29484","record_id":"<urn:uuid:b63159ee-7c24-41d3-af88-1e4c7192d7de>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00141.warc.gz"}
Analysis of Assessments This analysis looks at the assessments made on the examples as well as those made during the peer assessment phase of the assignment. It attempts to select the better assessments out of this pool of teacher and student assessments. These "good" assessments are then used in the calculation of the final grade. This analysis is best done when there are teacher assessments available. These assessments can act as a benchmark against which the student assessments can be judged. The teacher does not need to assess every example and every submission, but for the analysis to be meaningful it is better to have more assessments from the teacher than the average number of assessments made by each student. And the more assessments made by the teacher, the more confident the teacher can be of the results of the analysis. The analysis is usually done a number of times, each time changing one or more of the options. The analysis is controlled by the three options which appear at the top of the page. 1. The Loading for Teacher Assessments sets the weighting given to the teacher's assessments compared to the students' assessments in the error analysis stage. If the teacher wants their own grading strategy to dominate the way the students grade the submissions then the teacher should be the assessor with the smallest average error in the "Error Table". If the teacher is not the first one listed then the loading of the teacher's assessments is increased until the teacher has the lowest average error. This then implies that the teacher's assessments are dominant and the students who grade like the teacher are also listed in the top part of the Error Table. The students listed in the bottom part of the table are grading in ways which do not match the teacher's assessments (nor those of the students at the top of the table).
The more assessments that are available from the teacher, the more likely it is that this option will not have to be used to force the teacher to the top of the table. Note that this option does not apply a weighting factor to the teacher's assessments when they are used in the calculation of final grades. In that calculation the teacher's assessments have the same weight as the student assessments. So, for example, if a student's submission is graded at 41% by the teacher and 45% and 55% by their peers, the final grade given to the submission is (41% + 45% + 55%) / 3, that is 47%. 2. The Weight for Grading of Assessments is used in the calculation of the Final Grade. A simple formula is used to calculate a student's "Grading Performance". It is the proportion of "good" assessments that the student has done compared to the maximum number of assessments open to them. So if, for example, the assignment asks the students to do 3 assessments of the example submissions and 5 peer assessments, and the student does 7 assessments and 1 of those is dropped from the analysis (see below), then their grading performance is (7 - 1)/8, that is 75%. The final grade for the assignment is a weighted combination of this grading performance and the grade given to their submission (or best grade if they made more than one submission). The grade for the submission is always given a weight of 1. So setting this option to, say, 0.5 means that the two grades are added together in the proportion 0.5:1, that is 33% of the grading performance and 66% of the grade of the submission. 3. The Percentage of Assessments to drop determines the number of assessments which are to be excluded when calculating the final grades. This number can be set in one of two ways. □ Given the way the Grading Performance is calculated, each student could, if they assessed all the work allocated to them, achieve full marks (for this element) if no assessments are dropped.
If the teacher wishes to have a more reasonable average grade then setting this option to 30% would result in an average Grading Performance of about 70% (again, if all students graded all the assessments open to them). □ Alternatively, the number of assessments to drop might be set such that the remaining "good" assessments result in the Average Errors being constrained to some reasonable value. These are the percentages given in the fourth column of the Error Table. For example, it may be thought that all the student assessments should (on average) lie within the 20% range. The analysis is then repeated a number of times, adjusting the number of assessments to drop, until the figures in this column all lie within a particular limit. In addition to the Error Table, the analysis lists the grades of all assessments and the final grades given to the students. This table should be inspected to see if the results are reasonable. In particular, if many assessments are dropped then some submissions may be left unassessed and those students' final grades will be far too small. The analysis does give the number of submissions at the top of the page and again just before the Grades Table. These two numbers should be the same. If there are one or more unassessed submissions and the teacher does not want to decrease the number of dropped assessments, then those submissions should be assessed by the teacher and the analysis repeated. It is important that all submissions are assessed at least once in the final stage of the analysis, that is, when the final grades are calculated. There is a balance between the number of assessments dropped and the overall final grade. The more assessments dropped, the lower the final grades are likely to be. However, if poor assessments are not dropped then students may complain about the quality of the assessments which determine the grade for their work.
Provided there are enough assessments by the teacher to dominate the analysis without too much forcing, it would seem reasonable to drop somewhere between 15% and 30% of the assessments. Note that this analysis does take a long time as it involves an iterative process, so lengthy delays are to be expected.
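For concreteness, the grading-performance and final-grade arithmetic described above can be sketched as follows (a Python sketch; the function names are mine, not Moodle's):

```python
def grading_performance(done, dropped, maximum):
    """Proportion of 'good' assessments out of the maximum number
    open to the student: (done - dropped) / maximum."""
    return (done - dropped) / maximum

def final_grade(performance, submission_grade, weight):
    """Weighted combination from option 2: the submission grade
    always carries weight 1, the grading performance carries the
    chosen weight."""
    return (weight * performance + submission_grade) / (weight + 1)

# Worked example from the text: 7 assessments done, 1 dropped,
# 8 possible, giving 75%.
print(grading_performance(7, 1, 8))   # 0.75

# With the weight set to 0.5, the grades mix in proportion 0.5 : 1,
# i.e. roughly 33% grading performance and 66% submission grade.
print(final_grade(0.75, 0.60, 0.5))
```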
{"url":"https://aesines.edu.gov.pt/moodle/help.php?module=workshop&file=analysisofassessments.html","timestamp":"2024-11-05T07:05:38Z","content_type":"application/xhtml+xml","content_length":"11455","record_id":"<urn:uuid:48a21a6d-fac1-4654-a36b-3006ccbbb84a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00550.warc.gz"}
How to Read a Tape Measure?
The tape measure is the world's most commonly used measuring tool, accompanying millions of tradesmen and contractors to work every single day. Whilst the seasoned professionals amongst you will no doubt be fully aware of how to read the various markings on your tape, there will be amateurs, enthusiasts or those just starting off in their careers who aren't yet so knowledgeable. We regularly get asked the question "how do you read a tape measure?". In response to our customers, therefore, we've put together this simple guide that explains just that!
What is a Tape Measure?
A tape measure or measuring tape is a flexible ruler used to measure length or distance. It consists of a ribbon of cloth, plastic, fiberglass, or metal strip with linear measurement markings. It is a common measuring tool. Its design allows a measure of great length to be easily carried in a pocket or toolkit and permits one to measure around curves or corners. A tape measure is a portable measurement device used to quantify the size of an object or the distance between objects. The tape is marked along its edge in inches and fractional inches, typically in quarter-, eighth-, and sixteenth-inch increments. Some tape measures are marked in millimeters, centimeters, and meters on one edge. The most common tape measures are 12 feet, 25 feet, or 100 feet in length. A 12-foot tape measure is the handiest for consumers. The 25-foot length is called a builder's tape and is marked in feet and at 16-inch increments to make measuring the standard distance between wall studs easier. The 100-foot tape, usually of reinforced cloth, is useful for determining property boundaries and other exterior measurements.
There are four basic types of tape measures: case, long or open-reel tapes, diameter tape (D-tape), and sewing tapes. 1. Cased Tape Measures This classic 25-foot measuring tape can be used by home DIYers and contractors. They are retractable, featuring a spring mechanism to recoil the blade. Its compact and portable design makes this tape measure type a must-have tool for a variety of building and craft purposes and the best tape measure for woodworkers. 2. Open Reel Tape Measure Generally used by surveyors, an open reel measuring tape has no spring mechanism to coil the blade. It uses a hand-crank method to roll up the blade. It’s a long tape measure that is best for measuring far distances. For large areas or distances, most professionals will opt for a measuring wheel. 3. Diameter Tape Measure Similar to a case tape measure, a diameter tape measure (D-tapes) features an ultra-flexible blade that can easily wrap around pipes and poles. It provides an accurate measurement using pi (the ratio of a circle’s circumference to its diameter) to calculate the circumference and diameter of a cylindrical object. 4. Sewing Tape Measure Also known as a tailor’s tape, sewing tapes are ultra-flexible and mold easily to the body. They are used for accurate measurements for clothing design and alterations. They range in length from 60 to 120 inches. They also have both imperial and metric measurement markings. Anatomy of a Tape Measure Learning how to read a tape measure begins with understanding its different parts. • The case is the square-shaped housing for the tape. It is usually plastic or metal and about 3 inches in length. • The hook is a bent piece of metal that hooks over the edge of a board or counter. It aids in extending the tape from the case and holding the tape in place for measuring. • The hook slot is the hole at the end of the tape. It allows you to latch onto an anchor point such as a protruding nail or screw head. 
• The actual tape or blade is usually yellow or white. It's the extendable and retractable length of the tool. Most often the blade is made of metal coated in plastic.
• The thumb lock is a button on the case. When pressed, the thumb lock will hold the extended tape measure at the needed length. Releasing the thumb lock will allow the tape to retract into the case.
• The belt clip is the fastener on the side of the case. It allows you to hook your tape measure conveniently onto a belt, tool belt or pocket.
Basics Of Tape Measure
Above you'll see a picture of a metric/imperial pocket tape measure. The measurements towards the bottom of the image are metric. In other words, they're in centimetres and millimetres. There are 10mm in each centimetre (shown by the ten spaces between each cm) and 100cm in each metre. Whilst the centimetres are clearly numbered, to make the blade easier to read the millimetres are not numbered. Also, whilst a few tapes show '1m' to display the 1 metre mark, the majority will show '100cm'.
When referring to the diagram above you'll see a series of large numbers marked 1, 2, 3, and 4. These numbers sit next to long vertical marks which represent whole inches. Put simply, 1 = 1", 2 = 2", and so on. Between those numbers are a series of shorter marks which represent fractions of an inch. The mark directly in the middle of the inch denotes a measurement of 1/2", whilst the markings either side of it represent measurements of 1/4" and 3/4" respectively. Even smaller marks then denote 1/8ths and 1/16ths (marked in red) of an inch. A 16ft tape measure, for example, will have sixteen one-foot marks along its length and 192 one-inch marks (12 inches per foot). Each inch will then have eight 1/8th of an inch and sixteen 1/16th of an inch marks.
• 1 foot = 12 inches
• 1 inch = 16 x 1/16th of an inch, 8 x 1/8th of an inch, 4 x 1/4 of an inch or 2 x 1/2 of an inch.
How to Read a Tape Measure
1. Find/read the markings.
On a standard tape measure, the biggest marking is the inch mark (which generally has the biggest number, if it has them).
2. As the increments decrease, so does the length of the mark. For example, 1/2" has a bigger mark than 1/4", which has a bigger mark than 1/8", and so on.
3. Read 1 inch. The space from one of the largest marks to the next is 1 inch.
4. Read one-half inch. Same principle as reading one inch, only this time the space between the second-biggest mark and the biggest is read. You can think of a half-inch mark as halfway between two full-inch marks.
5. The remaining markings follow a similar pattern. 1/4" is half of 1/2". 1/8" is half of 1/4". Most tape measure markings go as small as 1/16". This tape divides one more time, down to 1/32".
How to Measure Using a Tape Measure
1. Measure a length. Place the end of the measure at one end of the object or space to be measured. When the length stops, read off the tape measure.
2. Find the length. To determine the length, you must add up the lengths between inches. For example, the following image has a measurement that extends beyond the distance between two inch marks (i.e., a full inch). To find the length, add the length of the full inch (1) to the distance between the second- and third-inch marks. In this case you would add 1 inch + 1/4 inch to get 1 1/4 inches, or "one and a quarter inches".
3. If the length is less than 1 inch, simply read the length from the tape measure. If increments of an inch are not labeled, determine the increment of the mark and add the appropriate fractions.
4. As an example, the image below shows a length ranging from the inch mark to an unmarked mark. We know it's over 3/4 inch and under a full inch. The mark is halfway between 3/4 (6/8) and 7/8. Therefore, the mark adds half of 1/8, or 1/16. Knowing this, you simply add up the known fractions to find the length. Convert 3/4 to 12/16 for a common denominator and add 12/16 + 1/16 to get 13/16 – that's your length.
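The fraction arithmetic in the steps above is easy to check with Python's `fractions` module (a quick sketch, not part of the original guide):

```python
from fractions import Fraction

# Step 2's example: one full inch plus a quarter inch.
length = Fraction(1) + Fraction(1, 4)
print(length)   # 5/4, i.e. 1 1/4 inches

# Step 4's example: the unlabeled mark adds half of 1/8, i.e. 1/16,
# to 3/4; Fraction finds the common denominator (12/16 + 1/16).
mark = Fraction(3, 4) + Fraction(1, 16)
print(mark)     # 13/16
```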
How to Read a Tape Measure in Inches
Standard or SAE tape measures clearly show feet, inches, and fractions of inches. The measurements on a tape measure are generally 16 marks to the inch. This means you can measure down to 1/16 of an inch. Some tapes have 32 or even 64 marks to the inch. Laser distance measurers can often measure accurately to within 1/16 inch.
• When reading a tape measure, find the closest whole inch to the end point. Then examine the leftover indicator lines to see what fractions of an inch remain. Add those to the whole inches for your total measurement.
• To read an inch, look for the large numbers. That number is usually in bold, black type. It's easy to see and refers to the longest of the markings along the edge. The number of lines between inch marks indicates how precise you can get with your tool.
• To read 1/2-inch measurements, locate the second-longest mark between the longer inch marks. For 1/4-inch marks, look halfway between the 1/2-inch marks. You can read smaller fractions of an inch the same way. Most tapes will label the fractions to make finding them easier.
How to Read a Tape Measure in Millimeters
Metric tape measures feature 10 marks to the centimeter. The smallest marks on a tape indicate one millimeter, or 1/10th of a centimeter. The large, bold markings on a metric tape measure indicate centimeters. The long mark in the center indicates a half-centimeter.
• To read a metric measuring tape, find the nearest whole centimeter to the end point. Examine the remaining indicator lines to see how many millimeters are left over.
• Add those to the whole centimeters, using a decimal. For example, say you measure 20 whole centimeters and there are 6 millimeter marks left over. Your total measurement will be 20.6 centimeters.
Special Notes for How to Read Measuring Tape
• The typical length between studs in a wall is 16 inches on center. This is marked in red on many tape measures.
• Many tape measures will have 1 foot marks every 12 inches. This eliminates the need to convert inches to feet yourself. Some have special markings every 3 feet as well.
• Tape measures often use small, black diamond or triangle shapes called black truss marks. They indicate truss layouts of every 19 3/16 inches. This spacing is often used by some engineered joist manufacturers. There are 5 of these marks for every 8 feet.
Tape Measure Tips
• A properly functioning hook will move slightly. It is designed to slide based on the thickness of the metal, usually about 1/16 inch. This allows the tape to give accurate inside and outside measurements.
• Standard tape measures usually go from 15 to 50 feet. Long tape measures come in greater lengths of 100 feet or more. They are often made of flat steel or fiberglass and retract with a hand crank. Self-retracting tape measures are flexible and can be bent to measure into tight spaces or around corners.
• Use a screw or nail as an anchor point to fix the end of your tape measure in place.
• To draw a perfect circle, anchor the tape measure's hook slot onto an anchor point. Engage the thumb lock. Hold a pencil or other writing utensil flush against your tape measure. Turn it in a circle around the anchor point.
• Be careful when retracting a tape measure. Allowing it to snap back can damage the tape and possibly give you a cut. Instead, retract the tape slowly.
• If the hook on your tape measure is bent or damaged, you'll get inaccurate measurements. A bent hook can often be adjusted with two pairs of pliers. Or tap it with a hammer against a hard surface.
• If you're using multiple tapes on the same job, calibrate them to any one of several good rulers and yardsticks available. Tapes can be brought into agreement by slightly bending the hooks until measurements match.
Reading a tape measure is a skill you can easily master. Using this small, sturdy hand tool can improve the accuracy of your project measurements.
Learning how to use a tape measure properly means to always “measure twice, cut once.” You can use a standard tape measure or opt for a metric one.
{"url":"https://www.theengineeringchoice.com/how-to-read-a-tape-measure/","timestamp":"2024-11-05T16:39:24Z","content_type":"text/html","content_length":"94712","record_id":"<urn:uuid:67cebf34-ccb0-404f-82c6-1f9d4d3a2954>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00080.warc.gz"}
Improving 3 N Circuit Complexity Lower Bounds
While it can be easily shown by counting that almost all Boolean predicates of n variables have circuit size Ω(2^n/n), we have no example of an NP function requiring even a superlinear number of gates. Moreover, only modest linear lower bounds are known. Until recently, the strongest known lower bound was 3n − o(n), presented by Blum in 1984. In 2011, Demenkov and Kulikov presented a much simpler proof of the same lower bound, but for a more complicated function: an affine disperser for sublinear dimension. Informally, this is a function that is resistant to any n − o(n) affine substitutions. In 2011, Ben-Sasson and Kopparty gave an explicit construction of such a function. The proof of the lower bound basically goes by showing that for any circuit there exists an affine hyperplane where the function complexity decreases by at least three gates. In this paper, we prove the following two extensions. 1. A (3 + 1/86)n − o(n) lower bound for the circuit size of an affine disperser for sublinear dimension. The proof is based on the gate elimination technique extended with the following three ideas: (i) generalizing the computational model by allowing circuits to contain cycles, which in turn allows us to perform affine substitutions, (ii) a carefully chosen circuit complexity measure to track the progress of the gate elimination process, and (iii) quadratic substitutions that may be viewed as delayed affine substitutions. 2. A much simpler proof of a stronger lower bound of 3.11n for a quadratic disperser. Informally, a quadratic disperser is resistant to sufficiently many substitutions of the form x ← p, where p is a polynomial of degree at most two. Currently, there are no constructions of quadratic dispersers in NP (although there are constructions over large fields, and constructions with weaker parameters over GF(2)).
The key ingredient of this proof is the induction on the size of the underlying quadratic variety instead of the number of variables as in the previously known proofs. Dive into the research topics of 'Improving 3 N Circuit Complexity Lower Bounds'. Together they form a unique fingerprint.
{"url":"https://cris.ariel.ac.il/ar/publications/improving-3-n-circuit-complexity-lower-bounds","timestamp":"2024-11-03T15:31:40Z","content_type":"text/html","content_length":"59224","record_id":"<urn:uuid:4a159ae3-0867-44c2-bd5f-4bef17e3ff80>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00649.warc.gz"}
An asymptotic majorant for solutions of Sturm-Liouville equations in L_p(ℝ)
Under certain assumptions on g(x), we obtain an asymptotic formula for computing integrals of the form F(x, α) = ∫_{-∞}^{∞} g(t)^α exp(−|∫_x^t g(ξ) dξ|) dt, α ∈ ℝ, as |x| → ∞. We use this formula to study the properties (as |x| → ∞) of the solutions of the correctly solvable equations in L_p(ℝ), p ∈ [1, ∞], −y″(x) + q(x)y(x) = f(x), x ∈ ℝ, (1) where 0 ≤ q ∈ L_1^loc(ℝ) and f ∈ L_p(ℝ). (Equation (1) is called correctly solvable in a given space L_p(ℝ) if for any function f ∈ L_p(ℝ) it has a unique solution y ∈ L_p(ℝ) and if the following inequality holds with an absolute constant c(p) ∈ (0, ∞): ‖y‖_{L_p(ℝ)} ≤ c(p)‖f‖_{L_p(ℝ)}, ∀f ∈ L_p(ℝ).)
• Asymptotic estimates
• Asymptotic majorant for solutions
• Sturm-Liouville equation
Dive into the research topics of 'An asymptotic majorant for solutions of Sturm-Liouville equations in L_p(ℝ)'. Together they form a unique fingerprint.
{"url":"https://cris.biu.ac.il/en/publications/an-asymptotic-majorant-for-solutions-of-sturm-liouville-equations-2","timestamp":"2024-11-13T18:31:26Z","content_type":"text/html","content_length":"54154","record_id":"<urn:uuid:36656f3a-bbbd-4f8f-b826-52b84beb7813>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00461.warc.gz"}
Techniques and Tips
How to accomplish specific tasks.
1. All Products
1.1. MyLumivero Portal and @RISK / DTS Licenses
Applies to: @RISK and DecisionTools Suite 8.7 onward
Starting from release 8.7, some of the licensing options will be available to manage through the myLumivero portal. This includes the ability to assign a license to someone else, or to rescind an activation on a license so that it can be activated again. The latest installer links will also be available on the portal. Licenses will continue to be activated in the same way as previously; the user can activate using the Activation ID that is sent to them. The user does not sign in with their email address to access the license; the license key still needs to be entered in the software in order to license it correctly. An account is not required for someone to activate a license; it is an optional way of managing and distributing licenses. If you have any questions, or require any assistance, please contact Lumivero Technical Support. Last edited: 2024-08-02
1.2. Palisade products are NOT impacted by the Log4J vulnerability
Applies to: All Palisade software versions
Issue: A zero-day exploit was recently identified within the Apache Log4j logging library, which can potentially be used by hackers to take over entire servers via logging messages. Statement: Palisade's products (@RISK, DecisionTools Suite, Palisade Server Manager) do not utilize the open-source Java Log4j library that has recently been identified as having vulnerabilities, and therefore are not impacted. Last Update: 2021-12-16
1.3. Do @RISK and DecisionTools Suite use TLS encryption?
Applies to: @RISK and DecisionTools Suite (All versions)
All Palisade software uses and supports all recent TLS versions (versions 1.0/1.1/1.2).
If you need to disable older versions of TLS on your servers and replace them with a newer one, the software will still work without any issues. In case this article doesn't answer your question, you can email Tech Support; don't forget to include your license serial number. Last Update: 2021-03-05
1.4. What happens when you edit your model in Excel 365 and then open it in an older version of Excel?
Applies to: @RISK 7.x and newer
Dynamic arrays are supported in the latest versions of Excel 365. Dynamic array formulas can automatically populate or "spill" into neighboring blank cells and eliminate the need for legacy Ctrl+Shift+Enter (CSE) array formulas. When opening a workbook that contains dynamic array formulas in an older version of Excel, they show as legacy CSE formulas. If new dynamic array functions are used, they get prefixed with _xlfn to indicate that this functionality is not supported. A spill range reference sign (#) is replaced with the ANCHORARRAY function. Most dynamic array formulas (but not all!) will keep displaying their results in legacy Excel until you make any changes to them. Editing a formula immediately breaks it and displays one or more #NAME? error values. So, if you know you will be sharing dynamic-array-enabled workbooks with someone using non-dynamic-aware Excel, it's better to avoid using features that aren't available to them. What about @RISK functions? When you open a workbook containing @RISK functions in an older version of Excel, they are automatically converted to conventional array formulas enclosed in {curly braces}. So, there are two possible scenarios to analyze, considering the @RISK version used: 1. @RISK 8. The model will run without problems. 2. @RISK 7.x and older. You will get the following error message: One way to solve this problem is to insert the "@" character (aka the implicit intersection operator) at the beginning of the @RISK functions when editing the model in Excel 365.
In Excel 365, all formulas are regarded as array formulas by default. The implicit intersection operator is used to prevent the array behavior if you do not want it in a specific formula. In other words, this is done to force the formula to behave the same way as it did in older versions. So, you can do this manually or programmatically by creating a VBA macro. There is an alternative option using the Swap-Out functionality available in @RISK, the procedure is explained below: 1. On Excel 365, use the Swap-Out functionality of @RISK to preserve the current state of all @RISK functions. No need to include reports so you may want to skip those options. 2. Save a copy of this workbook and then open it in an older version of Excel which has @RISK v7 or below. 3. It may immediately prompt you to swap in all @RISK functions found. If it doesn’t, close the model and open a blank workbook instead. Run the Swap-Out functionality on the blank workbook and then re-open your model. 4. Follow the instructions on screen to complete the Swap-In process. Last Update: 2020-09-08 1.5. What's New in the Knowledge Base? This page lists the principal changes in the Palisade Technical Support Knowledge Base, from most recent to oldest. Each entry shows the "book", chapter, and article titles, with live links. Titles of new articles are in boldface and marked with ★. October 5^th, 2022 — February 10^th, 2023 New Articles: Updated Articles: Deleted Articles: August 5^th, 2022 — October 5^th, 2022 New Articles: • Home → Soluciones → Todos los productos: de inicio → No pudo adjuntar la copia en ejecución de Microsoft Excel ya que ésta está invisible o no responde. 
(Spanish version of Could not attach to the already running copy of Microsoft Excel KB) Updated Articles: • Home → End User Setup → Before You Install → Windows and Office Versions Supported by Palisade (Updated Palisade’s latest version release to 8.2.2, added Windows 11 as a supported OS) • Home → Configuración de usuario final → Antes de la instalación → Versiones de Windows y Office compatibles con Palisade (Same changes as of English KB) February 11^th,2022 — August 5^th, 2022 New Articles: • No new articles as of 08/05/2022 Updated Articles: Home → Técnicas y Consejos → Rendimiento de @RISK → Para simulaciones más rápidas (Spanish version of “For faster simulation” article, updated Office versions described in troubleshooting steps and validate all links are still functional) November 15^th, 2021 — February 11^th,2022 New Articles: Updated Articles: July 29^th —November 15^th, 2021 New Articles: • Home → Soluciones → @RISK para Excel: Otros Problemas → La firma digital ha sido alterada después de que se firmó el contenido★ • Home → Troubleshooting → Installation → The digital signature has been tampered with after the content was signed. 
This content cannot be trusted★ Updated Articles: Home → End User Setup → Further Information (Updated link for EULA to Palisade website) April 1^st 2021— July 29^th 2021 New Articles: Updated Articles: March 19^th 2021— April 1^st 2021 New Articles: • Home → Troubleshooting → All Products: Startup → Running DecisionTools Add-in “as Administrator” Can Block Future Access to Registry Keys that Store Preferences and Other Information ★ • Home → Soluciones → Todos los productos: de inicio → Ejecutar el complemento DecisionTools "como administrador" puede bloquear el acceso futuro a las claves de registro que almacenan preferencias y otra información ★ March 5^th 2021— March 19^th 2021 New Articles: Updated Articles: December 8^th 2020— March 5^th 2021 New Articles: Updated Articles: Additional Changes: @RISK and DecisionTools Suite 8.1.1 was released on March 4th, 2021. Automatic updates from the software's previous versions will be turned on early next week. Release notes are now published in October 6^th 2020— December 8^th 2020 New Articles: Updated Articles: Additional Changes: On our website, under www.palisade.com > Company > Contact, there is a form customers can fill out to open a support ticket; entering their license and issue details there helps speed up response times. July 24 2020— October 6th 2020 New Articles: Additional Changes: In the Palisade Help Resources (help.palisade.com) customers will also find a link to our Knowledge Base, so they can find all online support information in one place. 
July 7 2020— July 24 2020 New Articles: Updated Articles: Home → Soluciones → Todos los productos: de inicio → Nada sucede cuando inicio el Software June 5 2020— July 7 2020 New Articles: Updated Articles: April 27 2020— June 5 2020 New Articles: Updated Articles: April 17 2020— April 27 2020 New Articles: Updated Articles: Home → Troubleshooting → All Products: Startup → "Timeout error starting ... PalFlexServer.exe" April 2 2020— April 17 2020 New Articles: Updated Articles: March 12 2020— April 2 2020 New Articles: Updated Articles: Jan 30 2020— March 12 2020 New Articles: • Home → Troubleshooting → Installation → "Error 1325: Documents is not a valid short file name" ★ • Home → Soluciones → Instalación → "Error 1325: Documents is not a valid short file name" (Spanish) ★ Updated Articles: • Home → End User Setup → Before You Install → Windows and Office Versions Supported by Palisade (Adding Windows 10 Version 1909 to list of compatible Windows versions with Palisade Software) Jan 10 2020— Jan 30 2020 New Articles: Home → Licencias individuales → Activación (6.x/7.x) → Obtener una licencia Individual certificada ★ 05 Nov 2019 — 10 Jan 2020 New Articles: Updated Articles: 29 Aug - 4 Nov 2019 New Articles: Updated Articles: 17 Aug - 28 Aug 2019 New Article: Updated Articles: 2 July - 16 Aug 2019 New Article: 29 June - 1 July 2019 Updated Articles: 18 June - 27 June 2019 Article updated: 07 May - 17 June 2019 Changes in Stand alone EULA and Network EULA 02 May - 06 May 2019 New article: 30 April - 01 May 2019 Updates on Maintenance Policy: 10 March - 29 April 2019 New Articles: Updates and Translated articles in Spanish: 3–9 March 2019 24 Feb–2 March 2019 17–23 Feb 2019 10–16 Feb 2019 3–9 Feb 2019 27 Jan–2 Feb 2019 20–26 Jan 2019 13–19 Jan 2019 6-12 Jan 2019 30 Dec 2018–5 Jan 2019: no updates of note. 23–29 Dec 2018 16–22 Dec 2018 9–15 Dec 2018 2–8 Dec 2018 25 Nov–1 Dec 2018 18–24 Nov 2018: no updates of note. 
11–17 Nov 2018 4–10 Nov 2018 28 Oct–3 Nov 2018 21–27 Oct 2018 14–20 Oct 2018 7–13 Oct 2018—These articles have significant updates for software release 7.6.0: 30 Sept–6 Oct 2018: no updates of note. 23–29 Sept 2018 16–22 Sept 2018 9–15 Sept 2018 2–8 Sept 2018 26 Aug–1 Sept 2018 19–25 Aug 2018 12–18 Aug 2018 5–11 Aug 2018 29 July–4 Aug 2018 22–28 July 2018 15–21 July 2018 8–14 July 2018 • Troubleshooting → Installation → "Setup was unable to activate the Activation ID ..." Emphasized that the install is fine; it's just a license issue. Explained what to do in various circumstances to make the Activation ID usable. Added the version 7 installer error message, and changed the article name to match. 1–7 July 2018 24–30 June 2018 17–23 June 2018 10–16 June 2018 3–9 June 2018 27 May–2 June 2018 20–26 May 2018 13–19 May 2018: no updates of note. 6–12 May 2018 29 April–5 May 2018 22–28 April 2018 15–21 April 2018 8–14 April 2018 1–7 April 2018 25–31 March 2018 18–24 March 2018 11–17 March 2018 4–10 March 2018 25 Feb–3 Mar 2018 18–24 Feb 2018 11–17 Feb 2018 4–10 Feb 2018: no updates of note. 28 Jan–3 Feb 2018 21–27 Jan 2018 14–20 Jan 2018 (updates for software release 7.5.2) 7–13 Jan 2018 31 Dec 2017–6 Jan 2018 24–30 Dec 2017: no updates of note. 17–23 Dec 2017 10–16 Dec 2017 3–9 Dec 2017 • Network Guide ★ This replaces the 6.3 Network Guide and the 7.x Network Guide, with significant textual updates and new material. • More on Networks → Servers (5.x) → Deactivating (Returning) 5.x Network Licenses Removed the automatic option, which no longer works. Removed the ReturnLicenses.bat file, which was never actually explained. 26 Nov–2 Dec 2017 19–25 Nov 2017: no updates of note. 12–18 Nov 2017 5–11 Nov 2017 29 Oct–4 Nov 2017 22–28 Oct 2017 15–21 Oct 2017 8–14 Oct 2017 1–7 Oct 2017 24–30 Sept 2017 17–23 Sept 2017 10–16 Sept 2017 3–9 Sept 2017 27 Aug–2 Sept 2017 20–26 Aug 2017 13–19 Aug 2017: no updates of note. 
6–12 Aug 2017 30 July–5 Aug 2017 23–29 July 2017 16–22 July 2017 9–15 July 2017 2–8 July 2017 25 June–1 July 2017 18–24 June 2017 11–17 June 2017 4–10 June 2017 28 May–3 June 2017 21–27 May 2017: no updates of note. 14–20 May 2017 7–13 May 2017 30 Apr–6 May 2017 23–29 Apr 2017 16–22 Apr 2017 9–15 Apr 2017 2–8 Apr 2017 26 Mar–1 Apr 2017 19–25 Mar 2017 12–18 Mar 2017 5–11 Mar 2017 26 Feb–4 Mar 2017 19–25 Feb 2017 12–18 Feb 2017 5–11 Feb 2017 29 Jan–4 Feb 2017 22–28 Jan 2017 15–21 Jan 2017 8–14 Jan 2017 1–7 Jan 2017 25–31 Dec 2016 18–24 Dec 2016 11–17 Dec 2016 4–10 Dec 2016: no updates of note. 27 Nov–3 Dec 2016 20–26 Nov 2016 13–19 Nov 2016 6–12 Nov 2016: no updates of note. 30 Oct–5 Nov 2016 23–29 Oct 2016 16–22 Oct 2016 9–15 Oct 2016 2–8 Oct 2016 25 Sept–1 Oct 2016 18–24 Sept 2016 11–17 Sept 2016 4–10 Sept 2016 28 Aug–3 Sept 2016 21–27 Aug 2016 14–20 Aug 2016 7–13 Aug 2016 31 July–6 Aug 2016 24–30 July 2016 17–23 July 2016: no updates of note. 10–16 July 2016 Release 7.5.0 of our software was issued this week. 2–9 July 2016 26 June–2 July 2016 19–25 June 2016 12–18 June 2016 5–11 June 2016 29 May–4 June 2016 ★ indicates a new article. Last edited: 2023-02-10 Additional keywords: 1479 1.6. How to Find Your Serial Number Finding Version 8.x serial number Please launch @RISK or your other Palisade product. In the @RISK ribbon or menu, click About @RISK. The 7-digit number next to S/N is your serial number. If you can't start the software, but you have your Activation ID, it begins with three letters (DPS, DNS, RPS, or RNS) and 7 digits; the 7 digits are your serial number. If you don't know your Activation ID, please get your serial number from the email you received when you bought the software. If all those methods fail, your Palisade sales office (not Tech Support) may be able to look up your serial number. 
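As noted above, an Activation ID begins with a three-letter prefix followed by the 7-digit serial number. If you have several IDs to check, the extraction can be scripted. This is a minimal sketch in Python; the sample IDs below are invented for illustration:

```python
import re

# An Activation ID begins with three letters, then the 7-digit serial
# number, optionally separated by a hyphen (e.g. RPQ-8XXXXXX-...).
SERIAL_RE = re.compile(r"^[A-Z]{3}-?(\d{7})")

def serial_number(activation_id: str) -> str:
    """Return the 7-digit serial number embedded in an Activation ID."""
    match = SERIAL_RE.match(activation_id.strip().upper())
    if not match:
        raise ValueError(f"not a recognizable Activation ID: {activation_id!r}")
    return match.group(1)

print(serial_number("RPQ-8123456-ABCDEF"))  # 8123456 (invented network-style ID)
print(serial_number("DPS-1234567"))         # 1234567 (invented individual-style ID)
```

The same three-letters-then-seven-digits pattern covers the individual, network, and single-product IDs described in this article, which is why the regular expression anchors only on that shape rather than on a specific prefix.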
Concurrent Network License (Activatable or Certificate) To find your license details, open Palisade Server Manager and refer to the "Active Licenses" section. The 7-digit serial number follows the first 3 letters ("DNQ", "DNF", "DPQ", "DPF", "RPF", "RNF", "RNQ", or "RPQ") and starts with the number 8. Example: RPQ-8XXXXXX-... If you have a textbook license, the textbook details will appear instead of a serial number. Course License (Academic) If you have a course license through your college or university, the serial number may or may not appear. If no serial number appears, you can find it in the Palisade_Course.lic file that you received from your school; it's the 7-digit number just after "SN=". If you've already installed the DecisionTools Suite course license, the Palisade_Course.lic file will be in C:\Program Files (x86)\Palisade\System or C:\Program Files\Palisade\System. Finding Version 6.x/7.x Serial Number Please launch @RISK or your other Palisade product. In the @RISK ribbon or menu, click Help, then About. The 7-digit number next to S/N is your serial number. If you can't start the software, but you have your Activation ID, it begins with three letters and 7 digits; the 7 digits are your serial number. If you don't know your Activation ID, please get your serial number from the email you received when you bought the software. If all those methods fail, your Palisade sales office (not Tech Support) may be able to look up your serial number. Concurrent Network License (Activatable or Certificate) To find your license details, open Palisade Server Manager and refer to the "Active Licenses" section. The 7-digit serial number follows the first 3 letters ("DNF", "DPF", "RPF", or "RNF") and starts with the number 7. Example: RPF-7XXXXXX-... 
Note: Individual DecisionTools products have their own initial letter in their Activation IDs: • ENF-7XXXXXX (Evolver) • SNF-7XXXXXX (StatTools) • NNF-7XXXXXX (NeuralTools) If you have a textbook license, the textbook details will appear instead of a serial number. Course License (Academic) If you have a course license through your college or university, the serial number may or may not appear. If no serial number appears, you can find it in the Palisade_Course.lic file that you received from your school; it's the 7-digit number just after "SN=". If you've already installed the DecisionTools Suite course license, the Palisade_Course.lic file will be in C:\Program Files (x86)\Palisade\System or C:\Program Files\Palisade\System. Finding Version 5.x Serial Number Please launch your Palisade product. In the Excel menu, click @RISK (or the appropriate product name), then Help, then License Activation. Look at the Activation ID that appears. The serial number is the group of seven digits starting with a 5. If no Activation ID appears, then either this is a trial version or it is a bought version that has not yet been activated. Please check your back emails for the serial number. We have a video illustrating this procedure. Finding Version 1.x or 4.x Serial Number Please launch your Palisade software. In the Excel or Project menu line (the one that starts File, Edit, View) click on @RISK (or another Palisade product name). A submenu will drop down. In that submenu, click Help. A third menu will open at the side. Click on About @RISK. In the About screen, look for S/N. The number after that is your serial number. Last edited: 2022-02-09 1.7. New user interface in Palisade License Manager Applies to: DTS and @RISK 8.2 and newer. In version 8.2, Palisade License Manager has a few changes in its user interface: • The advanced options window has been replaced by a drop-down menu. • Faster download and progress count. 
• A new option to install now, or save the installer for later, after an update is downloaded. Last Update: 2021-07-29 1.8. Which Version of Excel Is Opened by Palisade Software? Applies to: @RISK For Excel 4.5–7.x BigPicture 1.x(7.x) Evolver 4.0–7.x NeuralTools 1.0–7.x PrecisionTree 1.0–7.x RISKOptimizer 1.0, 5.x StatTools 1.1–7.x TopRank 5.0–7.x I have multiple versions of Excel on my computer. @RISK (PrecisionTree, StatTools, ...) opens one version of Excel, but I want it to open the other version. My Excel was recently upgraded and now when I launch @RISK it can't start Excel. The simplest solution is to open Excel and then launch the Palisade software. If @RISK (PrecisionTree, StatTools, ...) finds a copy of Excel running, it will attach itself to that copy. If you want a more permanent solution that will cause your Palisade software to open your desired copy of Excel, you can make an edit to the System Registry, as follows: 1. Close Excel if it's running. Locate the "Excel.exe" file and take note of the full file path. Caution: you need the full path, including the program name and ".exe" extension. Some examples are C:\Program Files (x86)\Microsoft Office\OFFICE14\Excel.exe and C:\Program Files\Microsoft Office\OFFICE11\Excel.exe 2. To open the Registry Editor, click the Windows Start button, then Run. Type REGEDIT and click the OK button. 3. When the Registry Editor window appears, navigate to in the left-hand pane, or if you have 64-bit Windows then navigate to In the right-hand pane, you will see two string values called Main Directory and System Directory, and possibly some additional values. 4. If Excel Path appears in the right-hand pane, double-click it and edit the path to match the path you noted in step 1. 5. If Excel Path does not appear in the right-hand pane, right-click in the right-hand pane and select New » String Value. Name the new string value Excel Path, with a space between the two words. 
Double-click the name Excel Path and edit in the path that you saved in Step 1. 6. Test your edit by launching the Palisade software when no version of Excel is running. If the correct Excel does not come up, edit the value of the Excel Path string. If the correct Excel comes up, close the Registry Editor by clicking File » Exit. Reminder: This Registry setting is used only when you launch our software while Excel is not running. If Excel is already running and you click a shortcut or icon for our software, it will attach itself to the running copy of Excel, regardless of version. Last edited: 2017-12-21 1.9. Which Version of Excel Am I Running? Tech Support wants to know which version of Excel I'm running, and whether it's 32-bit or 64-bit Excel. How do I find that? There are different menu selections for this in the different versions of Excel, but you can also get clues from the appearance of Excel. Just follow along with this questionnaire. 1. Do you see old-style menus, as opposed to the new ribbon? Then you are running Excel 2003 or earlier, and it is 32-bit Excel. (@RISK 7 does not run in Excel 2003. @RISK 6 does, but not in earlier Excels. See the full compatibility matrix.) 2. Is there a round "Office button" at the top left of the Excel window, as opposed to the word File? Then you are running 32-bit Excel 2007. 3. Does the word FILE, in all capitals, appear at the top left of the Excel window? Then you are running Excel 2013. To find whether it is 32-bit or 64-bit Excel, click FILE » Account » About Excel, and look at the top line of the "About Microsoft Excel" box that opens. 4. Otherwise, click File in the ribbon, and look at the selections that appear under File. □ If you see Account under File, you are running Excel 2016. To find whether it is 32-bit or 64-bit Excel, click Account » About Excel, and look at the top line of the untitled box that opens. □ If you don't have Account under File, you are running Excel 2010. 
Click Help under File, and look at the first line under "About Microsoft Excel" to find whether you're running 32-bit or 64-bit Excel 2010. See also: Microsoft's documents What version of Office am I using? and Find details for other versions of Office. Last edited: 2016-04-26 1.10. Getting Better Performance from Excel Disponible en español: Conseguir un mayor rendimiento de Excel Disponível em português: Obtendo o melhor desempenho do Excel Applies to: @RISK and other Palisade add-ins to Excel How much of the calculation time in my model is actually spent by Excel? Can I make my Excel worksheets calculate more efficiently? The impact varies. @RISK (particularly RISK Optimizer), Evolver, PrecisionTree with linked trees, and TopRank are most affected by Excel calculation speed. • @RISK and TopRank recalculate all open workbooks ("Excel recalc") once per iteration. • PrecisionTree does a couple of Excel recalcs while analyzing the tree. With a linked tree, PrecisionTree also does an Excel recalc for each possible path through the tree (each end node). • Evolver does an Excel recalc once per trial. • StatTools does virtually all of its calculations outside of Excel, so tuning Excel will have little effect on the speed of its operations. • NeuralTools does virtually all of its calculations outside of Excel, so tuning Excel will have little effect on the speed of training or testing a network. Microsoft has a number of suggestions for how to get better performance out of your Excel model: • Excel 2010 Performance: Tips for Optimizing Performance Obstructions (accessed 2015-04-10) gives many specific techniques for getting better performance out of your Excel model. Most of these apply to other versions of Excel as well as Excel 2010. • Excel 2010 Performance: Improving Calculation Performance (accessed 2015-04-10): See especially "Iteration Settings" and the large sections "Making Workbooks Calculate Faster" and "Finding and Prioritizing Calculation Obstructions". 
• Excel 2010 Performance: Performance and Limit Improvements (accessed 2015-04-10) also applies to Excel 2007 explicitly and Excel 2013 implicitly. See also: Last edited: 2018-01-30 1.11. Opening a Second Instance of Excel While I'm using Palisade software, can I open a workbook and not have the Palisade software in the ribbon for that workbook? Yes, you can, and this lets you work in that second copy of Excel while @RISK (Evolver, NeuralTools, ...) is running its analysis in the first copy of Excel. In that second copy of Excel, don't run a Palisade product. (If you want to work on a workbook that contains @RISK functions, they will all appear in the cells as #NAME. However, you can edit the formulas in the formula bar, copy/paste formulas, and so on.) The terminology is important here—opening a second instance of Excel is not the same thing as opening a second workbook in Excel. If you open a second workbook, the existing copy of Excel opens it, so you have one copy of Excel running and there's one Excel line in Task Manager. You can have multiple workbooks open when running our software, but don't switch workbooks while a simulation or other analysis is running. By contrast, when you open a second instance, Windows loads a fresh second copy of Excel, and Task Manager shows two Excel lines. Our software will fail with "Object initialized twice" or another message if you try to open it in a second instance of Excel. Confusingly, Excel 2013 and newer look like second instances when you simply open a second workbook. They show multiple taskbar icons, usually stacked. The Windows actions to switch to a different program will switch between those workbooks, even though they're open in the same program. The only way to be sure is to look at Task Manager (Ctrl+Shift+Esc) to determine whether there's one line for Excel, or more than one. 
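The Task Manager check can also be scripted: the Windows tasklist command prints one line per process, so two EXCEL.EXE lines mean two separate instances. Below is a sketch that counts them from tasklist output; the sample output is hypothetical (invented process IDs and memory figures), and the function itself only parses text, so it is platform-independent:

```python
def count_excel_instances(tasklist_output: str) -> int:
    """Count EXCEL.EXE lines in `tasklist` output.

    One line means one instance of Excel; two or more means Excel is
    running as multiple separate processes.
    """
    return sum(
        1
        for line in tasklist_output.splitlines()
        if line.upper().startswith("EXCEL.EXE")
    )

# Hypothetical tasklist output showing two separate Excel instances:
sample = """\
Image Name                     PID Session Name        Session#    Mem Usage
========================= ======== ================ =========== ============
EXCEL.EXE                     4321 Console                    1    150,000 K
EXCEL.EXE                     8765 Console                    1     98,000 K
WINWORD.EXE                   1111 Console                    1     80,000 K
"""
print(count_excel_instances(sample))  # 2
```

On Windows, real output would come from something like subprocess.run(["tasklist"], capture_output=True, text=True).stdout; feeding that string to the function gives the instance count without opening Task Manager.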
Only these specific actions will open a second instance of Excel: • With Excel 2003 through 2010, launch a second instance in the usual Windows way. The Start menu always works. With Windows 7 and Windows 8, you can also press and hold the Shift key and click the Excel box in the taskbar. • With Excel 2013, 2016, 2019, and 365, press and hold the Alt key, right-click the Excel icon in the Windows taskbar and click the Excel icon above the Pin or Unpin option. You can release the mouse button right away, but continue holding down the Alt key until you get a prompt asking "Do you want to start a new instance of Excel?" Release the Alt key and click Yes. Important: Don't attempt to run any Palisade software in that second instance of Excel. I followed directions, but @RISK appeared in the second copy of Excel anyway. First, press Ctrl+Shift+Esc to open Task Manager, and verify that there are two Excel lines. If not, you probably let go of the Alt key too soon. Assuming there are two Excels, @RISK opened in the second instance because you have it set to launch whenever Excel launches. Remember, you can't have two copies of Excel both running Palisade software, even if they're different Palisade applications. If you want to use multiple instances of Excel, you must prevent that, as follows: 1. Close one copy of Excel. 2. In the other copy, click File » Options » Add-Ins. (In Excel 2007, click the round Office button, then Excel Options » Add-Ins.) 3. At the bottom of the right-hand panel, click the Go button next to Manage: Excel Add-Ins. 4. Remove the tick marks on all Palisade software. Last edited: 2018-11-28 1.12. Using Excel During Simulation or Optimization Applies to: @RISK for Excel 5.x–7.x RISKOptimizer 5.x (6.x and newer are part of @RISK) Evolver 5.x–7.x My simulation or optimization takes some time to run. During that time, I would like to work on another workbook. 
Is there any way I can use Excel for something else during a simulation or optimization? Yes, you can open a second instance of Excel and do anything in that instance, with one exception: Don't run any Palisade product in that second instance of Excel. (If you want to work on a workbook that contains @RISK functions, they will all appear in the cells as #NAME. However, you can edit the formulas in the formula bar, copy/paste formulas, and so on.) To open a second instance of your version of Excel, please see Opening a Second Instance of Excel. Last edited: 2017-11-28 1.13. Identical Settings for Multiple Computers Disponible en español: Ajustes Iguales para Computadores Múltiples Disponível em português: Utilizando as mesmas configurações para vários computadores Applies to: @RISK, Evolver, NeuralTools, PrecisionTree, and StatTools, releases 5.x–7.x RISKOptimizer 5.x (merged in @RISK starting with 6.0) I'm a site administrator, and I want to ensure that everyone has the same settings for @RISK or any of the applications in the DecisionTools Suite. Is there any way I can do this? Yes, this is easy to do. This article will give you the detailed procedure for @RISK, followed by the variations for the other applications. You can create a policy file, RiskSettings.rsf, in the RISK5, RISK6, or RISK7 folder under your Palisade installation folder. If this policy file is present when @RISK starts up, the program will silently import the Application Settings and users will not be able to change them. Application Settings actually include two types of settings: • Default Simulation Settings, such as number of iterations, whether distribution samples are collected, and whether multiple CPUs are enabled. These are applied automatically to any new model that the user creates. However, the user does have the ability to change the Simulation Settings and save the model with the changed settings. 
If the user opens an existing model, created by that user or by someone else, @RISK will use the Simulation Settings stored with that model, not the default Simulation Settings from the policy file. • Global options for @RISK itself, such as whether to show the welcome screen and whether to save simulation results in the workbook. These settings are "frozen" by the policy file: the user can't change them, and they're not affected by anything in a workbook. How to create a policy file for @RISK: 1. Run @RISK, and on the @RISK Utilities menu select Application Settings. 2. Make your changes to the settings that are displayed. (If a particular Simulation Setting is not shown here, then this version of @RISK does not allow setting a default.) 3. Click the Reset/File Utilities icon at the bottom of the dialog, and select Export to File. Use the suggested name of RiskSettings.rsf. 4. Move or copy the file to the RISK5, RISK6, or RISK7 folder under the user's Palisade installation folder. If you want to provide an optional settings file rather than a mandatory policy file, create the RiskSettings.rsf file as above but don't put it in the RISK5, RISK6, or RISK7 folder. Users can then import the settings by opening Application Settings, clicking the Reset/File Utilities icon, and selecting Import from File. Policy files for other applications: The procedure is the same; only the locations of the policy files change.
• RiskSettings.rsf in the RISK7, RISK6, or RISK5 folder
• EvolverSettings.rsf in the Evolver7, Evolver6, or Evolver5 folder
• NeuralToolsSettings.rsf in the NeuralTools7, NeuralTools6, or NeuralTools5 folder
• PTreeSettings.rsf in the PrecisionTree7, PrecisionTree6, or PrecisionTree5 folder
• RISKOptimizerSettings.rsf in the RISKOptimizer5 folder only
• StatToolsSettings.rsf in the StatTools7, StatTools6, or StatTools5 folder
(RISKOptimizer 6 and 7 settings are merged in RiskSettings.rsf. BigPicture and TopRank do not support policy files.) 
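Because the file names and folders follow a regular pattern, a site administrator scripting a deployment could compute each destination path. The sketch below assumes the pattern described above (policy file in the product-plus-version folder under the Palisade installation folder); the installation folder varies by system, so it is passed in rather than hard-coded, and RISKOptimizer 5's special case is omitted for brevity:

```python
import os

# Product -> (settings file name, folder-name prefix), per the list above.
POLICY_FILES = {
    "@RISK":         ("RiskSettings.rsf",        "RISK"),
    "Evolver":       ("EvolverSettings.rsf",     "Evolver"),
    "NeuralTools":   ("NeuralToolsSettings.rsf", "NeuralTools"),
    "PrecisionTree": ("PTreeSettings.rsf",       "PrecisionTree"),
    "StatTools":     ("StatToolsSettings.rsf",   "StatTools"),
}

def policy_file_path(install_dir: str, product: str, version: int) -> str:
    """Return where the mandatory policy file should be placed."""
    if product not in POLICY_FILES:
        raise ValueError(f"{product} does not support policy files")
    if version not in (5, 6, 7):
        raise ValueError("policy files apply to releases 5.x-7.x")
    file_name, folder_prefix = POLICY_FILES[product]
    return os.path.join(install_dir, f"{folder_prefix}{version}", file_name)

# Example (the installation folder shown is hypothetical):
print(policy_file_path(r"C:\Program Files (x86)\Palisade", "@RISK", 7))
```

A deployment script would then copy the exported .rsf file to that path on each machine; if the file is left out of the product folder, it stays an optional import rather than a mandatory policy, as described above.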
See also: Transferring Settings to Other Models or Other Computers Additional keywords: RSF file, .RSF file, pre-defined settings Last edited: 2015-06-08 1.14. Transferring Settings to Other Models or Other Computers Applies to: All products, releases 5.x–7.x After I adjust the Application Settings to my liking, how can I copy these settings into other models, potentially running on other PCs? If you are concerned only with the settings for one particular model, these are stored in the workbook and there's no need to do anything special to export them. If you want to export defaults to be applied to all new models, follow this procedure: 1. In the Utilities » Application Settings window, click on the small disk icon at the bottom of the dialog and choose Export to File. Make note of the file location and name that you choose. 2. In another session or on another computer, first load your model and then click Utilities » Application Settings. Click that same disk icon and choose Import from File to load the file that you saved earlier. You can also create a policy file with application settings that should be common to all users. Please see Identical Settings for Multiple Computers. Last edited: 2015-06-08 1.15. Running "Out of Process" Applies to: @RISK 7.5.2 in 32-bit Excel 2013 or 2016 @RISK 7.6.x in 32-bit Excel 2013, 2016, or 2019 Evolver 7.5.2 in 32-bit Excel 2013 or 2016 Evolver 7.6.x in 32-bit Excel 2013, 2016, or 2019 NeuralTools 7.5.2 in 32-bit Excel 2013 or 2016 NeuralTools 7.6.x in 32-bit Excel 2013, 2016, or 2019 PrecisionTree 7.5.2 in 32-bit Excel 2013 or 2016 PrecisionTree 7.6.x in 32-bit Excel 2013, 2016, or 2019 StatTools 7.5.2 in 32-bit Excel 2013 or 2016 StatTools 7.6.x in 32-bit Excel 2013, 2016, or 2019 Does not apply to: 64-bit Excel, or 32-bit Excel 2010 or 2007 What does it mean to run out of process or in process, and what difference does it make to me? 
@RISK and our other add-ins are 32-bit code, and each product has a bridge to 64-bit Excel. Those bridges are called RiskOutOfProcessServer7.exe, EvolverOutOfProcessServer7.exe, NeuralToolsOutOfProcessServer7.exe, PrecisionTreeOutOfProcessServer7.exe, and StatToolsOutOfProcessServer7.exe. We say that @RISK (Evolver, NeuralTools, ...) is running out of process, meaning that it doesn't run directly as part of 64-bit Excel's process, but instead routes communications with Excel through the bridge. When you're interfacing 32-bit code with 64-bit code, that's normal. However, in 7.5.1 and earlier releases, even with 32-bit Excel 2013 and 2016, @RISK and the other tools ran out of process. With 7.5.2, that changed: when running with 32-bit Excel, the tools listed above now run in process by default. This removes a layer of code and should provide better performance and greater stability, since there's no longer a separate "out of process server" layer. If your simulations involve Microsoft Project, you'll notice a very significant speedup from running @RISK 7.6 in process. But there are many builds of Excel 2016 out there, not to mention future Excel updates, and it's not possible to test with all of them. It's possible, though not likely, that your particular build or configuration might experience a problem with running in process, such as flashing windows or windows not appearing at all, or other issues identified by Palisade Technical Support. If this happens, you can set Palisade software to run out of process and avoid the problems. How do I set the software to run out of process? These settings are recorded in the current user's profile. If people log in to this computer under different Windows usernames, the others will continue running in process unless they also follow one of these two methods. Method A: If you can launch the software, and get into Utilities » Application Settings, it's easy. 
In the Advanced section at the end of Application Settings, change Operating Mode to out-of-process, and click OK. TIP: If you have the DecisionTools Suite, and some tools are working, change this setting in one of the working tools, and it will offer to change it in the others, working and non-working. Remember to close Excel before re-testing the tool that has problems. Method B: If you can't launch the software, or the Application Settings dialog won't come up, you can use the attached OutOfProcess Registry file. 1. Close all open instances of Excel and Project. 2. Download the attached OutOfProcess file. 3. Change the extension from TXT to REG. (If you can't see the .TXT extension, see Making File Extensions Visible.) 4. Double-click the REG file. The REG file will set Registry keys for the five products listed above. If you don't have the DecisionTools Suite but only one or more individual products, the extra keys will do no harm. What if I want to go back to running in process? In Utilities » Application Settings » Advanced, set Operating Mode to in-process, or use the attached InProcess file. Are there any known issues when running in process? Here's what we've identified so far: Last edited: 2018-11-28 1.16. Removing Outdated References to Office from the System Registry Disponible en español: Quitar referencias obsoletas de Office del Registro del Sistema Disponível em português: Removendo referências ultrapassadas para o Office a partir do Editor de Registro Removing a version of Microsoft Office can sometimes leave behind "orphan" keys in the System Registry. 
These references to products that are no longer installed can prevent Palisade add-ins from working correctly with Excel, Project, or both — you may see messages such as "Application-defined or object-defined error", "Automation error: Library not registered", "Error in loading DLL", "Could not contact the Microsoft Excel application", "File name or class name not found during Automation operation", or "Object variable or with block variable not set". Results graphs or other graphs may not appear as expected. To remove the outdated references, you will need to edit the System Registry, as detailed below. If you'd rather not edit the System Registry, or you don't have sufficient privilege, you may be able to work around the problem by starting Excel first and then the Palisade software. If you'd like to make Palisade software start automatically whenever Excel starts, please see Opening Palisade Software Automatically Whenever Excel Opens. Otherwise, please proceed as follows: 1. Close Excel and Project. 2. Click Start » Run, type REGEDIT and click OK. {00020813-0000-0000-C000-000000000046} Key for Excel 3. Click on Computer at the top of the left-hand panel, then press Ctrl+F to bring up the search window. Paste this string, including the curly braces {...}, into the search window: {00020813-0000-0000-C000-000000000046} Check (tick) the Keys box and Match whole string only; clear Values and Data. 4. Click the + sign at the left of {00020813-0000-0000-C000-000000000046} to expand it. You will see one or more subkeys: □ 1.5 for Excel 2003. □ 1.6 for Excel 2007. □ 1.7 for Excel 2010. □ 1.8 for Excel 2013. □ 1.9 for Excel 2016. Identify the one(s) that do not match the version(s) of Excel you actually have installed. If all of them do match installed Excel versions, omit steps 5 and 6. 5. You are about to delete the key(s) that correspond to versions of Microsoft Excel that you do not have. For safety's sake, you may want to back them up first. 
Right-click on {00020813-0000-0000-C000-000000000046}, select Export, and save the file where you'll be able to find it. 6. Right-click the 1.something key that does not belong, select Delete, and confirm the deletion. Repeat for each 1.something key that does not belong. 7. The {00020813-0000-0000-C000-000000000046} key can occur in more places. Usually they all have the same subkeys, but not always, so you need to examine each instance. Tap the F3 key to get to each of the others in turn. For each one, repeat steps 4 through 6 (click the + sign, export the key to a new file, and delete the orphaned 1.something entries). {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52} Key for Office 8. Click on Computer at the top of the left-hand panel, then press Ctrl+F to bring up the search window. Paste this string, including the curly braces {...}, into the search window: {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52} Check (tick) the Keys box and Match whole string only; clear Values and Data. 9. Click the + sign to expand the key. You will see one or more subkeys: □ 2.3 for Office 2003. □ 2.4 for Office 2007. □ 2.5 for Office 2010. □ 2.6 and 2.7 for Office 2013. (2.6 and 2.7 are okay for Office 2016 as well, if there is a reference to Office16 under 2.7.) □ 2.8 for Office 2016. Identify the one(s) that do not match the version(s) of Office you actually have installed. If all of them do match installed Office versions, omit steps 10 and 11. 10. You are about to delete the key(s) that correspond to versions of Microsoft Office that you do not have. For safety's sake, you may want to back them up first. Right-click on {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52}, select Export, and save the file where you'll be able to find it. (Choose a different name for this file, such as Key2.) 11. Right-click the 2.something key that does not belong, select Delete, and confirm the deletion. Repeat for each 2.something key that does not belong. 12. The {2DF8D04C-5BFA-101B-BDE5-00AA0044DE52} key can occur in more places. 
Usually they all have the same subkeys, but not always, so you need to examine each instance. Tap the F3 key to get to each of the others in turn. For each one, repeat steps 9 through 11 (click the + sign, export the key to a new file, and delete the orphaned 2.something entries). 13. Close the Registry Editor. If you run @RISK with Microsoft Project, please follow the additional steps in Removing Outdated References to Project from the System Registry to find and remove outdated references to Microsoft Project. The software should now run normally. After verifying @RISK (PrecisionTree, etc.), and running Excel independently of our software, you can delete the saved .REG files. Last edited: 2016-04-01 1.17. Removing Outdated References to Project from the System Registry Removing a version of Microsoft Project can sometimes leave behind "orphan" keys in the System Registry. These references to products that are no longer installed can prevent Palisade add-ins from working correctly with Project — you may see messages such as "Application-defined or object-defined error", "Automation error: Library not registered", "Error in loading DLL", "Could not contact the Microsoft Excel application", "File name or class name not found during Automation operation", or "Object variable or with block variable not set". Results graphs or other graphs may not appear as expected. To search for and remove outdated COM Type Library registrations relating to Microsoft Project, please follow this procedure, which requires administrative rights: 1. Close Excel and Project. 2. Click Start » Run, enter the command REGEDIT and click OK. 3. Click on Computer at the top of the left-hand panel, then press Ctrl+F to bring up the search window. Paste this string, including the curly braces {...}, into the search window: {A7107640-94DF-1068-855E-00DD01075445}. Check (tick) the Keys box and Match whole string only; clear Values and Data. If the key does not exist, Microsoft Project may be able to re-create it. Close Registry Editor and open Project.
There's no need to open an .MPP file. Close Project, then reopen Registry Editor and search for the key. If the key still does not exist, see Microsoft Project Installation below. 4. If the key does exist, click the + sign at the left to expand it. You will see one or more subkeys: □ 4.5 for Project 2003 □ 4.6 for Project 2007 □ 4.7 for Project 2010 □ 4.8 for Project 2013 □ 4.9 for Project 2016 Identify the one(s) that do not match the version(s) of Microsoft Project you actually have installed. If all of them do match installed Project versions, omit steps 5 and 6. 5. You are about to delete the key(s) that correspond to versions of Microsoft Project you do not have. For safety's sake, you may want to back them up first. Right-click on {A7107640-94DF-1068-855E-00DD01075445}, select Export, and save the file where you'll be able to find it. 6. Right-click the 4.something key that does not belong, select Delete, and confirm the deletion. Repeat for each 4.something key that does not belong. 7. The {A7107640-94DF-1068-855E-00DD01075445} key can occur in more places. Usually they all have the same subkeys, but not always, so you need to examine each instance. Tap the F3 key to get to each of the others in turn. For each one, repeat steps 4 through 6 (click the + sign, export the key to a new file, and delete the orphaned 4.something entries). 8. Close Registry Editor. Launch @RISK, and you should now be able to import .MPP files. After verifying @RISK, and running Project independently of @RISK, you can delete the saved .REG file. Microsoft Project Installation If the {A7107640-94DF-1068-855E-00DD01075445} key does not exist, and Microsoft Project did not re-create it, you may have a problem with your installation of Project. 9. In Project, make sure you have the latest service pack installed. If not, download it and install it. 10. In Control Panel » Programs and Features (or Add or Remove Programs), do a Repair of Microsoft Project. 
(If Project, Microsoft Project, or Microsoft Office Project does not appear on a separate line, it is part of the Microsoft Office line and you should repair that.) 11. If all else fails, uninstall and reinstall Project, or uninstall and reinstall Office if Project is part of the Office install. (We had one customer with this issue, and none of the above worked for him, but the uninstall and reinstall solved the problem.) Excel's COM Registrations If the above don't solve the problem, the culprit could be Excel's COM registrations rather than Project's. Please see Removing Outdated References to Office from the System Registry to check for incorrect Excel COM registrations and remove them. The procedure is similar to the procedure in the first section of this article, but the keys are different. Last edited: 2016-04-01 1.18. Programming Languages and History of @RISK Applies to: All products, releases 6.x–7.x I've always wondered -- what language are your products written in? What programming language do you use? When was @RISK first released? Current versions use a mix of C++, Visual Basic 6, Visual Basic for Applications, Visual Basic .NET, and C#. The first version of @RISK for Lotus 1-2-3 was released at the beginning of October 1987. Its predecessor product, PRISM for DOS, was released in April of 1984. Earlier versions of PRISM for Apple II were in use starting in 1982, before Palisade was organized as a company in 1984. @RISK for Excel first appeared some time before July 1993—the earliest user manual in our archives has that date for @RISK 1.1.1—and @RISK for Project in 1994. In July 2012, @RISK 6.0 integrated support for Excel and Project. For more Palisade history, please see About Palisade. Last edited: 2018-06-28 1.19. "Update Available" (7.x) Disponible en español: "Actualización Disponible" Applies to: All products, releases 7.x If you have a 6.x release, even if the update message references a 7.x release, see "An update is available." (6.x). 
When I launch my Palisade software, I get a popup telling me a product update is available. I'd like to update, but I have to wait for my IT department to install it; or maybe I just prefer not to update at this time. Can I suppress the popup? If you just click "Don't update", the reminder will appear again the next time you run the software. But you can "snooze" it for about a month by clicking "Remind me in 30 days". To disable the reminder for yourself alone, download the attached DisablePalisadeUpdateAutoCheck(user).reg and double-click it. To check for updates once, click Help » Check for Software Updates. To re-enable the automatic check, download EnablePalisadeUpdateAutoCheck(user).reg and double-click it. To disable the reminder for all users on a given computer, follow this procedure: 1. Close Excel and Project. 2. Click Start » Run, type regedit and press the Enter key. 3. To suppress updates for all users on this machine, navigate to HKEY_LOCAL_MACHINE\Software\WOW6432Node (if it exists) or HKEY_LOCAL_MACHINE\Software. 4. Expand that key, and under it click Palisade. 5. In the right-hand panel, you should see a value called CheckForUpdatesDisabled — if it's not there, right-click an empty area and select New » String Value to create it. 6. Double-click CheckForUpdatesDisabled and enter the value True to disable update notices. Users can still check for updates when they wish, by running the software and clicking Help » Check for Software Updates. To re-enable the automatic check for updates, change the value to False or simply delete CheckForUpdatesDisabled. Additional keywords: Upgrade available, Upgrade prompt, Update prompt, Upgrade message, Update message Last edited: 2020-07-28 1.20. "An update is available." 
(6.x) Disponible en español: "Hay una actualización disponible" (6.x) Disponível em português: "Há uma atualização disponível" (6.x) Applies to: All products, releases 6.2.0–6.3.1 When you launch your Palisade software, you get a message similar to An update is available. A newer @RISK (version 6.3.0) was released on 2014-06-30. Your maintenance contract entitles you to this update free of charge. You'd like to update, but you have to wait for your IT department to install it. Or perhaps you prefer not to update at this time. Can you suppress the message? If you just click "Don't update", the reminder will appear again the next time you run the software. But you can "snooze" it for about a month by clicking "Remind me in 30 days". If you want to suppress the reminder for a longer time, you can follow this procedure: 1. Click the "Remind me in 30 days" button. (This is an easy way to create the necessary key in the System Registry. But if your Palisade software isn't currently running, you can create the key in step 5 below.) 2. Close Excel and Project. 3. Click Start » Run, type regedit and press the Enter key. 4. Navigate to HKEY_CURRENT_USER\Software\Palisade. 5. In the right-hand panel, you should see a value called SuppressProductUpdateMessages — if it's not there, right-click an empty area and select New » DWORD. 6. Double-click SuppressProductUpdateMessages and enter a Julian day, such as 41934 for 2014-10-22 or 47848 for the last day of the year 2030. (Easy way to find a Julian day: type a date in Excel and then format it as a number.) The update notice will not appear again until the selected day. Last edited: 2015-06-17 1.21. Which License Gets Used? Disponible en español: ¿Qué licencia esta siendo utilizada? (6.x/7.x) Disponível em português: Múltiplas Licenças - Qual será utilizada?
(6.x/7.x) Applies to: All 6.x/7.x releases, standalone and Concurrent Network client I have more than one license, possibly even mixed between network client, activated standalone, and trial. How does the software decide which license to use? In general, each application—@RISK, Evolver, NeuralTools, PrecisionTree, StatTools, TopRank—remembers which license you used last, and tries to reuse it the next time you run that same application. That seems like a simple idea, but in particular situations the rule can work out in surprising ways. This article explains how a Palisade application decides which license to use each time. The key concept is the "license to use". Each application remembers the license to use, and tries to use the same license that it used last time. If you go into License Manager » Select License and pick a different license, the application remembers the new license to use for you only, not for anyone else who might log in to the same computer. Each application remembers this separately, so different components of the DecisionTools Suite can use different licenses. But what about the first time I run @RISK? There's no "license to use" in my user profile, because I've never run the application, so how does @RISK know which license to use? Each application actually has two "license to use" settings, one at the machine level and one at the user level. The application uses whichever one was set more recently. The machine-level license to use is set at install time, and the user-level license to use is set at run time. Details: • The installer for a standalone license presents a Customer Information screen, and sets the machine-level license to use depending on your selection. If you select "I am upgrading" on the Customer Information screen, the installer doesn't change the license to use. 
• The installer for a Concurrent Network client sets the machine-level license to use to "Network:", which tells the client software to get a license dynamically from servers listed in the System Registry key HKEY_LOCAL_MACHINE\Software\WOW6432Node\FLEXlm License Manager\PALISADE_LICENSE_FILE. (Omit WOW6432Node in 32-bit Windows.) • The installer for a course license or a textbook license sets the machine-level license to use to that license. • License Manager in each application sets the user-level license to use, when you click OK in Select License or when you activate a license. If you select or activate a DecisionTools Suite license, License Manager sets the user-level license to use for all products in the Suite. • When you deactivate a license in License Manager, the license to use is not changed, so the next time you run the application License Manager will appear and prompt you to select or activate a license. Technical detail: Licenses activated during install are remembered in HKEY_LOCAL_MACHINE in the System Registry, but licenses activated or selected in License Manager are remembered only in HKEY_CURRENT_USER. If I have a DecisionTools Suite license and an @RISK license, or @RISK Industrial and @RISK Professional licenses, what determines which one is used the first time I run @RISK? (@RISK is just an example here. All the same rules apply to Evolver, NeuralTools, PrecisionTree, and StatTools. TopRank requires a DecisionTools Suite license.) The first time you run @RISK, when no user-level license to use has been set, @RISK looks for an available license depending on which installer was run most recently. If the latest or only install was the DecisionTools Suite, then @RISK will try, in order, to use a DecisionTools Industrial license, DecisionTools Professional, @RISK Industrial, @RISK Professional, and @RISK Standard. If the latest or only install was @RISK, then @RISK will try first for an @RISK license and then for a DecisionTools Suite license.
Whichever one it finds, it records that as the user-level license to use and will use the same one next time you run @RISK. When you run a Concurrent Network client version of @RISK for the first time, it goes through the same process if it was set up to use just one license server. If it was set up with multiple license servers, then it looks on all available servers for each type of license, before moving on to look on all available servers for the next type of license. Whichever one it finds, it records the license type but not the specific server in the user-level license to use. Then @RISK can use a DecisionTools Suite license? @RISK can use a DecisionTools Suite license, even if only @RISK is installed and not the whole Suite. If the Suite is installed, and you have an @RISK license, you can use @RISK on that license but the other components of the Suite can't use the @RISK license. This gives you a lot of flexibility. For example, you might have an activated license of @RISK but decide to install the DecisionTools Suite as a trial, or on a short-term training license. Via License Manager » Select License, you can use your activated @RISK license but run the other components of the Suite on your trial or training license. Concurrent Network "seats" for the DecisionTools Suite are not divisible. If you have a one-user Concurrent Network license, two people can't use the Suite at the same time, whether they're using the same component or different components. If you have a two-user Concurrent Network license, two people can use the Suite at the same time, but they are taking both "seats" between them, whether they're using the same component or different components. What if the license I was using becomes unavailable—it expires, or it's a Concurrent Network license and all seats happen to be taken? Is there automatic failover if another license is available? 
In a Concurrent Network client, the software will automatically fail over to any unexpired license for the same product and edition, if one exists on any server listed in PALISADE_LICENSE_FILE (above), but it won't automatically use a license for a different product or edition on any server. In the latter case, the user can still click Select License in License Manager to see if any suitable licenses are available. For other license types, there's no automatic failover. The software will tell you that the license is no longer usable. In License Manager, you can then click Select License and select the other license. The application will remember that choice next time. Can you give some examples? 1. You have DecisionTools Suite Professional (activated) and you install @RISK Industrial (trial). Whenever you run Evolver, NeuralTools, PrecisionTree, StatTools, or TopRank, it will continue to use the DecisionTools Suite Professional license. The next time you run @RISK, you will get the Industrial trial license, but you can switch to the activated Professional license by clicking Help » License Manager » Select License. 2. You have @RISK (activated), and you install a DecisionTools Suite Industrial trial. Whenever you run any application in the Suite, including @RISK, it will use the DecisionTools Suite Industrial trial license. If you are preparing a presentation and want to avoid the Trial watermarks in your graphs, you can switch @RISK to the activated license by clicking Help » License Manager » Select License. After that, @RISK will continue to use the activated license, but the other components will use the trial license. 3. You install @RISK without activating it, so all user profiles are running on a trial license. Later, you activate the software. @RISK remembers to use the activated license for you, but it still remembers the trial for another user who previously ran on the trial license.
That user can select the activated license via Select License in License Manager. See One User Doesn't Get the Activated License. 4. A Concurrent Network client of the DecisionTools Suite was installed on your computer, and you launch @RISK. Your company's license server has an @RISK Industrial license and a DecisionTools Suite Professional license. Since the last product installed was the Suite, @RISK uses the DecisionTools Suite Professional license. However, you can open License Manager » Select License and select the @RISK Industrial license, and @RISK will remember your selection the next time. 5. Your company has two license servers, A and B, and the Evolver install on your computer is set up to use both of them in that order. A has a Concurrent Network license for the DecisionTools Suite Professional, and B has Evolver Industrial. The first time you run Evolver, even though server A is listed first, Evolver will use the Evolver Industrial license from server B, because the software tries to choose a license that matches exactly what was installed. 6. You are a university IT administrator, and you install standalone copies of the DecisionTools Suite in your computer lab, with the course license. A year later, you place the next year's license on all the lab computers. Each student who tries to run the software gets a message that no license is available, and must use Select License to select the new license. (This happens because the license to use is separately stored for each user. To override the old setting for all users, you must reinstall the software with the new license, or use the System Registry edit shown in Changing Standalone Workstation to Concurrent Client.) Last edited: 2015-07-30 1.22. Automating Palisade Software Applies to: Palisade Custom Runtime (PCR) Palisade's Excel add-ins, releases 5.x–7.x How can I control @RISK or my other products through Visual Basic for Applications (VBA)?
The Excel Developer Kits (XDKs) ship with the Professional and Industrial Editions of our products. These are Visual Basic libraries that let you control @RISK and our other applications. They let you exercise maximum control with minimum development time, but your user must purchase and install the Palisade application. For an introduction to using an XDK, run the product and click Help » Developer Kit (XDK) » Automation Guide. For a complete reference to all objects, properties, and methods, see the XDK Reference in the same menu. Can I include the calculations in my own applications with my own user interface instead of Palisade's? For this requirement, we offer the Palisade Custom Runtime (PCR). The PCR contains the calculation engines of most of our Excel add-ins. This means that PCR applications can run on computers that don't have @RISK or our other add-ins, or even Excel. Applications using the PCR are developed as part of a customized development agreement with Palisade. Please visit our Custom Development page or consult your Palisade sales manager or sales@palisade.com for more information about the PCR and custom development. Additional keywords: Automation of Palisade software Last edited: 2017-06-19 1.23. Example Files from Palisade-Published Books Applies to these titles:
Decision Making under Uncertainty with RISKOptimizer
Decisions Involving Uncertainty: @RISK for the Petroleum Industry
Energy Risk Modeling
Evolver Solutions for Business
Financial Models Using Simulation and Optimization, v1 or v2
Learning Statistics with StatTools
Modelos Financieros com Simulación y Optimización
RISKOptimizer for Business Applications
El Riesgo en la Empresa: Medida y control mediante @RISK
@RISK Bank Credit and Financial Analysis
I bought a book from Palisade. Where can I download the examples used in the book? Please follow this link to download the examples from any of the listed books. Last edited: 2019-03-05 1.24.
Heartbleed Bug and Palisade Software Applies to: All products I've been hearing about this Heartbleed security problem in OpenSSL code. Are @RISK and the other Palisade products vulnerable? No. The only Internet operations are Automatic Activation (if you select it) and checking for updates, both of which connect with our server. Our server does not use OpenSSL to support these operations. We have checked with Flexera Software, which provides our licensing software, and they have verified that the modules that we use are clean. See also: Heartbleed is listed as CVE-2014-0160 by the U.S. Department of Homeland Security. Last edited: 2014-05-07 1.25. Troubleshooter for Releases 7.x: PalDiagnostics7 Applies to: All releases 7.x BigPicture releases 1.x and 2016 If you have earlier software, these diagnostics won't work. Use Troubleshooter for Releases 6.x: PalDiagnostics6 or Troubleshooter for Releases 5.x: PalDiagnostics5. Exception: Enhanced testing for system DEP status and process DEP status, introduced on 2017-01-09, is available only in PalDiagnostics7. Those tests will work even if you have release 5.x or 6.x Palisade software, though many other tests will fail. This utility will take a snapshot of the license and other settings on your computer for Palisade software release 7.x and for Excel, to help us figure out just what's wrong and how to fix it. Nothing is installed or changed on your computer. This utility simply copies the relevant settings and information to a file called PalDiagnostics7.txt in your temporary folder. 1. Download the attached file to your desktop. (In your browser, select Save, not Open or Run.) 2. If you have Windows 7, 8, 10, or Vista, right-click on the file and select Run As Administrator. If you have an earlier version of Windows, double-click on the file to launch the utility.
(Either way, it's important to run the diagnostics within the end user's login, because some settings vary from one user profile to another. Ask an IT person to help you if you are the end user and unable to run the utility.) 3. Click the button Run Tests. (Enable Runtime Logging is not needed.) 4. When the utility finishes, you'll see a window on your desktop that contains a file called PalDiagnostics7.txt. Click File » Save As, and save the file to your desktop or any convenient location. 5. Attach the saved PalDiagnostics7.txt file to your email to Tech Support; don't paste the contents of the file into the body of your email. Please note: The diagnostics utility is meant for your use in conjunction with Palisade Technical Support. While you're certainly free to look at the output, it's not presented in a user-friendly format. Last edited: 2018-07-26 1.26. Troubleshooter for Releases 6.x: PalDiagnostics6 Applies to: All products, releases 6.x If you have earlier or later software, these diagnostics won't work. Use Troubleshooter for Releases 7.x: PalDiagnostics7 or Troubleshooter for Releases 5.x: PalDiagnostics5. Please note: The diagnostics utility is meant for your use in conjunction with Palisade Technical Support. While you're certainly free to look at the output, it's not presented in a user-friendly format. This utility will take a snapshot of the license and other settings on your computer for Palisade software release 6.x and for Excel, to help us figure out just what's wrong and how to fix it. Nothing is installed or changed on your computer. This utility simply copies the relevant settings and information to a file called PalDiagnostics6.txt in your temporary folder. 1. Click this link. 2. In your browser, select Save, not Open or Run, and save the file to your desktop. 3. If you have Windows 7, 8, 10, or Vista, right-click on the file and select Run As Administrator. If you have an earlier version of Windows, double-click on the file to launch the utility.
(Either way, it's important to run the diagnostics within the end user's login, because some settings vary from one user profile to another. Ask an IT person to help you if you are the end user and unable to run the utility.) 4. Click the button Run Tests. (Enable Runtime Logging is not needed.) 5. When the utility finishes, you'll see a window on your desktop that contains a file called PalDiagnostics6.txt. Click File » Save As, and save the file to your desktop or any convenient location. 6. Attach the saved PalDiagnostics6.txt file to your reply email; don't paste the contents of the file into the body of your email. Last edited: 2018-05-21 1.27. Troubleshooter for Releases 5.x: PalDiagnostics5 Applies to: All products, releases 5.x If you have later software, these diagnostics won't work. Use Troubleshooter for Releases 7.x: PalDiagnostics7 or Troubleshooter for Releases 6.x: PalDiagnostics6. Please note: The diagnostics utility is meant for your use in conjunction with Palisade Technical Support. While you're certainly free to look at the output, it's not presented in a user-friendly format. This utility will take a snapshot of the license and other settings on your computer for Palisade software release 5.x and for Excel, to help us figure out just what's wrong and how to fix it. Nothing is installed or changed on your computer. This utility simply copies the relevant settings and information to a file called PalDiagnostics5.txt in your temporary folder. 1. Click this link. 2. In your browser, select Save, not Open or Run, and save the file to your desktop. 3. If you have Windows 7, 8, 10, or Vista, right-click on the file and select Run As Administrator. If you have an earlier version of Windows, double-click on the file to launch the utility. (Either way, it's important to run the diagnostics within the end user's login, because some settings vary from one user profile to another. Ask an IT person to help you if you are the end user and unable to run the utility.)
4. Click the button Run Tests. 5. When the utility finishes, you'll see a window on your desktop that contains a file called PalDiagnostics5.txt. Click File » Save As, and save the file to your desktop or any convenient location. 6. Attach the saved PalDiagnostics5.txt file to your reply email; don't paste the contents of the file into the body of your email. Last edited: 2016-06-01 1.28. Re-Register All Libraries Applies to: All products, releases 7.5.x/7.6.x This file is intended for use under the guidance of Palisade Tech Support. We have seen one or two cases where the installer ran without error, but the interfaces for our software were not registered. We don't know what prevented the registrations from being made during install or broke them after install, but we have a batch file that should re-register everything without running the installer. It also contains extra error checking aimed specifically at this issue. However, it doesn't diagnose missing files; it assumes those are part of products you haven't installed. If Palisade Technical Support representatives direct you to this article, please follow the directions and report the results back to them. 1. Save the attached file to any convenient folder. 2. Open the folder where the file was saved. Press and hold the Shift key, right-click the saved KB1676 file, and select Copy as path. Release the Shift key. 3. Open an administrative command prompt—see this article if you're not sure how to do it. (You must open an administrative command prompt. It's not enough to right-click the saved file and select Run as Administrator.) 4. Click into the command prompt window, right-click and select Paste. Press the Enter key. If the registrations all run successfully, you'll get "Success!" in the window. In that case, retry the operation that was a problem before. If you get "FAILURE" in the window, make a screen shot of the command window and of any popup window and send them to the Palisade representative.
Last edited: 2018-10-09 2. @RISK: General Questions 2.1. Getting Started with @RISK I've just installed @RISK. How do I learn to use it? Where do I start? Welcome to @RISK! If you can represent your problem as a base case in Excel, you can add @RISK to that model to analyze and model uncertainty. We have videos for every stage of your learning: • Are you new to the concepts of risk analysis with Monte Carlo simulation? View our Introduction to Risk Analysis Using @RISK. • Understand risk analysis in general, but new to @RISK? Watch the Quick Start from beginning to end. It's not very long, but it will show you the steps in modeling with our software, and give you some best practices. • Need more on particular topics? Take our Guided Tour. When it begins to play, a menu down the left side lets you jump to whatever topic you want to know more about. @RISK also comes with numerous examples preinstalled. (In the @RISK menu, click Help » Example Spreadsheets.) These are generally small "toy" models to show you particular techniques or illustrate various applications or features of the software. You'll find answers to a lot of frequently asked questions under Techniques and Tips in our searchable Knowledge Base. And if you see a message you don't understand, chances are good you'll find it, with a solution, in our Troubleshooting section. Tech Support can also help you with messages that aren't clear, or with particular features of the software. More intensive training is available in on-demand webinars and live webinars. We also offer in-person regional training. Last edited: 2018-12-14 2.2. @RISK User Groups I'd like to connect with other @RISK users on line. Do you have any kind of user group? Yes, there is a LinkedIn group here: "Palisade Risk and Decision Analysis". There are also Palisade's own blogs for all products. For other venues like Facebook and Twitter, please visit our corporate directory and hover your mouse on SUBSCRIPTION near the top of the page. 
last edited: 2021-09-28 2.3. Iterations versus Simulations versus Trials Applies to: @RISK 6.x/7.x ("Trials" applies to @RISK Industrial) What's the difference between iterations and simulations in Simulation Settings? Which one should I set to which number? An iteration is a smaller unit within a simulation. At each iteration, @RISK draws a new set of random numbers for the @RISK distribution functions in your model, recalculates all open workbooks or projects, and stores the values of all designated outputs. At the end of a simulation, @RISK prepares any reports you have specified. For example, if you run 5000 iterations and 3 simulations, then at the end of the analysis you can look at three histograms for each @RISK output. Each histogram summarizes the 5000 values for the 5000 iterations of one of the three simulations. You can set the numbers of iterations and simulations in the @RISK ribbon, or on the General tab of Simulation Settings. For most analyses, you will want N iterations and 1 simulation. If you use the same set of assumptions for all simulations, you will usually get better results with one simulation of 15000 iterations than with three simulations of 5000 iterations. But setting simulations greater than 1 is useful in several situations, such as these examples: • Suppose one or more unknown quantities are under your control, such as several different prices you might charge or several different raw materials you might use. You would like to know what the different choices would do to your bottom line. In this case the different values of the unknown quantity(ies) would be in one or more RiskSimtable functions. See the topic "Sensitivity Simulation" in your @RISK manual or @RISK help. • In a similar way, if you have several assumptions or scenarios you can embed them in one or more RiskSimtable functions and run one simulation on each, all as part of one analysis. 
• To test the stability of your model, you might run several simulations with the same model and without RiskSimtable functions. If the simulation results are fairly close, you know that your model is stable; if they vary significantly, you know that your model is unstable or you are not running enough iterations.

For simulation settings to set the random number seed, see "Multiple @RISK Simulation Runs" in Random Number Generation, Seed Values, and Reproducibility.

I'm running an optimization with RISKOptimizer. How do trials relate to simulations or iterations? Why is the number of valid trials different from the number of trials?

RISKOptimizer places a set of values in the adjustable cells that you designated in the Model Definition, then runs a simulation. At the end of the simulation, RISKOptimizer looks at the result and decides whether enough progress has been made to declare the optimization finished. That is one trial. On the next trial, RISKOptimizer places a different set of values in the adjustable cells (using the results of earlier trials to decide which values) and then runs another simulation.

The difference between trials and valid trials depends on your hard constraints. A valid trial is one that meets all hard constraints. If a trial is not a valid trial, RISKOptimizer throws away the result of that simulation. If your proportion of valid trials to total trials is small, you may want to look at restructuring your model so that the optimization can make progress faster. For more, see For Faster Optimizations.

Additional keywords: Simtable
Last edited: 2018-06-11

2.4. Simulation versus Optimization
Applies to: @RISK 5.x and newer, Industrial Edition Evolver, all releases

What's the difference between simulation and optimization? Does a simulation just add stochasticity to an optimal value?

It kind of goes the other way, actually. Initially, you probably have a deterministic model in mind.
If you want to know what choices you should make in a deterministic setting, you use Evolver to do a deterministic optimization. But it's more common to take that deterministic model and replace some constants with probability distributions. This reflects your best estimates of the effects of chance — events you can't control. These probability distributions are inputs to @RISK. You also identify outputs of @RISK, Excel cells that are the results of your logic, and whose values you want to track in the simulation. Then, you run an @RISK simulation to determine the range and likelihood of outcomes, taking chance effects into account. This can be done in any edition of @RISK. See also: Risk Analysis has much more about deterministic and stochastic risk analysis. An optimization asks a higher-level question while still keeping the probabilistic elements: what about the things you can control? What choices can you make that improve your chances of a favorable outcome? You identify in your model the constants that represent choices you can make; these are called adjustable cells. You can place constraints on those cells, and additional constraints on the model if appropriate. Your model still keeps the probability distributions mentioned above for events that are outside your control. Now you run an optimization in the RISKOptimizer menu within @RISK Industrial Edition. RISKOptimizer starts with one possible set of choices — one set of adjustable cell values — and then runs a simulation to find out the probabilistic range of outcomes if you made those choices. It chooses another set of adjustable cells and runs a new simulation. The optimizer continues this process, making different sets of choices for the adjustable cells and running a full simulation on each set. 
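The trial loop just described can be sketched in a few lines of Python. This is a hypothetical toy model (a single "price" adjustable cell and a noisy demand curve), not RISKOptimizer's actual search algorithm, which chooses candidate values far more cleverly; the sketch only illustrates that each candidate choice gets its own full simulation, and the target statistic of that simulation is what gets compared:

```python
import random

rng = random.Random(1)  # fixed seed so the sketch is reproducible

def run_simulation(price, iterations=10_000):
    """One trial's inner loop: a full simulation for one fixed choice of the
    adjustable cell.  Toy model: demand falls with price, plus random noise."""
    profits = [price * max(0.0, 100 - 8 * price + rng.gauss(0, 5))
               for _ in range(iterations)]
    return sum(profits) / len(profits)   # the optimization target: mean profit

def optimize(candidate_prices):
    """Outer loop: one trial (a full simulation) per candidate choice."""
    return max(candidate_prices, key=run_simulation)

best_price = optimize([4, 5, 6, 7, 8])
print(best_price)  # 6: the price whose simulated mean profit is highest
```

In real use, hard constraints would be checked before accepting a trial, and the candidate values would be generated adaptively rather than from a fixed list.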
Some sets of choices have a better outcome than others, as measured by the target you specified for optimization; this guides @RISK in deciding which sets of choices to try because they're more likely to improve the outcome. Every set of choices gets a full simulation.

At the end of the optimization, you have a set of best values for your adjustable cells. These tell you the choices to make so as to maximize your chance of getting the most favorable outcome, based on your target. And the simulation with that set of adjustable cell values tells you the range and probabilities of your outcomes.

Last edited: 2016-03-14

2.5. Different Results with the Same Fixed Seed for Various Distributions
También disponible en Español: Resultados diferentes con la misma semilla fija para varias distribuciones

The random number sequences generated for each distribution in a model can differ between runs in these scenarios:
1. Simulate and compare random number sequences for the same model simulated with @RISK and a Custom Development API run with PCR or SDK.
2. Simulate and compare the random number sequences for the same model simulated simultaneously with two open workbooks. See also Different Results with Multiple Workbook Copies.

The difference can occur because distributions in the model are defined differently. One model could have more distributions, or the same number of distributions defined in a different order. Since @RISK is sampling a different number of distributions, or a list of distributions ordered differently, they are effectively distinct models, and the same results should not be expected. The same model will always produce the same results using the same fixed seed.

For example, suppose I am using a fixed seed that happens to generate the following numbers using RiskUniform(0,1): 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7. (Of course, a pattern like this is enormously unlikely; it's just chosen to make the example easy to follow.)
If I use that same fixed seed and run a simulation with seven iterations, I will always get those values precisely in that order. In other words:

Iteration   Value of RiskUniform(0,1)
    1            0.1
    2            0.2
    3            0.3
    4            0.4
    5            0.5
    6            0.6
    7            0.7

One difference is the number of distributions. So, if I add another RiskUniform(0,1) to the spreadsheet and run seven iterations, the results will be different. The same seed list is generated and used, but now two distributions are sampled from it. In other words:

Iteration   Value of first RiskUniform(0,1)   Value of second RiskUniform(0,1)
    1              0.1                               0.2
    2              0.3                               0.4
    3              0.5                               0.6
    4              0.7                               [next value in seeded list]

Another reason for the difference is that the distributions were defined in a different order. So if I add a RiskPoisson(5) and another RiskUniform(0,1), or first a RiskUniform and then a RiskPoisson, and run seven iterations, the results will be different. The same fixed seed is used, but now three distributions are sampled from it, and these are the numbers generated for the RiskPoisson: 2, 4, 7, 5, 1, 3, 6. In other words:

Iteration   Value of first RiskUniform(0,1)   Value of second RiskUniform(0,1)   Value of RiskPoisson(5)
    1              0.1                               0.2                               7.0
    2              0.4                               0.5                               3.0
    3              0.7                               [next values in seeded list]

Or, with a different order:

Iteration   Value of first RiskUniform(0,1)   Value of RiskPoisson(5)   Value of second RiskUniform(0,1)
    1              0.1                               4.0                       0.3
    2              0.4                               1.0                       0.6
    3              0.7                               [next values in seeded list]

The first model, with only one distribution, will always produce the same results for the same fixed seed. The other models, with two or three distributions, will always have the same random number sequence for the same fixed seed, but the samples will not be the same because they are assigned differently. Ultimately, all samples converge to the same desired distributions, that is, the same statistics, making their results correct and comparable.
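The bookkeeping above can be imitated with a short Python sketch. The generator and numbers here are purely illustrative (@RISK's own fixed-seed stream is different); the point is that one seeded stream is consumed in distribution order, so adding or reordering distributions shifts every subsequent sample:

```python
import random

def sample_model(distribution_names, iterations, seed=42):
    """Draw from one shared seeded stream in distribution order, the way a
    fixed seed without RiskSeed works.  Returns {name: [value per iteration]}."""
    stream = random.Random(seed)               # the single "fixed seed" stream
    samples = {name: [] for name in distribution_names}
    for _ in range(iterations):
        for name in distribution_names:        # definition order matters
            samples[name].append(stream.random())
    return samples

alone = sample_model(["U1"], 3)
paired = sample_model(["U1", "U2"], 3)
print(alone["U1"][0] == paired["U1"][0])  # True: both begin with the same draw
print(alone["U1"] == paired["U1"])        # False: later draws are shifted
```

Giving each distribution its own RiskSeed is the analogue of giving each name its own private random.Random instance, which is why that is the robust fix listed below.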
Also, consider that if Latin Hypercube sampling is in effect, we can no longer draw purely random samples, since we need to ensure that a random sample is drawn from each Latin Hypercube bin. For this reason, samples will not be identical after a few iterations and may differ in their last digits. Read more about this in Latin Hypercube Versus Monte Carlo Sampling.

If identical answers are critical, perhaps another approach is in order. These are some possible solutions:
1. Use Monte Carlo as your Sampling Type.
2. Supply the variable data directly. For instance, provide a list of numbers saying, "Here are the monthly interest rates for the next five years."
3. Add the RiskSeed property function to each distribution in the model. This way, each distribution will have its own unique sequence of random numbers, no matter the order in which the distributions are defined.

Last update: 2023-02-09

2.6. Placing Number of Iterations in the Worksheet
Applies to: @RISK for Excel 4.x–7.x

@RISK puts the number of iterations into reports and windows, but how can I get it into my worksheet? I need to use it in calculations.

With @RISK 6.x/7.x, use the formula =RiskSimulationInfo(4). With @RISK 5.7 and below, place =RiskCurrentIter( ) in any convenient cell, say for example AB345, and then the formula =RiskMax(AB345) in any other cell will give you the number of iterations upon completion of the simulation.

Additional keywords: SimulationInfo, CurrentIter
Last edited: 2015-06-08

2.7. How Many Iterations Do I Need?
Applies to: @RISK for Excel 5.x–7.x

How many iterations do I need to run in my simulation so that the estimate of the mean is calculated within a specific confidence interval?

The answer depends on whether you're using traditional Monte Carlo sampling or the default Latin Hypercube sampling.

Monte Carlo Sampling: (This part of this article is adapted from "How Many Trials Do We Need?" in the book Simulation Modeling Using @RISK by Wayne L. Winston [Duxbury, 2000].)
The attached example, ConfIntervalWidth2.xls, uses traditional Monte Carlo sampling. Let's suppose that we want to use simulation to estimate the mean of the output in cell B11 and be accurate within 5 units 95% of the time. The number of iterations needed to meet these requirements can be calculated using the following formula:

n = [ z[α/2] S / E ] ²

In this formula,
• n is the number of iterations needed.
• S is the estimated standard deviation of the output.
• E is the desired margin of error (in this case, 5 units). The width of the confidence interval is twice the margin of error.
• z[α/2] is the critical value of the normal distribution for α/2, the z value such that the area of the right-hand tail is α/2. It is the number that satisfies P(Z > z[α/2]) = α/2, where Z follows a normal distribution with mean 0 and standard deviation 1.

α/2 can be found by setting the desired confidence level equal to 100(1-α) and solving for α. For a 95% confidence level, as shown in the attached example, 95 = 100(1-α). Then α is 0.05 and α/2 is 0.025. To compute z[α/2] in Excel, use the NORMSINV function and enter =NORMSINV(1-α/2); NORMSINV takes a single probability argument. (The equivalent three-argument form is =NORMINV(1-α/2, 0, 1).) Cell E13 of the attached example shows a Z value of approximately 1.96 for a 95% confidence interval.

To obtain an estimate for the standard deviation of the output, the @RISK statistics function RiskStdDev was placed in cell B14 and a simulation was run with just 100 iterations. This gave us a standard deviation of approximately 53.5. If we plug the above information into our formula, we get

n = [ 1.96 × 53.5 / 5 ] ² ≈ 440

Thus, if you use Monte Carlo sampling, you should run at least 440 iterations to be 95% sure that your estimate of the mean of the output in cell B11 is accurate within ±5 units.

Latin Hypercube Sampling: The Latin Hypercube method produces sample means that are much closer together for the same number of iterations.
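The Monte Carlo arithmetic above is easy to check. This sketch uses Python's standard-library NormalDist in place of Excel's NORMSINV; 53.5 is the pilot-run estimate of the output's standard deviation from the example:

```python
import math
from statistics import NormalDist

def iterations_needed(stdev, margin, confidence=0.95):
    """Monte Carlo sample size: n = (z[alpha/2] * S / E)^2, rounded up."""
    alpha = 1 - confidence
    z = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value, ~1.96
    return math.ceil((z * stdev / margin) ** 2)

print(iterations_needed(53.5, 5))  # 440
```

Note that the result scales with the square of S/E: halving the acceptable margin of error quadruples the required number of iterations.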
With the Latin Hypercube method, a smaller number of iterations will be sufficient to produce means within the desired confidence interval, but there's no simple calculation to predict the necessary number. See Latin Hypercube Versus Monte Carlo Sampling, and the section "Confidence Interval with Latin Hypercube Sampling" in Confidence Intervals in @RISK. Rather than try to pre-compute the necessary number of iterations, you may find it simpler just to set your convergence criteria and let @RISK run until the desired level of confidence has been reached. In Simulation Settings, on the General tab, set the number of iterations to Automatic. Then on the Convergence tab, set your convergence criteria. Notice that the margin of error ("Convergence Tolerance") is set to a percentage of the statistic being estimated, not to a number of units. Last edited: 2015-06-08 2.8. SQL for @RISK Library on Another Computer Applies to: @RISK Professional and Industrial Editions, releases 6.2, 6.3, and 7.x Do I need SQL on my computer to read and write @RISK libraries on other computers? Yes, the @RISK Library on the local computer needs compatible SQL software to talk to the remote database. If you're not storing @RISK Library databases on your local computer, you could use SQL Native Client or SQL Server to access the remote databases. A computer that hosts an @RISK Library database needs SQL Server, and it must be running a Server version of Windows. Please see SQL Versions and Installation: SQL with @RISK 6.2 and later for more. Connecting to an existing database on a remote computer: 1. Make the appropriate selection to open the @RISK Library window: □ In Define Distribution, click the books icon, "Add Input to Library". □ After a simulation, in the ribbon click Library » Add Results to Library. □ If you want to access existing library entries rather than add new ones, in the ribbon click Library » Show @RISK Library. 2. 
Once in the @RISK Library window, click the books icon near the middle of the screen. The icon displays "Connect, Create, or Attach to Databases" if you hover your mouse over it.
3. Click Connect.
4. On the SQL Connection screen, select an authentication method. "Microsoft Authentication" means your standard Windows login will be used; this is correct for most users. If necessary, change to "SQL Server Authentication" and enter your user name and password.
5. @RISK will search your network for installed SQL servers; this can take some time. The list includes all computers with SQL server software installed, whether they actually have any databases or not.
6. When the list appears, click the name of the computer where you want to access an @RISK Library database. @RISK will then query that server for available databases. If no databases appear, either none exist on that server, you don't have privilege to access them, or you didn't specify the correct user name and password. Please see Library Can't Connect to Networked Database for troubleshooting.
7. Select the desired database and click Connect. The database will be added to your list of Current SQL Server Connections, and @RISK will remember the connection next time.

Creating a new database on a remote computer:
1. Make the appropriate selection to open the @RISK Library window:
□ In Define Distribution, click the books icon, "Add Input to Library".
□ After a simulation, in the ribbon click Library » Add Results to Library.
2. Once in the @RISK Library window, click the books icon near the middle of the screen. The icon displays "Connect, Create, or Attach to Databases" if you hover your mouse over it.
3. Click Create.
4. On the SQL Connection screen, select an authentication method. "Microsoft Authentication" means your standard Windows login will be used; this is correct for most users. If necessary, change to "SQL Server Authentication" and enter your user name and password.
5.
@RISK will search your network for installed SQL servers; this can take some time. 6. When the list appears, click the name of the computer where you want to create an @RISK Library database. 7. Type a database name and click Create. The database will be added to your list of Current SQL Server Connections, and @RISK will remember the connection next time. If the screen closes when you click Create, but the new database is not shown on the Current SQL Server Connections screen, it was not created. Either you don't have the necessary access rights on that computer, or you didn't enter the correct authentication information. Please see Library Can't Connect to Networked Database for troubleshooting. See also: "Library" in the Guided Tour of @RISK is a short video that shows you how to save distributions and results in a library and how to make use of them in your model. Last edited: 2015-03-26 2.9. SSAS and SSRS with @RISK Library Applies to: @RISK 5.x–7.x, Professional and Industrial Editions Since the @RISK Library is an SQL database, can I use SSAS (SQL Server Analysis Services) or SSRS (SQL Server Reporting Services) with it? The @RISK Library is an SQL database, and to use the @RISK Library you must have SQL Server installed. Since SSAS or SSRS can work with SQL databases, in principle they could work with the @RISK Library. However, it's hard to see what useful information they could obtain. The @RISK Library is really intended to be accessed only by @RISK, and not by external programs. Therefore, we have not prepared any documentation about the organization of the @RISK Library. External programs definitely should not alter the @RISK Library databases in any way. Technical Support is unable to assist with setting up or debugging SQL database queries, but we have a Custom Development department that can assist you if there is some reason why you need read-only access to the @RISK Library outside of @RISK. 
Please contact your Palisade sales manager if that is of interest to you. Last edited: 2018-08-06 2.10. Sharing @RISK Models with Colleagues Who Don't Have @RISK Disponible en español: Compartir modelos de @RISK con Colegas que no poseen @RISK Disponível em português: Compartilhar modelos do @RISK com colegas que não possuem @RISK I have @RISK for Excel. I would like to ship my worksheet with results to a colleague who has Excel but not @RISK. Can I do this? If you have @RISK 5.0 or later, your colleague doesn't need any special software. The Swap Out Functions feature makes it very easy to share workbooks with colleagues who don't have @RISK. • In @RISK 7.x, click "Swap Out @RISK" in the ribbon. This replaces @RISK functions with static numbers (see below). In addition, @RISK will offer to embed thumbnail graphs of functions, set color cells to show inputs and outputs, and add a new worksheet that summarizes all @RISK functions with statistics and graphs. • In @RISK 6.x, click Utilities » Swap Out Functions. • In @RISK 5.x, click on the "Swap @RISK Functions" or "Swap Functions" icon toward the end of the ribbon. (Depending on your specific 5.x version, this could be a @ with a / mark through it, or it could be the @RISK logo in front of a worksheet grid.) After you swap out functions, the @RISK functions are all replaced with numbers. Save your workbook, and your colleague can view it in Excel with no need for other software. (This replaces the Spreadsheet Viewer that was used with @RISK 4.x.) When you reopen the workbook, the functions should be swapped back in automatically. If you have any difficulties, please see @RISK Functions Don't Reappear after Swapping Out. If you have an earlier version of @RISK and you'd like to use this feature, please contact your Palisade sales manager to obtain the current version. Which numbers does @RISK place in the cells in place of the distribution functions? 
By default, @RISK will replace functions with the displayed static values of the functions, as defined in Setting the "Return Value" of a Distribution. But when you request the swap, you can open the Swap Options dialog to override this. In Swap Options, you can specify expected values, most likely values (mode), or a percentile for all functions that don't have RiskStatic property functions.

Last edited: 2017-01-06

2.11. Precedent Checking (Smart Sensitivity Analysis)
Also available in Spanish: Verificación de precedentes (análisis de sensibilidad inteligente)
Applies to: @RISK for Excel version 5.x–7.x

When I run my model in @RISK, it seems to take a long time before the first iteration. The status bar shows that it is checking precedents. Is something wrong?

Precedent checking (also known as precedent tracing or Smart Sensitivity Analysis) is a feature introduced in version 5.0. Its purpose is to prevent @RISK inputs from incorrectly showing up in regression/sensitivity analysis such as tornado graphs. For example, consider a simple model with two inputs. The two inputs are correlated – let's say to the full extent, with a correlation coefficient of 1.0. One input is used in a calculation for a RiskOutput. The other input is not involved in any calculation which impacts the RiskOutput. In earlier versions, both inputs would be displayed as having equal impact on the output. With precedent checking, @RISK determines that only one of these inputs contributes, and filters out the other one from graphs, reports, etc.

The tradeoff is that it may take quite a bit of time to go through the full precedent tree before a simulation is run. By turning off data collection for some or all inputs, that process can be sped up, though at the cost of not being able to analyze those inputs.
• If you set Collect to None, you can still collect certain inputs by designating them as outputs.
You will then be able to get statistics and iteration data on them, but they won't be available for sensitivity analysis.
• You can disable Precedent Checking while still collecting inputs. In Simulation Settings, on the Sampling tab, change Smart Sensitivity Analysis to Disabled to disable precedent checking for a particular model. If you want to change the default for all models, open Utilities » Application Settings and look in the Default Simulation Settings section.
• You could also use RiskMakeInput( ) functions to exclude some particular inputs from precedent tracing. See Combining Inputs in a Sensitivity Tornado, Excluding an Input from the Sensitivity Tornado, and Same Input Appears Twice in Tornado Graph.

Does Smart Sensitivity Analysis have any limitations?

Certain formulas are correct Excel formulas, but Smart Sensitivity Analysis cannot work with them. Either their values can change at run time in ways that @RISK can't predict, or they are too complex and would take too much time when the simulation starts. These include:
• INDIRECT( ) functions.
• OFFSET( ) functions.
• INDEX( ) when used to return a reference. (When used to return a value, INDEX( ) does not interfere with Smart Sensitivity Analysis.)
• VLOOKUP( ) and HLOOKUP( ) functions. These are a special case in that they don't actually prevent Smart Sensitivity Analysis from happening. However, @RISK can't know in advance which values in the lookup table Excel will return, so it considers every non-constant cell in the lookup table to be a precedent of the output.
• 3-D references such as a sum across multiple worksheets.
• Structured references in tables, such as [column name]—a problem only with @RISK 6.2.1 and older. (@RISK 5.x cannot trace precedents through any structured references. @RISK 6.x can trace precedents through all structured references, except that @RISK 6.0.0–6.2.1, in Excel 2010 and 2013 only, cannot trace precedents through formulas that use @ for [#This Row].)
• References to external workbooks. (@RISK 6.1.1 and later don't display a message for these.)
• References to Internet resources, such as http links. (@RISK 6.1.1 and later don't display a message for these.)

In all the above cases, calculations are still done correctly during your simulation; it's just that for these cases @RISK cannot trace precedents. If your model contains one of these formulas, when you start a simulation a message will pop up: "could not be parsed", "invalid formula", or similar. To proceed with the simulation, click the Yes button in the message. If you want to prevent this message from appearing in the future, either change the formula (if you can) or disable Smart Sensitivity Analysis for this model. To disable Smart Sensitivity Analysis, click the Simulation Settings icon, select the Sampling tab, and change Smart Sensitivity Analysis to Disabled. Click OK and save the workbook.

Last edited: 2020-03-20

2.12. Excel Tables and @RISK
Applies to: @RISK 5.7.1–7.x, TopRank 5.7.1–7.x

Can @RISK and TopRank work with Excel tables?

There are actually three types of tables in Excel: tables, data tables, and pivot tables.
• An Excel table (called a list in Excel 2003) is the simplest type. It is described in Microsoft's article Overview of Excel Tables (accessed 2013-09-16).
• An Excel data table "shows how changing one or two variables ... will affect the results of those formulas." See Microsoft's article Calculate Multiple Results by Using a Data Table.
• An Excel pivot table report "is useful to summarize, analyze, explore, and present summary data." See Microsoft's article Overview of PivotTable and PivotChart reports.

Excel tables and data tables: @RISK and TopRank can handle Excel tables and data tables without any special action on your part. If the table contains @RISK functions, it will get re-evaluated in every iteration, which can be time consuming.
Also, if you have @RISK functions in a data table, as @RISK rewrites formulas while setting up the simulation, the data table will get re-evaluated once for each @RISK function, and therefore the simulation will take longer to start. (For why @RISK must do this, see @RISK Changes Worksheet Formulas.) If your model's logic really does not need @RISK functions inside a data table, removing them may speed up your simulations. If you have release 5.7.0 or earlier, you should know about an Excel behavior that looked like a problem in @RISK. When you enter or edit a formula in a cell adjacent to a table, Excel may expand the table to include the additional row or column. As mentioned above, when starting a simulation or analysis, @RISK and TopRank rewrite all formulas that include @RISK or TopRank functions. That rewrite sometimes triggers Excel to expand the table. This doesn't affect 5.7.1 and newer releases, but if you have an earlier release, either upgrade your software or structure your models with at least one blank row or column between your table and the rest of your model. Pivot tables: Pivot tables are not automatically recalculated in an @RISK simulation, and in fact you don't want to recalculate a pivot table if it doesn't depend on any @RISK functions. If you have any pivot tables that do depend on @RISK functions, create an after-iteration macro that calls Excel's RefreshTable method for each pivot table, and register that macro on the Macros tab of @RISK's Simulation Settings. A very basic example is attached. TopRank does not provide for executing macros within an analysis. If you have pivot tables that depend on any of your TopRank inputs, the analysis may not be correct because those pivot tables are not recalculated. If your pivot tables don't depend on your TopRank inputs, then the analysis will be performed correctly. Last edited: 2015-06-08 2.13. Avoiding "Do you want to change the current @RISK settings to match those stored?" 
Applies to: @RISK for Excel 5.x–7.x

When I open certain Excel files, I get the message "The workbook workbookname has @RISK simulation settings stored in it. Do you want to change the current @RISK settings to match those stored in this workbook?" I try to clear all data via @RISK utilities before opening the workbook, but I still get this message. I need to get rid of this message because it makes all my VBA code stop. How can I suppress it?

The message is telling you that the workbook you're about to open has simulation settings inconsistent with the currently open workbook or with your defaults stored in Application Settings. It wants you to decide which set of settings should be in effect, since all open workbooks must have the same settings.

You may be able to eliminate, or at least reduce, these warnings by adjusting your Application Settings, if you always make the same choices for all models. That is the simplest and safest approach.

You can suppress the warning and either accept or ignore the new workbook's settings by executing a line of Visual Basic code. Please see the @RISK for Excel Developer Kit manual for instructions on setting the required reference to @RISK to allow this code to execute.
• To open the manual in @RISK 5.2.0 and newer, in @RISK's Help menu select Developer Kit (XDK) and then @RISK XDK Reference. (The Automation Guide is a good introduction, but to keep things simple it omits many properties and methods.)
• To open the manual in @RISK 5.5.1–6.1.2, in @RISK's Help menu select Developer Kit.
• To open the manual in @RISK 5.5.0 or earlier, click the Windows Start button, then Programs or All Programs » Palisade DecisionTools » Online Manuals.
(In addition to the specific code functions mentioned below, you will need to create one or more references in the Visual Basic Editor. Please see Setting References in Visual Basic for the appropriate reference and how to set it.)
To suppress the message and load settings from the new workbook, use the Risk.SimulationSettings.LoadFromWorkbook method. For details and an example, please see "LoadFromWorkbook Method" in the @RISK for Excel Developer Kit manual referenced above. To suppress the warning and ignore the settings in the new workbook, execute the following code in a macro before opening the workbook: Risk.DisplayAlerts = False After you open the workbook, we strongly recommend(*) executing this code: Risk.DisplayAlerts = True (*) Caution: Setting DisplayAlerts to False is potentially dangerous, because it suppresses all warnings from @RISK. Therefore, we strongly recommend that your macro set it back to True immediately after opening the workbook. My problem is similar, but I'm getting that prompt when I open workbooks that didn't have any @RISK functions in them. I don't want to insert this macro in every workbook; what can I do? Here is how that situation can arise: When you save a workbook while @RISK is running, if the current simulation settings are different from the current Application Settings, @RISK stores the current simulation settings in a hidden sheet in the workbook. This occurs whether or not the workbook contains any @RISK functions (because for all @RISK knows you might intend to add some @RISK functions to it later). If you later open your @RISK model and change some settings, the new stored settings in the @RISK workbook are different from the old stored settings in your non-@RISK workbook, so when you open the non-@RISK workbook you get the prompt. To solve this, you need to remove the @RISK settings from the non-@RISK workbooks and ensure that they are not written again in the future: 1. Run @RISK, and open the workbook that contains your @RISK functions plus the workbook(s) that do not. 2. Change Application settings (in the @RISK Utilities menu) to match simulation settings. 3. Save the workbook that contains @RISK functions. 4. 
In the @RISK Utilities menu, select Clear @RISK Data, tick all four boxes, and click OK.
5. (This step can be skipped with @RISK 5.7 and above.) In the @RISK Utilities menu, select Unload @RISK Add-in.
6. Save the workbooks.

If later you want to change simulation settings in your @RISK workbook, do it by changing Application Settings. Remember, when you store a non-@RISK workbook, you want Application Settings and simulation settings to be the same, so that simulation settings don't get stored in the non-@RISK workbook. As an alternative, you can unload the @RISK add-in before storing the non-@RISK workbooks.

See also: @RISK Changes Simulation Settings When Non-@RISK Workbook Is Opened (only with @RISK 6.3)

Last edited: 2015-12-04

2.14. Markov Chains

Can I use @RISK to build a model with Markov chains?

With @RISK, either alone or in combination with PrecisionTree, you can create a Markov chain, but you have to create the statefulness yourself, either in Visual Basic code or possibly in RiskData worksheet functions. Please see the attached example, Price Evolution in Markov Chain.

In the @RISK help file and user manual, the section "Reference: Time Series Functions" says "GBM processes have the Markov (memoryless) property, meaning that if the current value is known, the past is irrelevant for predicting the future." That is a different use of the Markov property, though, and not a way to build a Markov chain.

Last edited: 2013-10-22

2.15. Stochastic Dominance in @RISK
Applies to: @RISK 5.x–7.x

Can I use @RISK to test for stochastic dominance?

Yes. The basic technique is overlaying the two cumulative ascending curves. In @RISK 6.x/7.x, click Help » Example Spreadsheets » Statistics/Probability and select the last highlighted example, Stochastic Dominance. If you have @RISK 5.x, the Stochastic Dominance example is not included, but you can download the attached copy.

Last edited: 2015-06-08

2.16.
Circular References

Applies to: @RISK 5.x–7.x, TopRank 5.x–7.x

How do @RISK and TopRank deal with circular references?

@RISK and TopRank are fully able to cope with them, if you have set Excel's option to perform iterative calculations:

• Excel 2010 and later: File » Options » Formulas » Enable iterative calculation.
• Excel 2007: click the round Office button and then Excel Options » Formulas » Enable iterative calculation.
• Excel 2003: Tools » Options » Calculation » Iteration

For more about appropriate settings for this option, see Microsoft's Knowledge Base article Make a circular reference work by changing the number of times that Excel iterates formulas (accessed 2015-06-08, part of Remove or allow a circular reference). Microsoft provides more detailed information about circular references, including troubleshooting tips and a tutorial on iterative calculation, in another Microsoft Knowledge Base article with the same title: Remove or allow a circular reference (accessed 2015-06-08).

How @RISK and TopRank respond to circular references:

During precedent tracing: When @RISK or TopRank hits a cell that has been previously encountered, it stops tracing precedents in that particular path, to avoid an infinite loop. For example, if A1 depends on B1, and B1 on C1, and C1 on A1, and the program starts tracing at A1, it will find B1 and C1 as precedents, but stop tracing when it hits A1 again. However, when it starts tracing precedents of B1, it will find C1 and A1 as precedents, and so forth. The net result is that when there is a circular reference, @RISK and TopRank treat all members of the circle as precedents of each other.

During calculation: If there are circular references, Excel calculates the model multiple times within each @RISK or TopRank iteration, depending on your Excel settings for circular references. Each @RISK or TopRank function returns the same value through all the recalculations within any given iteration.
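As an illustration of that last point, the calculation phase can be sketched as a fixed-point loop: within one iteration the random sample is drawn once and held fixed while the circular formulas are recalculated until they settle. The sketch below is conceptual Python, not Excel's calculation engine; the "cells", coefficients, and tolerance are made up for illustration.

```python
import random

def recalc_until_stable(sample, tol=1e-9, max_recalcs=100):
    # One @RISK-style iteration: the sampled input is frozen, then the
    # circular cells are recalculated until they stop changing.
    a = 0.0
    for _ in range(max_recalcs):
        b = sample + 0.5 * a   # "B1" depends on the input and on "A1"
        new_a = 10 + 0.5 * b   # "A1" depends on "B1" -- the circle
        if abs(new_a - a) < tol:
            return new_a
        a = new_a
    return a

rng = random.Random(42)
results = [recalc_until_stable(rng.gauss(100, 10)) for _ in range(5)]
```

Because the sample is drawn outside the inner loop, every recalculation within an iteration sees the same value, which is the behavior described above.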
In other words, Excel recalculations to resolve circular references all use the same samples within any one iteration of @RISK or TopRank.

Last edited: 2015-06-08

2.17. Converting Crystal Ball Models

Applies to: @RISK 6.x/7.x

I have developed a model in Crystal Ball, but I would like to run it in @RISK. Can @RISK run Crystal Ball models?

Starting with @RISK release 6.0, an automatic converter in @RISK lets you open and run risk models that were created in Crystal Ball 7.3 or later. (You must have Crystal Ball and @RISK on the same computer.) When you open a Crystal Ball spreadsheet, @RISK will ask if you want to convert your model; or you can start a conversion with the Utilities » Convert Workbook command. @RISK will convert Crystal Ball distributions and other model elements to native @RISK functions.

The automatic conversion requires:

• @RISK 6.0 or newer. If you have @RISK 5.7 or earlier and you want to convert Crystal Ball models, please see Upgrading Palisade Software.
• 32-bit Excel and 32-bit Crystal Ball installed on this computer.
• A model developed in Crystal Ball 7.3 or newer. If your model was developed in Crystal Ball 7.0 or earlier, you cannot do an automated conversion, but our consultants may be able to help. Please contact your Palisade sales manager for more information.

Certain features cannot be converted automatically. See the topic "Restrictions and Limitations" in the @RISK help file (Help » Documentation » Help). If @RISK finds one of these features, it will display an error or warning message in the conversion summary.

Last edited: 2020-05-28

2.18. File Compatibility: @RISK 4.5–7.x

Applies to: @RISK 4.5–7.x

I have some models that were developed with an older version of @RISK. Will they work in @RISK 7?

@RISK 4.5, 5, 6, and 7 model files (Excel workbooks) are generally compatible. Models created in an older version of @RISK should run just fine in a later version. Simulation results between the two, on the same model, should be the same within normal statistical variability.
They will typically not be identical, iteration for iteration, because of precedent checking and other features introduced in newer releases. (See Random Number Generators for more details on this point.)

There are three caveats with using older models in @RISK 5, 6, or 7:

• @RISK 4.5 and 5.0 recomputed statistic functions like RiskMean and RiskPercentile in every iteration. @RISK 5.5 and later compute them at the end of the simulation. This is more logical, and makes for better performance. But if your model depends on having partial results available in the middle of the simulation, you will want to change your model or change that setting. Please see "No values to graph" Message / All Errors in Simulation Data for more about this.
• Linked fits (fitted distributions that update automatically when the underlying data change) from @RISK 4.5 will have to be re-run in later versions of @RISK. But the program itself will prompt you for this the first time you run a simulation.
• Functions with Multi in their names are legacy functions and will get #NAME errors in later @RISK, unless you also have TopRank loaded. To solve the problem, simply remove Multi from the function names.

Models created in a newer version of @RISK should run fine in an older version, as long as they use only features that were available in the older version. If you define a model using new features of @RISK, you probably will not be able to use that model with older versions of @RISK. Two notes:

• @RISK 4.5 will ignore distribution property functions that were added in later versions, such as RiskUnits. However, functions that contain those property functions will sample properly.
• Newer functions, such as RiskCompound and RiskTheoMean, will return #NAME in older releases of @RISK.

I saved simulation results with the older version of @RISK. Can the new version read them?
@RISK 5, 6, and 7 can read each other's simulation results, whether stored in the Excel workbook or in an external .RSK5 file, but they cannot read @RISK 4.5 simulation results. Exception: If you filtered simulation results (Define Filter command) in @RISK 5 or @RISK 6 and stored the filtered results in the Excel workbook, @RISK 7.0.0 can't read the file. This is fixed in 7.0.1, but if you still have 7.0.0 please see "Could not read data from file ...tmp." for a workaround.

@RISK 4.5 cannot read simulation results that were created by later releases, whether stored in the Excel workbook or in an .RSK5 file.

Last edited: 2016-08-17

2.19. Latin Hypercube Versus Monte Carlo Sampling

The @RISK and RISKOptimizer manuals state, "We recommend using Latin Hypercube, the default sampling type setting, unless your modeling situation specifically calls for Monte Carlo sampling." But what's the actual difference?

About Monte Carlo sampling

Monte Carlo sampling refers to the traditional technique for using random or pseudo-random numbers to sample from a probability distribution. Monte Carlo sampling techniques are entirely random in principle — that is, any given sample value may fall anywhere within the range of the input distribution. With enough iterations, Monte Carlo sampling recreates the input distributions through sampling. A problem of clustering, however, arises when a small number of iterations are performed.

Each simulation in @RISK or RISKOptimizer represents a random sample from each input distribution. The question naturally arises, how much separation between the sample mean and the distribution mean do we expect? Or, to look at it another way, how likely are we to get a sample mean that's a given distance away from the distribution mean? The Central Limit Theorem of statistics (CLT) answers this question with the concept of the standard error of the mean (SEM).
One SEM is the standard deviation of the input distribution, divided by the square root of the number of iterations per simulation. For example, with RiskNormal(655,20) the standard deviation is 20. If you have 100 iterations, the standard error is 20/√100 = 2. The CLT tells us that about 68% of sample means should occur within one standard error above or below the distribution mean, and 95% should occur within two standard errors above or below. In practice, sampling with the Monte Carlo sampling method follows this pattern quite closely.

About Latin Hypercube sampling

By contrast, Latin Hypercube sampling stratifies the input probability distributions. With this sampling type, @RISK or RISKOptimizer divides the cumulative curve into equal intervals on the cumulative probability scale, then takes a random value from each interval of the input distribution. (The number of intervals equals the number of iterations.) We no longer have pure random samples and the CLT no longer applies. Instead, we have stratified random samples. The effect is that each sample (the data of each simulation) is constrained to match the input distribution very closely. This is true for all iterations of a simulation, taken as a group; it is usually not true for any particular sub-sequence of iterations.

Therefore, even for modest numbers of iterations, the Latin Hypercube method makes all or nearly all sample means fall within a small fraction of the standard error. This is usually desirable, particularly in @RISK when you are performing just one simulation. And when you're performing multiple simulations, their means will be much closer together with Latin Hypercube than with Monte Carlo; this is how the Latin Hypercube method makes simulations converge faster than Monte Carlo.

The easiest distributions for seeing the difference are those where all possibilities are equally likely.
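The stratification itself can be sketched in one dimension with a uniform input, where all values are equally likely. This is a conceptual Python sketch of textbook Latin Hypercube sampling, not @RISK's sampler:

```python
import random
import statistics

def mc_sample(n, rng):
    # plain Monte Carlo: n independent draws from Uniform(0, 1)
    return [rng.random() for _ in range(n)]

def lh_sample(n, rng):
    # Latin Hypercube: one draw from each of n equal-probability strata
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)  # so the iteration order carries no pattern
    return points

rng = random.Random(1)
n, sims = 72, 200
mc_means = [statistics.mean(mc_sample(n, rng)) for _ in range(sims)]
lh_means = [statistics.mean(lh_sample(n, rng)) for _ in range(sims)]
spread_mc = statistics.stdev(mc_means)  # on the order of one SEM
spread_lh = statistics.stdev(lh_means)  # a small fraction of the SEM
```

Running this shows the behavior claimed above: the Monte Carlo sample means scatter on the order of one SEM, while the Latin Hypercube means cluster far more tightly around the distribution mean of 0.5.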
We chose five integer distributions, each with 72 possibilities, and a Uniform(0:72) continuous distribution with 72 bins. The two attached workbooks show the results of simulating with 720 iterations (72×10), with both the Monte Carlo sampling method and the Latin Hypercube method. For convenience, the workbooks already contain graphs, but you can run simulations yourself too.

Of course, those are artificial cases. The other attached workbooks let you explore how the distribution of simulated means differs between the Monte Carlo and Latin Hypercube sampling methods. (Select the StandardErrorLHandMC file that matches your version of @RISK.) Select your sample size and number of simulations and click "Run Comparison". If you wish, you can change the mean and standard deviation of the input distribution, or even select a completely different distribution to explore. Under every combination we've tested, the sample means are much, much closer together with the Latin Hypercube sampling method than with the Monte Carlo method.

If you'd like to know more about the theory of Monte Carlo and Latin Hypercube sampling methods, please look at the technical appendices of the @RISK manual.

Last edited: 2023-02-10

2.20. Random Number Generators

Which random number generator does @RISK use? Can I choose a generator?

By default, @RISK/RISKOptimizer 5.0 and later use the Mersenne Twister. Earlier versions of @RISK and RISKOptimizer used RAN3I. In @RISK 5.0 and later, you can select a random number generator on the Sampling tab of the Simulation Settings dialog. The random number generator is not user-selectable in RISKOptimizer 5.x or in earlier versions of either product.

Mersenne Twister is superior to RAN3I in that it has been more extensively studied and characterized. It has been proved that random numbers generated by the Mersenne Twister are equidistributed up to 623 dimensions, and that its period is 2^19937 - 1, which is more than 10^6000.
Please see What is Mersenne Twister (MT)? (accessed 2013-04-09) for more information.

Can I duplicate @RISK 4.5 simulation results by setting the random number generator to RAN3I?

If you run an @RISK 4.5 model in @RISK 5.x or 6.x with any random number generator, simulation results should be the same within normal statistical variability. But the simulation data will typically not be identical iteration for iteration, even with a fixed seed and the RAN3I generator, because of @RISK 5.x's new precedent checking and other features. See also Random Number Generation, Seed Values, and Reproducibility.

What about versions within 5.x, 6.x, and 7.x?

From one version to the next, new features and improvements in our code may cause distributions to be evaluated in a different order. Thus, you cannot count on reproducing @RISK 5.0 results iteration for iteration in 5.5, or reproducing @RISK 5.7.1 results iteration for iteration in 6.0.1, and so forth. Of course, the results will always match within normal statistical variability.

last edited: 2015-06-16

2.21. Random Number Generation, Seed Values, and Reproducibility

Applies to: @RISK for Excel 4 and newer, @RISK for Project 4.x, @RISK Developer's Kit 4.x

Tell me more about the algorithm that generates random numbers in @RISK. What is the difference between a fixed seed and a random seed? How does this work when executing a multiple simulation run? Why might my model not be reproducible even though I am using a fixed seed?

Generation Algorithm: The random number generator used in @RISK is a portable random number generator based on a subtractive method, not linear congruential. The cycle time is long enough that, in our testing, it has had no effect on our simulations. Press et al (References, below) say that the period is effectively infinite. The starting seed (if not set manually) is clock dependent, not machine dependent.
The method used to generate the random variables for all distributions is inverse transform, but the exact algorithms are proprietary.

Seed Values: In the @RISK Simulation Settings dialog box, you can set the random number seed. The seed value may be chosen randomly in Simulation Settings by activating the Choose Randomly option, or you can specify a fixed seed by activating the Fixed option and then entering a seed value that is an integer between 1 and 2147483647. If the Fixed option is chosen, the result from your simulation will not change each time it is run (unless you have changed your model or added some random factor out of @RISK's control). If the Choose Randomly option is active, a random seed is chosen based on the computer's clock.

Why choose a fixed seed? There are two main reasons. When you are developing your model, or making changes to an existing model, a fixed random number seed lets you see clearly how any changes in your model affected the results. With a finished model, you can send the model to someone else and know that if they run a simulation they will get the same results you got. (Both of these statements assume that you're using the same release of @RISK on the identical model and that nothing in the model is volatile; see Reproducibility, below.)

You can also use a RiskSeed() property function on an input distribution to give that distribution its own sequence of random numbers, independent of the seed used for the overall simulation. (RiskSeed() is ignored when used with correlated distributions.)

Multiple @RISK Simulation Runs:

• If the Multiple Simulations Use Different Seed Values box is checked, and the Choose Randomly option is active, @RISK will use a different seed for each simulation in a multiple simulation run.
• If the Multiple Simulations Use Different Seed Values box is checked, and the Fixed option is active, each simulation in a multiple simulation run will use a different seed, but the same sequence of seed values will be used each time the run is executed.
• If the Multiple Simulations Use Different Seed Values box is not checked, and the Choose Randomly option is active, each simulation within a multiple simulation run will use the same seed, but a different seed will be used for each run.
• If the Multiple Simulations Use Different Seed Values box is not checked, and the Fixed option is active, the same seed will be used both within and between multiple simulation runs.

@RISK Monte Carlo vs. Latin Hypercube: The sampling done to generate random numbers during a simulation in @RISK may be Monte Carlo, or it may be Latin Hypercube, depending on which Sampling Type is chosen in the @RISK Simulation Settings dialog. See Latin Hypercube Versus Monte Carlo Sampling or the @RISK manual for more details.

Number of Iterations: If you change the number of iterations, you have a different model even if nothing else has changed. The overall results will be similar (within normal statistical variation) but not identical. Even the data drawn during the initial iterations may not be the same. For example, if you have a 100-iteration model and increase the number of iterations to 500, the distributions in the new model may sample different values in the first 100 iterations than they had in the 100 iterations of the old model.

If you have a RiskSeed() property function in any distributions, those will preserve the same sequence. For example, if you have a 100-iteration model and increase the number of iterations to 500, the distributions with their own RiskSeed() functions will show the same data for the first 100 iterations as they did for the 100 iterations of the original simulation. (RiskSeed() is ignored when used with correlated distributions.)
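That behavior can be sketched with an independently seeded generator per input (a conceptual Python analogue; @RISK's generator and distributions are its own). Re-running with more iterations reproduces the original sequence as its prefix:

```python
import random

def seeded_input(seed, iterations):
    # Analogue of a distribution with its own RiskSeed(): a private
    # random-number stream, unaffected by anything else in the model.
    rng = random.Random(seed)
    return [rng.gauss(100, 10) for _ in range(iterations)]

run_100 = seeded_input(seed=7, iterations=100)
run_500 = seeded_input(seed=7, iterations=500)
same_first_100 = (run_500[:100] == run_100)  # the prefix is identical
```

Because the stream is private to the input, lengthening the run (or adding other inputs) cannot perturb the sequence it produces.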
The results of a simulation are reproducible from run to run if you use a fixed seed value, if your model has not been changed between runs, and if you avoid the following pitfalls:

• The Excel function =RAND(). The numbers it generates are controlled by the spreadsheet, which uses its own independent random number stream. Instead, replace RAND() functions with RiskUniform() or, where appropriate, RiskBernoulli().
• Other volatile Excel functions like NOW() or TODAY().
• Macros that run during the simulation, if the macro code itself is not reproducible from run to run.
• Adding or removing worksheets or opening additional workbooks, even if they don't contain @RISK functions. Results of the new simulation may not be identical because @RISK's order of scanning may be affected. The same applies if you move things around within a worksheet or within a workbook, even if the cells that you moved don't contain @RISK functions.
• References between iterations, when you have Multiple CPU enabled; please see the next section.

Any @RISK inputs that have RiskSeed() property functions will be reproducible, even if the model is changed. Exception: RiskSeed() has no effect in correlated distributions.

See also: Different Results with Same Fixed Seed, Different Results with Multiple Workbook Copies, and What Was My Random Number Seed?

Single or Multiple CPU: Assuming the model is otherwise reproducible, results should be identical whether the simulation runs with multiple CPU enabled or disabled. There's an important exception. When you're running multiple CPUs, the master CPU parcels out iterations to one or more worker CPUs. During a simulation, one CPU doesn't know the data that were developed by another CPU. So if you have anything in your model that refers to another iteration, directly or indirectly, a simulation with multiple CPUs will not behave as expected. (It won't just be irreproducible; it will be wrong.)
Examples would be RiskData() functions that are used in formulas, statistics functions like RiskMean() and RiskPercentile() that are used in formulas if you have them set to be computed at every iteration, and macro code that stores data in the workbook or in static variables. In such cases, it is necessary to disable multiple CPU in Simulation Settings.

Versions of @RISK: Results from a given release of @RISK Standard, Professional, and Industrial should be the same, assuming the model is otherwise reproducible. Trial version versus activated version makes no difference. Results from different versions of @RISK on the same model will typically match within normal statistical variation, if you use the same random number generator. For the relationship between @RISK 4.x and 5.x random number generation, please see Random Number Generators.

References:

• Donald E. Knuth: Seminumerical Algorithms: Third Edition (1998, Addison-Wesley), vol. 2 of The Art of Computer Programming.
• William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling: Numerical Recipes, The Art of Scientific Computing (1986, Cambridge University Press), pages 198 and 199.

Last edited: 2019-02-15

2.22. Different Results with Multiple Workbook Copies

Also available in Spanish: Resultados diferentes con múltiples copias de libros de trabajo

Applies to: @RISK 5.x–newer

I'm simulating multiple copies of a workbook in one @RISK session. There are no links between the workbooks, so I'd expect every workbook to get the identical results that it gets when it's simulated alone. Why doesn't that happen?

All open workbooks are part of an @RISK simulation, and @RISK draws all the random numbers for iteration 1, then all the random numbers for iteration 2, and so on. These numbers all come from a single stream, which you can specify as a fixed seed on the Sampling tab of Simulation Settings. (It's more complicated if any inputs are correlated, but the principle is the same.)
Suppose you have 100 distribution functions in your workbook. When you simulate that workbook by itself, iteration 1 gets the first 100 random numbers from the stream, iteration 2 gets random numbers 101–200, iteration 3 gets random numbers 201–300, and so on. But when you simulate two copies in the same run, those two copies together consume the first 200 random numbers in iteration 1, random numbers 201–400 in iteration 2, and so on. Thus the results will be different, though still within expected statistical variability for your number of iterations.

You can overcome this by putting RiskSeed( ) property functions in the distributions. RiskSeed( ) gives that input its own random-number sequence, separate from the single stream for the simulation. Two identical distributions with identical seeds will produce the same sequence of data, iteration by iteration. For instance, a distribution such as =RiskNormal(0, 1, RiskSeed(1)) will always produce the same sequence of random numbers, no matter what other formulas the workbook may contain. If you have two open workbooks that contain that same function, both copies will produce an identical sequence of iterations. RiskSeed( ) is not effective with correlated inputs.

See also: Random Number Generation, Seed Values, and Reproducibility

Last edited: 2023-02-10

2.23. Statistical Calculations in @RISK and Excel

Applies to: @RISK, all releases

I have come across a research paper that details some problems in Excel's statistical calculations. Is there anything to this, and is @RISK affected? How can I validate the generation of random numbers in various distributions by @RISK?

The computations in all @RISK functions are done by Palisade's own program code and do not rely on Excel's numerical functions in any way. By way of example, here are some details about the two types of functions we are most often asked about:

• Probability distributions (@RISK inputs): @RISK generates all its own random numbers, and these calculations are completely independent of Excel.
During a simulation, @RISK produces the random numbers using Palisade program code. Excel's role in a simulation is simply to perform the computations in the Excel formulas in your worksheet. You can easily examine the random numbers produced by @RISK. After a simulation, open the Simulation Data window (x-subscript-i icon). This will give one column per input or output variable. You can copy these numbers in the usual way and perform any desired statistical tests on them.

• Summary statistics functions such as RiskMean, RiskPercentile (also called RiskPtoX), and RiskCorrel: @RISK uses Palisade program code, not Excel, to compute all of these. Again, you can verify these calculations from the raw data in the Simulation Data window.

Microsoft has acknowledged some issues with some statistical calculations in Excel 2007 and earlier, but has addressed these beginning in Excel 2010. Microsoft gives details in the paper Function Improvements in Microsoft Office Excel 2010 (PDF). But again, none of these issues affect @RISK in any version of Excel, because @RISK does its own statistical calculations for every @RISK function and does not use Excel functions for them.

last edited: 2013-03-20

2.24. Wilkie Investment Model

All editions of @RISK can easily model time series according to the Wilkie model, using parameters that you select. With @RISK Industrial, you also have the option to fit to historical data using Time Series. The attached prototype builds two Wilkie models, Retail Price Index (RPI) and Share Dividend Yield (SY), to illustrate those techniques.

Let's start with the RPI model. Here you can either set the parameters yourself — recommended values from the literature are shown on the 'Wilkie Models' sheet — or use @RISK to estimate them using Time Series fitting with the AR1 model.
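For orientation, the AR1 process that such a fit estimates can be written down directly. The sketch below uses made-up parameters purely for illustration; it is not the fitted Wilkie model, which also involves the transformation and detrending steps discussed next.

```python
import random

def simulate_ar1(mu, a, sigma, n, seed=0):
    # x[t] = mu + a*(x[t-1] - mu) + sigma*eps[t],  eps ~ N(0, 1)
    rng = random.Random(seed)
    x, series = mu, []
    for _ in range(n):
        x = mu + a * (x - mu) + sigma * rng.gauss(0, 1)
        series.append(x)
    return series

# illustrative parameters for a transformed (log-differenced) index
series = simulate_ar1(mu=0.04, a=0.6, sigma=0.02, n=120)
```

With a fixed seed the series is reproducible, and the three parameters (mean, autoregressive coefficient, volatility) are exactly the quantities an AR1 fit returns.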
@RISK lets you estimate the parameters for the price index model (mean, standard deviation, and autoregressive parameter), but in this case we fitted the transformed historical data set in column C of the 'Data' sheet and extracted those parameters from the AR1 fit; see the 'Parameters RPI' sheet. Notice that the Wilkie model requires a logarithmic transformation and a first-order differencing detrend. Once you have found the parameters by running a fit, or picked them from the table, you can easily create the time series model with @RISK, as shown on the 'RPI' sheet.

For the SY model, we used parameters that are recommended in the literature and constructed the model directly. Please see the 'SY' sheet.

last edited: 2014-03-13

2.25. Geometric Mean in @RISK

Applies to: @RISK, all releases

How can I obtain the geometric mean of an output in @RISK?

Unfortunately, @RISK doesn't have a function to get the geometric mean of an output directly, but you can compute it as the exponential of the arithmetic mean of the logarithms of the values, GM = exp(mean(ln x)), as described in the attached model.

last edited: 2019-04-16

3. @RISK Distributions

3.1. Which Distribution Should I Use?

How do I know which probability distribution I should use? Do you have some book you can refer me to?

Different industries tend to prefer a different selection of distributions. We don't have any one book that directly addresses your question, but we do have a number of resources to offer, both within @RISK and externally.

One very powerful tool, assuming you have the Professional or Industrial Edition, is distribution fitting. This lets you enter historical data; then @RISK attempts to fit every relevant distribution to the data. You can instantly compare different distributions to see which one seems best suited to the data set.
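The idea behind fitting can be sketched by hand: estimate each candidate's parameters from the data, then compare how well each candidate explains the data, for example by log-likelihood. This bare-bones Python sketch compares just two candidates; @RISK's fitting engine uses its own algorithms and ranking statistics across many distributions.

```python
import math
import random
import statistics

rng = random.Random(3)
data = [rng.gauss(50, 5) for _ in range(500)]  # stand-in historical data

# Candidate 1: normal; MLE parameters are the sample mean and sd.
mu, sd = statistics.mean(data), statistics.pstdev(data)
ll_normal = sum(-0.5 * math.log(2 * math.pi * sd ** 2)
                - (x - mu) ** 2 / (2 * sd ** 2) for x in data)

# Candidate 2: uniform over the observed range.
lo, hi = min(data), max(data)
ll_uniform = len(data) * -math.log(hi - lo)

best = "normal" if ll_normal > ll_uniform else "uniform"
```

With bell-shaped data the normal candidate wins decisively; @RISK performs the analogous comparison across every relevant distribution and lets you inspect the rankings side by side.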
If you're not fitting to existing data, here are some things to think about:

• Your first decision is whether you need a continuous or discrete distribution. Continuous distributions can return any values within a specified range, but discrete distributions can return only predefined values, usually whole numbers. The Define Distributions dialog has separate tabs for discrete and continuous distributions.
• Then ask whether your distribution should be bounded on both sides, bounded on the left and unbounded on the right, or unbounded on both sides. The thumbnails in Define Distributions will give you an idea of whether each distribution is bounded.
• Finally, do you have a general idea of the shape of distribution you want — symmetric or skewed? strong central peak or not? The shapes in Define Distributions are just a partial guide for this, because changing the numeric parameters of some distributions can change the shape drastically.

There's more than one way to specify a distribution. Every distribution has standard parameters that you can enter explicitly, as numbers or cell references. But you might prefer to specify distributions by means of percentiles, as a way of specifying those parameters implicitly. To do this, in Define Distributions select the Alt Parameters tab, and you'll see the distributions that can be specified by means of percentiles. On the other hand, if you know the mean, standard deviation or variance, skewness, and kurtosis that you want, the RiskJohnsonMoments distribution may be a good choice.

After you select a distribution, the Define Distribution window gives you instant feedback about the shape and statistics of the distribution as you alter the parameters or even the function itself.

Additional resources:

• We have an on-demand Webinar, Which Distribution Should I Choose in @RISK?
• From a user conference, here's a guide to Selecting the Right Distribution in @RISK, by Michael Rees.
• We offer live Web-based training several times a month as well as regional training seminars.
• If travel is a possibility, consider attending one of our worldwide User Conferences and Industry Seminars. In addition to presentations, this gives you the opportunity to compare notes with other industry professionals and possibly spend time one-on-one with our software developers. Your Palisade sales representative can tell you which conferences are coming up.
• Our industry consultants can work with you one on one in any phase of developing a model. Again, your Palisade sales representative can put you in touch with them.

Additional keywords: Johnson Moments, Choose a distribution, Picking a distribution

Last edited: 2017-06-19

3.2. Swap Overlayed Distributions

Applies to: @RISK 8.2 onward

I'm not sure which distribution would be best for my input. Is there a quick way to compare several distributions at once?

Starting in version 8.2, @RISK allows overlays to be swapped in the Define Distribution window. To do this, you first need to add an overlay to the currently defined distribution. This is done by clicking the Overlays button in the bottom left-hand corner of the window and choosing Add Overlay from the menu. You then add the desired distribution from the window and edit the parameters on the left-hand side of the window. The original distribution will remain as a solid image, and the overlay will appear over the top as an unfilled line. You can include many overlays if this is useful for you to compare the shape and limits of multiple distributions.

To change the distribution, you click the three dots next to the overlay you think fits your needs best, and then choose Set as Main Distribution. This will change the overlay to the solid filled distribution on the graph. You can either close the Define Distribution window at this stage, or you can remove the overlays by choosing Clear Overlays from the Overlays button.

Last edited: 2021-07-23

3.3.
Cell References in Distributions

Applies to: @RISK 4.x–7.x

Must I specify the X's and P's as fixed numbers in the RiskDiscrete, RiskCumul, RiskCumulD, RiskDUniform, RiskHistogrm, or RiskGeneral distribution, or can I replace them with cell references? What about RiskSimtable — can I use cell references instead of fixed numbers?

You can replace the list of numbers with cell references without braces, but the referenced cells must be a contiguous array in a row or a column. It's not possible to collect cells from multiple locations in the workbook.

Example 1:

=RiskCumul(0, 10, {1,5,9}, {.1,.7,.9})

If the probabilities are in cells C1, C2, C3, then you replace the second set of braces and numbers with an array reference, like this:

=RiskCumul(0, 10, {1,5,9}, C1:C3)

If the numbers (X values) are in cells D1, D2, D3, then you replace the first set of braces and numbers with an array reference, like this:

=RiskCumul(0, 10, D1:D3, {.1,.7,.9})

And you can replace both the X's and the P's with array references, like this:

=RiskCumul(0, 10, D1:D3, C1:C3)

Example 2: If the scenario numbers are in cells IP201 through IS201, then you replace the braces and numbers with an array reference, like this:

=RiskSimtable(IP201:IS201)

These rules might seem arbitrary, but they're actually standard Excel. The @RISK distribution functions mentioned above take one or two array arguments. Excel lets you specify a constant array, which is a series of numbers enclosed in braces; or you can specify a range of cells, which is a contiguous series of cells in one row or one column. Excel doesn't have any provision for making an array out of scattered cells.

Additional keywords: Simtable, Discrete distribution, Cumul distribution, CumulD distribution, DUniform distribution, Histogram distribution, General distribution

Last edited: 2015-06-19

3.4.
Setting the "Return Value" of a Distribution Applies to: @RISK 5.x–7.x For cells in my model that are probabilistic (directly or indirectly), how do I change the value that is displayed in the cell when a simulation is not running? By default, when a simulation is not running you will see static values: the displayed values of @RISK distributions won't change during an Excel recalculation. The default static value for continuous distributions is the mean value (expected value). For discrete distributions, the default static value is not the true expected value but rather the value within the distribution that is closest to the expected value: for example, RiskBinomial(9,0.7) will display 6 rather than 6.3. Outputs, and other values computed from the inputs, generally don't display their own mean or expected value but rather the value computed from the displayed values of inputs. Please see Static Value of Output Differs from Simulated Mean. There are several ways you can change the displayed values of input distributions, and the resulting displayed values of outputs. None of these methods will affect a simulation in any way. Which static values are displayed? @RISK lets you choose to display the expected value, true expected value, mode, or a selected percentile for all distributions. You can make this choice in either of two places: • Simulation Settings, bottom half of the General tab: applies to all distributions in all open workbooks. • Utilities » Application Settings » Default Simulation Settings section » Standard Recalc: applies to all distributions in all open workbooks and to any new workbooks you create in the future when @RISK is running. What's the difference between "expected value" and "true expected value"? For continuous distributions, "expected value" and "true expected value" are the same, the mean of the distribution. 
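The "nearest member" rounding behind that RiskBinomial(9, 0.7) example can be reproduced in a couple of lines of plain Python. This is an illustration only, not @RISK code:

```python
# Illustration in plain Python (not @RISK): the default static value of a
# discrete distribution is the member value nearest the true mean.
true_mean = 9 * 0.7                    # mean of Binomial(n=9, p=0.7), i.e. 6.3
members = range(0, 10)                 # the values RiskBinomial(9, p) can take: 0..9
static_value = min(members, key=lambda k: abs(k - true_mean))
assert static_value == 6               # the displayed value is 6, not 6.3
```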
For discrete distributions, "true expected value" is the mean of the distribution, but "expected value" is the mean rounded to the nearest value that is a member of the distribution. For example, RiskBinomial(3, .44) has a mean = "true expected value" of 1.32, but an "expected value" of 1 because that is the nearest to the mean out of the distribution's possible values 0, 1, 2, 3. In other words, for the mean or expected value of discrete distributions, as defined in textbooks, you need to set @RISK to display the "true expected value". You can also put a RiskStatic( ) function in an individual distribution to override the general settings, for that distribution only. Example: =RiskNormal(100, 10, RiskStatic(25) ) Display random values instead of static values You can suppress the static values and have @RISK generate new random values for each distribution when Excel does an automatic recalculation or when you press F9 to force a manual recalculation. To switch between random and static values for all open workbooks use either method: • "Rolling dice" icon (Random/Static Standard (F9) Recalculation). • Simulation Settings, bottom half of General tab, select Random Values or Static Values. (The "rolling dice" icon switches between these two.) You can also change @RISK's default from static values to random values in the Application Settings dialog, as mentioned above. Last edited: 2017-10-10 3.5. Generating Values from a Distribution Applies to: @RISK 4.x and newer I'd like to generate 10,000 sample values from a particular distribution, for example RiskLogLogistic(-0.0044898, 0.045333, 2.5862). Is there some way to do it? Here is a choice of five methods. Simulation methods: Create an empty workbook and put that distribution in a cell. Set iterations to the desired number of values, and run a simulation. Then, do one of the following: • Click the x-subscript-i icon to open the Simulation Data window. 
Right-click anywhere in the data column, and select Copy, then paste the values into your Excel sheet. • Click Excel Reports » Simulation Data (Inputs) or Simulation Data. @RISK will create a new sheet with your data. • Click Browse Results, click on the cell, and use the drop-down arrow at upper right to change Statistics grid to Data Grid. Right-click on the column heading and select Copy, then paste the values into your Excel sheet. Since you are running an actual simulation, these methods use the Sampling Type you have set in Simulation Settings. By default, that is Latin Hypercube, which is better than traditional Monte Carlo sampling at matching all percentiles to the theoretical cumulative probability of a distribution. Also, correlations are honored in a simulation. Non-simulation methods: You can also sample a distribution without running a simulation. In this case, the Sampling Type is always Monte Carlo, regardless of your Simulation Settings, and any correlations are disregarded. Here are two methods that don't involve running a simulation: • If you have @RISK Industrial or Professional, you can write a simple loop in Visual Basic to perform the sampling and save the random numbers. The Risk.Sample method is explained in Sampling @RISK Distributions in VBA Code. • Click the Random/Static "rolling dice" icon to make it active. Insert your distribution function in a cell, and then click and drag to create as many duplicates as you need random numbers. Highlight those values, and press Ctrl+C to copy, then Paste Special » Values. (You can either paste the values in the same cells to overwrite the formulas, or paste them in other cells if you want to keep the formulas.) Last edited: 2015-11-09 3.6. Specifying a Descriptive Name for a Distribution Applies to: @RISK 4.x–8.x In the sensitivity analysis or tornado charts, I'm observing some odd descriptions for the bars. My model is built with the cell description in the cell to the left of that cell's formula. 
After running the simulation, in the tornado graphs I usually see the respective cell descriptions. However, I'm currently observing that some parameters seem to be using other text from the worksheet instead of the description next to the cell formula. Is there some way to tell @RISK what descriptions to use for the bars in tornado charts? Every @RISK input distribution has a name for use in graphs and reports as well as the @RISK Model Window. When you first define a distribution, either in the Define Distribution window or by directly entering a formula in the worksheet, @RISK assigns it a default name. This default name comes from text that @RISK finds in your worksheet and interprets as row and column headings. If the name is acceptable, you don't need to do anything. If it's not acceptable, or if it's blank because @RISK couldn't find any suitable text, you can easily change it, using any of these methods: • In the first box in the Define Distribution window, enter the desired name. (You can do this when first creating the distribution. For an existing distribution, click on the cell and then click Define Distribution to reopen the window.) • Enter or change the name in the Model Window. Click anywhere in the row for the desired input, right-click, and select Function Properties. The Name box is first in the Properties dialog. • Edit the formula directly, in Excel, to insert a RiskName property function. For example, =RiskPoisson(.234) might become =RiskPoisson(.234, RiskName("Number of claims")) or, with the name in a separate cell, =RiskPoisson(.234, RiskName(A17)) Avoid complex name formulas that use Excel table notation, as not all versions of @RISK are able to interpret it correctly. You can use regular cell references instead, as below: =RiskPoisson(.234, RiskName(A17&" "&A18)) Save your workbook after entering or editing any names. The new names will be used in subsequent graphs and reports.
(Graphs and reports of simulation results will use the new names when you run a new simulation.) Last edited: 2021-09-30 3.7. Shift Factor in a Distribution Applies to: @RISK 4.x–7.x What is the shift factor of a distribution, and why is it used? The shift factor of a distribution is shown in the RiskShift( ) property function. It moves the function toward the right on the x-axis (positive shift factor) or toward the left on the x-axis (negative shift factor). In other words, it shifts the domain of the distribution. This is equivalent to taking every point on the distribution and adding the shift factor to it, in the case of a positive shift. With a negative shift, that amount is subtracted from every point on the distribution. Shift factor in defined distributions When you're defining a distribution, click the down arrow next to "Parameters: Standard", and select Shift Factor on the pop-up dialog. The shift factor is now added to the Define Distributions dialog for this distribution, and you can enter various values and see how they change the distribution. If you don't want to have to do that, go into Utilities » Application Settings » Distribution Entry and change Shift Factor to Always Displayed. You can always add a shift factor to an existing distribution by editing the Excel formula directly. For example, if you change =RiskLognorm(10,10) to =RiskLognorm(10,10,RiskShift(3.7)), the entire distribution shifts 3.7 units to the right. In general, the shift factor should only be used in cases where the distribution function itself does not contain a location parameter. For example, you shouldn't use a shift factor for a normal distribution, since the mean of the normal is already a location parameter.
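Outside @RISK, the effect of a shift factor is easy to mimic: draw samples from the unshifted distribution and add the shift to each one. Here is a hedged Python sketch; it is an illustration only, and the lognormal parameters below are the underlying normal's mu and sigma (Python's parameterization, not @RISK's mean/standard-deviation form):

```python
import random

random.seed(1)
shift = 3.7
# An unshifted lognormal is strictly positive. (Parameters here are the
# underlying normal's mu and sigma, Python's parameterization, not @RISK's.)
base = [random.lognormvariate(0, 1) for _ in range(10_000)]
shifted = [x + shift for x in base]   # what RiskShift(3.7) does to every sample

assert min(shifted) > shift           # the lower bound moved from 0 up past 3.7
mean_base = sum(base) / len(base)
mean_shifted = sum(shifted) / len(shifted)
assert abs(mean_shifted - mean_base - shift) < 1e-6   # the mean moves by the shift
```

The same idea holds for a negative shift: every sample, and hence every location statistic such as the mean and the percentiles, moves by exactly the shift amount.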
Shift factor in fitted distributions In fitting distributions to data, the purpose of the shift factor is to allow fitting a particular distribution type because it has the right shape, even though the values in the fitted distribution might actually violate the defined parameter limits for that distribution. For example, the 2-parameter log-normal distribution defined in @RISK cannot return negative numbers. But suppose your data have a log-normal shape but contain negative numbers. @RISK inserts a negative shift factor in the fitted distribution, thus shifting it from the usual position of a log-normal to the position that best approximates your data. In effect, this makes a 3-parameter version of the log-normal distribution. See also: Truncate and Shift in the Same Distribution Additional keywords: log normal, Lognorm, RiskLognorm Last edited: 2017-03-29 3.8. Cutting Off a Distribution at Left or Right Applies to: @RISK 5.x–7.x I have a regular distribution, but I want to truncate one tail. For example, maybe I have a RiskNormal(50,10) and I want to ensure that it never goes below 0. The RiskTruncate property function limits the sampling of a distribution. Specify only a lower bound, only an upper bound, or both lower and upper bounds. With any such truncation, the "lost" probability is redistributed proportionally across the remaining range of the interval. This is better than using Excel's MIN and MAX functions, which distort the distribution by taking all the probability beyond the truncation point and adding it to the truncation point. You can set truncation limits by editing formulas, or in the Define Distribution dialog (later in this article). Either way, the limits can be fixed numbers or cell references, although the examples in this article all use fixed numbers. • To specify only a minimum, with maximum unbounded, just omit the maximum argument of the RiskTruncate function.
For example, in the RiskTruncate function below, the minimum has been specified as 0, but the maximum is +∞: RiskNormal(10, 5, RiskTruncate(0, ) ) The comma is optional when you specify only a minimum: RiskNormal(10, 5, RiskTruncate(0) ) • Likewise, you can specify only a maximum, with minimum unbounded, by omitting the minimum argument. (Notice the required comma before the maximum.) This example specifies a minimum of −∞ and maximum of 15: RiskNormal(10, 5, RiskTruncate(, 15) ) • Finally, you can specify both minimum and maximum, by supplying both arguments. This example specifies a minimum of 2 with a maximum of 15: RiskNormal(10, 5, RiskTruncate(2, 15) ) You're not limited to naturally unbounded functions; you can also truncate a bounded function like a RiskTriang. For example, if you want a RiskTriang(100,200,300) shape, but with no values above 250, code it this way: RiskTriang(100, 200, 300, RiskTruncate(, 250) ) If you prefer not to edit property functions in formulas, you can enter one-sided or two-sided truncations in the Define Distribution window: 1. Right-click the Excel cell with the distribution function you want to truncate, and choose @RISK » Define Distributions from the popup menu; or, left-click the cell and then click the Define Distributions icon in the ribbon. The Define Distribution dialog appears. 2. In the left-hand section of the dialog, find the Parameters entry and click into the box that says Standard. 3. A drop-down arrow appears at the right of that box; click the arrow. 4. Check (tick) the "Truncation Limits" box, and select Values or Percentiles at the right. Click OK. 5. Specify a minimum by entering it in the box labeled "Trunc. Min", or specify a maximum by entering it in the box labeled "Trunc. Max". Again, these can be fixed numbers or cell references. If you leave the minimum empty, the distribution will not be truncated at left; if you leave the maximum empty, the distribution will not be truncated at right. 
As soon as you enter either a minimum or maximum, the RiskTruncate function appears as an argument of the distribution function in the cell formula displayed at the top of the Define Distribution window. Because the default minimum parameter is −∞, and the default maximum parameter is +∞, the parameter you do not specify is automatically omitted from the RiskTruncate function. 6. Click the OK button to write the formula to Excel. If you often use truncation limits in your distributions, you can configure @RISK to make the "Trunc. Min" and "Trunc. Max" boxes a regular part of the Define Distributions dialog box. In @RISK, click Utilities » Application Settings » Distribution Entry and change Truncation Limits to Always Displayed (Values) or Always Displayed (Percentiles). See also: Last edited: 2016-09-01 3.9. Truncate and Shift in the Same Distribution Applies to: @RISK for Excel, all releases My @RISK distribution function is not obeying the minimum and maximum set by the truncation function. Here is the function I am using: =RiskPearson5(47, 6018, RiskShift(-78), RiskTruncate(11,100)) But in a simulation, I am getting many values below 11. What is the problem? We tend to think of truncation as applying limits after shifting. However, when simulating a distribution function, @RISK always truncates first and then shifts, regardless of the order of these arguments in the distribution function. In your example, after truncating at 11 and 100, @RISK shifts the distribution left 78, so that the actual min and max for your Pearson5 are –67 and 22. You need to take this into account when figuring out how to manipulate the function to get the desired result. Subtract your desired shift factor from your desired final limits for the distribution. 
For example, if you want a Pearson5 that is truncated at 11 and 100 after shifting left by 78 units, compute the pre-shift truncation limits as 11–(–78) = 89 and 100–(–78) = 178, and code your function this way: =RiskPearson5(47, 6018, RiskShift(-78), RiskTruncate(89,178)) The truncation limits 89 to 178 before shifting become your desired limits 11 to 100 after shifting. See also: Last edited: 2015-06-19 3.10. Statistics for an Input Distribution Applies to: @RISK 5.x–7.x How can I place the mean or a given percentile of an input distribution in my workbook? Can I choose between simulation results and the perfect theoretical statistics? @RISK has two sets of statistic functions that can be applied to inputs. The statistics of the theoretical distributions all have "Theo" in their names — RiskTheoMean( ), RiskTheoPtoX(), and so forth. You can get a list by clicking Insert Function » Statistic Functions » Theoretical. All the same statistics are available for simulated results; click Insert Function » Statistic Functions » Simulation Result. The theoretical ("Theo") functions return the correct values before a simulation runs, during a simulation, and after a simulation, and they don't change unless you change the distribution parameters. The simulation-result (non-"Theo") functions change with each simulation, within ordinary statistical variability. Before the first simulation, they don't return meaningful numbers; during a simulation, they return #N/A. You can change them to be computed at every iteration, except in @RISK 5.0 — see "No values to graph" Message / All Errors in Simulation Data — but if you need those statistics during a simulation the better approach is usually to use the "Theo" functions. Last edited: 2015-06-26 3.11. Statistics for Just Part of a Distribution Applies to: @RISK 5.x–7.x I want to get the mean and standard deviation for just part of my input distribution.
If I enter truncation limits in the Define Distribution window or include RiskTruncate( ) in the distribution formula, then the mean and standard deviation of my distribution change, and that is not what I want. I want the regular distribution to be simulated, but then after simulation I want to consider only part of it when computing the statistics. I have applied a filter, but the statistics functions are still computed on the whole of the output distribution. Is there a way to get the mean of the filtered data set using RiskMean? To use the whole distribution in simulation but then get the statistics of just a portion of it, put a RiskTruncate( ) or RiskTruncateP( ) function inside the RiskMean( ). A very minimal example: • A1 contains: =RiskNormal(100,10). • A2 contains: =RiskMean(A1, RiskTruncate(95)), which computes the mean of the part of the distribution from 95 to infinity. This is equivalent to =RiskMean(A1, RiskTruncate(95, 1E+99)). RiskTruncate( ) specifies truncation limits by values. • A3 contains: =RiskMean(A1, RiskTruncateP(0.8,1)), which computes the mean of the part of the distribution from the 80th to the 100th percentile, the top 20% of the distribution. RiskTruncateP( ) specifies truncation limits by percentiles. The other statistics functions can have RiskTruncate( ) or RiskTruncateP( ) applied in the same way. Thus you can get the mean of part of a distribution, percentiles of part of a distribution, standard deviation of part of a distribution, and so on. About accuracy of theoretical statistics: Most distributions have no closed form for the mean of a truncated distribution. Therefore, if you're using a statistic function such as RiskTheoMean( ) with RiskTruncate( ) or RiskTruncateP( ), @RISK has to do a little mini-simulation to approximate the theoretical mean of the truncated distribution. This may differ from the actual theoretical mean by a small amount, usually not more than a percent or two.
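As a cross-check outside @RISK, the "statistics of part of a distribution" idea can be mimicked in plain Python: simulate the full distribution, then average only the samples inside the truncation window. This is an illustrative sketch, not @RISK code; the bounds in the assertions come from the theoretical truncated-normal means (roughly 105.1 and 114.0):

```python
import random
import statistics

random.seed(42)
samples = sorted(random.gauss(100, 10) for _ in range(100_000))

# Analogue of =RiskMean(A1, RiskTruncate(95)): mean of the samples from 95 upward.
mean_above_95 = statistics.mean(x for x in samples if x >= 95)

# Analogue of =RiskMean(A1, RiskTruncateP(0.8, 1)): mean of the top 20% of samples.
top_20 = samples[int(0.8 * len(samples)):]
mean_top_20 = statistics.mean(top_20)

# Theoretical truncated-normal means are about 105.1 and 114.0; with 100,000
# samples the estimates should land close to them.
assert 104 < mean_above_95 < 106.5
assert 112.5 < mean_top_20 < 115.5
```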
With a truncated simulated distribution, using a statistic function such as RiskMean( ) with RiskTruncate( ) or RiskTruncateP( ), @RISK uses actual simulation data. Thus results are accurate with respect to that simulation, but another simulation with a different random number seed would of course give slightly different results. See also: Cutting Off a Distribution at Left or Right for truncating an input distribution and using only the truncated distribution in simulation. Last edited: 2018-05-09 3.12. All Articles about RiskMakeInput Applies to: @RISK 5.x–7.x The RiskMakeInput( ) function seems to have a lot of capabilities. Can you give me an overview? The short answer is: RiskMakeInput lets you treat a formula as though it were an @RISK input distribution. That seemingly simple statement has a lot of implications, which we explore in various Knowledge Base articles: Special applications: Last edited: 2019-02-15 3.13. Event or Operational Risks Applies to: @RISK 5.x–7.x A risk has a certain chance of occurring, let's say 40%. If it does occur, there's a probability distribution for its severity; let's say a Triang. I've been multiplying RiskBinomial(1, 0.4) by my RiskTriang. Should I do anything in my risk register beyond just multiplying? Caution: The technique in this article is intended only for "light-switch" risks that either happen once or don't happen. For risks that could happen multiple times in one iteration, please see Combining Frequency and Severity and use RiskCompound. Probably you should: wrap the multiplication inside a RiskMakeInput function. If probability is in cell C11 and impact in C12, your function for actual impact in any given iteration would look like this: =RiskMakeInput(C11*C12) If you wish, you can give it a name: =RiskMakeInput(C11*C12, RiskName("my name for this risk") ) Why introduce an extra distribution instead of just multiplying? Don't they get the same answers? Nearly the same, though not identical. Here's why.
Suppose your simulation has 10,000 iterations, and your risk has a 40% probability of occurring. There are 10,000 values of your RiskTriang for the 10,000 iterations. Only 4,000 of them (40% of the 10,000) get used, but statistics and graphs will all report based on all 10,000 values. RiskMakeInput treats the product as a distribution, so that now you have 6,000 zero values and 4,000 non-zero values, and the statistics reflect that. But RiskMakeInput makes the greatest improvement in your tornado graphs. Without RiskMakeInput, you might get a bar in your tornado for the RiskBinomial, or for the RiskTriang, or both, or neither. With RiskMakeInput, if the risk is significant you get one bar in the tornado, and if the risk isn't significant there's no bar for it. The attached example shows a risk register, both with plain multiplication and with improvement by way of RiskMakeInput. Run a simulation. Though the two output graphs don't look very different, the two tornado graphs show very different sets of bars. (In this particular example, most of the tornado bars in Method A come from the RiskBinomial functions, which probably isn't helpful.) Also, with plain multiplication in Method A, there's no way to get an accurate graph of the impacts in all 10,000 iterations; with RiskMakeInput, just click on one and click Browse Results. Can I correlate RiskMakeInput? Unfortunately, no. This is a limitation of RiskMakeInput, and of the plain multiplication method also. There is a workaround in Correlating RiskMakeInput or RiskCompound, Approximately. See also: All Articles about RiskMakeInput Additional keywords: Event risk, operational risk, Risk register Last edited: 2018-10-24 3.14. Combining Frequency and Severity Applies to: @RISK 5.x–7.x I have a risk that may or may not occur, or it might occur a variable number of times. But the impact or severity of each occurrence is a probability distribution, not a fixed number. How can I model this in @RISK?
The RiskCompound function, available in @RISK 5.0 and later, is the solution. It takes two arguments: a discrete function for frequency or probability, and a discrete or continuous function to govern severity or impact. (Two additional arguments are optional; see How are the deductible and limit applied, below.) Suppose the impact or severity follows RiskNormal(100,10). If you want to say that the risk may or may not occur, and has 40% probability of occurrence, code it this way: =RiskCompound(RiskBinomial(1,0.4), RiskNormal(100,10)) (For more about a risk that can occur only zero times or one time, see Event or Operational Risks.) If you want to say that the risk could occur a variable number of times, choose one of the discrete distributions for frequency. For example, if you choose a Poisson distribution with mean 1.4 for the distribution of possible frequencies, then your complete RiskCompound function would be =RiskCompound(RiskPoisson(1.4), RiskNormal(100,10)) In any iteration where the frequency is greater than 1, @RISK will draw multiple random numbers from the severity distribution and add them up to get the value of the RiskCompound for that iteration. (There is no way to get at the individual severity values that were drawn within one iteration.) Must frequency and severity be @RISK distributions, or can they be references to cells that contain formulas? You can embed the frequency and severity distributions within RiskCompound( ), as shown above, or use cell references for frequency and severity and have those distributions in other cells. There are two caveats: • Performance: If your frequency is large, or if you have many RiskCompound functions, your simulation will run faster — possibly much faster — if you embed the actual severity distribution within the RiskCompound( ). Using a cell reference for the frequency distribution doesn't hurt performance.
(The attached CompoundExploration.xls uses cell references to make the discussion easier to follow, but it is a very small model and so performance is not a concern.) • Calculation: If the severity argument is a cell reference, and the referenced cell contains an @RISK distribution, then the severity will be evaluated multiple times in an iteration, just as if the severity were physically embedded in the RiskCompound( ) function. For instance, suppose that the severity argument points to a cell that contains a RiskTriang( ) distribution, either alone or within a larger formula. If the frequency distribution has a value of 12 in a given iteration, then the referenced formula will be re-evaluated 12 times during that iteration, and the 12 values added together will be the value of the RiskCompound( ). But if the referenced cell does not contain any @RISK distributions, it will be evaluated only once per iteration, even if the cell contains a formula that ultimately refers to an @RISK distribution. For example, consider the function =RiskCompound(F11,S22), and suppose that on one particular iteration the frequency value in F11 is 12. If the severity cell S22 contains a formula such as =RiskNormal(B14,B15)+B16*B17, it will be evaluated 12 times during this iteration, and the value of the RiskCompound will be the sum of those twelve values. But if the severity cell S22 contains a formula such as =LOG(B19), and B19 contains a RiskNormal( ) function, the formula will be evaluated only once in this iteration, and the value of the RiskCompound( ) for this iteration will be 12 times the value of that formula. You can think of it this way: RiskCompound( ) will drill through one level of cell referencing to find distributions, but only one level. What if the frequency distribution is a continuous distribution? How does @RISK decide how many severity values to add up? "Frequency" implies a number of occurrences, which implies a whole number (0 or a positive integer).
Therefore we recommend a discrete distribution, returning whole numbers, for the frequency. But if you use a continuous distribution, or a discrete distribution returning non-integers, @RISK will truncate the value to an integer. For example, if your frequency distribution returns a value of 3.7, @RISK will draw three values from the severity distribution, not four. How are the deductible and limit applied in a RiskCompound( ) function? Is it on a per-occurrence or an aggregate basis? RiskCompound( ) takes up to four arguments: RiskCompound(dist1, dist2, deductible, limit) Both deductible and limit are applied per occurrence. For example, suppose that the frequency distribution dist1 has a value of 6 in a particular iteration. Then the severity distribution dist2 will be drawn six times, and deductible and limit will be applied to each of the six. The limit argument to RiskCompound( ) is meant to be the actual maximum payout or exposure per occurrence. If the actual maximum payout is the policy limit minus the deductible, then you should use the actual maximum payout for the fourth argument to the RiskCompound( ) function. For each sample drawn from dist2, out of the multiple samples during an iteration, the result returned is MIN( limit, MAX( sample - deductible, 0 ) ) In words: 1. If sample is less than or equal to deductible, zero is returned. 2. If sample is greater than deductible and (sample minus deductible) is less than limit, (sample minus deductible) is returned. 3. If (sample minus deductible) is greater than or equal to limit, limit is returned. Again, limit and deductible are applied to each of the samples of dist2 that are drawn during a given iteration. Then the values of all the occurrences are summed, and the total is recorded as the value of the RiskCompound( ) function for that iteration. (It's not possible to get details of the individual occurrences within an iteration.) 
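The per-occurrence rule above is straightforward to sketch outside @RISK. In this hedged Python illustration (not Palisade code; the frequency and severity choices are arbitrary examples), each occurrence is passed through MIN(limit, MAX(sample - deductible, 0)) and the results are summed, mirroring one iteration of RiskCompound:

```python
import random

def apply_layer(sample, deductible, limit):
    """Per-occurrence payout: MIN(limit, MAX(sample - deductible, 0))."""
    return min(limit, max(sample - deductible, 0))

def compound_one_iteration(freq_draw, severity, deductible=0.0, limit=float("inf")):
    """One iteration's value of a RiskCompound-style sum of layered occurrences."""
    n = int(freq_draw)   # non-integer frequency draws are truncated, not rounded
    return sum(apply_layer(severity(), deductible, limit) for _ in range(n))

# Spot-check the per-occurrence rule with deductible 10 and limit 50:
assert apply_layer(5,   10, 50) == 0     # at or below the deductible: nothing paid
assert apply_layer(30,  10, 50) == 20    # sample minus deductible
assert apply_layer(200, 10, 50) == 50    # capped at the limit

# A frequency draw of 3.7 yields three severity draws, each layered, then summed.
random.seed(7)
value = compound_one_iteration(3.7, lambda: random.gauss(100, 10),
                               deductible=10, limit=50)
```

With three occurrences and a per-occurrence limit of 50, the iteration total can never exceed 150, whatever the severity draws are.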
You can download the attached workbook to try various possibilities for RiskCompound. See also: All Articles about RiskCompound Last edited: 2018-08-09 3.15. All Articles about RiskCompound Applies to: @RISK 5.x–8.x The RiskCompound( ) function seems pretty complex. Can you give me an overview? The short answer is: RiskCompound lets you model a risk that could occur a varying number of times, with different severities—and model it in one function. The main article is first in the list below, and the others explore specialized issues. Special applications: Last edited: 2021-11-18 3.16. Sum of Distributions Must Equal Fixed Value Applies to: @RISK 4.x–7.x I have several continuous distributions that vary independently, but I need them always to add up to a certain value. Is there any way to accomplish this? Please see the attached example. The technique is to let the distributions vary randomly, but have an equal number of helper cells. Each helper cell is a scaled version of the corresponding distribution. "A scaled version" means that the first helper cell equals the first actual distribution multiplied by the desired total and divided by the actual total, and similarly for each of the helper cells. In this way you are guaranteed that the helper cells always add up to the desired value. Your workbook formulas should all refer to the helper cells, and not to the original distributions. If you want to record the values of the helper cells during a simulation, you can designate them as @RISK output cells. Please note that this technique is suitable for continuous distributions. If you need discrete distributions to add up to a fixed total, you can't use this technique because the scaled versions usually won't be whole numbers. Additional keywords: Total of distributions equals constant, Fixed value for total Last edited: 2016-12-15 3.17. Multinomial Distribution Applies to: @RISK 5.x and newer Does @RISK have a multinomial distribution? 
The multinomial distribution is a generalized form of the binomial distribution. In a binomial, you have a fixed sample size or number of trials, n. Every member of the population falls into one of two categories, usually called "success" and "failure". The probability of success on any trial is p, and the probability of failure on any trial is 1–p. The RiskBinomial distribution takes the parameters n and p, and at each iteration it returns a number of successes. The number of failures in that iteration is implicitly n minus the number of successes. In a multinomial, you have three or more categories, and a probability is associated with each category. The total of the probabilities is 1, since each member of the population must be a member of some category. As with the binomial, you have a fixed sample size, n. At each iteration you want the count of each category, and the total of those counts must be n. @RISK doesn't have a multinomial distribution natively, but you can construct one using binomial distributions and some simple logic. This workbook shows you how to do it. Last edited: 2016-03-18 3.18. Cumulative Probability Applies to: @RISK 5.x and newer Excel has functions like NORM.DIST (NORMDIST in older Excels) to return the cumulative probability in a normal distribution. Does @RISK have anything like that? Yes, @RISK has functions to find the cumulative probability for any distribution. Instead of a separate cumulative-probability function for each distribution, @RISK uses the same function for cumulative probability of any distribution. Actually, there are two functions, one to obtain simulation results and one to query the theoretical distribution. • Suppose you have an @RISK input or output, or even just an Excel formula, in cell AB123. To obtain the cumulative probability to the left of x = 14, for the most recent simulation, use the function =RiskXtoP(AB123,14). This function won't return a meaningful value until after a simulation has been run. 
• For @RISK distributions, you can access the theoretical distribution. For example, if you have =RiskNormal(100,10) in cell XY234, the function =RiskXtoP(XY234,120) will return 0.97725, give or take, but varying from one simulation to the next. But the "theo" function, =RiskTheoXtoP(XY234,120) will return the exact theoretical cumulative probability, limited only by the accuracy of floating point. The theoretical value is not dependent on running a simulation. With the "theo" functions, you can even embed the distribution right in the function, as for instance =RiskTheoXtoP(RiskNormal(100,10), 120).
Instead of the probability from –∞ to an x value, how can I get the probability between two x values?
Just subtract the two cumulative probabilities. For example, the cumulative probability of cell PQ456 between x = 7 and 22 would be =RiskXtoP(PQ456,22) – RiskXtoP(PQ456,7).
How do I get the probability density, which Excel returns when the last argument of NORM.DIST is FALSE?
The probability density is simply the height of the curve at a given x value. Use RiskTheoXtoY instead of RiskTheoXtoP. (The RiskTheoXtoY function was added in @RISK 6.0 and is not available in @RISK 5.x.)
Last edited: 2017-05-04
3.19. Specifying Distributions in Terms of Percentiles
Applies to: @RISK 5.x–7.x
I know what distribution I want to use, but I want to specify it in terms of percentiles rather than with the usual parameters. Is there a way?
For many distributions, you can. We use the term "Alternate Parameters" for specifying at least one percentile in place of a usual parameter like mean, most likely, alpha, and so forth. In the Define Distributions dialog, select the Alt. Parameters tab, and you'll see the distributions that can be specified in terms of percentiles. Double-click your desired distribution to select it. A dialog will open with some suggested percentiles, and you can change the values in that dialog as usual.
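For the simplest case, the normal distribution, two percentiles pin down the mean and standard deviation in closed form, since every percentile satisfies x = mean + sd·z_p. A quick sketch in plain Python (illustrative only; this is not @RISK's code, and the example numbers are hypothetical):

```python
from statistics import NormalDist

def normal_from_percentiles(p1, x1, p2, x2):
    """Recover (mean, sd) of the normal whose p1-th and p2-th
    percentiles are x1 and x2.  Each percentile satisfies
    x = mean + sd * z_p, giving two linear equations."""
    z1 = NormalDist().inv_cdf(p1)
    z2 = NormalDist().inv_cdf(p2)
    sd = (x2 - x1) / (z2 - z1)
    mean = x1 - sd * z1
    return mean, sd

# A normal with mean 100 and sd 10 has 5th/95th percentiles
# of about 83.551 and 116.449; the conversion recovers them:
mean, sd = normal_from_percentiles(0.05, 83.551, 0.95, 116.449)
```

Most other distributions have no such closed form, which is why an iterative resolution is needed.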
But quite possibly you'll want to specify different percentiles from the suggested ones — for example, the 10th and 90th instead of the 5th and 95th. To change which percentiles are used, click the drop-down arrow at the right of the word Alternate to open the Parameters dialog. If you open the Parameters dialog, you can change which percentiles are used, and by selecting the radio buttons you can even define the distribution based on a mix of percentiles and standard parameters. For more, with a screen shot, please search for Alternate Parameters in @RISK's help file. How do percentile parameters work internally? Does @RISK convert them to standard parameters? Yes, @RISK resolves percentile parameters into standard parameters. This has to be done in every iteration in a simulation, because it's possible for your workbook's logic to change the parameters of a distribution from one iteration to the next. In general terms, resolving alternate parameters is a kind of optimization problem. Say you have a potential candidate for the resolved (non-Alt) distribution. You can calculate an error for this candidate by computing the difference between the desired percentiles specified in the Alt function, and the percentile values your candidate actually has. Finding the correct non-Alt distribution requires juggling the parameters until that error goes to zero. That's the simplest method, and indeed you could use Palisade's Evolver or RISKOptimizer, or Excel Solver, to resolve alternate parameters yourself using this method. But there's a problem with this method: it's just not fast enough, especially for complicated cases like the BetaGeneralAlt with its four parameters. The time to solve an optimization problem goes way up as you increase the dimensionality. 
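That error-minimization idea can be sketched in a one-parameter toy case. The sketch below (hypothetical, stdlib Python, nothing like @RISK's actual proprietary routines) bisects on the σ of a normal with a known mean until the candidate's 95th percentile matches the requested one:

```python
from statistics import NormalDist

def solve_sigma(mean, x_target, p_target, lo=1e-9, hi=1e6):
    """Find sd so that Normal(mean, sd) has its p_target percentile
    at x_target, by driving the percentile error to zero by bisection.
    Assumes x_target > mean, so the CDF at x_target falls as sd grows."""
    for _ in range(200):
        mid = (lo + hi) / 2
        err = NormalDist(mean, mid).cdf(x_target) - p_target
        if err > 0:   # CDF too high: candidate's sd is too small
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

sigma = solve_sigma(100, 116.449, 0.95)   # close to 10
```

A real Alt function needs several parameters resolved at once, in every iteration, which is why the dimension-reducing tricks matter.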
If you were resolving parameters just one time, it would probably be fine to do it in this brute-force way; but, given the possibility of different parameters in each iteration, the resolution process needs to be really fast. Fortunately, we can usually reduce the dimensionality of the optimization. For example, with BetaGeneralAlt, some tricky math reduces the problem from four dimensions to two. (The differing mathematical tricks for each distribution are proprietary. Our developers put a lot of work into making them as efficient as possible.)
Can I see the standard parameters that @RISK computes from the percentiles?
Yes, please see Distribution Parameters from "Alt" Distributions.
Last edited: 2018-07-02
3.20. Specifying Distributions in Terms of Desired Mean and Standard Deviation
Applies to: @RISK 4.x–7.x RISKOPTIMIZER 1.x–5.x
I want a BetaGeneral distribution with a given mean and standard deviation. What α1 and α2 (alpha1 and alpha2) should I enter in the Define Distribution window? Can I do this if I know other statistics, such as the mode and the variance? Can I do this for other types of distribution?
Let's take the easiest alternatives first. If you have @RISK 5.5.0 or newer, the JohnsonMoments distribution is available. It lets you specify mean, standard deviation, skewness, and kurtosis, and it comes up with an appropriate distribution shape for those parameters. If you have a particular distribution in mind and you want to target percentiles (including the median), you may be able to use a form of the distribution that specifies percentiles in place of one or more parameters. In Define Distribution, select one of the distributions from the Alt. Parameters tab, and @RISK will then calculate the needed parameters for you.
If those alternatives don't meet your needs, you may be able to solve for the distribution parameters that give the desired mean and standard deviation (or other statistics) for your desired distribution.
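For the specific case of a BetaGeneral with a target mean and standard deviation, a closed form does exist: rescale to a unit beta and moment-match. This is standard beta-distribution algebra, sketched here as plain Python rather than anything from @RISK itself:

```python
def betageneral_params(vmin, vmax, mean, sd):
    """Return (alpha1, alpha2) of the BetaGeneral on [vmin, vmax]
    that has the requested mean and standard deviation,
    by moment-matching the underlying unit beta."""
    m = (mean - vmin) / (vmax - vmin)   # mean of the unit beta
    v = (sd / (vmax - vmin)) ** 2       # variance of the unit beta
    common = m * (1 - m) / v - 1        # must be positive for a valid beta
    if common <= 0:
        raise ValueError("no beta distribution matches this mean and sd")
    return m * common, (1 - m) * common

# alpha1 = alpha2 = 2 gives mean 0.5 and sd sqrt(0.05) on [0, 1]:
a1, a2 = betageneral_params(0.0, 1.0, 0.5, 0.05 ** 0.5)
```

When no closed form is available, the Solver or RISKOptimizer approach described next is the fallback.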
It all depends on whether closed forms exist for your desired statistics. (A closed form, for this purpose, is an algebraic formula that can be implemented in Excel. If you're targeting standard deviation, use the square root of the variance formula.) There are two places to look for these closed forms. In @RISK 5.x and newer, and in RISKOptimizer 5.x, look in the product's help topic for the particular distribution function that is of interest. In @RISK 4.x or RISKOptimizer 1.x, click the Windows Start button, then Programs or All Programs, then Palisade DecisionTools, then Online Manuals, then Distribution Function Summary. The attached example shows how to find alpha1 and alpha2 for a BetaGeneral that give a desired mean and standard deviation. Please download FindDistributionParams.xls and open it in RISKOptimizer or in Excel. All constraints and options are already set as needed. The green cells are the desired statistics for the distribution, here minimum, maximum, mean, and standard deviation. The red cells are the adjustable cells for Solver or RISKOptimizer; they're arbitrarily set to 1 at the start. The purple cells are the formulas for mean and standard deviation, in terms of the adjustable red cells. As you see, when α1 = α2 = 1 in a BetaGeneral distribution, the mean is 57.5 and the standard deviation is 24.5. These deviate from the desired values by a total of 27.04 units, the "error to minimize" in blue. RISKOptimizer or Solver is given the blue cell as the target to minimize. When you run RISKOptimizer or Solver, it adjusts the red α1 and α2 until it converges on parameters that give the desired mean and standard deviation, or as close as possible to them. Variations on the example: If you want to target different statistics, such as kurtosis and mode or skewness and mean, change the captions A21:A24 and the formulas E27 and H27. 
If you're interested in a different distribution, you may need to change the captions D21:D22 in addition to the above, and you may also need to edit the constraints in RISKOptimizer or Solver. (In a BetaGeneral distribution, α1 and α2 must be positive, but parameters for many other distributions have different constraints.) If the distribution has three parameters or more, insert the additional parameters and add appropriate RISKOptimizer or Solver constraints. Additional keywords: RiskJohnsonMoments distribution, Johnson Moments Last edited: 2015-06-19 3.21. Combining Estimates from Several People Applies to: @RISK 5.x–7.x Several people gave me their assessments of the likely impact of a risk or a benefit, but naturally their estimates vary. Also, I have higher confidence in some opinions than others. How can I combine these assessments in @RISK? We often say in ordinary language that we give more weight to one thing than another in making a decision, and it's the same in this situation. You want to set up a little table of weights and @RISK probability distributions, and the question then is how to give each distribution the appropriate weight. It would be easy to take the weighted average of the distributions, but that causes the extreme opinions to be under-represented. There are many possible approaches that don't have that problem, and the attached workbook shows four of them. • Sheet1 lets the contributors specify different distributions, not just different parameters to the same distribution. The weights are converted to percentages, and then using the number of iterations (which you specify) each distribution is sampled for the appropriate number of iterations. • Sheet1A is similar, and in fact it uses the exact same distributions as Sheet1. But it uses a RiskDiscrete function to sample the individual distributions in the appropriate proportions. This one does not need you to place the number of iterations in the workbook. 
• Sheet2 takes a different approach, computing weighted averages of the cumulative probabilities (the CDFs, not the PDFs). This could have been done with different distributions like Sheet1, but we also took the opportunity to show how you could set up a table of pessimistic, most likely, and optimistic cases and use the same distribution for all of them.
• Sheet3 uses a multinomial distribution. Over the course of the simulation, each of the five distributions is sampled in the appropriate proportion, based on the weights.
In all four cases, the combined function is wrapped in a RiskMakeInput function. That ensures that only the combined distribution, not the individual assessments, will show up in sensitivity graphs and figures. See also: All Articles about RiskMakeInput.
Last edited: 2016-03-28
3.22. Algorithm Used by RiskJohnsonMoments
Applies to: @RISK 5.5.0–7.x
The @RISK documentation says, "RiskJohnsonMoments(mean,standardDeviation,skewness,kurtosis) chooses one of four distribution functions (all members of the so-called Johnson system) that matches the specified mean, standard deviation, skewness, and kurtosis. This resulting distribution is either a JohnsonSU, JohnsonSB, lognormal, or normal distribution." How does @RISK choose among the four underlying distributions?
We use Algorithm AS 99, Journal of the Royal Statistical Society Series C (Applied Statistics) vol. 25, p. 180–189 (1976). This article is available through JSTOR.
Additional keywords: JohnsonMoments distribution, Johnson Moments
Last edited: 2015-06-19
3.23. Distribution Parameters from "Alt" Distributions
Applies to: @RISK 5.x–7.x
When I specify a distribution in terms of percentiles or "alt parameters", how does @RISK figure out the parameters of the distribution?
That's a good question. If you have a RiskPertAlt or RiskTriangAlt, for example, @RISK finds what parameters of a standard RiskPert or RiskTriang would give the percentiles you specified. But there's no formula.
Instead, @RISK has to use a process of successive approximations to find the right parameters for the RiskPert. And it's the same for all the other Alt distributions, as well as RiskTrigen, which specifies two of a triangle's three parameters as percentiles. RiskUniformAlt is the exception; see below if you want to know the theory. How can I find out what standard parameters @RISK computes for Alt functions? 1. In the Define Distribution window's left-hand column, click the drop-down arrow next to Parameters: Alternate. 2. A Parameters dialog opens; clear the check box for Alternate Parameters in that dialog and click OK. 3. In the Define Distribution dialog, click OK to write the non-Alt function to the cell, or Cancel to keep the Alt function. For example, paste this formula into an empty cell: Press Enter, and then click Define Distributions. (As an alternative, you could click Define Distributions on an empty cell, select the distribution, and enter the parameters in the dialog.) Click the drop-down arrow to the right of Alternate, remove the check mark for Alternate Parameters, and click OK just once. The display now shows the equivalent regular parameters, α1=1.295682, α2= 1.121222, Min=-4.990886, Max=17.204968. (Because these are rounded values, some statistics and percentiles may be slightly different from their values in the Alt distribution.) The full non-Alt distribution is shown in the Cell Formula box near the top of the Define Distributions dialog: If you now click OK, @RISK will replace the Alt distribution in your worksheet with that non-Alt distribution; if you click Cancel, the Alt distribution will remain in your worksheet. I have to convert a number of Alt functions to standard parameters. Is there some way to do this with worksheet functions? For some Alt functions, yes. The attached workbook gives examples. 
If the standard parameters of a distribution are statistics like min, max, and mean, you can use RiskTheoMin and other "Theo" statistic functions to find those parameters. The triangular distribution, for example, has parameters of min, mode (most likely), and max, and you can get them by applying those "Theo" functions to the TriangAlt or Trigen.
If the parameters don't map directly to statistic functions, but there are formulas in the help file, you can solve those formulas to find the parameters. For instance, the help file says that the mean and variance of a BetaGeneral are
μ = min + α[1](max−min)/(α[1]+α[2])
σ² = α[1]α[2] (max−min)² / ( (α[1]+α[2])² (α[1]+α[2]+1) )
Solving for α[1] and α[2] gives
α[2] = ( (μ−min)(max−μ)/σ² − 1 ) (max−μ) / (max−min)
α[1] = α[2] (μ−min) / (max−μ)
The attached workbook shows about a dozen examples, mostly less complicated than that. Unfortunately, not all distributions have closed-form expressions for the statistics in terms of the distribution parameters; for those, the only choice is the method above using the Define Distribution window.
What about RiskUniformAlt? I'm curious how @RISK can use a formula to convert it to standard parameters, when the other Alt distributions require successive approximations.
This section shows the algebraic solution for those who are interested, although the techniques given above are quicker and simpler. Unlike all the other Alt functions, @RISK uses formulas to convert RiskUniformAlt to the equivalent non-Alt function. Consider RiskUniformAlt(C1,x1,C2,x2) where C1 and C2 are cumulative ascending percentiles >0 and <1. How is that converted to the equivalent RiskUniform(min,max)?
The CDF (cumulative distribution function) for RiskUniformAlt is a straight line passing through your desired percentiles (x1,C1) and (x2,C2). But the same straight line also passes through (min,0) and (max,1), although you don't yet know the values of min and max.
Therefore the equation of the CDF is
C = (x − min) / (max − min)
Substituting your two desired percentiles (x1,C1) and (x2,C2) gives C1⋅(max − min) = x1 − min and C2⋅(max − min) = x2 − min. Solving those as simultaneous equations in min and max gives the formulas
min = (x1⋅C2 − x2⋅C1) / (C2 − C1)
max = min + (x2 − x1) / (C2 − C1)
Additional keywords: Standard parameters, Alt parameters, Standard distributions, Alt distributions
Last edited: 2018-07-05
3.24. Turning Inputs On and Off
Applies to: @RISK 5.x–7.x
Is there an easy way to turn input variables on and off? I would like to try a simulation with some of the variables turned off, but I don't want to replace the distribution formulas with constants because I will want the distributions to vary again in later simulations.
Yes, you can lock any inputs. To lock an input, do any of these:
• In the Model Window, right-click the input and select Lock Input from Sampling. (You can select multiple distributions with Shift-click or Ctrl-click, and lock all of them in one operation.)
• In the worksheet, right-click the distribution and select Define Distributions. In the Define Distribution dialog, click the fx icon near the top right to open the Properties dialog. On the Sampling tab of that dialog, click Lock Input From Sampling.
• Edit the distribution formula in the worksheet to insert a RiskLock( ) property function.
During a simulation, a locked input always returns its static value (if specified) or alternatively, its expected value, or the value specified through the options under When a Simulation is Not Running, Distributions Return of the Simulation Settings dialog.
Last edited: 2015-06-19
3.25. RiskSixSigma Property Function
Applies to: @RISK 5.x–7.x
Where should I use the RiskSixSigma( ) property function? Should it be part of RiskOutput( ) or RiskCpk( )?
The standard way is to use RiskSixSigma( ) with RiskOutput( ).
That tells @RISK to put six-sigma labels on the graphs, and extra statistics in the statistics grid. It also lets you use the six-sigma worksheet functions to calculate Cpk and many other statistic functions. (In @RISK, click Insert Function » Statistic Functions » Six Sigma.) For example, suppose you have an output in cell A1 that looks like this: If you put the formula =RiskCpk(A1) in another cell, @RISK will do the Cpk calculation using those LSL/USL/Target values. By doing it this way, you associate the six-sigma properties with the calculated output, and all statistic functions will use the same values.
Does that mean that I should never put a RiskSixSigma( ) inside RiskCpk( )?
There are two situations where you would want to place a RiskSixSigma( ) property function inside a statistic function such as RiskCpk( ):
• You may have an output you don't want to apply six-sigma properties to, but you still want to compute the Cpk for it with a certain set of parameters. For example, you have a regular output in cell A2 with no six-sigma properties. In another cell, you can place the formula =RiskCpk(A2,,RiskSixSigma(0,1,.5)) to get the Cpk assuming those LSL, USL, and Target values.
• You may want to calculate the Cpk for an output with a different set of parameters — for instance, to make a table of different Cpk values for different LSL/USL pairs. Using the sample output formula mentioned above for cell A1, you might have RiskCpk(A1,,RiskSixSigma(0,1,1.5)). This would override the six-sigma parameters the output normally has in favor of the ones embedded in the Cpk function.
How are the arguments to RiskSixSigma( ) used in computing Six Sigma statistic functions?
In @RISK, please click Help » Example Spreadsheets » Six-Sigma » Six Sigma Functions.docx. The first section of that document explains where each of the five arguments to RiskSixSigma( ) is used. Following that are technical details of each of the 19 statistic functions, including computational formulas.
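For orientation, the textbook definitions of Cp and Cpk are compact enough to state inline. The sketch below uses the standard formulas; the precise computations @RISK performs (including targets and long-term shifts) are the ones documented in that file:

```python
def cp(sd, lsl, usl):
    """Textbook Cp: specification width over six standard deviations."""
    return (usl - lsl) / (6 * sd)

def cpk(mean, sd, lsl, usl):
    """Textbook Cpk: distance from the process mean to the nearer
    specification limit, in units of three standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * sd)

# A process centered at 0.4 with sd 0.1, against LSL=0 and USL=1:
cp_value = cp(0.1, 0.0, 1.0)          # 1/0.6, about 1.667
cpk_value = cpk(0.4, 0.1, 0.0, 1.0)   # 0.4/0.3, about 1.333
```

Because the process is off-center, Cpk comes out lower than Cp, which is the usual signal that centering (not spread) is the limiting factor.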
Additional keywords: Cp, RiskCp, Cpk, RiskCpk, Cpm, RiskCpm, DPM, RiskDPM Last edited: 2015-06-19 3.26. Replacing RAND with RiskBernoulli or RiskUniform Applies to: @RISK, all releases I use the Excel RAND function a lot in my spreadsheet, but it is causing some problems. For example, I am not getting the same results when I run my simulation a second time, even though I am using a fixed seed. And when I use shoeprint mode, the numbers in the worksheet are different from what the Data Window shows for the same iteration. Use an @RISK distribution like RiskBernoulli or RiskUniform instead of Excel's RAND function. If you have a fixed random number seed, @RISK functions will produce a reproducible stream of random numbers. See Random Number Generation, Seed Values, and Reproducibility for more about this. Sometimes people use RAND in an IF function to decide whether to draw a value from an @RISK distribution: =IF( RAND()<0.4, RiskNormal(100,10), 0 ) The direct @RISK equivalent to Excel's RAND is RiskUniform(0,1): =IF( RiskUniform(0,1)<0.4, RiskNormal(100,10), 0 ) However, RiskBernoulli is a simpler choice for IF-tests because it puts the probability right in the function: =IF( RiskBernoulli(0.4), RiskNormal(100,10), 0 ) You can simplify this expression even further. Since RiskBernoulli returns a 0 or 1, you can replace the IF with a multiplication: =RiskBernoulli(0.4) * RiskNormal(100,10) All four of these formulas say that a given risk is 40% likely to occur, and if it does occur it follows the normal distribution. But the @RISK functions give you a reproducible simulation, which RAND does not. Which of those is the recommended way to model an event risk or operational risk? The last one is the simplest, but all four have the same problem: you're modeling one risk with two distributions. This means that sensitivity measures won't be accurate, and graphs of simulated results will either show a lot of errors or show a lot of values that weren't actually used. 
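The frequency-times-severity pattern behind all four formulas can be sketched in ordinary code. This is only an illustration of the sampling logic (it is not @RISK, and the fixed seed here stands in for @RISK's seeded generator):

```python
import random

def event_risk(rng, p=0.4, mean=100.0, sd=10.0):
    """One iteration of the pattern behind
    =RiskBernoulli(0.4) * RiskNormal(100,10): the risk occurs
    with probability p; if it occurs, its impact is normal."""
    occurs = 1 if rng.random() < p else 0
    return occurs * rng.gauss(mean, sd)

rng = random.Random(42)               # fixed seed -> reproducible stream
samples = [event_risk(rng) for _ in range(100_000)]
share_zero = sum(s == 0 for s in samples) / len(samples)   # about 0.60
```

The roughly 60% of iterations at zero are the iterations in which the risk does not occur.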
To solve all of these problems, you want to wrap the expression in a RiskMakeInput, like this: =RiskMakeInput( RiskBernoulli(0.4) * RiskNormal(100, 10) ) See Event or Operational Risks for more information about using RiskMakeInput in this way. Additional keywords: Bernoulli distribution, MakeInput distribution, Uniform distribution Last edited: 2019-02-15 3.27. Triangular Distribution: Specify Mean or Median Instead of Most Likely Applies to: @RISK, all releases I'd like to use the triangular distribution in @RISK, but I don't know the mode (m.likely), only the mean. Can I specify a triangular distribution using the mean? Yes. The mean of a triangular distribution equals (min+m.likely+max)/3. Therefore m.likely = 3*mean – min – max Compute m.likely using that formula, and enter it along with min and max in the Define Distribution window. (If the formula yields a value for m.likely that is less than min or greater than max, then mathematically no triangular distribution exists with the specified min, mean, and max.) I'd like to use the triangular distribution in @RISK, but I don't know the mode (m.likely), only the median (50th percentile). Can I specify a triangular distribution using the median? Yes. Many distributions, including RiskTriang( ), let you specify one or more parameters as percentiles. Here's one method: 1. Open the Define Distribution window and select the Triang distribution. 2. Click in the box to the right of Parameters (the box contains "Standard"), then click the drop-down arrow that appears, and check (tick) "Alternate Parameters". 3. The window expands with a "Parameter Selection" section. Select the radio buttons to the left of Min and Max, but to the right of M. likely select Percentiles and if necessary enter 50. 4. Click OK. Now in the Define Distribution window you can specify min, median (50th percentile), and max. Here's an alternative method: 1. Open the Define Distribution window and select the Alt. Parameters tab, then TriangAlt. 2. 
At the left, next to Parameters, click on Alternate, then click the drop-down arrow that appears. 3. In the Triang Parameters dialog, select the radio buttons next to Min and Max, then click OK.
Last edited: 2015-06-19
3.28. Double Triangular Distribution
Do @RISK distributions include the Double Triangular Distribution that has been recommended by AACE at http://www.aacei.org/resources/rp/? (The relevant article is AACE recommendation number 41R-08, "Risk Analysis and Contingency Determination Using Range Estimating" by Dr. Kenneth K. Humphreys.)
With @RISK 6.x–7.x: Use the RiskDoubleTriang(min,mode,max,lower_p) distribution. For example, suppose that you have a 76% probability of underrun (0 to 4) and a 24% probability of overrun (4 to 10). Then you want this formula: =RiskDoubleTriang(0, 4, 10, 0.76)
With @RISK 5.x and earlier: You can create a double triangular distribution by using a RiskGeneral distribution. Suppose that you have a 76% probability of underrun (0 to 4) and a 24% probability of overrun (4 to 10). Then the RiskGeneral function would be: Please paste this into an empty cell and then click the Define Distribution icon to see the graph.
Where do the 0.38 and 0.08 come from? In this example, the minimum (greatest possible underrun) is 0, maximum (greatest possible overrun) is 10, and the most likely value is 4 (common side between the two triangles, repeated in {...,...}). 0.38 is the maximum probability density of the first triangle, and 0.08 is the maximum probability density of the second triangle. These are found by
• 2 × (probability of underrun) ÷ (most likely minus minimum) = first value: 2 × 0.76 ÷ (4 − 0) = 0.38
• 2 × (probability of overrun) ÷ (maximum minus most likely) = second value: 2 × 0.24 ÷ (10 − 4) = 0.08
You don't necessarily have to use these formulas. @RISK will automatically adjust the probability densities proportionally so that the total probability of the double triangle is 1.
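The two bullet formulas amount to one line of arithmetic each; a small sketch (plain Python, using the article's numbers, not an @RISK function):

```python
def double_triangle_heights(vmin, mode, vmax, p_lower):
    """Peak densities of the two triangles in a double-triangular
    distribution.  Each triangle's area equals its probability,
    and a triangle's area is height * base / 2, so
    height = 2 * probability / base."""
    h1 = 2 * p_lower / (mode - vmin)
    h2 = 2 * (1 - p_lower) / (vmax - mode)
    return h1, h2

h1, h2 = double_triangle_heights(0, 4, 10, 0.76)   # about (0.38, 0.08)
```

The two areas (0.76 and 0.24) sum to 1, so the piecewise density integrates to 1 as a probability density must.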
Additional keywords: DoubleTriang distribution
Last edited: 2015-06-19
3.29. Entering Parameters for Gamma Distribution
Applies to: @RISK 6.x/7.x
I'm trying to use a RiskGamma distribution, but the parameters don't seem to match what I found in another source.
A two-parameter gamma distribution has one shape parameter and one scale parameter. @RISK specifies them in that order, as shape = α (alpha) and scale = β (beta). But there are other ways to specify the parameters, simply by using different Greek letters or even by using different parameters. Wikipedia, for instance, lists three ways: shape k and scale θ (theta), shape α or k and rate β (β = 1/θ, rate = 1/scale), or shape k and mean μ = kθ = k/β. Don't be confused by the different letters. Comparing to Wikipedia, @RISK α (alpha) matches Wikipedia's k, and @RISK β (beta) matches Wikipedia's θ (theta), not Wikipedia's β (beta). @RISK's β is a scale parameter, but Wikipedia's β is a different parameter called rate, which is the reciprocal of the scale parameter θ. Thus @RISK reports the mean of the gamma distribution as αβ, considering β as a scale parameter, and that matches Wikipedia's αθ. Wikipedia also gives a mean of α/β, because the rate β is 1/θ where θ is the scale parameter corresponding to @RISK's β parameter.
I have parameters from an outside source. How do I enter them in @RISK? Here's what to enter for the α and β parameters of the RiskGamma distribution:
• If you have shape and scale, regardless of what letters are used by your outside source, set α = shape and β = scale.
• If you have shape and rate, set α = shape and β = 1/rate.
• If you have shape and mean, set α = shape and β = mean/shape.
Last edited: 2018-12-28
3.30. Entering Parameters for Log-normal Distribution
Applies to: @RISK, all releases RISKOPTIMIZER, all releases RDK and RODK, all releases
The RiskLognorm( ) function doesn't behave like the log-normal function in the books.
When I enter μ=4, σ=2, I expect the simulated distribution to have a mean of 403.4 and standard deviation of 2953.5, but instead the mean and standard deviation are very close to 4 and 2. What's wrong?
@RISK, RISKOptimizer, and the developer kits have two log-normal distributions. RiskLognorm2( ) is the traditional distribution and behaves in the way described in statistics books. We also offer RiskLognorm( ), where the μ and σ (mu and sigma) you enter are the actual mean and standard deviation of the distribution, subject to the usual sampling fluctuation. The two distributions are the same except for the way you enter parameters.
• If you know the desired actual mean and standard deviation, use RiskLognorm( ). For RiskLognorm(μ,σ):
Actual mean of the distribution = μ
Actual standard deviation of the distribution = σ
• If you want to use parameters that match the log-normal distribution in many textbooks, use RiskLognorm2( ). For RiskLognorm2(μ,σ):
Actual mean of the distribution = exp(μ+σ²/2)
Actual standard deviation of the distribution = exp(μ+σ²/2)·sqrt(exp(σ²)−1) = (actual mean)·sqrt[exp(σ²)−1]
• Finally, if you know the desired geometric mean and standard deviation of a log-normal distribution, use RiskLognorm2( ) but set μ to the natural log of the desired geometric mean, and σ to the natural log of the desired geometric standard deviation. For details, please see Geometric Mean and Geometric SD in Log-normal.
These three methods are illustrated in the accompanying workbook.
Additional keywords: Lognorm distribution, Lognorm2 distribution
Last edited: 2015-06-19
3.31. Log-normal Distribution with 2 Percentile Parameters
Applies to: @RISK 5.x–7.x
In @RISK, is there any other way to generate a log-normal distribution with two percentile parameters, even if the log-normal automatically generates a third percentile parameter based on the other two?
Yes, you can do this easily. 1. In Define Distribution, select the Alt. Parameters tab, and then LognormAlt. 2.
Click in the "Alternate" box next to Parameters, in the left-hand section of the dialog, then click the drop-down arrow that appears. 3. Under Parameter Selection, click the radio button at the left of Location. Change the two remaining percentiles, if you need to. Click OK. 4. Back in the Define Distribution dialog, specify zero for Loc (location).
Additional keywords: LognormAlt, RiskLognormAlt
Last edited: 2015-06-19
3.32. Left Skewed or Negative Skewed Log-normal Distribution
Applies to: @RISK 5.0 and newer
I want to create a left skewed (negatively skewed) log-normal distribution function with @RISK, using three percentiles. Using the @RISK Define Distribution window with alt parameters, I put in 0 as my 5th percentile, .018 as my 70th percentile, and .021 as my 95th percentile. But then the Define Distribution window says "Unable to graph distribution", and the function returns a #VALUE error. Can you explain why @RISK won't allow this to be done?
This follows from the definition of skewness and the domains of the log-normal parameters (μ > 0 and σ > 0). The skewness will never be negative if all terms in the expression are always positive. You do have several workarounds, however:
• Enter 100% minus each percent, negate each percentile, and negate the whole RiskLognormAlt, like this: =RiskMakeInput( -RiskLognormAlt(5%,-0.021, 30%,-0.018, 95%,0) ) The 95th percentile of 0.021 becomes a 5th percentile of minus 0.021, and so on for the others. The RiskMakeInput wrapper tells @RISK to collect data and statistics on the final formula, not just the RiskLognorm. See also: All Articles about RiskMakeInput. With this technique, the Define Distribution window will show the "backwards" log-normal, with the negative percentiles. But after a simulation, the Browse Results window will show the desired distribution with +0.021 in the 95th percentile.
• Enter those three (x,p) pairs in your worksheet and then fit a log-normal distribution.
@RISK comes up with RiskLognorm(0.013156,0.0022747, RiskShift(0.0038144) ). • You could also use a different distribution, such as the BetaGeneral distribution, which can take on the left skewed shape. Additional keywords: RiskBetaGeneral, RiskLognorm Last edited: 2015-06-19 3.33. Geometric Mean and Geometric SD in Log-normal Applies to: @RISK, all releases I want to use a log-normal distribution. I have the geometric mean and geometric standard deviation. How can I set up this distribution? Use RiskLognorm2, but wrap each of the two parameters in a natural-log function. Example: =RiskLognorm2( LN(2.1), LN(1.8) ) You can type the function directly in Excel's formula bar, or use @RISK's Insert Function button, or use Define Distribution and select Lognorm2 from the Continuous tab. You can type the LN function right into the boxes in the Define Distribution dialog, as shown in the attached illustration. Please note: In this application you want Lognorm2, not Lognorm. For the difference between them, please see Entering Parameters for Log-normal Distribution. Last edited: 2015-06-19 3.34. Preventing Duplicates in Discrete Distributions Applies to: @RISK 5.5 and newer I have a RiskDiscrete distribution, and I want to ensure that each iteration gets a unique value from that distribution: no duplicates across iterations, in other words. How can I accomplish this? The RiskResample distribution provides an easy solution for this requirement. For the first argument of RiskResample, use sampling method 1 if you want @RISK to go through your list of values in a specified order, or sampling method 3 for random sampling from your list of values without replacement. With either sampling method 1 or method 3, if your simulation has more iterations than the number of values in your list, the RiskResample function will return an error for those extra iterations. Additional keywords: Discrete distribution, Resample distribution Last edited: 2016-04-21 3.35. 
Delimiters and Discrete Distributions
Applies to: @RISK 5.x–7.x
I have a RiskPoisson(3) distribution, and I click Define Distributions, or Browse Results after a simulation. I set the delimiters to 0 and 6, and @RISK shows a probability of 91.7% between them. But Excel's POISSON.DIST(6,3,TRUE) shows a cumulative probability of 96.6%. Which one is right?
This seems strange at first, but there's an explanation. For the region between the delimiters, @RISK computes P(L < x ≤ R); the left delimiter value itself is excluded. With delimiters at 0 and 6, that is P(x ≤ 6) − P(x = 0) = 96.6% − 5.0% = 91.7%, while Excel's POISSON.DIST(6,3,TRUE) is the full cumulative probability P(x ≤ 6), which includes x = 0. This is nothing special about the Poisson distribution; it applies to RiskBinomial, RiskDiscrete, and all the other discrete distributions.
This convention avoids some anomalies. For example, suppose you set both delimiters to 3. If the rule were P(L ≤ x ≤ R), then the middle region, which has zero width, would have a probability of 22.4%, equal to P(x=3), and the visible probabilities would add up to only 77.6% instead of 100%.
I clicked in an empty cell, clicked Define Distributions, and selected Poisson with λ=3. Initially the graph showed delimiters of 1 and 6, with probability 90% between them. I just clicked on the delimiters, without moving them, and the probability changed to 76.7%. Why?
By default, the Define Distribution graph sets delimiters at the 5th and 95th percentiles. (You can change this default in the Simulation Graph Defaults section of Application Settings.) To show the delimiters, @RISK finds the x values of those percentiles. If you change the percentages, @RISK finds new x values; and if you change the x values, @RISK finds new percentages. When you click on a delimiter, even if you don't actually change it, @RISK takes that as a signal that it should adjust the percentage to the x value instead of the other way around. So it recomputes the percentages based on the x values 1 and 6.
Last edited: 2015-06-19

3.36. Cauchy Distribution
Applies to: @RISK 5.0 and newer
Does @RISK have a Cauchy distribution?
Yes, beginning with @RISK 7.5 you can specify a Cauchy distribution (also known as a Lorentz or Lorentzian distribution) in the regular Define Distributions dialog: RiskCauchy(γ,β), where γ is the location parameter and β is the scale parameter.
@RISK 7.0 and earlier did not have a Cauchy distribution among the pre-programmed list. If you can't upgrade to the current version of @RISK, you can easily create one yourself from a t distribution. According to Evans, Hastings, Peacock Statistical Distributions 3/e (Wiley, 2000), pages 49–50: "The Cauchy variate C:a,b is related to the standard Cauchy variate C:0,1 by C:a,b ~ a+b(C:0,1). ... The standard Cauchy variate is a special case of the Student's t variate with one degree of freedom."
Therefore, to get a Cauchy distribution with location parameter (median) in cell A1 and scale parameter in A2, use this formula in @RISK 5.5 through 7.0:
=RiskMakeInput(A1 + A2*RiskStudent(1))
In @RISK 5.0, use:
=RiskMakeInput(A1 + A2*RiskStudent(1), RiskStatic(A1))
• The RiskMakeInput( ) wrapper tells @RISK that graphs, reports, and sensitivity analysis should show the Cauchy distribution from the formula, as opposed to the Student's t distribution.
• The mean of a Cauchy distribution is undefined, so when a simulation isn't running you would normally see #VALUE in the cell. By using the RiskStatic property function, you tell @RISK to display the median in the cell when a simulation isn't running. Beginning with @RISK 5.5, the RiskStatic function is not necessary because @RISK will use 0 as the mean for RiskStudent(1).
• The output graph may look like just a spike, because the default automatic scaling includes the few extreme values as well as the great mass in the center. If that happens, right-click on the x axis labels and select Axis Options to adjust the scaling.
The attached workbook illustrates the Cauchy distribution for @RISK 7.0 and earlier.
Last edited: 2016-07-12

3.37. Extreme Value Distributions: Gumbel and Fréchet
Applies to: @RISK 5.x and newer
@RISK does not use the type of Extreme Value distribution that I need. Is there any way I can get the other type of Extreme Value distribution out of @RISK?
The Extreme Value distribution falls into two major types: Type I is also called Gumbel, and Type II is also called Fréchet; both are offered in @RISK.
Gumbel Distribution (Type I Extreme Value)
There are two sub-types of Gumbel distribution. The Maximum Extreme Value distribution is implemented in @RISK's RiskExtValue(α,β) function, which has been available since early versions of RISK. The Minimum Extreme Value distribution is implemented in @RISK 6.0 and newer as the RiskExtValueMin(α,β) function. In earlier versions of @RISK, use RiskExtValue( ), but put a minus sign in front of the function and another minus sign in front of the first argument. For example, for a Minimum Extreme Value distribution with α=1, β=2, use RiskExtValueMin(1,2) in @RISK 6.0 and newer, or –(RiskExtValue(–1,2)) in @RISK 5.7 and earlier.
Fréchet Distribution (Type II Extreme Value)
The Fréchet distribution is defined in @RISK 7.5 and newer. If you have an older @RISK and can't upgrade to the latest, you can use the technique in Add Your Own Distribution to @RISK to create one. You'll need the CDF, which is exp[–z^(–α)], where z = (x–γ)/β; γ is the location parameter, β is the scale parameter, and α is the shape parameter.
Additional keywords: ExtValue distribution, ExtValueMin distribution
Last edited: 2016-07-12

3.38. F Distribution
Applies to: @RISK 5.0 and newer
Does @RISK have an F distribution?
With @RISK 6.0 and newer: Select Define Distributions » Continuous » F, or Insert Function » Continuous » RiskF.
With @RISK 5.x: Before release 6.0, @RISK did not have an F distribution (Fisher-Snedecor distribution, variance ratio distribution) among the pre-programmed list.
If you still have @RISK 5.x, you can easily create one yourself with a ratio of chi-squared distributions. According to Evans, Hastings, Peacock Statistical Distributions 3/e (Wiley, 2000), page 92: "The variate F:ν,ω is related to the independent Chi-squared variates χ²:ν and χ²:ω by F:ν,ω ~ [(χ²:ν)/ν] / [(χ²:ω)/ω]".
Therefore, to get a distribution of F(A1,A2), you can program
=RiskMakeInput( (RiskChiSq(A1)/A1) / (RiskChiSq(A2)/A2) )
The attached workbook shows this, and requires @RISK 5.0 and later. (In earlier versions of @RISK, you can still do the calculation, but the RiskMakeInput wrapper isn't available. RiskMakeInput, which lets you treat a calculation as a distribution for most purposes, was new in @RISK 5.0.)
Last edited: 2015-06-19

3.39. Generalized Pareto Distribution
Does @RISK handle a generalized Pareto distribution?
Yes, with the restriction that the shape parameter must be positive. The generalized Pareto distribution takes three parameters: location μ (mu), scale σ (sigma), and shape k. The RiskPareto2 distribution takes three parameters: scale b, shape q, and optionally a location shift in the RiskShift( ) property function.
Conversion between the parameters:
• scale: b = σ/k or σ = b/q
• shape: q = 1/k or k = 1/q
• location: μ = RiskShift value
Conversion between the functions:
• GPD(μ, σ, k) is equivalent to RiskPareto2(σ/k, 1/k, RiskShift(μ))
• RiskPareto2(b, q, RiskShift(μ)) is equivalent to GPD(μ, b/q, 1/q)
Last edited: 2017-05-02

3.40. Four-Parameter Pert Distribution
Applies to: @RISK 5.x–7.x
Does @RISK have a four-parameter Pert distribution, with a shape parameter?
The RiskPert distribution has three parameters: min, mode (most likely), and max. Some authorities, such as Wolfram, mention a four-parameter Pert distribution, the fourth parameter λ being the shape, and you may see references to a "Beta-Pert" on some Web sites. Implicitly, with RiskPert the value of λ is 4.
The Pert distribution is closely related to the Beta distribution, and in fact RiskPert is a special case of RiskBetaGeneral. @RISK doesn't let you enter a shape parameter directly, but you get the equivalent of a four-parameter Pert distribution with a RiskBetaSubj(min, mode, μ, max) function, where μ = (min + max + λ·mode) / (λ + 2).
To modify a regular three-parameter RiskPert(min, mode, max) by adding a shape parameter λ, change it to RiskBetaSubj(min, mode, (min+max+λ*mode)/(λ+2), max). If you have the min, mode, max, and λ in cells B1 through B4, then you can put the formula =(B1 + B3 + B4*B2)/(B4 + 2) in cell B5, and use =RiskBetaSubj(B1,B2,B5,B3). A simple example is attached.
Last edited: 2016-08-29

3.41. Sensitivity Simulation with RiskSimtable for Specific Values
Applies to: @RISK 5.x–7.x
As you know, @RISK sensitivity analysis lets you see the impact of uncertain model parameters on your results. But what if some of the uncertain model parameters are under your control? In this case the value a variable will take is not random, but can be set by you. For example, you might need to choose among several possible prices you could charge, raw materials you could use, or a set of possible bids or bets. To properly analyze your model, you need to run a simulation at each possible value for the "user-controlled" variables and compare the results. A Sensitivity Simulation in @RISK allows you to quickly and easily do this, offering a powerful analysis technique for selecting between available alternatives.
In @RISK, any number of simulations can be included in a single Sensitivity Simulation. The RiskSimtable( ) function is used to enter lists of values, which will be used in the individual simulations, into your worksheet cells and formulas. @RISK will automatically process and display the results from each of the individual simulations together, allowing easy comparison.
To run a Sensitivity Simulation:
1.
Enter the lists of values you want used in each of the individual simulations into your cells and formulas using RiskSimtable( ). For example, possible price levels might be entered into Cell B2, like this:
=RiskSimtable({100,200,300,400})
This will cause simulation #1 to use a value of 100 for price, simulation #2 to use a value of 200, simulation #3 to use a value of 300 and simulation #4 to use a value of 400. (If you have too many values to place comfortably in the formula, see Cell References in Distributions.)
2. Set the number of simulations in the Simulation Settings dialog box (in this example, 4 simulations) and run the Sensitivity Simulation using the Start Simulation command.
Each simulation executes the same number of iterations and collects data from the same specified output ranges. Each simulation, however, uses a different value from the RiskSimtable( ) functions in your worksheet.
@RISK processes Sensitivity Simulation data just as it processes data from a single simulation. Each output cell for which data was collected has a distribution for each simulation. Using the functions of @RISK, you can compare the results of the different alternatives or scenarios described by each individual simulation. The Distribution Summary graph summarizes how the results for an output range change. There is a different summary graph for each output range in each simulation, and these graphs can be compared to show the differences between individual simulations. In addition, the Simulation Summary report is useful for comparing results across multiple simulations.
The values entered in the RiskSimtable function can be distribution functions, so you can also use Sensitivity Simulation to see how different distribution functions affect your results. For example, you may wish to see how your results change if you alternately try RiskTriang( ), RiskPert( ), or RiskNormal( ) as the distribution type in a given cell. For more, see RiskSimtable with Distributions as Arguments.
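Outside @RISK, the same pattern (one complete simulation per candidate value, with everything else held fixed) can be sketched in a few lines of Python. The demand curve, unit cost, and candidate prices below are invented for illustration; only the structure, a fixed seed reused across simulations so that differences come solely from the controlled variable, mirrors the @RISK behavior described here:

```python
import numpy as np

def run_simulation(price, iterations=10_000, seed=42):
    """One simulation for a single candidate price.

    Reusing the same seed for every simulation mirrors @RISK's default,
    so differences between simulations come only from the price.
    """
    rng = np.random.default_rng(seed)
    # Hypothetical model: normally distributed demand that falls as price rises.
    demand = rng.normal(loc=1000 - 2 * price, scale=50, size=iterations)
    profit = (price - 60) * demand        # 60 = assumed unit cost
    return profit.mean()

# The RiskSimtable-style list of candidate values, one simulation each.
for price in (100, 200, 300, 400):
    print(price, round(run_simulation(price), 2))
```

With the shared seed, rerunning any simulation reproduces its result exactly, so the four mean profits can be compared directly, much as @RISK's Simulation Summary report compares results across simulations.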
It is important to distinguish between controlled changes by simulation (which are modeled with the RiskSimtable( ) function), and random variation within a single simulation (which is modeled with distribution functions). RiskSimtable( ) should not be substituted for RiskDiscrete( ) when evaluating different possible random discrete events. Most modeling situations are a combination of random, uncertain variables and uncertain but "controllable" variables. Typically, the controllable variables will eventually be set to a specific value by the user, based on the comparison conducted with a Sensitivity Simulation.
By default, when the number of simulations in the Simulation Settings is greater than one, every simulation uses the same random number generator seed value. This isolates the differences between simulations to only the changes in the values returned by RiskSimtable( ) functions. If you wish to override this setting, select Multiple Simulations Use Different Seed Values in the Random Number Generator section of the Sampling tab prior to running multiple simulations.
Additional keywords: Simtable, Sensitivity analysis
Last edited: 2015-06-19

3.42. RiskSimtable with Distributions as Arguments
Applies to: @RISK, all releases
The @RISK manual says that RiskSimtable( ) can take distributions as arguments, but I can't get the syntax right. How should I code my RiskSimtable( ) function?
RiskSimtable( ) actually has one argument. It's an array, either a list of values in curly braces like {14,33,68,99} or an Excel range reference without curly braces like C88:C91. To use distribution functions as arguments to RiskSimtable( ), put them in a range of cells in Excel and then specify the range as the argument to RiskSimtable( ).
Please download the attached example, KB55_SimtableArguments.xlsx. It shows two methods to use RiskSimtable( ) to modify distributions from one simulation to the next. In each case, cell references are the key to making the behavior vary.
In the example, Simulation Settings » Sampling specifies that multiple simulations all use the same seed. Thus, any differences between simulations are completely due to the different distributions chosen. Method 1: There is only one distribution function, RiskBinomial( ), in this example. Its second argument, p, is a cell reference to a RiskSimtable( ) function that lists the value of p for each simulation. Other formulas would use the value of the distribution function, not the RiskSimtable( ) function. Method 2: Each simulation uses a different distribution function. Those functions are defined in an array of cells, and the RiskSimtable( ) function has that array reference as its argument. Other formulas would use the value of the RiskSimtable( ) function, not the individual distribution functions. There's one potential problem with that second method. Since the RiskSimtable( ) function refers to the three cells containing the three distribution functions, all three of them are precedents of the RiskSimtable. If one of your @RISK outputs refers to that RiskSimtable(), directly or indirectly, all three of the distributions will show as precedents of the output. Logically, in each of the three simulations, a different one of the functions is a precedent of your output. But since the RiskSimtable( ) function argument refers to all three, all three show up as precedents in each of the three simulations. The solution is to wrap the RiskSimtable( ) inside a RiskMakeInput( ), as was done in the last block in the example. Then @RISK will not consider the precedents of the RiskSimtable( ) as precedents of the output, and the tornado diagram for the output will show just one bar in each of the three simulations, which makes sense logically. See also: All Articles about RiskMakeInput. Additional keywords: Simtable Last edited: 2015-10-06 3.43. 
Multiple Simtables — Need All Combinations of Values Applies to: @RISK, all releases I'm using several RiskSimtable functions because I want to vary multiple variables. Variable A has two values and variable B has five values. How do I set up the 2×5 = 10 simulations to use all combinations of the variables? When you have multiple RiskSimtable functions, the first simulation uses the first value of every RiskSimtable, the second simulation uses the second value of every RiskSimtable, and so on. If the number of simulations is greater than the number of values, that RiskSimtable will return error values for the extra simulations. This means that if you want 2×5 = 10 combinations, each RiskSimtable needs 10 values. There are two ways to accomplish this: list all ten combinations of values, and have your RiskSimtable functions access each list, or select values of the variables yourself based on the current simulation number without using RiskSimtable. The first method is easier, especially if you have a small number of variables and they have a small number of values. The second method is more flexible and can be extended easily, but it's more complicated. The attached workbook shows both methods, using the same three variables for each. Additional keywords: Simtable Last edited: 2014-10-17 3.44. Selecting Exactly Two Items (Two Numbers Guaranteed Different) Applies to: @RISK, all versions I have N items, and every iteration I need to select exactly two of them. Let's say N = 25, for example. I can't just use two RiskIntUniform(1,25) because they might come up with the same number in a given iteration. I need two unique items in every iteration. (I'm not worried about repetitions between iterations, just that the two numbers I get in any particular iteration are always different from each other.) The attached workbook shows two methods to accomplish this. 
Each method varies the two selections independently but guarantees that they'll never be equal in any one iteration. You can tap F9 repeatedly to see how each method selects two numbers, and each time the two are different. If you run a simulation, it will count the number of occurrences where the two numbers are the same; that is zero because they are always different, as desired.
In Method A, you use RiskIntUniform(1,25) to select the first one. Therefore, 24 items have not been selected as the first item, so you use RiskIntUniform(1,24) to help you find the second one. Specifically, to find the second one you add the two RiskIntUniform functions together and then, if the total is greater than N, you subtract N. For example, suppose that on one iteration you get 16 from RiskIntUniform(1,25) and 19 from RiskIntUniform(1,24). The total is 16 + 19 = 35; since 35 is greater than 25, you subtract 25, and your second selection is number 10.
Method A is fairly straightforward, but it has a small problem: the 25 numbers are not quite equally likely to occur over the course of the whole simulation. (See the 'Method A Results' worksheet in the attached workbook.) Why does this happen? Adding two independent distributions, as Method A does, tends to lose some of the advantage you normally get from the stratified sampling method of Latin Hypercube sampling.
Method B overcomes this problem, but at the cost of some complexity. Start with the number of ways to draw two numbers from N without replacement: that is N(N–1)/2. For N = 25, there are 300 possibilities, which you can think of as numbered from 1 to 300. Therefore, Method B uses a RiskIntUniform(1,300). In each iteration, the integer value is "decoded" to a pair of unique integers 1–25. (If you look at the formulas, you'll see some pretty involved algebra.) Now the bumpiness of the results of Method A is gone. On the 'Method B Results' sheet, you can see that all 25 numbers come up exactly the same number of times.
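The "decoding" idea behind Method B can be made concrete. The sketch below enumerates pairs in lexicographic order with a short loop instead of the workbook's closed-form algebra (which this article doesn't reproduce), but it establishes the same point: the integers 1 through N(N−1)/2 map one-to-one onto the unordered pairs, and every item appears in exactly N−1 of them:

```python
from collections import Counter
from itertools import combinations

def decode(i, n):
    """Map an integer i in 1..n*(n-1)//2 to a unique pair (a, b), 1 <= a < b <= n.

    Pairs are enumerated in lexicographic order: (1,2), (1,3), ..., (2,3), ...
    """
    k = i - 1
    for a in range(1, n):
        block = n - a          # number of pairs whose smaller element is a
        if k < block:
            return a, a + k + 1
        k -= block
    raise ValueError("index out of range")

# Every index from 1 to 300 yields a distinct pair, and each of the
# 25 items appears equally often (24 times), matching 'Method B Results'.
n = 25
pairs = [decode(i, n) for i in range(1, n * (n - 1) // 2 + 1)]
assert pairs == list(combinations(range(1, n + 1), 2))
counts = Counter(x for pair in pairs for x in pair)
assert all(c == n - 1 for c in counts.values())
```

Feeding this mapping a single uniform integer draw, as Method B does with RiskIntUniform(1,300), therefore covers every possible pair with equal probability.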
Looking at the formulas on the 'TWO METHODS' sheet, you might be suspicious of the formulas for first and second selection with Method B. Maybe they work for 25 items but not for other numbers of items? That's the purpose of the last sheet, 'Method B Verify'. It shows that, for any number of items from 3 to 100, the RiskIntUniform of Method B does cover all possible draws of two different items.
Last edited: 2014-05-30

3.45. Add Your Own Distribution to @RISK
Applies to: @RISK 5.0 and newer
I need a particular distribution that isn't in the Define Distributions dialog. Can I just give @RISK a formula for the CDF and have @RISK draw the random numbers?
If you have a formula for the inverse CDF, you can use it with @RISK to create your own distribution. The input to that formula is a RiskUniform(0,1), which provides a randomly selected cumulative probability; then your inverse CDF formula converts that to an x value. By enclosing the formula in RiskMakeInput( ), you tell @RISK to treat the formula as a regular distribution for purposes like sensitivity analysis and graphing.
We'll illustrate this with the Burr distribution. (Starting with release 7.5, the Burr12 distribution is built into @RISK, but you would use the same method if you need to create a distribution that's not in @RISK.) Wikipedia gives the CDF of a Burr Type XII as
F(x; c,k) = 1 − (1 + x^c)^(−k)
where c and k are positive real numbers. A little algebra gives the inverse as
x = [ (1−F)^(−1/k) − 1 ]^(1/c)
To draw random numbers for its standard distributions, @RISK first draws a random number from the uniform distribution 0 to 1, which represents a cumulative probability; then it finds the x value corresponding to that percentile — in other words, it uses that cumulative probability as input to an inverse CDF. (@RISK uses special techniques for distributions that don't have a closed form for their inverse CDFs.)
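Before building the Excel formula, the inverse CDF can be sanity-checked numerically. This Python sketch performs exactly the two steps just described (draw a uniform cumulative probability, then push it through the Burr XII inverse CDF) and compares the sample median to the analytic median:

```python
import numpy as np

def burr12_inverse_cdf(f, c, k):
    """Inverse of the Burr XII CDF F(x) = 1 - (1 + x**c)**(-k), for 0 <= f < 1."""
    return ((1.0 - f) ** (-1.0 / k) - 1.0) ** (1.0 / c)

c, k = 2.0, 3.0
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)            # the RiskUniform(0,1) step
sample = burr12_inverse_cdf(u, c, k)     # the inverse-CDF step

# The sample median should sit very close to the analytic median F^-1(0.5).
analytic_median = burr12_inverse_cdf(0.5, c, k)
print(np.median(sample), analytic_median)
```

Applying the forward CDF to the sample should likewise give values that look uniform on (0,1); that round trip is a quick way to catch an algebra mistake in any hand-derived inverse.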
Therefore, your Excel formula for a Burr distribution is the combination of RiskUniform and the inverse CDF above:
=( (1-RiskUniform(0,1))^(-1/k) - 1 )^(1/c)
Finally, you want to wrap that in a RiskMakeInput, so that @RISK will store iteration values of this formula, let you make graphs, treat it as an input in sensitivity analyses, and so on. Your final Excel formula is:
=RiskMakeInput( ( (1-RiskUniform(0,1))^(-1/k) - 1 )^(1/c), RiskName("Burr"))
You'll replace the parameters c and k with numbers, or more likely with cell references. To see the formula in action, open the attached workbook in @RISK. The four graphs were made with the four combinations of c and k shown in the worksheet; you can compare these to the PDF curves shown in the Wikipedia article. You can also enter your desired values of c and k in any of columns A through D, and run a simulation.
See also: All Articles about RiskMakeInput
Last edited: 2016-07-12

3.46. Custom Distribution Using RiskCumul
Note: This article illustrates solutions to very specific problems, but you can modify them to create many different custom distributions.
Example 1: I need a distribution where there's a 75% chance of a value between 0 and 8 and a 25% chance of a value between minus 12 and minus 7. A competing product does this as a "custom distribution". Can I do it in @RISK?
Yes, the RiskCumul function can represent this distribution for you. In RiskCumul, you specify an array of points and a second array of cumulative probabilities at those points. Here is the function:
=RiskCumul(minimum, maximum, array of x, array of cum-p)
and specifically for your distribution:
=RiskCumul(-12, 8, {-7,0}, {0.25,0.25})
Try pasting this formula into an Excel cell and then clicking Define Distribution to see the histogram.
Here's how to read the arguments:

x      cum-p   explanation
−12    0       minimum value of distribution is −12
−7     0.25    25% probability between −12 and −7
0      0.25    0% probability between −7 and 0
8      1       maximum value of distribution is 8

The first two arguments to RiskCumul are the lowest and highest possible values in your distribution. You specified minus 12 and plus 8 in your problem statement. The array of x's and the array of cum-p's are enclosed in curly braces { }. (Alternatively, you could put the numbers in cells of your Excel sheet, and then reference the array in the form D1:D4 without braces.)
The 0.25 cumulative probability for x=0 might seem a bit strange. The explanation is that you specified zero probability between minus 7 and 0. If the probability in that region is zero, then the cumulative probability at every point in the region is the same as the cumulative probability at the left edge, namely 0.25 (25%). The cumulative probability of 1 is not specified anywhere in the RiskCumul function, because it's implicit in the listing of 8 as the maximum for the distribution.
Example 2: I need to set up a probability distribution as follows:
• 75% of the probability occurs between 0 and 5
• 25% of the probability occurs between −15 and −5
Here's how to analyze it:
• The lowest possible value is −15 and the highest possible value is 5.
• The first 25% of cumulative probability occurs between the minimum and −5.
• Values between −5 and 0 are impossible (zero probability), so the cumulative probability remains at 25% at x=0.
• The rest of the probability, 75%, occurs between x=0 and the maximum.
Paste this formula into a cell:
=RiskCumul(-15, 5, {-5,0}, {0.25,0.25})
and press the Enter key. To see the distribution, click into the cell and click Define Distribution.
RiskCumul takes four arguments: the minimum x, the maximum x, an array of intermediate x's, and an array of the cumulative probabilities for those x's. (Arrays are enclosed in { } curly braces.)
The 75% probability for the region 0 to 5 doesn't appear explicitly — it's implied by the fact that cumulative probability is 0.25 at x=0 and is 1.00 at x=5. These particular examples show three regions (divided by two x's) between the minimum and maximum, but you could have any number of regions.
Additional keywords: Cumul distribution
Last edited: 2013-04-11

3.47. How does RiskSplice Work? (Technical Example)
Applies to: @RISK 5.5.0 and newer
The help text for RiskSplice( ) says: "The two pieces of the distribution will be re-weighted since the total area under the (spliced) curve still has to equal 1. Thus the probability density of any given x value in the resulting spliced distribution will probably be different from what it was in the original distribution." How exactly does RiskSplice( ) work? How are the density functions adjusted to make the new distribution?
Please see the attached Word document for the mathematical details and a complete example.
Additional keywords: Splice
Last edited: 2015-04-23

3.48. Nesting RiskSplice Distributions
Applies to: @RISK 5.5.0 and newer
Can I splice together more than two distributions with RiskSplice? I understand how to use RiskSplice( ) to combine two distributions into one, but is there a way to combine more?
It is possible to splice together three or more distributions by nesting RiskSplice functions. For example, if you had distributions in cells A1, B1, C1, and D1, you could do: =RiskSplice(A1,B1,X) in cell A3, =RiskSplice(C1,D1,X) in cell B3, and then splice those in another cell, which would be =RiskSplice(A3,B3,X). You can also nest the distributions without using cell references if you prefer:
=RiskSplice(RiskNormal(30,1), RiskSplice(RiskWeibull(2,10), RiskGamma(2,10), 10), 40)
However, while splicing together more than two distributions is possible, the Define Distribution window is unable to graph it. It does simulate normally and you can see the results in the Browse Results window.
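For readers without the attached document from 3.47, the re-weighting can be pictured, and sampled, directly: keep the left distribution's density below the splice point and the right distribution's density above it, then divide by the total remaining area so the spliced density integrates to 1. The sampler below is my own sketch of that idea, based on the help text quoted in 3.47 rather than on @RISK's internal code:

```python
import numpy as np
from scipy import stats

def splice_sample(left, right, cut, size, rng):
    """Sample from left's density below `cut` and right's density above it,
    re-weighted so the spliced density integrates to 1."""
    area_left = left.cdf(cut)            # mass kept from the left piece
    area_right = 1.0 - right.cdf(cut)    # mass kept from the right piece
    z = area_left + area_right           # normalizing constant
    u = rng.uniform(size=size)
    take_left = u < area_left / z        # choose a piece in proportion to its area
    # Inverse-CDF sampling restricted to each side of the cut:
    v = rng.uniform(size=size)
    left_x = left.ppf(v * area_left)                      # values below cut
    right_x = right.ppf(right.cdf(cut) + v * area_right)  # values at or above cut
    return np.where(take_left, left_x, right_x)

rng = np.random.default_rng(1)
x = splice_sample(stats.norm(0, 1), stats.norm(5, 1), cut=2.0, size=50_000, rng=rng)
```

Every value drawn from the left piece falls below the cut and every value from the right piece falls at or above it, and the fraction below the cut equals area_left / z, which is the re-weighting the help text describes.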
If you want to graph a distribution with an unusual shape, you might be better off with the RiskGeneral or RiskCumul distributions, which let you define the dataset manually, or the Artist feature under Distribution Fitting, which creates a RiskGeneral distribution for you.
Last edited: 2018-11-02

3.49. Bimodal or Mixed Distribution
Applies to: @RISK 5.x–7.x
How can I create a bimodal distribution in @RISK? I want a distribution that is a mix of one distribution some percentage of the time, and a different distribution the rest of the time.
Although it's possible to do the whole thing in one cell, it's clearer if you use several "helper cells". That will also make it easier to find what is wrong if the final distribution doesn't behave as you expected. Please refer to the attached example in conjunction with these steps:
1. Place the two desired distributions in two cells (B15:B16).
2. In a third cell (C16), place the proportion of the final mix that should come from the first distribution. The rest will come from the second distribution.
3. In a fourth cell (B18), place a RiskBernoulli( ) to determine which distribution gets used in that iteration. (RiskBernoulli returns 1 the stated percentage of the time, and 0 the rest of the time.)
4. Finally (B20), construct an IF(fourth cell, first cell, second cell). That is the final mixed distribution.
5. Recommendation: Wrap the final distribution in a RiskMakeInput( ). That way, any sensitivity analysis that you do will treat the final distribution as an input and will not go back to the original inputs or the RiskBernoulli( ). See also: All Articles about RiskMakeInput.
I have a similar requirement, but instead of choosing a distribution probabilistically, I need to use one distribution below a certain x value and a different distribution above that value.
The RiskSplice( ) function is designed for this application. In the @RISK ribbon, click Insert Function and find RiskSplice( ) among the special distributions.
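The helper-cell steps for the probabilistic mix above translate almost line for line into code. This is a sketch with arbitrary example components (two normals, and a 0.3 mixing weight standing in for cell C16), not a reproduction of the attached workbook:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

first = rng.normal(10, 2, size=n)            # step 1: the two component distributions
second = rng.normal(25, 3, size=n)
p_first = 0.3                                # step 2: proportion from the first
use_first = rng.uniform(size=n) < p_first    # step 3: the RiskBernoulli
mixed = np.where(use_first, first, second)   # step 4: the IF

# Roughly 30% of iterations come from the first component:
print(np.mean(use_first))
```

A histogram of `mixed` shows two humps, one near 10 and one near 25; wrapping the analogous @RISK formula in RiskMakeInput( ), as step 5 recommends, is what keeps sensitivity analysis pointed at this final cell.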
Last edited: 2015-06-19

3.50. Password Protecting a Worksheet or Workbook
Applies to: @RISK 6.x/7.x
I want to password-protect one sheet in a workbook. How can I do that, and still run a simulation?
@RISK can store and remember the password after you provide it once. See "Protected Workbook: This operation cannot be performed ...".
I want to protect the whole workbook, and the above technique doesn't work. Is there another option?
If you protect the workbook by File » Save As » Tools (next to the Save button) » General Options, you can write a VBA function to provide @RISK with the password. See "GetPassword cannot be found." This technique will not work if you protect the workbook with the Protect Workbook command on Excel's Review tab.
Last edited: 2018-03-09

3.51. Cell References with RiskCompound
Applies to: @RISK 5.x–8.x
When creating a RiskCompound function, I notice that the results are the same if the entire distribution is defined within the function or if one of the component distributions is defined in another cell that the RiskCompound function references. However, if the function references another cell which references a third cell, the results are different.
The RiskCompound function is not designed to work the same with references to references as it does with a single cell reference. Please see the attached worksheet for a complete explanation.
See also: All Articles about RiskCompound
Last edited: 2021-11-18

4. @RISK Distribution Fitting

4.1. Capacity of Distribution Fitting
Applies to: @RISK 6.x/7.x, Professional and Industrial Editions
How many points can be used to fit a distribution? How many variables (columns) can be included in a batch fit?
@RISK requires at least five points, and it allows up to 10 million points in a fit. Batch fits can include up to 256 variables. If you are fitting more than a few variables in a batch fit, for faster performance you may want to tell @RISK not to produce detailed reports.
In the Batch Fit dialog, on the Report tab, turn off the option "Include Detailed Report Worksheet for Each Fit". Time Series batch fits are limited to 255 variables. Last edited: 2015-06-19 4.2. Bootstrapping for Distribution Fitting Applies to: @RISK 6.x/7.x, Professional and Industrial Editions I am a user of @RISK, and I wonder if it might be used for a nonparametric bootstrap method for analyzing a data set. Beginning with release 6.0, @RISK offers parametric bootstrapping. Compared to nonparametric bootstrapping, parametric bootstrapping requires less resampling and is more robust with smaller data sets. You can get parameter confidence intervals as well as goodness-of-fit statistics. Because it is computationally intensive, parametric bootstrapping is turned off by default in @RISK. You can select it on the Bootstrapping tab of the dialog for fitting distributions. Please see "Appendix A: Distribution Fitting" in the @RISK user manual or help file. There's also a nice picture in 15.3 Bootstrapping from Penn State's Eberly College of Science. See also: N/A in Results from Parametric Bootstrapping Last edited: 2018-11-09 4.3. Discrepancy from Fits Performed by Other Software Applies to: @RISK 5.x and newer, Professional and Industrial Editions When I fit my data in @RISK, I get a very different result from the ________ software. Maybe @RISK fails to converge at all, or maybe it converges on a fit but the parameters are very different. Is there some setting I need to change? Probably there is. Specifically, if the process that generated the data has a natural lower bound, you should specify that lower bound on the Distributions to Fit tab of the fitting dialog. Why is this necessary? Many software packages assume a lower bound of zero for distributions that don't have a left-hand tail. Other packages, including @RISK, take a more general approach and make the lower bound subject to fitting also, as a shift factor. 
This allows, for instance, a distribution shaped like a log-normal but offset to the left or right, if that matches the data best. But sometimes that is actually too much freedom, and @RISK fails to converge on a fit. (In general, "convergence failed" means that the numerical process of homing in on an answer for the MLE got stuck in a loop and couldn't finish.) When the data have a natural lower bound, and you specify that lower bound to @RISK, it can do a better job of fitting more efficiently. Specifying the lower bound may even make the difference between "convergence failed" and a successful fit, as for example in some Weibull distributions with shape parameter less than 1. On the Distributions to Fit tab of the fitting dialog, "bounded but unknown" restricts the fit to distributions that don't have left-hand tails, but it doesn't affect the fitting algorithm for those distributions. But when you specify a specific lower bound, then @RISK uses that as a fixed shift factor, and the mathematics of doing the fit are simplified. Last edited: 2015-06-01 4.4. "Distributions to Fit" Dialog Doesn't Allow Every Distribution Applies to: @RISK 5.x–7.x Why won't @RISK allow me to specify that I want to fit a discrete distribution (e.g. a Binomial, Geometric, HyperGeo, IntUniform, NegBin, Poisson) in the "Distributions to Fit" dialog? @RISK lets you choose distributions that are appropriate for the type of data you specify. On the "Data" tab, change the data type to a discrete type, and then the discrete distributions will be available to you on the "Distributions to Fit" tab. Last edited: 2015-06-19 4.5. Discrete Density Data Treated as Continuous Applies to: @RISK 5.x–7.x My data set is as follows: x p 0 0.14 50 0.35 100 0.30 200 0.15 500 0.06 I calculate the mean in Excel by summing the product of each data value multiplied by its probability, and I get 107.5. But if I do a fit on this data, the Input column in the Fit tab shows the mean as 183.74.
Have I used a correct method to calculate the mean for density data? If not, what is the correct way to do this? Probability is quite different between discrete and continuous distributions. In a continuous distribution, there are infinitely many possible values (not just 0 and 1, for instance, but also all values in between), and therefore the probability of getting any one of those values is infinitely small. That is why we always look at the probability that something will be within a certain range, not the probability that it will be equal to a single value. For discrete distributions, the probability of each possible outcome is nonzero; for example, a coin toss has only two, not infinite, possible values, so we can talk about the probability of a single value. When you do a fit, one thing you tell @RISK is the data type, so that it can apply the proper rules for that type of data. The way you have the fit set up, you are specifying 5 points on a continuous density curve, which is not the same as specifying the probability at those points. Since it doesn't have any more information, @RISK assumes a linear change in density between each of these points (it connects the dots with straight lines). In effect, it treats the data as describing a RiskGeneral distribution. When you manually calculated the mean, however, you assumed a discrete distribution: it only has values 0, 50, 100, 200, 500, and nothing else. If this is what you intended, then you want to select a data type of "Discrete Sample Data (Counted Format)" in the fitting dialog, on the first tab. As the name "counted format" suggests, the second column must be whole numbers, so you need to multiply all your probabilities by the same number. In this case, since the probabilities are all two decimal places, multiply them all by 100 to get whole numbers in the same proportion. Last edited: 2015-06-19 4.6.
RMS Error Calculation in Distribution Fitting with (x,p) Pairs Applies to: @RISK 5.x–7.x How does @RISK calculate the RMS error that it uses for fit ranking? For curve data—x values with associated probability densities or cumulative probabilities—@RISK computes the root-mean-square error as a measure of goodness of fit. The equation is in the help file, but it can be hard to relate that to the computations for your particular data set. The attached example shows how @RISK computes the RMS error for (x,p) data, where p is the cumulative probability or area under the curve for all values less than or equal to that x value. See also: RMS Error Calculation in Distribution Fitting with (x,y) Pairs. Last edited: 2015-06-19 4.7. RMS Error Calculation in Distribution Fitting with (x,y) Pairs Applies to: @RISK 5.x–7.x How does @RISK calculate the RMS error that it uses for fit ranking? For curve data—x values with associated probability densities or cumulative probabilities—@RISK computes the root-mean-square error as a measure of goodness of fit. The equation is in the help file, but it can be hard to relate that to the computations for your particular data set. The attached example shows how @RISK computes the RMS error for normalized or unnormalized (x,y) data, where y is the height of the probability density curve or relative frequency curve. Although the RMS calculation is the same in @RISK 5.x and 6.x, the example requires @RISK 6.0 or higher because it uses the new RiskFit functions that were introduced in @RISK 6.0. See also: RMS Error Calculation in Distribution Fitting with (x,p) Pairs. Last edited: 2015-06-19 4.8. P-Values and Distribution Fitting Applies to: @RISK 5.x–7.x How do I get p-values, critical values, and confidence intervals of parameters of fitted distributions? In the Fit Distributions to Data dialog, on the Bootstrap tab, tick the box labeled "Run Parametric Bootstrap". 
You can also specify the number of resamples, and your required confidence level for the parameters. Bootstrapping will take extra time in the fitting process, particularly if you have a large data set. Click the Fit button as usual. You'll see a pop-up window tracking the progress of the bootstrap. When the fit has finished, you can click the Statistical Summary icon (last of the small icons at the bottom) to see an exhaustive chart. Or you can select one distribution in the list at the left and click the Bootstrap Analysis icon (second from right) to see just fit statistics and p-values, or just parameter confidence intervals, for that one distribution. If the information is not available because the bootstrapping failed, you will see a box "Unable to refit one or more bootstrap resamples." Why doesn't @RISK give p-values for the Kolmogorov-Smirnov and Anderson-Darling tests for most fits? Why do the ones that @RISK does give disagree with other software packages? Basically, the p-values require knowledge of the sampling distribution of the K-S or A-D statistic. In general this sampling distribution is not known exactly, though there are some very particular circumstances where it is. While we don't know the exact methodology that other packages use, it is true that there are a number of ways to deal with this problem. The method @RISK takes is very cautious. If we cannot report the p-value, either we report a possible range of values it could be (if we can determine that) or we don't return a value at all. Some people will choose the "no-parameters-estimated case", which can be determined in many cases, but which returns an ultra-conservative answer. A good reference for how @RISK handles this can be found in the book Goodness-of-Fit Techniques by D'Agostino and Stephens. Do you have any cautions for my use of p-values? Sometimes too much stress is laid on p-values in distribution fitting.
It's really not valid to select a p-value as a "bright-line test" and say that any fit with a higher p-value is good and any fit with a lower p-value is bad. There is no substitute for looking at the fitted distribution overlaid on the data. We recommend against using the p-values for your primary determination of which distribution is the best one for your data set. For some guidance, see "Fit Statistics" in the @RISK help file or in Appendix A of the user manual, and Interpreting AIC Statistics in this Knowledge Base. Last edited: 2017-06-29 4.9. Interpreting Anderson-Darling Test Statistics What does it mean for the inverse Gauss distribution to have an A-D test value of 1.67895 and the Loglogistic distribution to have an A-D test value of 6.78744? Does the A-D test have a unique distribution, meaning that it is not a conventional F Test or χ² (chi-squared) test? Is an A-D test value of 1.68 approximately four times better than an A-D test value of 6.79? How can the test values be compared? The A-D test value is simply the average squared difference between the empirical cumulative function and the fitted cumulative function, with a special weighting designed to accentuate the tails of the distribution. There are many good references for this, including Simulation Modeling and Analysis by Law & Kelton. What this means is that in an absolute sense A-D values can be compared from one distribution to another. An A-D test value of 6.78744 versus one of 1.67895 implies that the average squared distance between the empirical and fitted cumulative functions (including the effects of the preferential weighting of the tails) is four times as big in one case versus another. A potential drawback for the A-D test is that it does not have a convenient, unique test distribution, like the χ² test does. 
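To make the computation concrete, here is a small sketch of the standard computational form of the A² statistic, as given in textbooks such as Law & Kelton. (@RISK's own implementation details are proprietary, so treat this as an illustration of the formula, not @RISK's code. The uniform CDF F(x) = x used in the example is just a stand-in for whatever fitted distribution you are testing.)

```python
import math

def anderson_darling(data, cdf):
    """Textbook A-squared statistic for a sample against a fitted CDF.

    A2 = -n - (1/n) * sum over i of (2i-1) * [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))]
    where x_(1) <= ... <= x_(n) is the ordered sample.
    """
    n = len(data)
    u = [cdf(x) for x in sorted(data)]   # CDF values of the ordered sample
    total = sum((2 * i - 1) * (math.log(u[i - 1]) + math.log(1 - u[n - i]))
                for i in range(1, n + 1))
    return -n - total / n

# Example: five points tested against a Uniform(0,1) CDF, F(x) = x.
a2 = anderson_darling([0.1, 0.3, 0.5, 0.7, 0.9], lambda x: x)
print(round(a2, 5))   # 0.13008
```

Smaller values mean the fitted CDF tracks the empirical CDF more closely, and the log terms are what give the tails their extra weight.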
Actually, to be fair to the A-D test, even the χ² statistic only approximately follows the χ² distribution in the case where fit parameters have been estimated (see Law & Kelton). Because the A-D test doesn't have a usable test distribution, we can't calculate p-values and critical values for the test, except in special distributions under special conditions, and even in those cases only approximately. There is a very brief discussion of this in Law & Kelton as well, but most of @RISK's treatment of this is taken from the very specialized book Goodness-of-Fit Techniques by D'Agostino & Stephens. Last edited: 2012-08-04 4.10. Number of Bins in Distribution Fitting Applies to: @RISK 5.x, Professional and Industrial Editions (The fitting methods were changed beginning with @RISK 6.0.) How does your software automatically determine the number of chi-squared bins to use when fitting distributions against sample data? What degrees of freedom does it use? Is this the same method used for the "Auto" option when specifying the number of bins in a histogram? χ² (chi-squared) binning and histogram binning are very different, and the number and position of bars on a histogram chart is almost never the same as the arrangement of the χ² bins. For starters, χ² bins are equally probable and therefore are typically not all the same width, while (at least for all Palisade products) histogram graph bars always have equal width. For histogram binning, see Number of Bins in a Histogram. For χ² (chi-squared) binning with n data points: • If n < 35, bins = nearest integer to [n/5] • If n >= 35, bins = largest integer below [1.88 n ^ (2/5)] The small-n part is a rule of thumb that says you should have on average at least five data points per bin (a rule which is not always followed in practice). The large-n part has a real basis in statistical theory. A reference for it is in Goodness-of-Fit Techniques by Ralph D'Agostino and Michael Stephens (Dekker 1986), page 70.
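The two-part binning rule above is easy to express in code. (This sketches only the published rule for @RISK 5.x; it won't necessarily match @RISK 6.0 and later, where the fitting methods changed.)

```python
def chi_squared_bin_count(n):
    """Number of equally probable chi-squared bins for n data points,
    per the rule quoted above for @RISK 5.x."""
    if n < 5:
        raise ValueError("distribution fitting requires at least five points")
    if n < 35:
        return round(n / 5)          # nearest integer to n/5
    return int(1.88 * n ** 0.4)      # largest integer below 1.88 * n^(2/5)

print(chi_squared_bin_count(30))     # 6
print(chi_squared_bin_count(100))    # 11
print(chi_squared_bin_count(1000))   # 29
```

Note how slowly the count grows for large n: the 2/5 power means ten times as much data yields only about 2.5 times as many bins.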
After a fit, you can find how many bins @RISK used for computing the chi-squared statistic by clicking the "Statistical Summary" icon at the bottom of the "Fit Results" graph. The degrees of freedom for the χ² statistic are (number of bins) minus 1, without regard to the number of parameters in the particular distribution. You can see this by examining the critical values in that same statistical summary. Law and Kelton, in Simulation Modeling and Analysis (2000), pages 359–360, say that some authors do vary degrees of freedom according to the number of parameters in the fitted distribution, but the conservative procedure is to use (number of bins) minus 1, as @RISK does. It's important to remember that the χ² binning has zero effect on which fit is actually presented to the user. In other words, when @RISK is trying to fit to (say) a triangular distribution, it chooses the parameters that make the triangle as close as possible to your data as measured by MLEs. (There is no Levenberg–Marquardt optimizer in current versions of @RISK.) Changing the binning may change the statistics that purport to measure the goodness of a fit, but will have no effect on the parameters of the fitted distribution. For most data sets, from a glance at the overlay plot of the fitted distribution against your data it should be obvious which fit is best. If the binning is important to you, you can click that tab of the Fit dialog before performing the fit, and adjust the binning to your preference. Last edited: 2014-01-14 4.11. Discrepancy in AIC Calculation? Applies to: @RISK 6.x/7.x, Professional and Industrial Editions I fitted {1,2,3,4,5} to a RiskIntUniform distribution. I used the formula AIC = 2k – 2×ln(L) where k = 2 is the number of parameters and L is the likelihood. For a uniform integer distribution fitted to a sample of n = 5 points, every point has probability of 1/5, and so ln(L) = 5 ln(1/5).
I computed AIC = 2k – 2×ln(L) = 2×2 – 10×ln(1/5) = about 20.0944 But @RISK gives 26.0944 in the Fit Results window. How do you reconcile this? @RISK actually computes AICc, which includes a correction for finite sample sizes. The formula is AICc = AIC + 2k(k+1)/(n–k–1) for a distribution with k parameters fitted to n data points. With k = 2 for a RiskIntUniform and n = 5 data points, AICc = AIC + 2×2×3/(5–2–1) = AIC + 6 = 26.0944 just as shown by @RISK. The finite-sample correction is important for very small samples, but much less important for samples of reasonable size. For example, for fitting a 2-parameter distribution to 30 data points, the correction would be 12/27, about 0.4444. Additional keywords: Distribution fitting, IntUniform Last edited: 2015-06-19 4.12. Interpreting AIC Statistics Applies to: @RISK 6.x/7.x, Professional and Industrial Editions @RISK gives me several candidate distributions. How can I interpret the AIC statistics? How much of a difference in AIC is significant? The answer uses the idea of evidence ratios, derived from David R. Anderson's Model Based Inference in the Life Sciences: A Primer on Evidence (Springer, 2008), pages 89-91. The idea is that each fit has a delta, which is the difference between its AICc and the lowest of all the AICc values. (@RISK actually displays AICc, though the column heading is AIC; see Discrepancy in AIC Calculation?) Example: suppose that the normal fit has the lowest AICc, AICc = –110, and a triangular fit has AICc = –106. Then the delta for the triangular fit is (–106) – (–110) = 4. The delta for a proposed fit can be converted to an evidence ratio. Anderson gives a table, which can also be found on the Web. One place is page 26 of Burnham, Anderson, Huyvaert's "AIC model selection and multimodel inference in behavioral ecology", Behav Ecol Sociobiol (2011) 65:23–35 (PDF, accessed 2014-07-11). 
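The tabulated evidence ratios come from a simple formula: in the Akaike-weight framework of Burnham & Anderson, the best model's weight divided by a competitor's is exp(Δ/2). (The formula is from that literature, not something specific to @RISK.) A quick check reproduces the values cited below:

```python
import math

def evidence_ratio(delta):
    """How many times more likely the lowest-AICc model is than a model
    whose AICc exceeds the minimum by delta (Burnham & Anderson)."""
    return math.exp(delta / 2)

for d in (2, 4, 8):
    print(d, round(evidence_ratio(d), 1))   # 2.7, 7.4, 54.6
```

With only two candidate models, the probability that the better one is right is er/(er+1); for Δ = 4 that is 7.4/8.4, about 88%.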
In the table, a delta of 4 corresponds to an evidence ratio of 7.4, meaning that the normal fit is 7.4 times as likely as the triangular fit to be the right fit. If you had to choose between those two only, there's a 7.4/8.4 = 88% chance that the normal is right, and a 1/8.4 = 12% chance that the triangular is right. But of course you usually have more than two fits to choose from. To give you a further idea, delta = 2 corresponds to an evidence ratio of 2.7, and delta = 8 to an evidence ratio of 54.6. So how high does delta need to be before you reject a proposed fit as unlikely? Anderson cautions, "Evidence is continuous and arbitrary cutoff points ... should not be imposed or recognized." Yes, the higher deltas correspond to higher evidence ratios, so you can think of them as higher evidence against the lower-ranking fit, but you can never reject a fit with complete certainty. And of course if all the fits are poor then the best of them is still not a good fit. One other argument against relying solely on mechanical tests: A model that is a poorer overall fit may nonetheless be better in the region you care most about, or vice versa. It's always advisable to look at the fitted curves against the histogram of the data when making your final decision. Last edited: 2015-06-19 4.13. Bounds of Fitted Uniform and Exponential Distributions Applies to: @RISK 6.x/7.x, Professional and Industrial Editions When I fit points to the continuous distributions RiskUniform and RiskExpon, the minimum of the fitted distribution is to the left of the smallest data value. The maximum of the fitted RiskUniform is to the right of the largest data value. This seems strange at first, but it actually makes good sense if you look deeper. Here are two explanations: • The data points you're fitting are a sample from some ideal theoretical distribution that you are trying to find. 
How likely is it, purely by chance, that your sample data points would include both the absolute minimum and the absolute maximum of the theoretical distribution? Extremely UNlikely. Therefore, the boundaries of the theoretical distribution are almost certainly wider than the boundaries of your sample data. This issue is the famous German Tank Problem: given serial numbers of captured or destroyed tanks, how do you estimate the number of tanks that are being produced? • More formally, consider the sampling distribution of an order statistic, in this case the minimum and possibly the maximum. Let n be the number of points in your sample. If you take many, many samples of size n from the theoretical distribution, and take the minimum of each sample, you have the sampling distribution of the minimum. The mean of that distribution (μ[min]) should equal the minimum of the data points. Some values in the sampling distribution will be above μ[min] and some will be below. The minimum (the left boundary) of the sampling distribution of the minimum must be less than μ[min], which means it must be less than the minimum of the sample data. With the uniform distribution, you can make an equivalent statement about the maximum. The exponential function is unbounded to the right, so there is no maximum. @RISK lets you view a simulated sampling distribution of each parameter of each fitted distribution, such as the minimum and maximum of the fitted continuous uniform distribution. While setting up the fit, on the Bootstrap tab of the dialog, select Run Parametric Bootstrap. Then, on the Fit Results window, click the Bootstrap Analysis icon, which is the next to last one in the row at the bottom of the window. Select Parameter Confidence Intervals. Select a distribution at the left, select a parameter at the top, and see the graph of the simulated sampling distribution of that parameter. 
To see the statistics of the distribution, click the drop-down arrow at the top right of the graph and select Legend (with Statistics) or Statistics Grid. This issue will come up in any bounded continuous distribution, where the probability density shifts abruptly at the left from zero to a positive value, or at the right from a positive value to zero. For example, if you fit the points {11,12,13,14,15} as a continuous uniform distribution, you get RiskUniform(10,16), not RiskUniform(11,15) as you might expect at first. (Please see attached illustration.) To make μ[min] and μ[max] equal the minimum and maximum of the sample data, @RISK applies a bias correction of (max–min)/(n–1) = (15–11)/(5–1) = 1, so the minimum and maximum of the RiskUniform are 1 unit left and right of the minimum and maximum of the data. For the points {11,11.5,12,12.5,13,13.5,14,14.5,15}, the bias correction is 0.5, and the fitted uniform function is RiskUniform(10.5,15.5). For the exponential function, the bias correction is (mean–min)/n. Again considering the points {11,12,13,14,15}, the bias correction is (13–11)/5 = 0.4. (Please see attached illustration.) Last edited: 2015-06-19 4.14. Best Fit for Small Data Sets? Applies to: @RISK 6.x/7.x, Professional and Industrial Editions When I do a fit on {1,2,3,4,5} as discrete data, @RISK prefers a RiskPoisson distribution, even though the RiskIntUniform is clearly a better fit. Why is that? In @RISK 6.x, the default statistic for measuring goodness of fit is AIC (more specifically, AICc). For small data sets, the AIC calculation strongly prefers distributions with fewer parameters. (This is an application of the principle of parsimony.) The Poisson distribution and the geometric distribution (RiskGeomet) are both one-parameter distributions, but the uniform integer distribution (RiskIntUniform) is a two-parameter distribution.
With a data set of only five points, the AIC statistic's preference for distributions with fewer parameters trumps the poorer likelihood functions computed for those distributions. There are three countermeasures: • For small data sets, consider changing Fit Ranking to BIC. Although BIC also favors distributions with fewer parameters, it doesn't favor them as strongly as AIC does. (Please see attached illustration.) • Don't just take the first listed fit, but examine the fitted distributions. Your data probably won't show the kind of dramatic difference that we got from this artificial data set, but you may find that a fit that doesn't have the best statistic actually does a better job in a particular region of the graph that you care most about. • Use more data points. @RISK does allow fitting to as few as five data points. But in general, the more points you have, the better the fitted distribution will match the true theoretical distribution that those points represent. Extending this made-up data set, with as few as nine points {1,2,3,4,5,6,7,8,9} @RISK computes the smallest AIC statistic for the integer uniform distribution. Last edited: 2015-06-19 4.15. N/A in Results from Parametric Bootstrapping Applies to: @RISK 6.x/7.x, Professional and Industrial Editions In the dialog box for distribution fitting, I selected parametric bootstrapping. The results columns for some distributions show N/A for bootstrap results, instead of numbers. What does this mean? N/A for bootstrapping means that the bootstrap failed for that distribution. If the bootstrap fails for one distribution, it will not necessarily fail for all distributions. The bootstrapping process is done separately for each type of distribution. What does @RISK consider to be a failure of bootstrapping? In the bootstrapping process, @RISK takes each fitted distribution and generates a large number of new sample data sets from it, each with the same size as the original data set.
It then refits these new data sets and tabulates information about each of the resampled fits. @RISK takes a conservative approach. If it is unable to fit a distribution to even one of the new sample data sets that it generated (meaning that the parameters of that distribution did not converge for that new data set), then @RISK considers that the bootstrap has failed for that distribution. Does that mean that the fit itself is bad? Fits aren't good or bad in absolute terms. Instead, you can say that one distribution is better or worse than another for your data set. Evaluating fits is both objective and subjective. You have the guidance of the fit statistics; for example, see Interpreting AIC Statistics. But your own judgment plays a part, too. For one thing, you have to decide which statistic to use — by a different statistic, fits may rank differently. Also, as you compare your data set to the distributions that @RISK came up with, you might decide to use distribution A rather than B, even though B has a more favorable fit statistic. Maybe A is a better fit than B in a region that you feel is most important, or maybe you have some more general reason for preferring one type of distribution over another. Last edited: 2016-01-14 4.16. "Auto Detect" Button in Time Series Fitting Applies to: @RISK 6.x/7.x, Industrial Edition How does @RISK try to achieve stationarity in fitting data to time series? Does the "Auto Detect" button use the Dickey-Fuller test or KPSS? The first thing to say is that, without knowing the source of the data, it's impossible to do auto detect perfectly; by necessity, it's a heuristic. When you choose Auto Detect, you need to look at the result and correct it as necessary, based on your knowledge of the source of the data. The KPSS test isn't appropriate for @RISK, since we don't really support trend-stationary time-series.
Currently we only detrend using differences, although that may change in future versions of @RISK. The Dickey-Fuller (DF) test—or usually augmented Dickey-Fuller—is more appropriate, but @RISK does not use that either. The Analysis of Time Series by Chris Chatfield (Chapman & Hall, 2003) p. 263 is quite negative about unit root testing. At best, it would be some additional information that could help distinguish between difference-stationarity and trend-stationarity, if we ever add the latter to @RISK. The DF tests have an additional drawback: they generally assume you have already removed other things that are making the data non-stationary. The obvious hard one is seasonality, especially when combined with a functional transform. But it's Catch-22, because every paper we found with routines for determining the periodicity of seasonality assumes you have already removed trends. Therefore, we had to take a different approach. @RISK uses a technique adapted from electronic signal processing. There, one takes small "windows" or subsets of the data, calculates statistics of these subsets, and then does standard statistical tests to check whether these statistics are changing as a function of time. @RISK looks through all the possible transformations (functional, detrending, and deseasonalization) to find the combination that produces acceptable test statistics. We also use proprietary techniques to address some nasty details, such as how to avoid over-differencing, how to determine the seasonal period, and so forth. Last edited: 2015-12-21 4.17. ARIMA Model in Time Series Applies to: @RISK 6.x/7.x, Industrial Edition Does @RISK support ARIMA-based time series? I couldn't find anything about ARIMA in the help file. To use ARIMA forecasting in @RISK, select ARMA(1,1) and specify trending. First-order integration gives ARIMA(1,1,1), and second-order integration gives ARIMA(1,2,1). For all the details, step by step with illustrations, please open the attached Excel file.
(You can view the file without starting @RISK.) Last edited: 2016-05-12 4.18. Constraining Time Series to Return Positive Results Applies to: @RISK 6.x/7.x, Industrial Edition I'm using Time Series in @RISK 6. I want to ensure that the projected results are greater than zero. I tried using RiskTruncate, but the manual says that RiskTruncate and RiskShift aren't effective with time series. Is there a way to do what I want? Yes, assuming that the original data are greater than zero. In the fit dialog, select Function and then Logarithmic. The data will be transformed according to that function. After the fit, the projected data are de-transformed. De-transforming a logarithm means exponentiating, and the range of the exponential function is all positive numbers. You'll want to experiment a bit and make sure that doesn't have any undesirable side effects with your particular data set. Last edited: 2015-06-19 4.19. Number of Periods to Forecast in Time Series Applies to: @RISK 6.x/7.x, Industrial Edition In time series fitting, when I click Write to Cells, @RISK gives me a default array of 24 cells. Can I change the default number of periods that time series will project into the future? Yes, you can. In Utilities » Application Settings, expand the section "Time Series Graph Defaults". The last item, "Num. Default Data Points", tells the time series fit how many periods to forecast past the end of your historical data. You can still change this when you do any particular fit. Last edited: 2018-05-03 4.20. Technical Details of Distribution Fitting Applies to: @RISK 6.x/7.x, Professional and Industrial Editions How does @RISK estimate distribution parameters? Can you give me any details? In general, we use Maximum Likelihood Estimators (MLEs). For details, please use the Search tab in @RISK help to find the topic "Sample Data — Maximum Likelihood Estimators (MLEs)".
After reading, click the Next button at the top and continue reading the subtopic "Modifications to the MLE Method". For references to methods that we use, search Help for the term "Merran" and click on the topic "Distributions and Distribution Fitting" in the search results. It's important to realize that not all distributions are fit in exactly the same way. In the more than 30 years we've been improving @RISK, we have developed many proprietary tweaks to the standard algorithms, to do a better job of fitting particular distributions. These let the fit proceed more efficiently, handle cases where the standard MLE algorithms break down, and so on. Although the fine details of our fitting algorithms are proprietary, the fit results include many popular goodness-of-fit statistics, including AIC, Anderson-Darling, BIC, χ², Kolmogorov-Smirnov, and RMS. For details of these statistics, see the "Fit Statistics" topic in @RISK help, as well as several articles in the @RISK Distribution Fitting chapter of this Knowledge Base. Last edited: 2016-08-09 4.21. Time Series with Irregular Intervals Applies to: @RISK 6.x/7.x/8.x, Industrial Editions Can I fit a Time Series distribution with irregularly spaced data? The @RISK methodology for Time Series is only applicable to equally spaced data. So, if you have enough data points, we would suggest that you use an interpolation method to transform the series into equally spaced observations and then use any model available in @RISK. Last edited: 2020-08-06 5. Correlation in @RISK 5.1. How @RISK Correlates Inputs Applies to: @RISK, all releases How do I specify correlations? When two or more input variables should be correlated, you can click the Define Correlations icon in the ribbon, specify correlations in the Model Definition Window, or add RiskCorrmat( ) functions directly to the distribution formulas for those variables in your Excel sheet.
The correlation coefficients you specify are Spearman rank-order correlations, not Pearson linear correlations. The rank-order correlation coefficient was developed by C. Spearman in the early 1900's, and this article explains how @RISK computes the rank-order correlation coefficient. Pearson correlations assume linear distributions, but the great majority of distributions are non-linear, and Spearman is usually more appropriate for non-linear distributions. A Web search for choose Spearman or Pearson correlation will show lots of articles about the different uses of these two forms of correlation. During a simulation, how does @RISK draw random numbers to achieve my specified correlations? @RISK draws all samples for correlated variables before the first iteration of the simulation. (Non-correlated variables are sampled within each iteration.) Knowing the number of iterations to be performed, @RISK adjusts the ranking and associating of samples within each iteration to yield the defined correlation values. Again, this correlation is based on rankings of values, not actual values themselves as with the linear correlation coefficient. A value's "rank" is determined by its position within the min-max range of possible values for the variable. @RISK generates rank-correlated pairs of sampled values in a two-step process: 1. A set of randomly distributed "rank scores" is generated for each variable. If 100 iterations are to be run, for example, 100 scores are generated for each variable. (Rank scores are simply values of varying magnitude between a minimum and maximum. @RISK uses van der Waerden scores based on the inverse function of the normal distribution.) These rank scores are then rearranged to give pairs of scores which generate the desired rank-order correlation coefficient. For each iteration there is a pair of scores, with one score for each variable. 2. A set of random numbers (between 0 and 1) to be used in sampling is generated for each variable. 
Again, if 100 iterations are to be run, 100 random numbers are generated for each variable. These random numbers are then ranked smallest to largest. For each variable, the smallest random number is then used in the iteration with the smallest rank score, the second smallest random number is used in the iteration with the second smallest rank score, and so on. This ordering based on ranking continues for all random numbers, up to the point where the largest random number is used in the iteration with the largest rank score. This process results in a set of paired random numbers that can be used in sampling values from the correlated distributions during an iteration of the simulation. This method of correlation is known as a "distribution-free" approach because any distribution types may be correlated. Although the samples drawn for the two distributions are correlated, the integrity of the original distributions is maintained. The resulting samples for each distribution reflect the distribution function from which they were drawn. Does @RISK use Cholesky decomposition? Yes. If Cholesky fails, the matrix is not self-consistent, and @RISK proceeds as in How @RISK Adjusts an Invalid Correlation Matrix. If Cholesky succeeds, @RISK proceeds as described in Iman, R. L., and W. J. Conover. 1982. "A Distribution-Free Approach to Inducing Rank Correlation Among Input Variables." Commun. Statist.-Simula. Computa. 11: 311-334. Retrieved 2018-08-23 from https://www.uio.no/studier/emner/matnat/math/STK4400/v05/undervisningsmateriale/A distribution-free approach to rank correlation.pdf. Why does Excel report a different correlation from the one I specified? This correlation method yields a rank order correlation (Spearman coefficient) that is usually quite close (within normal statistical variability) to your specified value. However, Excel's =CORREL( ) function reports the Pearson coefficient.
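To see the gap concretely, here's a small sketch in plain Python (not part of the original article, and not @RISK code): Spearman is computed as Pearson applied to ranks, exactly as described in section 5.14, and a monotonic but non-linear relationship gives a perfect rank-order correlation with a noticeably smaller Pearson value.

```python
# Illustrative sketch only, not @RISK code. Spearman rank-order correlation
# is the Pearson coefficient of the ranks (no tied values assumed here).
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def ranks(x):
    # Rank 1 = smallest value; works because the values are all distinct.
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

x = list(range(1, 11))
y = [v ** 5 for v in x]                   # monotonic but strongly non-linear
print(round(spearman(x, y), 9))           # 1.0: rank order is preserved exactly
print(pearson(x, y) < 0.95)               # True: the linear coefficient is lower
```

This is the same kind of gap you'll see between a requested Spearman coefficient and the value Excel's =CORREL( ) reports.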
The Pearson value may vary somewhat from the Spearman value, depending on the exact nature of the correlated distributions. This difference is illustrated in the attached workbook. (Beginning with @RISK 5.5, you can use the =RiskCorrel( ) worksheet function to display the Spearman or Pearson correlation for simulated data.) Correlation of discrete distributions can be a particular problem. For more on that, please see Correlation of Discrete Distributions. See also: Correlation in @RISK collects over a dozen articles explaining various aspects. Additional keywords: Corrmat property function Last edited: 2018-08-23 5.2. Limit on Correlated Variables? Applies to: @RISK 5.x–7.x How many input distributions can correlate? Is there a limit to the size of my correlation matrix? There is no fixed limit. However, as with all other aspects of modeling, your available system resources are a constraint. Note on Excel 2003, with @RISK 6.3 and older: Excel 2003 workbooks are limited to 256 columns. You can still have larger correlation matrices, but you have to use special techniques; see Correlation Matrix Exceeds Excel's Column Limit. This does not apply to newer versions of @RISK, because they require Excel 2007 or newer. Recommendation: If possible, don't have one huge matrix, but partition your correlation into smaller matrices. For example, suppose you have 400 inputs that are correlated. A 400×400 matrix is 160,000 cells. But if those 400 inputs actually fall into four groups of about 100 each, and there's correlation within each group but not between the groups, then you should use four 100×100 matrices, for a total of 40,000 cells. @RISK can test the smaller matrices for validity and, if necessary, adjust them much faster than one large matrix. If all 400 variables really do need to be correlated with each other, you need that larger matrix. But if you can group the variables as described, it's worth having a separate correlation matrix for each group.
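The speed advantage of partitioning rests on a simple fact: a block-diagonal correlation matrix is valid exactly when each of its blocks is valid, so each small block can be tested independently. A plain-Python sketch (not @RISK code; it uses a Cholesky attempt, which tests strict positive definiteness — @RISK's actual test uses eigenvalues, as described in section 5.4):

```python
# Sketch only: a symmetric matrix is positive definite iff Cholesky succeeds.
def cholesky_ok(m):
    """Return True if symmetric matrix m (list of lists) is positive definite."""
    n = len(m)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = m[i][i] - s
                if d <= 0:
                    return False          # not positive definite
                L[i][j] = d ** 0.5
            else:
                L[i][j] = (m[i][j] - s) / L[j][j]
    return True

def block_diag(a, b):
    """Assemble two correlation blocks into one block-diagonal matrix."""
    n, m = len(a), len(b)
    out = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        out[i][:n] = a[i]
    for i in range(m):
        out[n + i][n:] = b[i]
    return out

good = [[1.0, 0.5, 0.5], [0.5, 1.0, 0.5], [0.5, 0.5, 1.0]]
bad  = [[1.0, 0.9, 0.0], [0.9, 1.0, 0.9], [0.0, 0.9, 1.0]]   # inconsistent

print(cholesky_ok(block_diag(good, good)))   # True
print(cholesky_ok(block_diag(good, bad)))    # False: one bad block taints the whole
```

Checking two small blocks is much cheaper than checking the combined matrix, which is the point of the recommendation above.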
Recommendation: If you have several groups of variables that all need the same correlations within the group, they can all use the same smaller matrix. Follow the technique in Same Correlation Coefficients for Several Groups of Inputs. A typical example is time periods or geographical regions where a number of factors are correlated in the same way, but there's no correlation between periods or between regions. Last edited: 2016-08-30 5.3. Same Correlation Coefficients for Several Groups of Inputs Applies to: @RISK 4.x–7.x I have multiple groups of inputs, and I want to use the same set of correlation coefficients for each group. But @RISK correlates all the inputs of all the groups together, which is not what I want. How do I tell @RISK that inputs A, B, C are correlated with each other, and D, E, F are correlated with each other with the same coefficients, but A, B, and C are not correlated with D, E, and F? The short answer is to use the optional "instance" argument to RiskCorrmat( ), assigning a different instance to each group of correlated inputs. See attached example CorrelationGroups.xls. After a simulation, the worksheet CorrelationAudit_Report within that workbook shows sample correlations within a group and between groups. You can set up the correlations by pointing and clicking (Model A below) or by formula editing (Model B below). These methods will work with any number of groups, and any number of inputs per group. Solution details — point and click, Model A: If the correlated groups aren't too large and there aren't too many of them, you can easily correlate separate groups of inputs through menu selections. For simplicity we'll show two groups of three inputs each. Within the attached example CorrelationGroups.xls, the two worksheets ModelA and @RISK Correlations were created by this method. In @RISK 5.x–7.x: 1. Highlight the first group of inputs you want to correlate, using Shift-click for a continuous range and Ctrl-click for non-adjacent cells. 2. Right-click and select @RISK » Define Correlations, or just click the Define Correlations icon in the ribbon. 3. A window opens into a new correlation matrix, with your selected inputs listed. Set your correlation coefficients, either above or below the diagonal. 4. Click the icon at the bottom of the window to check matrix consistency, and correct any problems. See How @RISK Tests a Correlation Matrix for Validity. 5. Near the top of the window, set the matrix location. If you wish, also give the matrix a name and description. 6. Just above the matrix, click the first icon, "Rename Instance", and enter a unique identifier for this group of inputs. It can be text or numeric, such as a year number. 7. In the same row of icons, click "Create New instance". When prompted, enter a unique identifier for the second group of inputs that will use this correlation matrix. Click the Add Inputs button at the bottom and select the second group of inputs. 8. Repeat step 7 for each group of inputs that will use this correlation matrix. After entering the last group, click OK. Special case: If the groups of inputs are in a contiguous rectangular array, either as rows or as columns, you can short-cut the above process: 1. Click Define Correlations in the ribbon. In the dialog box, click the Create Correlated Time Series icon near the top. (Despite the name of the icon, the groups of inputs don't actually have to be a time series.) 2. With your mouse, select the rectangle that contains all the groups of inputs that you want to correlate. Select correlation by rows or by columns. 3. Set your correlation coefficients, either above or below the diagonal. 4.
Click the icon at the bottom of the window to check matrix consistency, and correct any problems. See How @RISK Tests a Correlation Matrix for Validity. 5. Near the top of the window, set the matrix location. If you wish, also give the matrix a name and description. Click OK. In @RISK 4.x: 1. Open the Model window by clicking the icon "Display List of Outputs and Inputs". (Alternative: menu selections @RISK, Model, List Outputs and Inputs.) 2. In the Explorer-style list at the left, click the first correlated input in the first group, then Ctrl-click the other correlated inputs in the first group. Click the icon "Define Correlation". (Alternative: menu selections Model, Correlate Distributions.) 3. Enter your correlation coefficients, change the matrix name if you wish, and click Apply. You'll see a new Correlations category in the Explorer-style list at the left with the name of your correlation matrix, and @RISK creates a new "@RISK Correlations" worksheet in your workbook. 4. Right-click on the name of the correlation matrix in the Explorer-style list, and select Edit Correlation Matrix. In the menu line of the Model window, select Correlation, Instance, Create New instance. Give the instance a name. 5. Drag each input of the second group into the correlation matrix and when prompted select Replace. When you've done this with all the inputs of the second group, click Apply. 6. Repeat steps 4 and 5 for each additional group of correlated inputs. You can now run the simulation. Inputs within each group will be correlated, but inputs in different groups will not be correlated. The worksheet CorrelationAudit_Report, which is created automatically within the workbook, shows that the actual correlations match the requested correlations quite well. Solution details — formula editing, Model B: As an alternative to point-and-click, you can take advantage of Excel's ability to replicate formulas by dragging the fill handle. 
(Search Excel help for "fill handle" if this is unfamiliar to you.) This method scales well to larger groups of correlated inputs, or greater numbers of groups. For this example we'll show ten groups of four inputs each, representing growth in the value of stocks and a bank account over ten years. Performances of stocks in a given year are positively correlated to each other but negatively correlated to interest rates. Within the attached example CorrelationGroups.xls, worksheet ModelB was created by this method. 1. Create your correlation matrix; row and column heads are optional but help to document the model. Highlight just the actual coefficients and define a name for them (menu selection Insert, Name, Define). In our example the correlation matrix including headings is C18:G23, and SecondCorr is the name of the 4×4 array of coefficients in D20:G23. 2. Set up your first group of correlated inputs as one row or one column. Create the distribution in the usual way, but add a RiskCorrmat function as an additional argument within the distribution function. The three arguments to RiskCorrmat are the name you assigned to the correlation matrix, the input number, and the instance. For reasons that will become clear in the next step, the instance argument should be a reference to the column header. In our example, the first group is Year 1, in column E. Growth factors are the @RISK distributions in cells E9, E11, E13, and E15; look at the formulas for those cells and see how the RiskCorrmat function is used. The new values at year end are in cells E10, E12, E14, and E16. Notice that the growth factors are correlated, but the year-end values are not. 3. Highlight the cells of the first year, E8:E16, and drag the fill handle to create the additional groups through year 10 in column N. Notice how the instance argument changes in each group, but is the same for all the inputs within a group; this was the reason for the cell reference in step 2. 
Note also that the named correlation matrix does not change from one column to the next. You can now run the simulation. Inputs within each group will be correlated, but inputs in different groups will not be correlated. The worksheet CorrelationAudit_Report, which is created automatically within the workbook, shows that the actual correlations match the requested correlations quite well. Additional keywords: Corrmat property function Last edited: 2016-12-12 5.4. How @RISK Tests a Correlation Matrix for Validity Disponible en español: Cómo prueba @RISK una matriz de correlación para determinar su validez Applies to: @RISK 4.x–7.x How does @RISK decide whether my correlation matrix is valid? The basic principle is that if two inputs are each strongly correlated to a third, they must be at least weakly correlated to each other. For example, it would be inconsistent to correlate A and B at 0.9, A and C at 0.8, but B and C at 0.0. A valid matrix is one where the correlation coefficients are mutually consistent. When only three inputs are involved, it's pretty easy to check for valid combinations. If the coefficient of A and B is m, and the coefficient of A and C is n, then the coefficient of B and C must be in the range of m n ± sqrt( (1-m²) (1-n²) ) Source: Two Random Variables, Each Correlated to a Third at Math Forum. For example, if A and B correlate at 0.9, and A and C correlate at 0.8, then B and C must correlate in the range of 0.9 * 0.8 ± sqrt( (1-0.9²) (1-0.8²) ) = 0.72 ± 0.26153 = 0.458 to 0.982 Here's how @RISK generalizes this principle for a correlation matrix of any size: If a correlation matrix is created using a full data set, it will be positive semi-definite if there is a linear relationship between any of the variables and positive definite if there is no linear relationship. The easiest way to determine if a matrix is positive definite is to calculate its eigenvalues, and that is what @RISK does at the start of a simulation.
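The three-variable bound quoted above is easy to reproduce — a quick sketch in plain Python, not @RISK code:

```python
# Sketch: valid range for corr(B,C) given corr(A,B)=m and corr(A,C)=n.
from math import sqrt

def bc_range(m, n):
    half_width = sqrt((1 - m * m) * (1 - n * n))
    return m * n - half_width, m * n + half_width

lo, hi = bc_range(0.9, 0.8)
print(round(lo, 3), round(hi, 3))   # 0.458 0.982, matching the worked example
```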
A positive definite matrix will have all positive eigenvalues and a positive semi-definite matrix will have eigenvalues greater than or equal to zero and at least one eigenvalue equal to zero. For @RISK, a "valid" matrix is any matrix that is positive definite or positive semi-definite, and an "invalid" matrix is any matrix that has at least one negative eigenvalue. For details on how @RISK adjusts an invalid correlation matrix, please see How @RISK Adjusts an Invalid Correlation Matrix. How can I determine ahead of time if my matrix is invalid? With @RISK 5.x–7.x: In the @RISK Model window, click the Correlations tab and use the Check Matrix Consistency command to have @RISK check whether the matrix is self-consistent. With @RISK 4.5 and earlier: An "invalid" matrix has one or more negative eigenvalues. Excel itself doesn't have a worksheet function to calculate eigenvalues, but there are many software applications and Excel add-ins with that capability. One freeware alternative is MATRIX at http://digilander.libero.it/foxes/ (accessed 2013-03-14). (We mention this as one example, without endorsement and without prejudice to any other software for computing eigenvalues.) See also: How @RISK Adjusts an Invalid Correlation Matrix Last edited: 2015-06-23 5.5. How @RISK Adjusts an Invalid Correlation Matrix Applies to: @RISK 4.x–7.x How does @RISK decide whether my correlation matrix is valid? If the matrix is invalid, how does @RISK adjust it to create a valid matrix? A correlation matrix is valid if it is self-consistent, meaning that the specified coefficients are mutually compatible. Please see How @RISK Tests a Correlation Matrix for Validity. When you click Start Simulation, @RISK checks all correlation matrices for validity. If a matrix is invalid, @RISK looks for an adjustment weight matrix (see below). If the adjustment weight matrix exists, @RISK uses it to adjust the invalid correlation matrix, and the simulation proceeds.
But if there's no adjustment weight matrix, @RISK displays this message: The correlation matrix at ... is not self-consistent. @RISK can generate the closest self-consistent matrix. OK generates a corrected matrix and continues, Cancel stops the simulation. If you want to adjust the matrix on your own or create an adjustment weight matrix, click Cancel. This is usually a good idea, because in the absence of an adjustment weight matrix @RISK may make quite large changes in your correlation coefficients. How do I set up and use an adjustment weight matrix? This feature is available in @RISK 5.5 and newer. You can create an adjustment weight matrix to guide @RISK in adjusting the correlations. The adjustment matrix is a triangular matrix the same size as the correlation matrix; a square matrix is also acceptable as long as it is symmetric. In your adjustment weight matrix, enter a weight 0 to 100 in each cell below the diagonal. A weight of 100 means that the corresponding coefficient must not be changed, and a weight of 0 means that you don't care how much @RISK changes the corresponding coefficient. Between 0 and 100, larger weights place greater importance on the original coefficients. In other words, larger weights cause @RISK to apply less adjustment to the corresponding correlation coefficients, and smaller weights let @RISK adjust the corresponding correlation coefficients more. The adjustment can be done during a simulation, or in a one-time procedure before a simulation. Both possibilities are explained below. Technical details: Your correlation matrix is not self-consistent, meaning that it has one or more negative eigenvalues. You want @RISK to find a consistent matrix that is as close as possible to your original inconsistent one, taking your adjustment weight matrix into account. This is a non-linear optimization problem. The goal is to minimize the weighted sum of squared differences between the inconsistent matrix and a candidate consistent matrix. 
@RISK uses the standard limited-memory BFGS algorithm to perform this optimization. As mentioned above, weights are in the range 0 to 100. Between those special weights, other weights are treated in an exponential fashion. The exact details are proprietary, but 50 versus 25 or 10 versus 5 means "more important", not "twice as important". Correcting a matrix during simulation: The name of your adjustment weights matrix must match the range name of the correlation matrix, with the suffix _Weights. For example, if your correlation matrix is named Matrix1, the associated adjustment weight matrix must be named Matrix1_Weights. If a correlation matrix is inconsistent, @RISK looks for an adjustment weights matrix with the right name, and if it finds one it will adjust the inconsistent matrix without displaying any message. You can name a matrix by highlighting its cells and then typing its name in the name box to the left of Excel's formula bar. Or, click Formulas » Define Name. (In Excel 2003 and older, click Insert » Name » Define.) Please see the attached example, KB75_AdjustDuringEverySimulation.xlsx. When @RISK adjusts an invalid matrix during simulation, it doesn't store the adjusted matrix in your workbook or anywhere permanent. @RISK does cache the adjusted matrix in your temporary folder, in a file called CORRMAT.MTX. It will reuse that file in future simulations if you haven't changed your original matrix. Correcting a matrix outside of a simulation: You can perform the adjustment up front, rather than leaving @RISK to do it in every simulation. If you have a large correlation matrix, this can make a difference in the speed of your simulation. Use the RiskCorrectCorrmat( ) array function to place the corrected matrix in your worksheet, and make all your correlated inputs refer to the corrected matrix, not the original. With this approach, you can assign any name, or no name, to the adjustment weight matrix. 
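For intuition about what such a repair does, here is a plain-Python sketch of the simplest possible adjustment — the eigenvalue shift-and-rescale described below under "What if I don't use an adjustment weight matrix?". It is only an illustration, not @RISK's implementation, and it ignores weights entirely; the smallest eigenvalue is located by power iteration on a shifted matrix.

```python
# Sketch only, not @RISK code: repair an invalid matrix by shifting its
# smallest eigenvalue to zero and rescaling the diagonal back to 1.
def mat_vec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def min_eigenvalue(m, iters=500):
    """Power iteration on (s*I - M) to locate M's smallest eigenvalue."""
    n = len(m)
    s = 1 + max(sum(abs(x) for x in row) for row in m)    # Gershgorin shift
    b = [[(s if i == j else 0.0) - m[i][j] for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = mat_vec(b, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    w = mat_vec(b, v)
    rayleigh = sum(wi * vi for wi, vi in zip(w, v)) / sum(x * x for x in v)
    return s - rayleigh

def repair(c):
    """C'' = (C - e0*I) / (1 - e0), applied only when e0 < 0."""
    e0 = min_eigenvalue(c)
    if e0 >= 0:
        return c
    n = len(c)
    return [[(c[i][j] - (e0 if i == j else 0.0)) / (1 - e0) for j in range(n)]
            for i in range(n)]

bad = [[1.0, 0.9, 0.0], [0.9, 1.0, 0.9], [0.0, 0.9, 1.0]]   # inconsistent
fixed = repair(bad)
print(round(abs(min_eigenvalue(fixed)), 6))   # 0.0: smallest eigenvalue is now zero
print([round(x, 3) for x in fixed[0]])        # diagonal stays 1; 0.9 shrinks to 0.707
```

Note how much the 0.9 coefficients move — exactly the "quite large changes" warned about above when no adjustment weight matrix is supplied.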
Please see the attached example, KB75_RiskCorrectCorrmat.xlsx. When the RiskCorrectCorrmat( ) function performs an adjustment on a large matrix, it may take considerable time. You'll see messages on Excel's status line, referring to the step number (number of candidate valid matrices tried) and the residual (sum of squared differences). @RISK keeps at the optimization till the residuals stop decreasing sufficiently. Unfortunately, there's no way to know how many steps will be necessary, so @RISK can't give you a progress indicator in the form of percent complete. What if I don't use an adjustment weight matrix? If you're running @RISK 4.x or 5.0, or if you're running a later version but you didn't specify an adjustment weight matrix, @RISK follows these steps to modify an invalid correlation matrix: 1. Find the smallest eigenvalue (E[o]). 2. To shift the eigenvalues so that the smallest eigenvalue equals zero, subtract the product of E[o] and the identity matrix (I) from the correlation matrix (C). C' = C – E[o]I. The eigenvectors of the matrix are not altered by this shift. 3. Divide the new matrix by 1 – E[o] so that the diagonal terms equal 1. C'' = (1/(1−E[o])) C'. The matrix that @RISK calculates by this method is positive semi-definite, and therefore valid, but in no way is it special or optimal. It's one of many possible valid matrices, and some of the coefficients in it may be quite different from your original coefficients. @RISK stores the new matrix in file CORRMAT.MTX in your temporary folder. You can use this as a guide to modify your matrix so that @RISK won't need to adjust it every time you run a simulation. See How @RISK Tests a Correlation Matrix for Validity to ensure that your edited matrix is self-consistent. Additional keywords: CorrectCorrmat function Last edited: 2017-10-20 5.6.
Correlation of Discrete Distributions Applies to: @RISK, all versions I specified a correlation of 0.5 between two RiskBinomial(1, 0.5) distributions, and the actual correlation of simulated results was much lower. I understand that simulated results will only approximately match the requested correlation, but why such a big difference? This issue will occur to some extent for any discrete distribution. In general, the fewer the possible values of the distribution, the greater the discrepancy you will see. Here is the explanation, using two RiskBinomial(1, 0.5) as illustration. Two of these distributions correlated at 0.5 give (0,0) 37.5% of the time, (0,1) and (1,0) each 12.5% of the time, and (1,1) 37.5% of the time. That is the same as saying that the data pairs (0,0), (0,0), (0,0), (0,1), (1,0), (1,1), (1,1), (1,1) occur with equal frequency. If you put these eight data pairs into Excel, the CORREL( ) function does indeed return 0.5. So why doesn't @RISK produce those data pairs with about those frequencies? This has to do with how @RISK generates random numbers for correlated distributions. In order to generate correlated samples for any distributions, @RISK first generates 100 (or however many iterations are desired) pairs of random decimals between 0 and 1 that have the specified correlation coefficient. Call these numbers the U01 numbers. The U01 numbers are then plugged into the distribution's inverse cumulative distribution, which converts the U01 number into a sample over the range of the distribution. In the specific case of a RiskBinomial(1,0.5), any of the U01 numbers below 0.5 map to a 0, and any above 0.5 map to a 1. That all works as designed. Note, however, that in mapping all those U01 numbers in that way we lose a lot of information. For example, 0.1 maps to a value of 0 the same as 0.49 does, and 0.51 and 0.99 both map to a value of 1.
At this point, a lot of the correlation information is lost, because the 0.1 U01 number from the first distribution is more likely paired with a U01 number from the second distribution close to 0.1 than close to 0.49, but that information is lost, since they are both samples of 0. Another way of looking at it is that at the end, when we have 100 samples from each distribution that are all either 0 or 1, there are many ways to assign ranks to those samples to calculate the Spearman rank-order correlation coefficient. Assigning them all the mid-rank (25.5 & 75.5) is just one way to do it. (For more on this, see How @RISK Computes Rank-Order Correlation.) If we could assign the 0 sample that came from a lower U01 to a lower rank, and a 0 sample that came from a higher U01 to a higher rank, we'd get an observed correlation coefficient closer to what was asked for. But after the simulation, a 0 is a 0 and a 1 is a 1, and the information about where they came from is not available to @RISK's RiskCorrel( ) function or Excel's CORREL( ) function. See also: How @RISK Correlates Inputs. Additional keywords: Binomial distribution, Bernoulli distribution Last edited: 2013-04-11 5.7. Correlating RiskMakeInput or RiskCompound, Approximately Applies to: @RISK 5.x–7.x The help file says that RiskCompound or RiskMakeInput can't be correlated, but I really need to use correlation in my model. Is there any workaround available? You can come close, and the process is the same for RiskMakeInput or RiskCompound. In brief: (1) Simulate your RiskCompound or RiskMakeInput to find its percentiles. (2) Turn that set of percentiles into a new RiskCumul distribution. (3) Replace the RiskCompound or RiskMakeInput with the new RiskCumul, which you can correlate. This technique is workable if the parameters of the RiskMakeInput or RiskCompound don't change from one simulation to another. If they do, this technique isn't practical. 
(You could, however, use the @RISK XDK to automate the process, in a before-simulation macro.) Here are details of the procedure. (Please open the attached workbook and run a simulation.) Step 1. Get a lot of percentiles. RiskCumul needs the minimum, the maximum, and some percentiles. The attached workbook is already set up to find every half-percentile in cells G8:H208. • This example finds every half-percentile: P[0.5], P[1], P[1.5], and so on to P[99.5]. Depending on your distribution, you might need more percentiles, or fewer percentiles might be enough, or you might need more percentiles in one region of the distribution but fewer percentiles in another region of the distribution. • This example runs 100,000 iterations to find those statistics, but if your distribution is highly irregular you might need more. At the end of this preliminary simulation, the RiskCumul functions no longer show #VALUE, because the percentiles of the RiskMakeInput and RiskCompound are now available. But you can't graph the RiskCumul functions at this stage, because the percentiles weren't available during the preliminary simulation. Step 2. Turn the percentiles into a RiskCumul. After a simulation, all the percentiles are formulas. But you want to use them as values, without depending on the original RiskMakeInput or RiskCompound. Highlight the percentiles array with your mouse. Press Ctrl+C for copy, then Alt+E, S, V, Enter for Paste Special: Values. The RiskCumul distribution is now independent of the original RiskCompound or RiskMakeInput. Step 3. Replace RiskMakeInput or RiskCompound in your model with the new RiskCumul. To do this, click the cell containing the RiskCumul, highlight the formula in the formula bar with your mouse, and press Ctrl+C then Esc. Click the cell that you want to replace, and press Ctrl+V then Enter. This copies the formula without changing the receiving cell's formats. You can now correlate the RiskCumul in the usual way. 
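The three steps can be sketched outside Excel as well. In this plain-Python illustration (not @RISK code), a sum of two exponentials stands in for the hard-to-correlate RiskCompound; its percentiles are captured from a preliminary simulation and then used as an approximate inverse CDF, the role RiskCumul plays in the workaround:

```python
# Sketch only: approximate a simulated quantity by its percentile table,
# then sample from the table as a piecewise-linear inverse CDF.
import random

random.seed(1)

# Step 1: simulate the quantity and record many percentiles
# (every half-percentile, as in the attached workbook's G8:H208 range).
samples = sorted(random.expovariate(1.0) + random.expovariate(1.0)
                 for _ in range(100_000))
probs = [i / 200 for i in range(1, 200)]
table = [(p, samples[int(p * len(samples))]) for p in probs]

# Steps 2-3: the frozen table now stands in for the original distribution.
def draw(u):
    """Piecewise-linear inverse CDF lookup over the percentile table."""
    for (p0, q0), (p1, q1) in zip(table, table[1:]):
        if p0 <= u <= p1:
            return q0 + (q1 - q0) * (u - p0) / (p1 - p0)
    return table[0][1] if u < table[0][0] else table[-1][1]

approx_mean = sum(draw(random.random()) for _ in range(50_000)) / 50_000
print(abs(approx_mean - 2.0) < 0.15)   # True: close to the true mean of 2
```

The approximation is only as good as the percentile grid, which is why the article suggests extra percentiles in regions where the distribution is irregular.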
Last edited: 2018-07-03 5.8. Correlation of RiskCompound Applies to: @RISK 5.x–7.x I have a risk register with columns for cost and schedule risk, and I'm using formulas like these: =RiskCompound(RiskBinomial(1,D5), RiskTriang(F5,G5,H5), RiskName("Cost")) =RiskCompound(RiskBinomial(1,D5), RiskTriang(I5,J5,K5), RiskName("Duration")) The RiskBinomial with n = 1 indicates that the risk will or will not happen, with the probability given in cell D5. But here's my problem. In some iterations, the cost risk is zero but the duration risk is nonzero, or vice versa. Logically I want them to be zero or nonzero together. The RiskCompound( ) function itself can't be correlated, because @RISK has no way to know in advance how many times it will need to draw values from the severity distribution during the simulation. However, you can correlate elements of the RiskCompound, as follows: • Correlate the frequency distributions, or even use the same frequency distribution for both RiskCompound functions. Both methods are illustrated in the attached workbook, Correlating RiskCompound.xlsx. Using the same frequency distribution in both RiskCompound( ) functions is simpler, and it guarantees that you'll never have a zero for one risk while the other is nonzero. • Unpack the severity distribution, so that you have multiple copies of a distribution and use Excel's SUM( ) function to add them up. This replaces RiskCompound with a frequency distribution and multiple copies of the severity distribution. Please see the attached workbook Unpacking RiskCompound.xlsx. For an alternative method, correlating a RiskCompound by converting it to a RiskCumul, see Correlating RiskMakeInput or RiskCompound, Approximately. By the way, @RISK 6.0 and later offer a Bernoulli distribution for events that may or may not happen. In those releases of @RISK, you could replace RiskBinomial(1,D5) with RiskBernoulli(D5). For more about RiskCompound( ), see All Articles about RiskCompound.
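The shared-frequency idea in the first bullet can be sketched in a few lines of plain Python standing in for the @RISK functions (the probability and triangular parameters here are made up for illustration):

```python
# Sketch only: one shared Bernoulli event drives both severities, so cost
# and duration are zero or nonzero in the same iterations.
import random

random.seed(42)

def iteration(p=0.3):
    occurs = random.random() < p                 # shared "RiskBernoulli(p)"
    cost = random.triangular(10, 30, 15) if occurs else 0.0
    duration = random.triangular(2, 8, 4) if occurs else 0.0
    return cost, duration

results = [iteration() for _ in range(10_000)]
mismatched = sum((c == 0) != (d == 0) for c, d in results)
print(mismatched)   # 0: the two risks always fire together
```

With two independent Bernoulli draws instead of one shared draw, a substantial fraction of iterations would have one risk zero and the other nonzero — the exact problem described in the question.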
Additional keywords: Compound distribution Last edited: 2018-06-28 5.9. Correlating Results of Calculations Applies to: @RISK, all releases How do I correlate the results of a calculation? There is no way to do that in a simulation. Only @RISK distributions can be correlated. Results of calculations, whether they are @RISK outputs or not, cannot be correlated. You can use a RiskCorrel function to compute the correlation coefficient that actually occurred in the simulation, but there's no way to impose a desired correlation on them. See also: How @RISK Correlates Inputs Last edited: 2018-06-29 5.10. Correlation Matrix Exceeds Excel's Column Limit Applies to: @RISK 4.x–7.x I'm using Excel 2003, and I have a correlation matrix whose size exceeds the 256-column limit in Excel. How can I use a correlation matrix of this magnitude with @RISK? Rebuilding the structure of the matrix lets you use a correlation matrix of this size with @RISK. Below is a description of how to rebuild the matrix. Also, attached are two examples that illustrate the transformation and referencing of the rebuilt matrix. Note: @RISK 4.5.7 and earlier won't be able to run a simulation on the example models, because the matrices don't exceed the column limit in Excel. If you try a simulation, these older versions of @RISK will display the error message, "The correlation matrix [matrix reference] is not square." (See this article if you have @RISK 5.0 or newer.) When you have a matrix that does exceed the column limit, you won't get this error message. The attached examples are for illustrative purposes only, as regarding those older versions. To rebuild the matrix, take the following steps: 1. Break your original matrix up into smaller blocks, moving from left to right. The number of columns in the blocks should be less than Excel's column limit, and each block should have the same number of rows as the original matrix. Make as many of the blocks the same size as possible.
For example, if you have a 400 x 400 correlation matrix, you could break it up into two blocks, each with 400 rows and 200 columns. A 789 x 789 correlation matrix could be broken up into three blocks with 789 rows and 250 columns each, and one block with 789 rows and 39 columns. 2. Stack the blocks vertically to create a new matrix. Move from left to right, placing each block under the one before it. Place the second block under the first block, place the third block under the second block, and so on. See Example 1 in attached file. The last block may have fewer columns than the others. That last and smallest block should always be placed at the very bottom of the stack. See Example 2 in attached file. 3. Define a range name for the rectangle that contains the rebuilt matrix. The matrix cell range reference must be rectangular; it can't have an irregular shape. If you end up with one section of the matrix that has fewer columns than the rest, make the matrix cell range reference rectangular by including empty cells in the reference. See Example 2 in attached file. For example, that 789 × 789 correlation matrix is rebuilt as three blocks 789×250 and one block 789×39, so you define your range name for the resulting rectangle of 3156 rows and 250 columns. 4. Add the RiskCorrmat function to your inputs. In your Excel workbook, add the RiskCorrmat function directly to each cell containing the input functions that you wish to correlate. The syntax for the RiskCorrmat function is: RiskCorrmat (matrix cell range, position, instance) I'm running Excel 2007 or later, which allows a million rows, so I don't need this technique for new models. But I've still got some older models that used this technique. Do newer versions of @RISK still support it? Yes, this technique works in any type of Excel workbook — XLS, XLSX, XLSM, etc. — in any supported Excel, for any @RISK release 4.x–7.x.
(Although Excel 2003 is not supported by @RISK 7.x, files created by Excel 2003 are supported by @RISK 7.x in later versions of Excel.) Additional keywords: Corrmat property function Last edited: 2018-03-08 5.11. Changing Correlation Coefficients During a Simulation Applies to: @RISK 4.x–7.x How does @RISK respond if the correlation matrix contains cell references or formulas whose values change in the middle of a simulation? @RISK will apply the new coefficients from that point forward in the simulation. Last edited: 2015-06-24 5.12. Making Correlations Conditional Is it possible to induce correlations between @RISK distribution functions in such a way that the coefficient of correlation depends on the results in other cells? That is, the level of correlation between variables would depend on the risk outcome in other variables. Conditional correlations can be created by first modeling all possible cases of correlation and then using logic to control which of the correlated variables are passed into the model. Please see the attached example. This example has three risk variables A, B, and C. The error terms for these three variables are correlated. However, the correlation between A, B, and C within a given period (t) will depend on the outcome for variable A in period (t-1). last edited: 2012-06-29 5.13. Excel Reports a Correlation Different from What I Specified I specified a correlation coefficient, but when I apply Excel's CORREL( ) function to the simulation data a different correlation is reported. Why? Briefly, in the correlation matrix in @RISK you supply rank-order correlation coefficients (Spearman), but Excel calculates product-moment correlation coefficients (Pearson). In @RISK 5.5.0 and later, you can use the RiskCorrel( ) function to show the Spearman coefficient after a simulation, and it should be close to what you specified. For full details, please see How @RISK Correlates Inputs, particularly the last few paragraphs. 
That page contains a downloadable example to illustrate the issues. Last edited: 2015-06-23 5.14. How @RISK Computes Rank-Order Correlation Applies to: @RISK 5.5.0 and later The RiskCorrel( ) function can return the Pearson product-moment or Spearman rank-order correlation coefficient. How is the rank-order coefficient computed? @RISK uses the method in Numerical Recipes by Press, Flannery, Teukolsky, and Vetterling (Cambridge University Press; 1986), pages 488 and following. Each number in each of the simulated distributions is replaced with its rank within that distribution, as an integer from 1 to N (number of iterations). If the values in a distribution are all different, as they usually are with continuous distributions, then the rank numbers will all be distinct. If there are duplicate numbers within the distribution, as often happens with discrete distributions, then "it is conventional to assign to all these 'ties' the mean of the ranks that they would have had if their values had been slightly different. This [is called the] midrank" (quoting from the reference book above). Once the ranks are obtained, the rank-order coefficient is simply the Pearson linear correlation coefficient of the ranks. The above explains how @RISK computes rank-order correlation after a simulation is complete. @RISK also uses rank-order correlation within a simulation, when drawing numbers for correlated distributions. This page gives details: How @RISK Correlates Inputs. Last edited: 2015-06-23 5.15. Create a Correlation Matrix from Historical Data Disponible en español: Crear una matriz de correlación a partir de datos históricos With @RISK, you can use correlation coefficients reported from historical data to simulate distribution functions created from the data. This is an effective way to use past observations to predict future behavior. 
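The midrank-plus-Pearson computation described in section 5.14 is easy to reproduce in plain Python, and the same few lines are handy for sanity-checking coefficients computed from historical data. This is an illustrative sketch, not @RISK's actual code:

```python
def midranks(values):
    # Assign ranks 1..N; tied values all get the mean of the ranks
    # they span (the "midrank" convention).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

def pearson(x, y):
    # Ordinary product-moment (Pearson) correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    # Rank-order coefficient = Pearson coefficient of the midranks.
    return pearson(midranks(x), midranks(y))
```

Any monotone relationship, such as x versus x squared on positive values, gives a Spearman coefficient of 1 even though its Pearson coefficient is below 1.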
For example, you may have data from several years representing mortgage interest rates, mortgages sold, housing starts, and inventory of existing homes for sale. Each of these variables bears a historical relationship to the others. For example, the data may show that a rise in inventory of existing homes on the market is typically accompanied by a decrease in housing construction starts. Mortgage interest rates may exhibit a similar inverse relationship to both housing starts and mortgages sold. This web of historical relationships can be captured in correlation coefficients identified with an Excel correlation matrix. Coefficients from the Excel matrix can then be copy/pasted into an @RISK correlation matrix to control sampling of distributions that represent these variables. To identify the correlation coefficients of your data: 1. Make sure that your data are located in adjacent columns or rows in Excel. The Excel correlation analysis tool will not report correlations for non-adjacent selections made with the Ctrl key. 2. In Excel 2010 or 2007, choose Data » Data Analysis; in Excel 2003 or earlier it's Tools » Data Analysis. The Data Analysis dialog appears. (If the Data Analysis item does not appear in the Excel menu, then you need to install the Analysis ToolPak Add-In from your Microsoft Office CD, or you need to place a check mark in the box beside Analysis ToolPak in your Excel Add-Ins dialog.) 3. Select Correlation from the list box in the Data Analysis dialog and click the OK button. The Correlation dialog appears. 1. In the area labeled Input, click the appropriate button to indicate whether your data are arranged in columns or rows, and check the box beside Labels in First Row (or Labels in First Column) if applicable. 2. To indicate the range of data for which you want correlations reported, click the Collapse/Expand Dialog button (small red and blue square) beside the Input Range text box. 
This shrinks the dialog temporarily so that you can select your data from the Excel spreadsheet. 3. Select your data. The cell reference for the selected range appears in the Input Range text box. 4. Click the Collapse/Expand Dialog button again to expand the dialog. 5. In the area labeled Output options, click one of the option buttons to indicate whether you want the Excel correlation matrix to appear in an Output Range in the same worksheet, a New Worksheet Ply (tab) in the same workbook, or a New Workbook. If you choose Output Range, you can enter a single cell. This cell will be the upper left cell at the beginning of the matrix location. If you choose New Worksheet Ply, you must enter a name for the new worksheet. 6. Click the OK button. A correlation matrix appears in Excel reporting the correlation coefficients of your data. 4. Create an @RISK distribution function from each column (or row) of data. (See "Distribution Fitting" in the @RISK manual or on-line help.) To ensure that variable and coefficient positions in the @RISK correlation matrix match positions in the Excel matrix, create the distributions across an Excel row, or down an Excel column, in the same order as the data columns from which the Excel matrix was created. (Alternatively, you can create the distribution functions, and then move them so that they are arranged in the order reflected in the Excel matrix.) 5. Look at the Excel matrix again and note carefully the order of variables and coefficients in that matrix. 6. Select all cells in the Excel correlation matrix that contain correlation coefficients. Be sure to exclude the variable labels from the selection. 7. Open the @RISK Model Window. If you followed step 4, the Inputs to be correlated should appear in the same order in which they were arranged in the Excel worksheet. 8. From the Explorer pane (left-hand side) of the @RISK Model Window, use the Shift key to select the group of Input distributions you just created from your data. 9. 
Right click the selection of Inputs. A popup menu appears. 10. Choose Correlate Distributions from the popup menu. The @RISK Correlation window appears. Verify that the order of variables and coefficients in the @RISK correlation matrix matches the order of variables and coefficients in the Excel matrix. 11. In the @RISK Correlation window, select all cells that contain correlation coefficients. Be sure to exclude the variable labels from the selection. 12. From the menu in the @RISK Model Window choose Edit > Paste. Verify that the coefficients appear in their expected positions within the @RISK correlation matrix. Rename the matrix as desired by entering a new name in the Name text box. 13. Click the Apply button to enter the correlation matrix in @RISK. The name of your @RISK correlation matrix now appears beneath the list of Inputs in the Explorer pane of the @RISK Model Window. In addition, correlation icons appear beside each correlated Input in the grid to the right of the Explorer pane, and the RiskCorrmat function now appears as an argument of each of the correlated distribution functions. In Excel, a new worksheet containing your @RISK correlation matrix has been added to the workbook. last edited: 2012-08-08 5.16. How and Why to Switch from RiskDepC to RiskCorrmat Applies to: @RISK 5.5 and newer I'm using RiskIndepC and RiskDepC functions to correlate my inputs, but the manual says that's the old way, and I should use RiskCorrmat. Does it really matter? If you have only one RiskDepC for each RiskIndepC, it really doesn't matter. @RISK creates a 2×2 matrix for each DepC/IndepC pair. But multiple RiskDepC functions associated with any particular RiskIndepC can cause problems. To understand the issue, you need to know that @RISK creates a separate correlation matrix for each RiskIndepC function, using the RiskDepC functions associated with that RiskIndepC. 
The top row of the matrix is assigned to the variable designated with RiskIndepC, and the other rows to the RiskDepC variables with the same string identifier. The correlations you specify go in the first column of that matrix. The other columns of the matrix represent correlations among the RiskDepC variables, and since the DepC/IndepC scheme gives no way to assign those correlations, @RISK uses zeroes. (See Correlation Matrix Equivalent to RiskIndepC and RiskDepC in the attached workbook.) Now, why is this bad? First, you may unwittingly create a matrix that is not self-consistent. For example, if you have two RiskDepC functions specifying correlations of 0.9 and 0.5 with your RiskIndepC, it's not mathematically possible for those two RiskDepC variables to be correlated to each other with a coefficient of 0.0, yet that's what @RISK uses, so you get the message that the matrix is not self-consistent. If you let the simulation proceed, @RISK will adjust the matrix to be self-consistent, changing not only the zeroes but the correlations you specified, and the simulated correlations may be very different from what you specified. (See Simulated Correlations if You Click OK in the attached workbook.) Even if your matrix is self-consistent, for example two RiskDepC functions specifying 0.8 and 0.5, while it's mathematically possible for those two dependent variables to have a correlation coefficient of zero with each other, it's not very likely. Thus, your model may not be representing the real-world situation as accurately as possible. What can I do to prevent these problems? Switch to RiskCorrmat. Starting with @RISK 5.5, you can specify an adjustment weights matrix to tell @RISK to come up with a self-consistent matrix that preserves your desired correlations as far as possible, while assigning valid values to the correlations between the RiskDepC variables. The attached workbook gives an example of how to make the conversion: 1. 
Construct the correlation matrix for each of your RiskIndepC variables, as described. Also construct an adjustment weights matrix with 100's in the first column and zeroes elsewhere. 2. Use a RiskCorrectCorrmat array function to compute a self-consistent adjusted correlation matrix. (You may wish to give the new matrix a name in Excel, for convenience in editing formulas.) 3. Change RiskIndepC and RiskDepC to RiskCorrmat. RiskCorrmat takes two arguments, the matrix and a variable number. Use 1 for the variable that used to have RiskIndepC, and 2 through n for the variables that used to have RiskDepC. The end result is that the simulated correlations will match the ones you originally assigned in RiskDepC, as closely as mathematically possible. If you wish, you can highlight the adjusted matrix and select first Copy, then Paste Special Values to replace the formulas with numbers. Then the original matrix and the adjustment weights matrix can be deleted. Last edited: 2017-08-08 5.17. Correlation Coefficient of Output Distributions Please read the full article in the "Simulation Results" chapter. 5.18. Correlation of Time Series Applies to: @RISK 6.x/7.x, Industrial Edition How does time series correlation work? Is it different from correlating regular distributions? Short answer: You can correlate between different time series, but that correlation may not be visible in the displayed numbers. Let's clarify the time series process, to help in understanding correlation of time series. You create a time series by fitting real-world data. However, the fitting process usually involves transforms, such as differencing or taking the logarithm. @RISK actually fits the transformed data, not your real-world data. The time series is a series of transformed data, not real-world data. After fitting a series, @RISK projects the series forward into the future. 
Each time period's prediction has two components: a formula that uses transformed data from one or more previous time periods, plus a randomly generated noise term or error value. Remember, all of the predictions are done with the underlying time series values, not with real-world data. After computing the formula and noise term for a time period, @RISK reverses the transforms that were used in the fit, and the result is the displayed numbers that you see in your Excel worksheet. The displayed numbers differ from the underlying time series values when there are transforms in that particular time series. You can correlate two or more time series functions using the @RISK Define Correlations window, or manually using RiskCorrmat( ) property functions, just as you would correlate regular @RISK distribution functions. However, correlation between time series is fundamentally different from the correlation of standard distributions. In correlating regular distributions, all the values for all iterations form one array per distribution, and the correlation is applied to those two arrays. For regular distributions, correlation is an attribute of the whole simulation, not of any particular iteration. By contrast, when two time series functions are correlated, the correlation is reapplied, from scratch, in each iteration. The two arrays that get correlated are the noise terms within the projected time periods of the two time series for that particular iteration. Again, this happens within each iteration, without reference to any other iteration. This explains why the displayed values won't match your correlation coefficients. The displayed values aren't correlated; only the noise term parts of each underlying time series value are correlated. The formula parts can't be correlated because they are generated by the time series function, so the underlying time series values as a whole are not correlated. 
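The mechanism just described can be sketched with a toy example: two AR(1)-style series whose per-period noise terms are paired off with a target correlation. This is only an illustration of the idea; it uses a Gaussian pairing where @RISK actually uses the rank-order method, and real @RISK time series functions are more elaborate. All parameter values here are hypothetical.

```python
import random

random.seed(7)
phi_a, phi_b, rho, periods = 0.6, 0.8, 0.9, 12

series_a, series_b = [0.0], [0.0]
for _ in range(periods):
    # The correlation is applied only to this period's pair of noise terms...
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    noise_a = z1
    noise_b = rho * z1 + (1 - rho ** 2) ** 0.5 * z2
    # ...while each displayed value mixes that noise with earlier values,
    # so the series values themselves carry no direct correlation.
    series_a.append(phi_a * series_a[-1] + noise_a)
    series_b.append(phi_b * series_b[-1] + noise_b)
```

The noise pairs (noise_a, noise_b) are correlated by construction, but they are never shown separately; only the mixed series values appear.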
What you see is the real-world data, computed by reversing the transforms of that particular time series. But even if there are no transforms, so that the displayed numbers equal the underlying time series values, still the correlated noise term is just part of each displayed number. Correlations are honored but are effectively buried, so you never see the numbers that are actually being correlated. Can I correlate the successive periods of a given time series? There is no way to correlate the noise terms between time periods of a given time series. Time series correlation applies only between one time series and another. And when the time series were produced by batch fit? When you generate sets of correlated time series using the time series batch fit command, it constructs a correlation matrix as part of its output. (See Correlated Time Series in Batch Fit.) The generated coefficients apply to the historical data that you supplied. You're free to alter the correlation coefficients, or add or remove time series in the matrix. Forward projections use the coefficients in that matrix just like any other correlations for time series. These correlations are applied in the same way as described above: the noise terms of the underlying time series values can be correlated, but those noise terms are not displayed separately. Can I define a copula for two or more time series? Copulas cannot be defined for Time Series array functions, only for regular @RISK input distributions. Last edited: 2016-09-19 5.19. Correlated Time Series in Batch Fit Applies to: @RISK 6.x/7.x Can you tell me more about the correlation matrix shown in the fit summary after a time series batch fit? The coefficients in that matrix don't seem to match the results of Excel's CORREL( ) function; where do the numbers come from? How are they used? The correlation matrix in the fit summary is the Spearman(*) correlations of the transformed historical data. 
Those transformed data are not available as numbers. However, you can see the graph of the transformed data by doing a single fit rather than a batch fit, and using the same transformations. (*)The correlation matrix was Pearson in @RISK 6.0, but was changed to Spearman (rank order) in 6.1. When distributions with very different shapes are correlated, results with Pearson can be unsatisfactory. The change to the distribution-free Spearman rank order correlation was made for this reason, and to be consistent with how other correlations work in @RISK. That correlation matrix in turn is used to correlate the projected distributions of the time series in each time period by the rank order method. The correlation is not applied to the numbers that you can see in a worksheet; it is the "raw" time series functions that are correlated. Conceptual summary of a batch fit: 1. @RISK applies transforms to the historical data and then fits the transformed data. The transforms can be user selected, or if you click Auto Detect then @RISK will determine the transforms to use. @RISK computes the actual Spearman correlations of the transformed historical data. This is the matrix shown in the fit summary. 2. For projections, @RISK applies the projection functions shown in the worksheet cells. This is conceptually a two-stage process: first "raw" numbers are developed, projecting from the transformed historical data according to the fit. Then the "raw" numbers are de-transformed by reversing the transforms that were applied to the historical data, and the de-transformed projections become the final output of the functions that you see in the worksheet cells. The correlation matrix is applied to the "raw" numbers, not to the final de-transformed projections that appear in the worksheet. Last edited: 2015-06-23 5.20. Copulas Applies to: @RISK 7.x I need something more general than rank-order correlation. Does @RISK support copulas? Yes, beginning with release 7.0.0, @RISK supports copulas. 
See "Define Copula Command" in the @RISK user manual or help file. If you have an earlier version and would like to use copulas, please see Upgrading Palisade Software. Last edited: 2015-08-13 6. @RISK Simulation: Numerical Results 6.1. Convergence Monitoring in @RISK Applies to: @RISK for Excel 4.5–7.x I know that @RISK lets me set criteria for convergence monitoring, but how does it actually do the calculations? Answer for @RISK 5.x–7.x: Convergence monitoring means that @RISK keeps simulating until it has stable results for the outputs. To have @RISK monitor convergence, set the number of iterations to Auto, and then go to the Convergence tab of Simulation Settings. Convergence can be done on any combination of the mean, standard deviation, and a specified percentile for any or all outputs. You specify a convergence tolerance such as 3%, and a confidence level such as 95%. The simulation stops when there is a 95% chance that the mean of the tested output is within 3% of its true value. Analogous calculations are done if you monitor standard deviation or a percentile. If you specify "Perform Tests on Simulated" for two or three items, @RISK considers convergence to have occurred only if all the selected measures meet the convergence test. The setting "Calculate every ___ iterations" says how often in a simulation @RISK pauses to check whether convergence has occurred, but it has no effect on the stringency of the test. It's simply a trade-off for efficiency: if you check convergence more often, you may converge in fewer iterations but in more time because convergence testing itself imposes some overhead. If you test convergence less often, it may take more iterations but less time for a similar reason. With convergence testing selected, the Results Summary window will open automatically in a simulation to show progress toward convergence. The status column of that window shows OK for outputs that have converged. But, typically, some outputs converge faster than others. 
If a given output has not converged, a number from 1 to 99 is shown in the status column. That is @RISK's estimated percentage of the number of iterations done so far over the number that would be needed for this output to converge. Example: if the number is 23, and you've done 10,000 iterations, then @RISK estimates that a total of about 10,000/23% = 43,500 iterations would be required for convergence. Answer for @RISK 4.5: Every N iterations (for example every 100 iterations, where N is user selectable), @RISK calculates these three statistics: • The relative change in the mean of the monitored output, which is (mean from previous test made N iterations ago - current mean) / max(abs(previous mean), abs(current mean)) • The relative change in standard deviation. • The relative change in the average percentile. For this @RISK calculates the relative change in the 5th percentile, 10th, 15th, ..., 90th, 95th, then takes the average of these relative changes. If all three of these statistics are less than or equal to the user-specified threshold, @RISK marks the simulation as converged for this test. If the simulation is marked as converged for 2 tests in a row, @RISK considers it converged, and stops. Last edited: 2017-06-30 6.2. More Than 50,000 Iterations to Converge Applies to: @RISK 6.1.1 and newer I have set up convergence, with iterations set to Auto, but @RISK stops at 50,000 iterations although not all of my outputs have converged. @RISK 8.x: The application comes with an interface where the user can choose the maximum number of iterations for Auto Stop. This option is enabled when the user selects the "Auto" setting in the number of iterations field. When the Simulation Settings User Interface is open, a new field labeled "Maximum of" will let the user define the maximum number of iterations. Details can be found in the Online Help. @RISK 6.1.1 and 7.x: By default, @RISK stops at 50,000 iterations rather than keep iterating indefinitely. 
However, it is true that some models will eventually converge, but at some point after 50,000 iterations. You could set a higher number of iterations explicitly instead of Auto, but then you lose convergence monitoring. Beginning with @RISK 6.1.1, you can change that 50,000-iteration limit for convergence monitoring. Create a workbook-level name RiskMaxItersForAutoStop with a value such as =100000. (The leading = sign is required.) With iterations set to Auto, @RISK will stop when outputs converge, or when the RiskMaxItersForAutoStop number of iterations is reached, whichever happens first. To create a workbook-level name in Excel 2007, Excel 2010, or Excel 2013, click Formulas » Name Manager » New. Enter the name RiskMaxItersForAutoStop. In the Refers-to box, enter your desired maximum number of iterations for convergence monitoring, preceded by the = sign. Click OK and then Close. To create a workbook-level name in Excel 2003, click Insert » Name » Define. In the box at the top, enter the name RiskMaxItersForAutoStop. In the Refers-to box, enter your desired maximum number of iterations for convergence monitoring, preceded by the = sign. Click Add and then OK. During simulation, the progress window at the lower left of your screen shows a percent complete. If iterations is Auto, @RISK doesn't know how many iterations will be needed until convergence has occurred, so it computes the percent complete on a basis of 100% = 50,000 iterations. If you have set RiskMaxItersForAutoStop to a larger number, and more than 50,000 iterations are needed, the percent complete will go above 100%. Last edited: 2020-06-01 6.3. Convergence by Testing Percentiles Why do different percentiles take different numbers of iterations to converge? And why do percentiles sometimes converge more quickly than the mean, even though the mean should be more stable? 
There can definitely be some surprises when you use percentiles as your criterion for convergence, and you can also get very different behavior from different distributions. First, an explanation of how @RISK tests for convergence. In Simulation Settings, on the Convergence tab, you can specify a convergence tolerance and a confidence level or use the default settings of 3% tolerance and 95% confidence. Setting 3% tolerance at 95% confidence means that @RISK keeps iterating until there is better than a 95% probability that the true percentile of the distribution is within ±3% of the corresponding percentile of the simulation data accumulated so far. (See also: Convergence Monitoring in @RISK.) Example: You're testing convergence on P99 (the 99th percentile). N iterations into the simulation, the 99th percentile of those N iterations is 3872. A 3% tolerance is 3% of 3872 = about 116. @RISK computes the chance that the true P99 of the population is within 3872±116. If that chance is better than 95%, @RISK considers that P99 has converged. If that chance is less than 95%, @RISK uses the sample P99 (from the N iterations so far) to estimate how many iterations will be needed to get that chance above 95%. In the Status column of the Results Summary window, @RISK displays the percentage of the necessary iterations that @RISK has performed so far. Technical details: @RISK computes the probabilities by using the theory in Distribution-Free Confidence Intervals for Percentiles (accessed 2020-07-28). The article gives an exact computation using the binomial distribution and an approximate calculation using the normal distribution; @RISK uses the binomial calculation. Now, an explanation of anomalies, including those mentioned above. • P1 (first percentile) takes many more iterations to converge than P99, or vice versa. At first thought, you might expect P1 and P99 to converge with the same number of iterations, P5 and P95 with the same, P10 and P90 with the same, and so on. 
But it usually does not work out that way. Let's take just the first and 99th percentiles as an example. The tolerance for declaring convergence complete is expressed as a percentage of the target. If the values are all positive, then the first percentile is a smaller number than the 99th percentile, and therefore the tolerance for P1 is a smaller number than the same percentage tolerance for P99. The difference is greater if the distribution has a wide range, or if the low end of the distribution is at or near zero. For an extreme example, consider a uniform continuous distribution from 0 to 100. P1 is around zero, and P99 is around 100, so a 3% tolerance for P1 is quite small and will take about 406,000 iterations to achieve. By contrast, a 3% tolerance for P99 is relatively much larger and is achieved in only 30 iterations. On the other hand, if the values are all negative then P99 will have a smaller magnitude than P1, and will therefore converge more slowly. • Convergence happens too quickly and is very poor. This occurs when the distribution has a narrow range, so that there is little difference between one percentile and another. Consider the uniform continuous distribution from 10000 to 10100. Every percentile is in the neighborhood of 10,050, so a 3% tolerance is about 10,050±302 = 9,748 to 10,352. That range is actually larger than the data range. Therefore, the very first sample value for every percentile will be within that range, so convergence of any percentile happens on the first iteration, but that "convergence" is meaningless. • P50 takes more iterations to converge than P1 or P99, and also more than the mean. This is expected for many distributions. The percentile convergence is based on a binomial distribution, with p = the percentile being tested. The binomial distribution is fairly broad for p = 50%, and so the margins of error are greater and convergence takes more iterations. 
But as p gets closer to 0 or to 100%, the distribution gets more narrow, margins of error get smaller, and convergence happens in fewer iterations. As for convergence of a mean versus convergence of the 50th percentile, percentiles use the binomial distribution, but the confidence interval for the mean uses Student's t. Margins of error are usually narrower for Student's t than for the binomial, so convergence of the mean happens faster than convergence of the 50th percentile, even for a symmetric distribution. Advice: In @RISK's simulation settings you have to set convergence tolerance as a percentage of the tested statistic (mean, standard deviation, or percentile), but the appropriate percentage is not always obvious. To help you make the decision, run a simulation with a few iterations, say 100, just to get a sense of what the output distribution looks like. Then, if you expect the percentile value to be close to zero, specify a higher tolerance or choose a different statistic. Also check your tolerance against the expected range of the output, and if necessary specify a smaller tolerance. Last edited: 2020-07-28 6.4. Static Value of Output Differs from Simulated Mean Applies to: @RISK, all releases @RISKOptimizer, all releases @RISK Developer's Kit (RDK), all releases In Simulation Settings » When a Simulation is Not Running, Distributions Return, I selected Expected Value or True EV. When a simulation is not running, the expected values of my inputs and outputs are visible in my model. I run a simulation, and the simulated means for my inputs closely match the expected values, but the simulated means of my outputs are very different from the expected values. What is wrong? This is normal behavior. When a simulation is not running, both inputs and outputs display their static values. For inputs, the static values are the means (unless you changed that — see Setting the "Return Value" of a Distribution). For outputs, the static values are not the means, but are values calculated from the displayed values of the inputs. In general, the mean of a calculated result does not equal the value you would calculate from the mean inputs, and the same is true for percentiles and mode. 
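A few lines of Python (an illustration, not part of @RISK) show the gap between the mean of an output and the output computed from the mean input:

```python
import random

# 100,000 draws from a continuous uniform(0, 1) input.
random.seed(1)
xs = [random.uniform(0.0, 1.0) for _ in range(100_000)]

mean_x = sum(xs) / len(xs)                          # close to 1/2
mean_of_squares = sum(x * x for x in xs) / len(xs)  # close to 1/3
square_of_mean = mean_x ** 2                        # close to 1/4

# The simulated mean of the output x**2 is near 1/3, not the 1/4 you
# would get by squaring the mean input: the mean of the squares is
# different from the square of the mean.
```

The same gap appears, to a greater or lesser degree, in any non-linear calculation.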
For outputs, the static values are not the means, but are values calculated from the displayed values of the inputs. In general, the mean of a calculated result does not equal the value you would calculate from the mean inputs, and the same is true for percentiles and mode. If you want to display the means of outputs in your worksheet, use the RiskMean function. For percentiles, including the median, the 50th percentile, use RiskPtoX. Please see Placing Simulation Statistics in a Worksheet. For an example, please download the attached workbook. In the example, column C is the inputs, and column D is the outputs created from them. For this example the computation is just a square or reciprocal, but the same principle holds for the calculations in your model. Column E is the simulated mean of each output. As expected, the simulated means are quite different from the static values displayed in column D. The principle here is "the mean of the squares is different from the square of the mean." But the simulated means are quite close to the theoretical means I calculated for those outputs in column F. Row 4 is a standard normal distribution (mean=0) and its square (mean=1, not 0). Row 5 is a discrete uniform with two values 1 and 9 (mean=5) and its reciprocal (mean=5/9, not 1/5). Row 6 is a continuous uniform (mean=1/2) with its square (mean=1/3, not 1/4). If you run a simulation, you can see easily that the distributions of the outputs have different shapes from the distributions of the inputs. This will always be true, to a greater or lesser extent, for a non-linear model. Since the vast majority of models are non-linear, you should expect to see the mean values of outputs be different from the static values computed from the mean inputs. See also: Last edited: 2017-09-07 6.5. 
Static Value of Input Differs from Simulated Mean

Applies to: @RISK, all releases
RISKOptimizer, all releases
@RISK Developer's Kit (RDK), all releases

Why is the expected value displayed in the spreadsheet cell that contains an @RISK input function different from the mean of the simulation results for that input?

The simulated mean of a distribution will typically be close to the theoretical mean, but not exactly the same. This is normal statistical behavior. And it's not just the mean: the same is true of the standard deviation, median, mode, percentiles, and all other statistics. To illustrate, set up a simulation in the following way:

1. Start with a blank workbook.
2. In cell A1, define an @RISK input as a normal distribution with mean of 10 and standard deviation of 1.
3. Click the Simulation Settings icon on the @RISK toolbar. On the Iterations tab, set the number of iterations to 10,000. On the Sampling tab, select Latin Hypercube for the sampling type and a fixed random generator seed of 1.
4. Run the simulation, click the cell, and click Browse Results in the @RISK toolbar. (In @RISK 4.x, open the @RISK–Results window.)

Statistical theory tells us that the expected distribution for the mean of the input is a normal distribution with a mean of 10 and a standard deviation (often called the standard error) of 1/√10000 = 0.01. Although the exact results will vary by version of @RISK, you should find that the mean of the simulated input is well within the interval 9.99 to 10.01, which is within one standard error of the theoretical mean. (The default Latin Hypercube sampling type does considerably better than classic Monte Carlo sampling.)

But the displayed value is not even close to the simulated mean of the input. How can that be?

Most likely this is a setting in your model. In the Simulation Settings dialog, look at When a Simulation is Not Running and verify that it is set to Static Values, and set Where RiskStatic is Not Defined, Use to True Expected Values.
As the dialog box implies, a RiskStatic property function in any distribution will make that distribution display the RiskStatic value instead of the statistic you select in Static Values.

Is there any way to get the exact mean of the distribution in my worksheet?

We'd say "theoretical mean" rather than exact mean, and that's the key to the answer. Where RiskMean returns the mean of the last simulation that was run, RiskTheoMean returns the mean of the theoretical distribution. There's also RiskTheoStdDev, RiskTheoPercentile, and so forth. In @RISK, click Insert Function » Statistic Functions » Theoretical to see all of them.

See also:

Last edited: 2017-10-03

6.6. Placing Simulation Statistics in a Worksheet

Applies to: @RISK For Excel 4.x–7.x
RISKOptimizer 1.0, 5.x

Is there a way to have result statistics from an @RISK simulation placed in a specific location of a spreadsheet automatically at the end of a simulation?

Statistics for any cell—including inputs (@RISK distributions), @RISK outputs, and plain Excel formulas—can be reported directly in the spreadsheet at the end of simulation by using the statistic functions provided with @RISK. A full list of the statistic functions is available in the @RISK menu: Insert Function » Statistic Functions » Simulation Result. As an example, the formula =RiskMean(A1) will return the simulated mean of cell A1 across all iterations in the simulation.

For an @RISK output or an Excel formula, that's the only option. For an @RISK input distribution, you have a choice: =RiskMean(A1) to return the mean of random values drawn for that particular simulation, or =RiskTheoMean(A1) to return the theoretical mean of the distribution. RiskMean values will vary slightly from one simulation to the next, and are not available till you have run a simulation; RiskTheoMean is based on the theoretical distribution and will always return the same result, even if you have not run a simulation. See Statistics for an Input Distribution.
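A generic Python sketch (not using @RISK; all names here are illustrative) makes the simulated-versus-theoretical distinction concrete: drawing 10,000 samples from a Normal(10, 1) gives a sample mean that misses the theoretical mean by roughly one standard error, 1/√10000 = 0.01.

```python
import random
import statistics

# Generic illustration (not @RISK itself): the simulated mean of a
# Normal(10, 1) input differs from the theoretical mean by roughly one
# standard error, SE = sigma / sqrt(n) = 1/sqrt(10000) = 0.01.
random.seed(1)                     # like fixing the random generator seed
n = 10_000
samples = [random.gauss(10, 1) for _ in range(n)]

simulated_mean = statistics.mean(samples)   # plays the role of RiskMean
theoretical_mean = 10.0                     # plays the role of RiskTheoMean
standard_error = 1 / n ** 0.5

print(simulated_mean, theoretical_mean, standard_error)
```

Plain random sampling is used here, so the simulated mean can miss by a few standard errors; as noted above, Latin Hypercube sampling tracks the theoretical mean considerably more tightly.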
In @RISK 5.5 and later, by default the statistic functions are not calculated until after the last iteration of a simulation, though you can change this in Simulation Settings. In @RISK 5.0 and earlier, the statistic functions all calculate in "real time", meaning that @RISK recalculates the statistic at each iteration based on the number of samples that have been drawn. For more about the timing of calculating the statistic functions, please see "No values to graph" Message / All Errors in Simulation Data.

See also:

Additional keywords: Mean, simulated mean, percentile, simulated percentile, statistics functions

Last edited: 2017-06-30

6.7. Simulation Statistics for Output Ranges

Applies to: @RISK 5.x–7.x

Many of my @RISK outputs are in output ranges, where a group of related outputs share a name. The formulas look like this:

=RiskOutput(,"Profit by Month",1) + formula
=RiskOutput(,"Profit by Month",2) + formula
=RiskOutput(,"Profit by Month",3) + formula

and so on. How can I address these outputs by name in a statistic function like RiskMean( ) or RiskPercentile( )?

Outputs and inputs referenced by name in statistic functions must have unique names, according to the @RISK manual. When the same name is used for multiple outputs, including a range of outputs, statistic functions need to reference them by cell reference rather than by name. Example: =RiskMean(A15).

Last edited: 2015-06-26

6.8. What Was My Random Number Seed?

Applies to: @RISK 5.x–8.x

I've run a simulation, but it was with the random number seed set to "Choose Randomly". How can I find out what seed @RISK actually used, so that I can reproduce the simulation later?

If Quick Reports were run automatically at the end of the simulation, look at the Simulation Summary Information block on any of them. The Random Seed item gives you the random number seed that was actually used.
If you haven't run Quick Reports, see Get a Quick Report for Just One Output, unless of course you want Quick Reports for all outputs.

In Version 8, the Quick Report is not available. The RiskSimulationInfo function can be used to determine the seed number instead. You can find more about that function here: https://

If you don't have any outputs, you can use a snippet of Visual Basic code without being a programmer:

1. Press Alt-F11 to open the Visual Basic Editor, then F7 to open the code window.
2. Paste this VBA code into the window:

Sub StoreSeed()
    Risk.Simulation.Settings.RandomSeed = _
End Sub

3. Please see Setting References in Visual Basic for the necessary references and how to set them.
4. Click somewhere in the middle of the StoreSeed routine, and press F5 to run the code. This will change @RISK's Simulation Settings, on the Sampling tab, to a fixed random seed, and it will insert the actual seed from the latest simulation as the seed for future simulations.
5. If you leave the code in place, Excel 2007 or above will no longer store the workbook as an .XLSX but instead will use .XLSM format. This may present you or anyone who opens your workbook with a macro security prompt. To prevent that, you can delete the pasted code before you save the workbook. The fixed random seed will remain in Simulation Settings.
6. Save the workbook to save the fixed random seed.

See also: Random Number Generation, Seed Values, and Reproducibility

Last edited: 2020-12-03

6.9. Mode of Continuous Data

Applies to: @RISK, all versions

I'm displaying the mode of my data, and it seems to be very far from the tallest bar in the histogram. What is wrong? How does @RISK compute the mode of a continuous distribution?

The traditional definition of the mode of discrete data is the most frequently occurring value of the variable.
An analogous definition works well for most theoretical continuous distributions: you have a smooth probability density curve pdf(x), and the mode is simply the value of x where the pdf(x) is highest. But for continuous data in simulation results, it's unusual to have identical data points, and therefore a new definition is needed. Different authorities use different definitions and therefore find different modes; the way you bin the data can also change which value you call the mode. One way to come up with a mode is to divide the n simulated data points into k bins, each with n/k consecutive data points, and then look at the widths of the bins. The narrowest bin is the one where the points are clustered closest together, which means that the probability density is greatest in that bin, so the mode must be there. @RISK uses that method. It divides the simulated data into k = 100 bins unless there are fewer than 300 data points in the simulation; in that case @RISK uses k = n/3 bins, so that a bin never has fewer than three points. @RISK then finds the narrowest bin, where the points are most closely clustered together. Finally, it computes the mean value of the n/k points in that bin, and reports that value as the mode. (This information is current as of 2015-05-01, but may change in a future release of @RISK.) The binning for purposes of finding the mode is almost always different from the binning for a histogram of the data in Browse Results or other graphs. Even if you specify a histogram of 100 bins, they will still be different from the histogram bins. When @RISK is finding a mode, the bins all contain the same number of points and have different widths. On the histogram, the bins (bars) all have the same width and contain different numbers of points. This is how the mode can be far from the tallest bar in the histogram. A simple example is attached. 
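The narrowest-bin procedure just described can be sketched in Python; this is a hypothetical re-implementation for illustration, not Palisade's actual code.

```python
import random

def narrowest_bin_mode(data, k=100):
    """Estimate the mode of continuous data by the narrowest-bin method
    described above: sort the n points, split them into k bins of n//k
    consecutive points each, find the bin with the smallest width (the
    densest cluster), and return the mean of the points in that bin.
    Hypothetical re-implementation for illustration, not Palisade's code."""
    n = len(data)
    if n < 300:                      # keep at least three points per bin
        k = max(1, n // 3)
    xs = sorted(data)
    size = n // k
    bins = [xs[i * size:(i + 1) * size] for i in range(k)]
    narrowest = min(bins, key=lambda b: b[-1] - b[0])
    return sum(narrowest) / len(narrowest)

random.seed(0)
sample = [random.gauss(5, 1) for _ in range(10_000)]
print(narrowest_bin_mode(sample))    # lands near 5, the true mode
```

Because these equal-count, unequal-width bins differ from the equal-width bars of a histogram, the value this procedure returns need not sit under the tallest histogram bar.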
That example does show how changing the graph settings can reveal the mode, but that technique won't work on every distribution.

By the way, when you do distribution fitting, @RISK uses this same computation. That's how it gets the approximate mode of your sample data that it shows in the fit results window, for comparison with the fitted distribution. This computation is not used in any way in the process of fitting distributions to your data; it's purely for display of the comparison.

Last edited: 2015-05-01

6.10. Conditional Tail Expectation or Conditional VaR

Applies to: @RISK 5.x–7.x

How can I use @RISK to calculate the conditional tail expectation (conditional value at risk, CVaR) of a simulated output?

Beginning with @RISK 5.5, you can compute statistics for part of a distribution by value or by percentile; @RISK 5.0 can compute statistics for a distribution delimited by value only. To compute your statistics, you insert the property function RiskTruncate( ) in an @RISK statistics function such as RiskMean( ).

For example, suppose you have a simulated output in cell C11, and you want the conditional value at risk for the left-hand 5% tail. That is equivalent to the mean value of just the lowest 5% of the distribution, and you compute it like this:

@RISK 5.5 and later: =RiskMean(C11, RiskTruncateP( , 0.05) )
@RISK 5.0: =RiskMean(C11, RiskTruncate( , RiskPtoX(C11,0.05) ) )

You can also compute the expected value for the upper tail. For example, the upper 5% is above the 95th percentile, so you set the 95th percentile as a lower limit and compute the expected value of the 5% right-hand tail like this:

@RISK 5.5 and later: =RiskMean(C11, RiskTruncateP(0.95, ) )
@RISK 5.0: =RiskMean(C11, RiskTruncate( RiskPtoX(C11,0.95), ) )

The value will be approximate if you're calculating conditional tail expectation on a theoretical distribution.
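For simulated data, the left-tail conditional expectation these formulas report is simply the mean of the worst fraction p of iteration values. A generic Python sketch (illustrative names and data, not @RISK code):

```python
import random

def cvar_lower(values, p=0.05):
    """Conditional tail expectation of the left tail: the mean of the
    worst fraction p of outcomes. A generic sketch of what
    =RiskMean(C11, RiskTruncateP( , 0.05)) reports for simulated data."""
    xs = sorted(values)
    k = max(1, int(len(xs) * p))
    return sum(xs[:k]) / k

random.seed(42)
profits = [random.gauss(1000, 200) for _ in range(10_000)]
var_5 = sorted(profits)[int(0.05 * len(profits))]  # 5th percentile (VaR)
cvar_5 = cvar_lower(profits)
print(var_5, cvar_5)   # the tail mean is below the 5th percentile itself
```

Note that CVaR is always at or below VaR for the left tail, since it averages only the values below the percentile.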
See "About accuracy of theoretical statistics" in Statistics for Just Part of a Distribution. In @RISK help or the manual, see the section "Calculating Statistics on a Subset of a Distribution".

Last edited: 2017-09-01

6.11. Probability of an Interval

Applies to: @RISK 5.x–7.x

From a simulated output or input, I'd like to find the probability of the result occurring in an interval, between a lower and an upper limit.

An easy way is to click the output, click Browse Results, and adjust the sliders to the limits you're interested in. The probability then shows in the horizontal bar between the two sliders.

I really wanted this as a worksheet function.

No problem! The probability of the interval is the cumulative probability of the upper limit, minus the cumulative probability of the lower limit. Assuming your output is in cell DF1, and the limits are 100 and 200, the formula is:

=RiskXtoP(DF1,200) - RiskXtoP(DF1,100)

If the limits are in cells DL1 and DH1, then the formula is:

=RiskXtoP(DF1,DH1) - RiskXtoP(DF1,DL1)

Last edited: 2016-05-04

6.12. Semivariance, Semideviation, Mean Absolute Deviation

Applies to: @RISK for Excel 5.x–7.x

Can @RISK compute upper and lower semivariance, semideviation, and mean absolute deviation?

Yes, beginning with @RISK 7.5 you can use @RISK statistic functions to compute these quantities automatically. In all statistic functions, datasource can be the name of an input or output, in quotes, or a cell reference.

• Lower semivariance: RiskSemiVariance(datasource) or RiskSemiVariance(datasource, TRUE, simnumber)
• Upper semivariance: RiskSemiVariance(datasource, FALSE) or RiskSemiVariance(datasource, FALSE, simnumber)
(Upper semivariance plus lower semivariance equals variance.)
• Lower semideviation: RiskSemiStdDev(datasource) or RiskSemiStdDev(datasource, TRUE, simnumber)
• Upper semideviation: RiskSemiStdDev(datasource, FALSE) or RiskSemiStdDev(datasource, FALSE, simnumber)
(Lower and upper semideviation are square roots of lower and upper semivariance.
The sum of lower and upper semideviations doesn't equal the standard deviation.)
• Mean absolute deviation: RiskMeanAbsDev(datasource) or RiskMeanAbsDev(datasource, simnumber)
(Lower and upper mean absolute deviation are each half of the mean absolute deviation.)

But I have an earlier version of @RISK, and I'm required to use this version. Is there a workaround?

In earlier versions, you can do it yourself with user-defined functions in VBA, or by manipulating the iteration data with RiskData( ) functions. The attached file illustrates both approaches, and shows that they have the same result. (To call on @RISK in user-defined functions in VBA, you need @RISK Industrial or Professional. @RISK Standard Edition does not support automating @RISK.)

Lower and upper semivariance are computed in a similar way to variance: take the sum of squares of differences from the mean, and divide by the number of iterations minus 1. (The minus 1 is necessary to create an unbiased estimate of variance, because the simulation is a sample, not the whole population.) However, in computing lower semivariance, use 0 in place of squared deviations above the mean; and in computing upper semivariance, use 0 in place of squared deviations below the mean. Equations might make this clearer:

lower semivariance = Σ IF(xᵢ < x̄) · (xᵢ − x̄)² / (n − 1)
upper semivariance = Σ IF(xᵢ > x̄) · (xᵢ − x̄)² / (n − 1)

where n is the number of iterations, x̄ is the mean, the sums run over all n iterations, and IF(condition) has the value 1 if condition is true and 0 if it is false. Notice that values on the "wrong" side of the mean are not simply omitted; rather, they are replaced by zeroes, so the denominator of the semivariance is the same as the denominator of the variance. Though some authors replace n with the number of values lower (higher) than the mean for lower (upper) semivariance, this article follows Estrada, Rohatgi, and others. Thus the sum of lower and upper semivariance is the variance.

Lower and upper semideviation are found by taking the square roots of lower and upper semivariance.
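The semivariance definitions translate directly into code. A small Python sketch (generic, not Palisade code) that also verifies lower plus upper semivariance equals the ordinary sample variance:

```python
import statistics

def semivariances(xs):
    """Lower and upper semivariance as defined above: squared deviations
    on the 'wrong' side of the mean are replaced by zero, and both sums
    are divided by n - 1, so lower + upper equals the ordinary sample
    variance. Generic sketch, not Palisade code."""
    m = statistics.mean(xs)
    n = len(xs)
    lower = sum((x - m) ** 2 for x in xs if x < m) / (n - 1)
    upper = sum((x - m) ** 2 for x in xs if x > m) / (n - 1)
    return lower, upper

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # mean is 5
lo, up = semivariances(data)
print(lo, up, lo + up, statistics.variance(data))  # lo + up == variance
```

Values exactly at the mean contribute to neither tail, matching the IF(xᵢ < x̄) and IF(xᵢ > x̄) conditions above.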
The sum of lower and upper semideviations is of course different from the standard deviation of the full sample.

Lower and upper mean absolute deviation (MAD) are found by taking the sum of the absolute values of deviations from the mean, divided by the number of iterations. However, in computing lower MAD, use 0 in place of deviations above the mean; and in computing upper MAD, use 0 in place of deviations below the mean. Lower and upper mean absolute deviation are numerically equal for any simulated data set, and each is equal to half of the plain mean absolute deviation.

See also:

Additional keywords: Semi-variance, Semi-deviation

Last edited: 2017-06-30

6.13. Correlation Coefficient of Output Distributions

Applies to: @RISK for Excel, releases 5.x–7.x

I want to find the simulated correlation coefficient between two cells in my simulation. Is there any way to find this value?

In @RISK 5.5 and newer, the RiskCorrel( ) function can compute the correlation for you, either the Pearson correlation or the Spearman (rank-order) correlation. To find the simulated Pearson correlation, enter this function in a worksheet cell:

=RiskCorrel(cell1, cell2, 1)

To find the simulated Spearman (rank-order) correlation, use:

=RiskCorrel(cell1, cell2, 2)

The cell will show #VALUE initially, replaced with the coefficient when you run a simulation.

When you specify your desired correlation for two inputs, as opposed to computing the actual correlation in a simulation, @RISK applies Spearman correlation to those inputs. See Excel Reports a Correlation Different from What I Specified for more about this.

How was this done in older releases of @RISK?

These methods continue to work in newer releases of @RISK, though the RiskCorrel( ) function is simpler to use. In @RISK 5.0, you can display the correlation coefficient fairly easily by making a scatter plot.
In the Results Summary window, select one of the outputs and click the icon for "Create Scatter Plot" at the bottom of the dialog box. Then drag the other output from the Results Summary window onto the scatter plot. You get a scatter plot, the mean and standard deviation of both outputs, and the Pearson correlation coefficient. (Y is the first output you selected, and X is the second output you dragged.) You can even drag additional outputs to the Scatter Plot window, and they will be plotted as additional X's against the same Y, with their correlation coefficients displayed.

In @RISK 5.0, if you want the correlation coefficient to appear automatically in an Excel sheet after simulation, use the RiskData( ) function to insert data in a worksheet during simulation and use Excel's CORREL( ) function. Please see Placing Iteration Data in Worksheet with RiskData( ) and Sum of All Iteration Values for two examples using RiskData( ). If you have a large number of iterations, RiskData( ) may slow down the simulation for some models. If this is an issue for your particular model, you can remove the RiskData( ) functions and use a macro to save the simulation data. Please see Exporting Information During Simulation for an example.

Last edited: 2018-08-29

6.14. How @RISK Calculates Percentiles

Applies to: @RISK Excel 3.5 through 7.x
@RISK for Project 3.x, 4.x
RISKOptimizer 1.x, 5.x
@RISK Developer's Kit (RDK) 4.x
RISKOptimizer Developer's Kit (RODK) 4.x

How does @RISK calculate cumulative percentiles for simulation data?

Depending on the nature of the simulation data, @RISK will use one of two methods for calculating cumulative percentiles. When the simulation data appear to be discrete (samples are repeated in the data), every returned percentile is chosen from the simulation data.
Specifically, the software computes k = the smallest whole number greater than or equal to (your percentile target) times (number of iterations), and then the answer is the k-th smallest data value from the simulation. For a simplified example, suppose you request the 68th percentile from a simulation where there were ten iterations and the data points were {4, 7, 9, 13, 15, 19, 21, 25, 28, 30}. k = roundup(.68*10) = 7, so the 68th percentile is the 7th-lowest number, which is 21.

When the simulation data appear to be continuous (none of the samples are repeated in the data), @RISK will use linear interpolation to calculate percentiles where necessary. For example, when the desired percentile does not correspond exactly with a value in the data, @RISK will use linear interpolation between points in the data set to derive the percentile. See the attached spreadsheet demonstrating the linear interpolation.

Does @RISK's calculation correspond to the Excel function PERCENTILE.INC( ) or PERCENTILE.EXC( )?

You can specify any number from 0 to 1 inclusive as the second argument of RiskPtoX( ) and RiskTheoPtoX( ), so to that extent they are analogous to PERCENTILE.INC( ). However, it may be necessary to interpolate to find the value of a given percentile. Excel and @RISK may not necessarily return the same values, based on their different interpolation methods. (The literature shows numerous methods of interpolation.) The larger the number of iterations, the smaller should be any difference between the two.

RiskPtoX(A1,0) and RiskPtoX(A1,1) equal the smallest and largest iteration values of cell A1 in the latest simulation. RiskTheoPtoX(A1,0) returns the theoretical minimum of the distribution in A1 if it has a lower bound, or #VALUE! if there's no lower bound. RiskTheoPtoX(A1,1) returns the theoretical maximum, or #VALUE! if the distribution has no upper bound.

Last edited: 2017-09-27

6.15. Which Iteration Produced a Given Percentile?
Applies to: @RISK for Excel, all releases

I know how to find the value of a percentile, such as the 99th percentile. But how can I find which iteration produced the 99th percentile of a given input or output? I want to look at that whole iteration.

The easiest way is to open the Simulation Data window (x-subscript-i icon in the Results section of the @RISK ribbon), highlight the column for that input or output, click the sort icon at the bottom of the window, and sort in descending order. Then count down the appropriate number of iterations and you have the one you need. For example, if your simulation runs 1000 iterations, then your 99th percentile would be the 11th highest one, which is the highest of the bottom 990 iterations.

If this is a frequent need, you could automate the process with a RiskData( ) array function and a VBA macro. Please see Placing Iteration Data in Worksheet with RiskData( ).

Last edited: 2017-09-27

6.16. Placing Iteration Data in Worksheet with RiskData( )

Applies to: @RISK for Excel 4.x–7.x
RISKOptimizer 1.x, 5.x

The manual and the help file say that I can get data from a range of iterations by entering RiskData( ) as an array formula. What does that mean, and can you give an example?

If you want data from all iterations of all inputs and outputs, you can use the Simulation Data window (x-subscript-i icon) or select the Simulation Data Excel report. If you want only selected variables or iterations, use the RiskData( ) worksheet function. You cannot fill an array with RiskData( ) by typing the formula in one cell and dragging, the way you usually would. Instead, follow this procedure:

1. Select the row or column array where you want to place the input or output value for each iteration.
2. In the formula bar, type your formula, which involves RiskData( ).
For instance, to capture the first 100 iterations of the input called The_Input, type =RiskData("The_Input",1). To capture iterations 151 through 250 of cell A4, type =RiskData(A4,151). An optional third argument to RiskData( ) lets you specify the simulation number, if you're running RISKOptimizer or multiple simulations in @RISK.
3. Instead of Enter, press Ctrl-Shift-Enter to create an array formula for this array. Though Excel puts curly braces { } around the formula, you can't create an array formula by typing curly braces yourself.
4. If you haven't run a simulation, you'll see lots of #N/A appear in the array. These will change to numbers when you run your simulation. To see this happen, open the attached example and run a simulation.

See also: In @RISK 6.0 and later, if you just have an occasional need you can get all the iterations for one input or output in the Browse Results window. See All Iterations of One Input or Output.

Last edited: 2015-06-30

6.17. Exporting Information During Simulation

Applies to: @RISK for Excel 4.x–7.x
RISKOptimizer 1.x, 5.x

While the simulation is running, how can I store intermediate results outside the Excel workbook?

All editions of @RISK offer the ability to run a user-written macro after every iteration. The Professional and Industrial editions of @RISK include the Excel Developer Kit (XDK), a complete library of commands and functions that let you control every aspect of @RISK in your spreadsheet.

You can export data to a text file during simulation by using a VBA macro. The attached sample workbooks show one way to do this. There are two workbooks, one for @RISK 4.x and one for 7.x. (You can adapt the 7.x workbook to 6.x by changing the references.) Caution: In Excel 2007 and later, watch for a security warning when you open this workbook, and enable the macros.

To create a custom macro, use the @RISK VBA functions listed in the online manual. Check the @RISK Help File when @RISK is running, or click Windows Start » Programs » Palisade » Online Manuals » @RISK Macros.
(In @RISK 5.5.1 and later, run @RISK and select @RISK's help, then Developer Kit.)

Performing a simulation that executes a macro after the recalculation of each iteration requires two steps:

1. Create a new macro that writes the desired information to a numbered text file. In our example, this macro is named WriteToTxtFile( ).
2. Create a main macro that sets the simulation settings and runs the simulation. In this macro you must:
   1. Set the property for running a macro after iteration recalc to True.
   2. Store the name of the macro you created in step 1 above.
   3. Open the text file where your simulation data will be written.
   4. Execute the VBA method to start the simulation.

You can either run the simulation by clicking View » Macros » View Macros » macroname » Run (in Excel 2003 and earlier, Tools » Macro » Macros » macroname » Run), or create a button as in the example, so that the macro runs when you click the button.

Last edited: 2018-01-23

6.18. All Iterations of One Input or Output

Applies to: @RISK 4.x–7.x

I want to see all the iterations of one particular input or output. Is there any way other than bringing up the Simulation Data window or generating a Simulation Data report? That produces all inputs and outputs, but I need just one or two.

Yes, there are two methods.

In @RISK 6.0 or newer, click Browse Results and select the input or output cell. In the upper right corner of the Browse Results window, click the drop-down arrow and select Data Grid. (See the attached illustration.) You will then have all the iteration data for this input or output in a column. You can copy/paste it to an Excel sheet or another program if you wish.

In @RISK 4 and later, you can place iteration data in your worksheet with RiskData( ). The RiskData functions are automatically updated at the end of a simulation.

Last edited: 2015-06-30

6.19. Sum of All Iteration Values

Applies to: @RISK for Excel 4.x–7.x

I see a RiskMean( ) function, but no RiskSum( ).
I would like a sum that totals all of the values that occur in a given cell during a simulation. Is this possible within the framework of @RISK?

The accompanying workbook shows three methods:

1. The simplest, Method 1, multiplies the simulated mean by the number of iterations obtained from the RiskSimulationInfo( ) function available in @RISK 6.0 and later.
2. For earlier versions of @RISK, you can use Method 2. It's the same technique, but with a helper cell to calculate the number of iterations.
3. You can also use Method 3: insert all the data in the worksheet, using the technique in Placing Iteration Data in Worksheet with RiskData( ), then sum the data. This method has the advantage of being a little more transparent, but it uses a lot more space in your workbook, and it fails if you change the number of iterations to more than the number you originally planned.

Last edited: 2015-06-30

6.20. How Many Iterations Were within a Certain Range?

Applies to: @RISK 4.x–7.x

I'd like to know how many iterations, or what percent of iterations, had a particular input or output between two limits. I know I could do this through filtering, or through moving the delimiters in a Browse Results graph, but is there a worksheet function?

Yes, you can do this with worksheet formulas. The basic idea is that

(number of iterations in range) = [ (right percentile) − (left percentile) ] × (total number of iterations)

@RISK provides the pieces you need for that formula, and the process is the same for an input or an output. Please take a look at the attached example. The numbers you can change are in blue on white; the formulas in other cells can be viewed but not changed.

The method, starting from x-values: Suppose you'd like to know how many iterations saw a profit (cell C9) between $22,000 and $23,000 (cells F9 and G9). Referring to the formula above, you see that you need to ask which percentiles those limits represent. RiskXtoP will tell you that.
• In cell H9, RiskXtoP(C9,F9) asks what percentage of iterations are below the value in F9.
• In cell I9, RiskXtoP(C9,G9) asks what percentage are below the value in G9. The percentage between F9 and G9 is the difference of those percentiles. (The percentage between F9 and G9 is the part that is below G9 but is not also below F9.)
• Cell J9 contains that difference, =I9−H9. Multiply that by the total iterations from cell H2, and round to an integer. (See Placing Number of Iterations in the Worksheet.) You need to round the result, so that you don't end up with fractional iterations, because the RiskXtoP functions interpolate their values between the iterations that actually occur in any particular simulation.
• Cell K9 contains =ROUND(J9*$H$2,0).

The formula was "exploded" into multiple cells to show the steps. But you can do all of it in one formula; see columns M–P.

The method, starting from percentiles: If you want the number of iterations between two stated percentiles, you don't need the RiskXtoP functions. Rows 11–13 show the formulas broken down into bits, and in one cell.

See also: For tracking logical values instead of computed inputs or outputs, see How Many Times Did an Event Occur?

Last edited: 2018-05-08

6.21. How Many Times Did an Event Occur?

Applies to: @RISK 6.x/7.x

We have a combined risk register that models the risks based on Monte Carlo sampling. I would like to get a table that shows how many times risk one, risk two, risks one and three, or risks one-two-three occurred. This is a special case of a more general problem: in how many iterations did a given event or combination of events occur? Or, instead of how many iterations, you might want to know in what percentage of iterations some event occurred.

The basic technique is to construct a cell formula that is 1 when a desired event occurs and 0 when it doesn't.
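The same indicator idea works in any language. Here is a generic Python sketch, with two made-up simulated variables standing in for the cells G7 and P22 used in the rules that follow:

```python
import random

# Generic illustration of the indicator idea: a quantity that is 1 when
# the event occurs in an iteration and 0 when it doesn't, so its sum is
# the count of occurrences and its mean is the probability. The cells
# G7 and P22 become two made-up simulated variables here.
random.seed(7)
iterations = 10_000
g = [random.uniform(80, 160) for _ in range(iterations)]
p = [random.uniform(0, 20) for _ in range(iterations)]

# AND: multiply the indicators, as in =(G7>=120)*(P22<11)
both = [(gi >= 120) * (pi < 11) for gi, pi in zip(g, p)]
# OR: as in =0+OR(G7>=120, P22<11)
either = [((gi >= 120) or (pi < 11)) + 0 for gi, pi in zip(g, p)]

print(sum(both), sum(both) / iterations)      # count and probability
print(sum(either), sum(either) / iterations)
```

With independent uniform inputs as above, the AND probability is about 0.5 × 0.55 = 0.275 and the OR probability about 0.775, which the simulated fractions approach.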
Constructing that formula is not hard if you know these rules:

• Use parentheses around each condition, to avoid problems with order of operations. If you're tracking when G7 is 120 or more, for instance, code it as (G7>=120), not plain G7>=120.
• If you're tracking a simple event, as opposed to a combination, add a 0 to it: =(G7>=120)+0, not =(G7>=120). This doesn't affect the final results, but it keeps this cell from showing as TRUE or FALSE when combinations show as 1 or 0.
• To join conditions with AND, simply multiply them. =(G7>=120)*(P22<11) is 1 (true) when G7 is at least 120 and P22 is less than 11. If G7 is below 120 or P22 is at least 11, or both, the formula is 0 (false).
• To join conditions with OR is a little bit more complicated. You can't just use + because the expression would then be 2, not 1, if both conditions are true. Probably easiest to read is this format: =0+OR(G7>=120,P22<11). This returns 1 (true) if G7 is at least 120 or P22 is under 11, or 0 if G7 is under 120 and P22 is at least 11. You don't need parentheses around the conditions, because the comma separator avoids problems with order of operations.

An example is attached to this article. It uses part of a sheet from our standard Risk Register example, in rows 1 to 9. The green box tracks seven events, showing how to compute the percentage of iterations where each event occurred, as well as the number of iterations where each event occurred.

See also: This is simpler with numeric data, as explained in How Many Iterations Were within a Certain Range?

Last edited: 2017-03-30

6.22. Which Sensitivity Measure to Use?

Applies to: @RISK 5.x–7.x

@RISK gives me a lot of options for sensitivities in my tornado graph: correlation coefficients, regression coefficients, mapped regression coefficients, change in output mean, and so on. How do I choose an appropriate measurement in my situation?
After a simulation, the Sensitivity Analysis window is your handy overview of sensitivities for all outputs. In the Results section of the @RISK ribbon, click the small tornado to open the Sensitivity Analysis window. (You can also see most of this information by clicking the tornado at the bottom of a Browse Results window for an output.)
Change in output statistic: The change in output statistic, added in @RISK 6, is an interesting, differencing approach to sensitivity. You can select mean, mode, or a particular percentile: click the % icon at the bottom of the Sensitivity Analysis window, or the tornado icon at the bottom of the Browse Results window, and select the statistic you want. The Change in Output Statistic tornado displays a degree of difference for just the two extreme bins, but the spider shows more information: the direction of the relationship, and the degree of difference for every bin.
Regression or correlation coefficients: Regression coefficients and regression mapped values are just scaled versions of each other. Correlation coefficients are rank-order correlation, which works well for linear or non-linear (monotonic) relationships. In the Sensitivity Analysis window, when you select Display Significant Inputs Using: Regression (Coefficients), @RISK will display R² ("RSqr") in each column. You can use R² to help you decide between correlation coefficients and regression coefficients:
• A low value of R² means that a linear regression model is not very good at predicting the output from the indicated inputs. In this case, you would focus more on correlation coefficients, because rank-order correlation doesn't depend on the two distributions having similar shape or being linearly related.
• If R² is high, a linear regression model is a good fit mathematically. But even here, you should look at the variables to assure yourself that they are reasonable and to rule out a problem with multicollinearity.
This would be signaled, for example, when @RISK reports a significant positive relationship between two variables in the regression analysis, and a significant negative correlation between those variables in the rank-order correlation analysis. For a more detailed explanation of correlation and regression, see Correlation Tornado versus Regression Tornado and How @RISK Computes Rank-Order Correlation.
Contribution to variance: R² is a measure of the percentage of the variance in a given output that can be traced to the inputs — as opposed to measurement errors, sampling variation, and so on. @RISK adds input variables to a regression one by one, and each variable's contribution to variance is simply how much larger R² becomes as that input is added. In other words, a regression equation should predict output values from a set of input values. A variable's contribution to variance measures how much better the equation becomes as a predictor when that input is added to the regression. Unlike a regression coefficient, this measurement is unaffected by the magnitude of the input. For more about this, see Calculating Contribution to Variance.
See also: All Articles about Tornado Charts
Last edited: 2018-08-15

6.23. Regression Coefficients in Your Worksheet
Applies to: @RISK for Excel 5.x–7.x
The tornado diagram shows sensitivity of a simulated output to each input in units of standard deviation. Can I get the actual regression coefficients?
You can do a calculation from the coefficients that are displayed in the tornado, as explained in Interpreting Regression Coefficients in Tornado Graphs. You can also use a worksheet function to obtain the regression coefficients directly, with no need for further calculation. The function is RiskSensitivity( ). In the function, set the fifth argument to 3 (result type = equation coefficient). @RISK will then return the actual coefficient that would appear in a multiple regression.
Example: Suppose you're interested in the sensitivities of the output in cell A1. Then the function =RiskSensitivity(A1, , 1, 1, 1) will tell you the name (fifth argument = 1) of the input that has the largest impact or highest rank (third argument = 1), and the function =RiskSensitivity(A1, , 1, 1, 3) will tell you the unscaled regression coefficient (fifth argument = 3) of that input for the output in A1. For instance, if that RiskSensitivity( ) function returns 0.72, it means that a one-unit increase in that input corresponds to a 0.72-unit increase in the output. Technical note: The rank number (third argument) can be anything from 1 to the number of @RISK inputs in the model; if it is too large the function returns #VALUE. However, @RISK only returns sensitivities for the inputs whose coefficients are significantly different from zero (to a maximum of 100 inputs). For all other inputs, @RISK returns zero as a coefficient. Beginning with @RISK 6, you can also get the constant term of the regression equation, by setting the RiskSensitivity( ) function's fifth argument to 4 (result type = equation constant). To find the regression constant in older versions of @RISK, please see Regression Equation from Calculated Sensitivities. The attached example shows both types of regression tornado graphs, with (scaled) coefficients and with mapped values. It also shows how to use worksheet formulas to get those two plus the actual coefficients of the regression equation, including the constant term. See also: All Articles about Tornado Charts Last edited: 2017-06-09 6.24. Regression Equation from Calculated Sensitivities Applies to: @RISK for Excel 4.x–7.x @RISK for Project 4.1 I know that @RISK for Excel and for Project display regression sensitivities in a tornado diagram, and @RISK for Excel calculates them in the worksheet function RiskSensitivity. But how can I assemble them into a regression equation? What's the constant term? 
Is the regression equation more accurate for some input values than for others? First, make sure you have the actual regression coefficients in units of output per unit of input. • These can be obtained directly through the RiskSensitivity worksheet function, as explained in Regression Coefficients in Your Worksheet. • If you're using regression values or mapped regression values displayed in a tornado diagram, they must be descaled as explained here: Interpreting Regression Coefficients in Tornado Graphs. Your regression equation is Y = b[0] + b[1]X[1] + b[2]X[2] + b[3]X[3] + ... In this equation, Y is the @RISK output. b[0] is the constant term (see next paragraph). The other b's are the regression coefficients, descaled if necessary (see above), and the X's are the @RISK input variables. What is the value of the constant term, b[0]? In @RISK 6.0 and newer, you can get this from RiskSensitivity( ) with a result type of 4. In earlier versions of @RISK, you have to calculate it. @RISK doesn't reveal this directly, but you can compute it from the other information. The line of best fit (the regression line) is guaranteed to include the point where all the inputs and the output have their mean values. Get those mean values from the Results Summary window or the Detailed Statistics window, and substitute in the regression equation to solve for the constant term: b[0] = Ybar - b[1]Xbar[1] - b[2]Xbar[2] - b[3]Xbar[3] - ... where Ybar is the mean value of the output and the Xbar's are the mean values of the inputs. When you have the constant term, you have the last piece of the regression equation. Where is this equation valid? The coefficients are global properties of the overall set of data, so the equation is valid through the entire region of these input values. That is, each regression coefficient refers to the line that fits best through all the points, weighted equally. The regression equation takes all points (iterations) of all variables equally into account. 
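This passes-through-the-means identity is easy to check numerically. Here is a hedged Python sketch with a made-up, exactly linear "simulation": the coefficients b1 and b2 are taken as given (the way RiskSensitivity( ) with result type 3 would supply them), and the constant term is known by construction to be 5.

```python
import random
import statistics

random.seed(42)

# Made-up "simulation": two inputs, and an output that is exactly linear
# in them, Y = 5 + 2*X1 - 3*X2, so the constant we should recover is 5.
x1 = [random.uniform(0, 10) for _ in range(1000)]
x2 = [random.uniform(5, 15) for _ in range(1000)]
y = [5.0 + 2.0 * a - 3.0 * b for a, b in zip(x1, x2)]

# Regression coefficients, here known by construction rather than fitted.
b1, b2 = 2.0, -3.0

# The fitted line passes through the point of means, so:
#   b0 = Ybar - b1*Xbar1 - b2*Xbar2
b0 = statistics.fmean(y) - b1 * statistics.fmean(x1) - b2 * statistics.fmean(x2)
print(round(b0, 6))  # recovers the constant term, 5.0
```

With real simulation data the fit is not exact, but the same substitution of mean values yields the constant term, exactly as the formula above describes.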
What if data are skewed? Just as with a simple two-variable X-Y regression, that will affect the residuals. If one region of the cloud of points is markedly different from another, the regression equation does the best it can overall, which may mean less than the best for particular regions. In that case the residuals would be large in some regions and small in others.
One caveat: All of this assumes that you have captured all the inputs that have any meaningful impact on this output. If you have only some of the significant inputs, then of course the regression line will lose some of its effectiveness.
See also: All Articles about Tornado Charts
Last edited: 2015-06-30

6.25. Placing Change in Output Statistics in Worksheet
Applies to: @RISK 6.1.1 and later
Can @RISK produce a tornado graph showing change in output mean, percentile, or mode? Is there any way to write those statistics to my worksheet?
Yes, you can use the RiskSensitivityStatChange function. This is documented in @RISK help. The attached workbook shows examples of retrieving change in output mean, change in output mode, and change in output percentile. (You will notice that inputs often rank differently depending on which measure you use.)
Last edited: 2021-03-05

6.26. Calculating Contribution to Variance
Applies to: @RISK 7.5 and newer
The help file describes Contribution to Variance this way: These values are calculated during the regression analysis. The sequential contribution to variance technique calculates how much more of the variance in an output is explained by adding each of a sequence of inputs to the regression model. The selection of the variables and the order in which they are added is determined by the stepwise regression procedure. As with any regression technique, when input variables are correlated, the regression can pick any of the correlated variables and ascribe much of the variance to it and not inputs correlated with it.
Thus, caution in interpreting the contribution to variance results is critical when inputs are correlated. Can you expand on that? @RISK runs a stepwise regression on an output, to find several measures of sensitivity to the input distributions in the model. Stepwise regression is an iterative process where input variables enter into the regression sequentially. From the inputs that have not yet entered the regression, the next one to enter is the one with greatest significance to the output. However, rerunning the regression with that additional input variable can change the results for inputs that entered earlier. If an input no longer contributes significantly, it will leave the regression. After performing the stepwise regression, @RISK performs a second regression, this time a forward regression. Variables enter this regression in the same order as before, but only the ones that did not leave the original stepwise regression; and no variables leave. @RISK records the change in R² when each input enters the second, forward regression. (R² is between 0 and 1, and is a measure of how effectively the regression predicts output values. R² is the proportion of the output's total variance that is associated with input variables; 1–R² is the proportion associated with measurement errors, sampling variation, and random variations in general.) The change in R² when an input enters is that input's percentage contribution to the total variance of the output. It's shown in the Contribution to Variance tornado graph. You can also place those numbers in your worksheet. The total of the percentages given by the worksheet functions will equal R². Because the number of bars on a tornado graph is limited, the total in the graph will be less than R² if not all contributing inputs fit on the graph. 
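@RISK's own numbers come from the stepwise-then-forward regression just described. The following plain-Python sketch only illustrates the change-in-R² idea in the simplest special case, uncorrelated inputs, where each input's increment as it enters a forward regression is just its individual R² against the output. (The model and its coefficients are made up for illustration.)

```python
import random
import statistics

random.seed(7)
n = 20_000

# Two independent inputs; by construction x1 explains far more of the
# output's variance than x2 (theoretical shares: 9/10.25 and 1/10.25).
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 0.5) for _ in range(n)]
y = [3 * a + 1 * b + e for a, b, e in zip(x1, x2, noise)]

def r_squared(x, ys):
    """Share of Var(y) a one-variable regression on x explains (Pearson r, squared)."""
    mx, my = statistics.fmean(x), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, ys)) / len(x)
    return cov * cov / (statistics.pvariance(x) * statistics.pvariance(ys))

# With *uncorrelated* inputs, each input's change in R-squared as it enters
# a forward regression is just its individual R-squared, so the
# contributions to variance simply add up.
contrib1, contrib2 = r_squared(x1, y), r_squared(x2, y)
print(f"x1: {contrib1:.1%}  x2: {contrib2:.1%}  total R²: {contrib1 + contrib2:.1%}")
```

The total stays below 100% because the noise term's share of the output variance is not attributable to any input, matching the 1−R² remainder described above.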
A word on correlated variables: Some correlated variables may leave the first, stepwise regression, because some of their contribution to the output's variance overlaps with the contribution of the other correlated variables, and thus they don't add significant predictive power to the regression. In that case, they won't be part of the second, forward regression, and their contribution to variance is zero. The stronger the correlation, the stronger the tendency to omit some correlated variables. It's not easy to predict which variable is excluded in such cases; it could depend on slight changes in samples from one simulation to the next. But in that scenario it doesn't make much difference which of the correlated variables are used.
See also: All Articles about Tornado Charts
Last edited: 2017-10-24

6.27. Placing Contribution to Variance in Worksheet
Applies to: @RISK 7.5.0 and newer
I like the Contribution to Variance tornado, but how can I get those values into my worksheet?
When the new graph was created in @RISK 7.5.0, new values were added to the arguments of the RiskSensitivity function. For contribution to variance, follow these patterns:
• RiskSensitivity(output, , k, 4, 1) returns the name of the k-th most significant input.
• RiskSensitivity(output, , k, 4, 6) returns the percentage of total variance contributed by the k-th most significant input.
• RiskVariance(output) * RiskSensitivity(output, , k, 4, 6) returns the actual variance contributed by the k-th most significant input, as opposed to the percentage of variance contributed.
The first three arguments to the RiskSensitivity function are the output cell reference or name, the simulation number (omitted = simulation 1), and the input rank (>=1, where 1 selects the input with greatest effect). The fourth argument is 4 for contribution to variance. The fifth argument is 1 for the name of the input with that rank, or 6 for the percentage of variance contributed.
Please open the attached workbook and run a simulation.
The contributions to variance will appear in cells O9:Q15. The tornado graph is also shown for reference.
See also: All Articles about Tornado Charts
Last edited: 2018-08-11

6.28. Confidence Intervals in @RISK
Applies to: @RISK 5.x–7.x
How can I compute a confidence interval on a simulated input or output in @RISK?
People don't always mean the same thing by "confidence interval" in the context of a simulation. Some want to estimate the mean of a distribution, and others want to know the middle x% of values.
Prediction Interval
Some people use "confidence interval" to mean the middle x% of the simulated data values, also known as a prediction interval. For instance, a 95% confidence interval by this definition would be the 2.5 percentile through the 97.5 percentile. @RISK can find these percentiles for you directly, with the RiskPtoX function. The downloadable workbook PredictionInterval.xls (attached) shows the calculation.
Confidence Interval about the Mean
Some people mean the confidence interval that is taught in statistics classes, an estimate of a "true population mean". The idea here is that the simulation is treated as a sample from the complete distribution, which contains infinitely many values. Your simulated result has a mean, the mean of a sample from the distribution, but if you repeated the simulation you'd get a different mean. What you want is a range that estimates the true mean of the distribution, with x% confidence in that range. This confidence interval is the simulated mean plus or minus a margin of error. In turn, the margin of error is a critical t or z times the standard error. But the estimated standard error depends on your sampling method, Latin Hypercube or Monte Carlo.
Confidence Interval in a Worksheet Function
Beginning with @RISK 7.5, you can use the RiskCIMean( ) function to place the lower or upper bound of a confidence interval in your worksheet.
=RiskCIMean(A1,.95) or =RiskCIMean(A1,.95,TRUE) gives you the lower bound for the 95% confidence interval about the mean of cell A1, and =RiskCIMean(A1,.95,FALSE) gives you the upper bound. If you prefer, you can use the name of an input or output, instead of a cell reference. The confidence interval is computed using RiskStdErrOfMean( ), which equals the simulated standard deviation divided by the square root of the number of iterations. That's accurate if you're using Monte Carlo sampling. However, that same standard error is too large when you're using Latin Hypercube sampling. In turn, the larger standard error makes the confidence interval wider than necessary, possibly much wider than necessary. Thus, the RiskCIMean( ) function makes a conservative estimate under Latin Hypercube sampling. A truer estimate would require running multiple simulations, as explained below, which is not practical in a worksheet function. Confidence Interval with Monte Carlo Sampling The standard error is the simulated standard deviation divided by the square root of the number of iterations. The bounds of the confidence interval are therefore sample_mean ± z[critical] × standard_dev / sqrt(sample_size) (Critical z is easier to compute and is often used instead of critical t. For 100 iterations or more, critical t and critical z are virtually equal.) To find this type of confidence interval, @RISK offers several auxiliary functions but no single "confidence interval" function. The attached workbook ConfidenceInterval_MC.xlsx shows how to calculate this confidence interval using the @RISK statistics function. This worksheet is a proof of concept, and therefore the calculations are spread over several cells to show every step. In production, you would probably combine the calculations into a couple of cells, or put them into a user-defined function. 
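The arithmetic in that workbook can be sketched in a few lines of Python. This is a hedged stand-in, not @RISK's code: a normal sample plays the role of the simulated output, and 1.96 is the usual two-sided 95% critical z.

```python
import random
import statistics

random.seed(123)

# Stand-in for one Monte Carlo simulation of an output (10,000 iterations).
iters = [random.gauss(1_000_000, 250_000) for _ in range(10_000)]

n = len(iters)
mean = statistics.fmean(iters)
std_err = statistics.stdev(iters) / n ** 0.5  # stdev / sqrt(iterations)
z_crit = 1.959964                              # two-sided 95% critical z
margin = z_crit * std_err

lo, hi = mean - margin, mean + margin
print(f"95% CI for the mean: {lo:,.0f} to {hi:,.0f}")
```

Note the margin shrinks like 1/sqrt(n): quadrupling the iterations halves the width of the interval, which is the basis of the iteration-count planning discussed next.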
To predict how many iterations will be needed to restrict the confidence interval to a specified width, please see How Many Iterations Do I Need?
Confidence Interval with Latin Hypercube Sampling
(For computing confidence intervals based on Latin Hypercube sampling, we rely on Michael Stein, "Large Sample Properties of Simulations Using Latin Hypercube Sampling", Technometrics 29:2 [May 1987], pages 143–151, accessed 2016-06-28 from https://r-forge.r-project.org/scm/viewvc.php/*checkout*/doc/Stein1987.pdf?revision=56&root=lhs.)
The simulated sample means are much less variable with Latin Hypercube than with Monte Carlo sampling. (See Latin Hypercube Versus Monte Carlo Sampling.) Therefore:
• standard_dev/sqrt(sample_size) over-estimates the standard error of the mean, quite possibly by a large amount.
• A confidence interval using that standard error will therefore be very conservative: the interval and the margin of error will be much wider than necessary.
• The RiskStdErrOfMean( ) and RiskCIMean( ) worksheet functions, as mentioned above, use that traditional calculation, and therefore they also overstate the standard error and produce an overly wide confidence interval.
We recommend Latin Hypercube sampling, and it's the default in @RISK, because it does a better job of simulating your model than traditional Monte Carlo sampling. Just be aware that the confidence intervals that you or @RISK compute don't take the increased accuracy of Latin Hypercube into account. It may be enough just to bear in mind that the confidence intervals are bigger than necessary. But if you need confidence intervals that accurately reflect Latin Hypercube sampling, here is how you can compute them.
If the number of iterations is large relative to the number of input variables, and certain other conditions are met, the distribution of simulated sample means for each output will be approximately normal.
Then you can find the standard error, margin of error, and confidence interval by this procedure:
1. In Simulation Settings » Sampling » Multiple Simulations, set "Use different seeds". Set a number of iterations in each simulation that is large relative to the number of input variables.
2. Run several simulations.
3. Each simulation will have a mean, which we can call x-bar. Collect the simulated means, and take the mean of those x-bars. This is your estimate for the true mean, and will be the center of your confidence interval.
4. Compute the standard deviation of the group of x-bars, and divide by the square root of the number of simulations (not iterations). This is the estimated standard error of the mean for Latin Hypercube sampling. Since the standard deviation of those simulated means is much less than the standard deviation of the iterations within any one simulation, this standard error will be much less than the standard error for Monte Carlo sampling.
5. Compute your critical t in the usual way, with degrees of freedom set to the number of simulations minus 1, not the number of iterations minus 1. For instance, with 10 simulations, critical t is 2.26 for a 95% confidence interval. (Since the degrees of freedom are low, use t and not z.)
6. Multiply critical t from step 5 by the standard error from step 4. This is the margin of error. Your final confidence interval is (mean of x-bars) ± t[critical] × standard_error
The attached workbook ConfidenceInterval_LH.xlsx shows the calculation. The model is the same one that was presented above for Monte Carlo sampling. In the Monte Carlo example, there were 10,000 iterations in one simulation, and the standard error was on the order of $550,000. In the Latin Hypercube example, there are 1000 iterations in each of 10 simulations, totaling the same 10,000 iterations, but the standard error is much smaller, on the order of $5,000 instead of $550,000.
Last edited: 2017-08-02

6.29.
Customizing the Quick Reports Applies to: @RISK 5.x–7.x I would like to make some changes in the layout or contents of the Quick Reports; how can I do that? What is the best way to reproduce the Quick Report? Is there any way to access the template file that generates it? Can I create customized forms of other graphs and reports? By design, the Quick Reports are not very customizable. (You can change the type of tornado graph, see Tornado Graph in Quick Reports.) The idea is that they should always be in the same layout to make them as quick to read as possible. But there are several ways you can get the same information in customized graphs. Option A (@RISK 7): Custom Reports New in @RISK 7.0, Custom Reports let you mix and match graphs and statistics tables. By default, one Custom Report is produced for each output, but the Custom Reports tab in the Excel reports dialog lets you choose to report only particular outputs. For more, see the "Custom Reports" and "Custom Report Outputs" topics in @RISK help. Option B: Copy/Paste This is fastest if you have a one-time need. You can store some customizations in Application Settings, but some can only be done manually for each graph. 1. Open the Browse Results window for the desired input or output, and select the type of graph you want. 2. Right-click the graph area and set your distribution format and other options. 3. Size the Browse Results window to your preference. 4. Right-click the graph area again, and select one of the Copy commands. 5. Click into your worksheet and press Ctrl-V for Paste. Option C: Report Templates You can create one or more templates for your own customized reports and use them instead of the Quick Reports, or in addition to the Quick Reports. Set up a template on a dedicated tab (worksheet) within your workbook. The worksheet name must have the form RiskTemplate_reportname, where reportname is the desired name of the report sheet. See Creating and Using Report Templates for more. 
Any Excel or @RISK formulas can be part of your template sheet. You can easily include statistics like means and percentiles through Insert Function » Statistic Functions » Simulation Results, and include graphs through Insert Function » Other Functions » Miscellaneous » RiskResultsGraph. This is an automated solution, and it's quick to set up, but when you use RiskResultsGraph only a few customizations are available. Option D (@RISK 6.2 and Newer): Visual Basic for Applications Beginning with @RISK 6.2.0, there is a more flexible alternative for placing graphs in your worksheet. The new RiskGraph object gives you many customizations, but you do need to write Visual Basic code to use it. A new Automation Guide (Help » Developer Kit (XDK) » Automation Guide) explains how to create some basic graphs. For many more options, with a listing of every property and method, see the XDK help file (Help » Developer Kit (XDK) » @RISK XDK Reference). A small example is attached, showing a RiskResultsGraph tornado and a RiskGraph tornado created through Visual Basic. (Run a simulation to see both of them. Depending on your screen resolution, one of them may hide part of the other, so that you'll need to move it.) For another example, see Placing Graphs in an Existing Worksheet with VBA. VBA automation is available in @RISK Professional and Industrial Editions only. Last edited: 2015-06-30 6.30. Status Column of Output Results Summary Report Applies to: @RISK for Excel 5.x–7.x In the Results Summary window, and in the Output Results Summary report in Excel, a column is labeled "Status". What do the numbers mean? If you have enabled convergence monitoring, this column tells you whether each output has converged by showing OK or a number. OK means the output has converged; a number is the estimated percent complete to convergence for that output. For further information, please see Convergence Monitoring in @RISK. Last edited: 2015-07-01 6.31. 
Changing Columns in Results Summary
Applies to: @RISK 5.x–7.x
I'd like to change the statistics that are displayed in the Results Summary window, or the Excel reports Input Results Summary and Output Results Summary. For example, I'd like to see the 10th and 90th percentiles rather than the 5th and 95th. Or I'd like to add the median or standard deviation to the columns. How can I do it?
Follow this procedure:
1. Run a simulation and in the Results section of the @RISK ribbon click Summary.
2. Right-click in the column headings and select Columns for Table.
3. Make whatever changes you wish, by adding or removing check marks (tick marks). To change the percentiles, click the "..." next to the 5% and 95%, and change them to whatever you wish.
4. Close the Results Summary window.
These changes will apply only to the Results Summary window and the Excel reports Input Results Summary and Output Results Summary created during the current session. To set these columns as defaults for all @RISK workbooks, both new and existing, follow this additional step:
5. Click Utilities » Application Settings. In the Windows section, you'll see that "Results Window Settings" is set to Automatic. Click on the word Automatic to make a down arrow visible. Click on that arrow and select "Set to Current Window Columns". Click OK.
Last edited: 2015-07-01

6.32.
But if you truly want to see more significant digits, you can get them from the @RISK statistics functions. These functions, including RiskMean( ) and RiskStdDev( ), return full double precision. You can put them in your worksheet. If you have a whole lot of them, for greater efficiency you could call them from a macro that you set to execute automatically at the end of simulation. Last edited: 2015-07-06 6.33. Detailed Statistics: Live or Static? Applies to: @RISK 4.x–7.x On the Detailed Statistics sheet, I enter a target percent (P) and the target value (X) doesn't update, or I enter an X and the P doesn't update. There are two Detailed Statistics sheets in @RISK: the Detailed Statistics report, which is prepared as an Excel worksheet, and the Detailed Statistics window, which is part of @RISK. Only the @RISK window is "live", meaning that when you enter an X or a P the other member of the pair changes automatically. The report in Excel is static and does not update. Last edited: 2015-07-06 6.34. Detailed Statistics: Setting Default Targets Applies to: @RISK 6.x/7.x In the Detailed Statistics window after a simulation, @RISK gives me the 5th, 10th, 15th, ..., 95th percentiles. I can get additional percentiles in the Target section below that, but is there a way to make them appear automatically? Open Utilities » Application Settings. In the Windows section, find the Detailed Stats Window Targets line. Click to the right of Automatic, click the drop-down arrow, and enter your desired target percentiles in the form 1, 2.5, 97.5, 99 You can specify up to ten percentiles in this way, with or without % signs. @RISK will display these percentiles, in addition to its standard ones, in the Detailed Statistics window and on the Detailed Statistics report. Last edited: 2015-09-30 6.35. Some Iterations Show Error in Data Window. What Can I Do? 
Applies to: @RISK for Excel 5.x–7.x
When I run my simulation and click the x-subscript-i icon to check the @RISK Data window, I see "Error" for some iterations in one or more outputs. What does that mean? There are no #N/A or #VALUE errors in my workbook.
In the @RISK Data window, each row is an iteration and each column is an @RISK input or output. "Error" in the @RISK Data window means that the formula in that cell (column heading) has an Excel error in that iteration (row heading). But the problem may or may not be in the formula in that cell. It might instead be in a formula in a cell that is referenced by that cell. (In Excel, when any cell has an error status, all the cells that use it in formulas share that error status.)
How can you have errors in particular iterations when there are no errors in the worksheet as displayed when a simulation is not running? For example, suppose you have RiskNormal(10,3) in one cell, and in another cell you take the square root of the first cell. The static value of the RiskNormal( ) is 10, so when a simulation is not running you won't see any error. But during an iteration, occasionally the RiskNormal( ) will return a negative value, and the square root of a negative value returns a #NUM error. If the cell that contains the square-root function, or any cell that depends on it, is an @RISK output, then you will see an error in the Simulation Data window for that iteration.
To find the source of the error: In the Results section of the @RISK ribbon, click the Simulation Data icon, the small icon showing x-subscript-i. The @RISK Data window opens, showing all outputs in columns, and then all inputs. Locate your output, then find an Error indication. Click on it, and then click the footprint or shoeprint icon at the bottom of the window. (The tool tip, if you hover your mouse over the icon, is "Update Excel with values from the selected iteration". If the button is grayed out, see Footprint Button Grayed Out.)
@RISK will put your workbook into the state it was in during that iteration. Then you can check the error cell and trace back through the formulas till you find the source of the error. (You may need to minimize the @RISK Data window, or grab its title bar with your mouse and move it out of your way.) You can click on other iterations in the @RISK Data window to display other iterations of the model. (Actually, @RISK sets all inputs to the values they had during that iteration, then recalculates the workbook to let Excel fill in the outputs. If shoeprint mode shows different output values from the ones shown in the Data Window for the same iteration, see Random Number Generation, Seed Values, and Reproducibility.)
When you have found the problem, click the footprint icon again to return the workbook to its normal state, or simply close the Data Window.
Additional keywords: Shoeprint mode, footprint mode
Last edited: 2019-02-15

6.36. Additional Export Data Options
Applies to: @RISK 8.2 and newer
Our new @RISK 8.2 release includes two new ways to get simulation data: exporting simulation data from graph windows, and a simulation data report.
Export Simulation Data from Graph Windows
The Browse Results, Scatter Plot, and Summary Graphs windows now have the option to Export or Copy the simulation data for the displayed inputs or outputs.
Simulation Data Report
The new simulation data report lets you select inputs and/or outputs to generate a report with their simulation data in an Excel worksheet.
Last update: 2021-07-29

6.37. Writing Simulation Data to Excel
Applies to: @RISK 8.2 onward
I want to get all the simulation data for a specific output or input; how can I do that?
You have two options for getting this information. The first is through the Browse Results window. In the bottom right-hand corner there is an Export button. From here you can either copy the data and paste the values into Excel, or export the values to Excel.
Copying will give you only the values; exporting will give you more information, including the distribution name and location.

The second option is using a built-in report. In 8.2 there is a new report called Simulation Data. In this report you can either specify which distributions you are interested in, or select all of them. You can also choose to include the thumbnail graphs or filters.

Last edited: 2021-07-23

7. @RISK Simulation: Graphical Results

7.1. Interpreting or Changing the Y Axis of a Histogram

Applies to: @RISK for Excel 4.5–7.x, @RISK for Project 4.0 and 4.1

How do I interpret the y axis of the histogram that is created from the results of my simulation?

@RISK can show the histogram of your result data in two different formats, probability density or relative frequency. This is just a matter of different scaling for the y axis; the shape of the histogram doesn't change. The default histogram is probability density for continuous data, and relative frequency for discrete data.

TIP: Most people find relative frequency easier to understand than probability density. Especially for presentations, you may want to use the relative frequency format, or simply suppress the y axis. (See "How do I select the y axis format," below.)

How do I interpret relative frequency numbers on the y axis?

If a bar is as high as the 2% mark, for example, you know that 2% of all iterations fell within that bar. In other words, the height of each bar represents the proportion of the data (the fraction of all the iterations) in that bar. Since every data point must be in some bar of the histogram, the heights of all the bars add up to 100%. (Before @RISK 6.2, relative frequencies were shown as decimals, for example 0.02 rather than 2%, but you read them the same way.)

How do I interpret probability density numbers on the y axis?

This is harder. Unlike the case of relative frequency, the height of a histogram bar isn't meaningful on its own. What matters is the area of the bar.
Consider an example histogram of a Normal(80000,4800) input. Look at the bar for $74,000 to $76,000. Its width on the x axis is $2,000, and its height on the y axis is about 4.9×10^-5. As with any rectangle, you find its area by multiplying width and height: $2,000 × 4.9×10^-5 = 0.098, or 9.8%. The height of that bar by itself doesn't tell you anything, but in conjunction with the width it tells you that 9.8% of the iteration data for this input fell between $74,000 and $76,000. The total area of all the bars is 1 (or 100%).

When you're looking at a theoretical probability curve for an input, or in a fitted distribution, it will be presented as probability density. Again, the height of the curve doesn't tell you anything useful on its own. But the area under part of the probability density curve tells you what percentage of the data should fall within that region, theoretically. For example, the area under the curve to the left of $72,104 is 5.0% according to the bar at the top of the graph. This tells you that theory says 5% of the data for a Normal(80000,4800) should be less than $72,104.

Technically, the area under a part of the curve is the integral of the height of the curve, from the left edge of the region to the right edge. Thus, the 5% was found by integrating the height of the density curve from minus infinity to $72,104. Just as the total area of the bars in a histogram is 1, the total area under a probability density curve is 1.

How does @RISK create the y axis for a probability density histogram?

1. Divide the data into intervals — see Number of Bins in a Histogram.
2. Count the number of data points in each interval.
3. Divide the counts by the total number of data points.
4. Divide that result by the interval width as shown on the x axis, to obtain the height of the bar along the y axis.
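The four steps above can be sketched in code. This is an illustrative reconstruction, not @RISK's actual implementation; the default bin-count rule is the one given in the Number of Bins in a Histogram article, with "largest integer below 10 × log(n)" read here as floor(10·log10(n)).

```python
import math

def default_bins(n):
    """Default bin count (rule from 'Number of Bins in a Histogram';
    'largest integer below 10 x log(n)' read as floor(10*log10(n)))."""
    if n < 25:
        return 5
    if n <= 100:
        return round(n / 5)
    return math.floor(10 * math.log10(n))

def density_heights(data, bins=None):
    """Bar heights of a probability-density histogram, following steps 1-4."""
    k = bins if bins is not None else default_bins(len(data))
    lo, hi = min(data), max(data)
    width = (hi - lo) / k                        # step 1: equal-width intervals
    counts = [0] * k
    for x in data:                               # step 2: count points per bin
        counts[min(int((x - lo) / width), k - 1)] += 1
    fractions = [c / len(data) for c in counts]  # step 3: fraction of the data
    return [f / width for f in fractions]        # step 4: divide by bin width

# The defining property: total bar area (height x width) is 1.
data = [72, 74, 75, 75, 76, 78, 80, 81, 83, 90]
heights = density_heights(data, bins=5)
width = (max(data) - min(data)) / 5
print(sum(h * width for h in heights))  # total area ~ 1
```

Note how, as the section says, an individual height can exceed 1 when the data range is narrow, but the bar areas always sum to 1.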
In a probability density histogram or curve, the larger the numbers on the x axis, the smaller the numbers on the y axis must be to keep the total area at 1.

How do I select the y axis format?

In @RISK 5.x–7.x, click the histogram icon at the bottom of the Browse Results window and select Relative Frequency or Probability Density. If you prefer, you can suppress the numbers on the y axis entirely: right-click on any of the numbers on the vertical axis and select Axis Options. Then on the Y-Axis tab, under Display, remove the check mark by Axis.

TIP: If you find yourself changing the y axis often, you might want to change the default. In Utilities » Application Settings » Simulation Graph Defaults, change Preferred Distribution Format to Relative Frequency, or whatever you prefer.

In @RISK 4.x, right-click the histogram and select Format Graph...; then select the Type tab. In the Histogram Options section, click the drop-down arrow next to the Format field and choose Density or Relative Frequency.

Last edited: 2018-09-21

7.2. Number of Bins in a Histogram

Applies to: @RISK 5.x–7.x

When it makes a histogram, how does @RISK choose a number of bars? In other words, how many bins or intervals does @RISK divide the data range into? Can I change this?

By default, @RISK determines the number of bins from the number of iterations or data points n, as follows:

    n               Bins
    Less than 25    5
    25 to 100       Nearest integer to n / 5
    More than 100   Largest integer below 10 × log(n)

If you want to change this for a particular histogram, right-click the graph and select Graph Options. On the Distribution tab, the lower section lets you specify a number of bins (bars), as well as a minimum (left edge of the first bar) and maximum (right edge of the last bar). "Automatic" uses the calculation shown above, but you can specify a number from 2 to 200.

Last edited: 2015-07-15

7.3.
Setting Y Axis Maximum Not to Exceed 1

Applies to: @RISK 4.5–7.x, @RISK for Project 4.x

I want the y axis values on my histogram to be between 0 and 1. But when I change the default maximum from 2 to 1, the top of the histogram is chopped off. Shouldn't probabilities be greater than 0 and less than or equal to 1?

You are probably looking at the default histogram format, which is probability density. With probability density, the heights of the bars are adjusted so that the total area of all the bars is 1. If your data range is small, then the heights of the bars may be greater than 1. If you change the graph format to relative frequency, the y axis will have a maximum of 1 or less.

Beginning with @RISK 6.2, relative frequencies are shown as percentages. This gives a visual indication of whether you're looking at probability density or relative frequency. For more detailed information, please see Interpreting or Changing the Y Axis of a Histogram.

Last edited: 2015-07-06

7.4. Log Scale in Output Graphs

Applies to: @RISK 6.2 and newer

Beginning with @RISK 6.2, you can display the x and y axes of most graphs in logarithmic scales, using any of these methods:

• Tick the "Log" box on the X-Axis or Y-Axis tab of the Graph Options dialog.
• Right-click the graph and on the context menu select Log Scale X-Axis, Log Scale Y-Axis, or both.
• In VBA, you can use the RiskGraph.XAxis.LogScale and RiskGraph.YAxis.LogScale properties.

Generally, graphs with numeric scaling support log scaling. Here are the major exceptions:

• Histograms with some data values less than or equal to 0. However, you can switch to a cumulative display (S-curve) and get logarithmic scaling.
• Histograms in probability density format. (Interpretation of these would be confusing.) However, histograms in relative-frequency format can be displayed on a log scale.
If you are using the default Automatic formatting, and you select a log scale, @RISK will automatically change the histogram to relative-frequency format.

• The x axes of tornado graphs and summary graphs. (A summary graph may appear to have a numeric x axis, but actually those numbers are just treated as labels.)

Last edited: 2013-09-25

7.5. Area Graphs

Applies to: @RISK 5.x–7.x

Can I smooth out a histogram to create an area graph, as I could in older versions of @RISK?

Yes, you can, though you have to go through a dialog box because you get more choices. When you have the results graph displayed, follow this procedure:

1. Right-click the graph and select Graph Options.
2. Select the Curves tab.
3. In the list at the left, select the histogram that you want to smooth if it's not already selected.
4. Change Style to either Line or Solid. (The Automatic box will uncheck itself.)
5. For Interpolation, select Spline Fit for a smooth curve or Linked Midpoints for a polygon. If you wish, you can also change color and style.
6. Click OK and you will see your smoothed graph displayed as you wish.

Last edited: 2015-07-06

7.6. Copy/Pasting Thumbnails

Applies to: @RISK 7.x

I like the new Thumbnails feature of @RISK (in the Utilities menu). I'd like to paste a thumbnail into my Excel sheet, a PowerPoint slide, a Word document, or my graphics program. How can I copy a thumbnail to the clipboard?

Just hover your mouse over the input or output cell, then slide the mouse pointer over the thumbnail. Right-click and select Copy. That places a copy of the thumbnail on your clipboard, and you can then paste it anywhere with the usual commands (typically Ctrl+V).

Last edited: 2015-07-06

7.7. All Articles about Tornado Charts

Applies to: @RISK 6.x/7.x

The Knowledge Base has many articles about various aspects of tornado graphs. In Technical Support, we sometimes get multiple tornado-related questions at the same time, and it seems useful to collect all the links in one place.
Interpreting tornado graphs:

Which variables are in a tornado?

Putting numbers from tornado graphs into your worksheet:

Other articles:

Additional keywords: Tornado chart, tornado charts, tornado graph, tornado graphs, sensitivity tornado, sensitivity tornados, sensitivity tornadoes, sensitivity coefficients, sensitivity chart, sensitivity charts, sensitivity graph, sensitivity graphs

Last edited: 2018-08-15

7.8. Interpreting Regression Coefficients in Tornado Graphs

Applies to: @RISK for Excel 5.x–8.x

How can I interpret the regression coefficients on the tornado diagram or sensitivity report produced by @RISK?

The regression coefficients are calculated by a process called stepwise multiple regression. The main idea is that the longer the bar or the larger the coefficient, the greater the impact that particular input has on the output that you are analyzing. A positive coefficient, with the bar extending to the right, indicates that this input has a positive impact: increasing this input will increase the output. A negative coefficient, with the bar extending to the left, indicates that this input has a negative impact: increasing this input will decrease the output.

In Browse Results and with the RiskResultsGraph function, you can get "regression coefficients" or "regression coefficients—mapped values". With the RiskSensitivity function, you can get either of those measures and also the unscaled coefficients that would be used in a regression equation. Please open the attached workbook and click Start Simulation. It shows both types of regression tornados and all three types of coefficients.

Regression Coefficients

The graph labeled simply "regression coefficients" does not express them in terms of actual dollars or other units. Rather, they are scaled or "normalized" by the standard deviation of the output and the standard deviation of that input. For the output, Input A has a regression coefficient (standard b) of 0.78.
That means that for every k fraction of a standard deviation increase in Input A, the output will increase by 0.78k standard deviations (SD). To get from that coefficient to the actual coefficient in terms of units of input and units of output, multiply by the SD of the output and divide by the SD of the input: 0.78 × 12784 / 1000 = about 10, and therefore a 1-unit increase of A corresponds to a 10-unit increase of the output.

Regression – Mapped Values

The mapped regression values are scaled versions of the regression coefficients. Mapped values are in units of output per standard deviation of input. For example, if Input A has a mapped coefficient of 10,023.53, it means that an increase of k standard deviations in Input A produces an increase of 10,023.53 × k units (not standard deviations) in the output. If the standard deviation of Input A is 1000 and k = 2, it means that increasing Input A by two standard deviations (1000 × 2) increases the output by 20,047.06 (10,023.53 × 2) units.

Actual (Unscaled) Regression Coefficients

There is no option to show these on the graph, but you can get them from the worksheet function RiskSensitivity. The attached workbook shows examples in row 26; for more information please see Regression Coefficients in Your Worksheet.

Additional keywords: Sensitivity analysis

See also: All Articles about Tornado Charts

Last edited: 2021-04-19

7.9. Correlation Tornado versus Regression Tornado

Applies to: @RISK, all releases

Why do we have both? Why can a correlation tornado sometimes show bars that aren't on the regression tornado, or vice versa?

Regression and correlation both indicate the direction of the relationship. A positive coefficient means that as that input increases, the output increases; and a negative coefficient means that as that input increases, the output decreases. You can say that a regression coefficient shows the strength of the relationship, and a correlation coefficient shows the consistency of the relationship.

Correlation first.
Imagine a scatter plot of just this input (horizontal axis) and output (vertical axis). Each point represents the value of that input and output in one iteration. As you sweep from left to right, you are going from low to high values of the input, in order.

Now, consider two consecutive points in that sweep. The second point is to the right of the first, so it has a higher input value. But is the second point higher on the graph than the first (larger output value) or lower? In almost any simulation, the points will show some ups and some downs, but let's suppose that for every single pair of points, the point to the right is also higher. In this case you have a perfectly consistent relationship: increasing the input always increases the output. The correlation coefficient is +1 (maximum possible correlation).

Now suppose that the relationship is a little more realistic: usually when you go from left to right, the points are rising, but sometimes the right-hand point is lower than the nearest point to its left. Now the relationship is not perfectly consistent. Usually increasing the input increases the output, but not always. The higher the correlation coefficient, the more consistently increasing the input increases the output; the lower the correlation coefficient, the less often increasing the input increases the output. A lower correlation coefficient means that the relationship has less consistency to it.

Take a situation where, moving from left to right, half the time the second point is higher than the first and half the time it's lower. Increasing the input is just as likely to decrease the output as increase it. Your correlation coefficient is zero.

It works the same with negative correlations. A coefficient of –1 (the lowest possible) means that every single pair of points has the second output lower than the first. The relationship is perfectly consistent: every time you increase the input, the output decreases.
As the correlation coefficient gets further from –1 and closer to 0, there is less and less consistency. The output still decreases with increasing input, more often than not, but the lower the coefficient the closer you get to 50-50 increase or decrease and zero correlation. So the correlation coefficient tells you whether increasing the input generally increases or decreases the output, and how consistent that trend is, but it tells you nothing about the strength of the relationship.

So much for correlation. What about regression coefficients? Regression coefficients tell you the size of the effect each input has on the output. For example, a regression coefficient of 6 means that the output increases 6 units for a 1-unit increase in the input; a coefficient of –4 means that the output decreases 4 units for each one-unit increase in the input. (It's a little more complicated than that in @RISK, because you can get only scaled regression coefficients on a tornado; see Interpreting Regression Coefficients in Tornado Graphs. But you can get the actual regression coefficients in a worksheet; see Regression Coefficients in Your Worksheet.)

For more, see Which Sensitivity Measure to Use?. Also see the "Regression and Correlation" topic in the @RISK Help file.

See also: All Articles about Tornado Charts

Last edited: 2016-04-20

7.10. Interpreting Change in Output Statistic in Tornado Graphs

Applies to: @RISK 6.x/7.x

How do I interpret the double-sided tornado graphs in Quick Reports, Browse Results, and Sensitivity Analysis? What's the default behavior, and how can I change it?

Let's talk first about the default behavior for Change in Output Mean, which is the default statistic, and then we can go into the variations. We'll suppose that you have 2500 iterations in your simulation.

The baseline is the overall simulated mean of that output. The double-sided tornado has one bar for each selected input, and each bar has numbers at its edges.
Each bar is prepared by considering one input and ignoring everything else but the output. (The other inputs are not held constant; their values from the simulation are simply not used in the computation.) The inputs are first sorted in ascending order and binned in that order; then an output mean is computed for just the iterations in each bin and shown on the bar in the tornado chart.

Details for Change in Output Mean:

1. @RISK puts all the iterations in order by ascending values of that input. (If an input value occurs multiple times, @RISK sub-sorts by ascending iteration number.)

2. @RISK divides those ordered iterations into 10 bins or "scenarios". With 2500 iterations, the first bin contains the 250 iterations with the 250 lowest values of this input; the second bin contains the 250 iterations with the 251st to 500th lowest values of this input; and so on to the last bin, which contains the 250 iterations with the 250 highest values of this input.

Note: The bins all have the same number of iterations. For a uniform distribution that means they all have the same width, but for most distributions the bins will have different widths so that they all have the same number of iterations. Another way to look at it is that the bins have equal probability and the same number of iterations, but most likely not equal width, based on the shape of the distribution.

3. @RISK computes the mean of the output values within each bin.

Exception for discrete inputs: If every iteration in two or more bins has the same input value, @RISK pools the iterations for those bins, computes the output mean, and assigns the same output mean to each of those bins.

4. @RISK looks at the ten output means from the ten bins. The lowest of the ten output means becomes the number at the left edge of the bar for this input, and the highest of the ten output means becomes the number at the right edge of the bar.
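The four steps can be expressed as a short sketch (Python, illustrative only; @RISK's exact handling of ties and discrete inputs is more elaborate, as the exception above describes):

```python
def change_in_output_mean(input_vals, output_vals, scenarios=10):
    """Edges of one Change in Output Mean tornado bar for a single input.

    Step 1: order iterations by ascending input value (ties broken by
    iteration number).  Step 2: split into equal-count bins.  Step 3: take
    the output mean within each bin.  Step 4: the bar runs from the lowest
    to the highest of those bin means.
    """
    order = sorted(range(len(input_vals)),
                   key=lambda i: (input_vals[i], i))        # step 1
    per_bin = len(order) // scenarios
    bin_means = []
    for b in range(scenarios):
        idx = order[b * per_bin:(b + 1) * per_bin]          # step 2
        bin_means.append(sum(output_vals[i] for i in idx)
                         / len(idx))                        # step 3
    return min(bin_means), max(bin_means)                   # step 4

# With output = 2 * input over 100 iterations, the first bin (inputs 0-9)
# has output mean 9 and the last bin (inputs 90-99) has output mean 189:
xs = list(range(100))
print(change_in_output_mean(xs, [2 * x for x in xs]))  # (9.0, 189.0)
```

Because the other inputs are simply ignored, the same function would be called once per input to build the whole tornado.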
Different shading, beginning with @RISK 7.5, shows you which end of each bar represents high input values and which represents low input values. Thus you can easily tell which inputs have positive impact on this output (high inputs at the right) and which have negative impact (high inputs at the left).

In @RISK 6.0–7.0, there's no way to see from the graph which bin produced which output mean. For instance, if a Change in Output Mean bar goes from 1500 to 4980, you don't know whether that output mean of 1500 came from the bin with the 250 lowest input values, or the bin with the 250 highest input values, or a bin with intermediate input values. This is where correlation sensitivities or a scatter plot can help, to tell you whether increasing values of an input tend to associate with increasing values of the output, with decreasing values of the output, or with some more complicated relationship.

Note: The change in output values does not necessarily indicate any influence of that input on the output. For more, see Change in Output Mean Inconsistent with Sensitivity Tornado.

Variation: number of scenarios (bins or divisions)

The default number of bins is 10, but you can change that. While displaying a tornado graph, click the tornado icon in the row at the bottom, and choose Settings. The first setting, "Divide input samples into ____ scenarios", controls the number of bins (number of divisions, number of scenarios) that @RISK uses to construct the tornado. If you increase the number of bins, @RISK will have more output means, each representing a smaller number of iterations. For most models, that translates to a greater range of output means. For a very simplified example, please have a look at the attached workbook. (You don't want so many bins that each one has only a few iterations; see the next paragraph.)
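To see the effect numerically without the attached workbook, here is a small hypothetical example (Python, invented data, not the workbook's model): the same simulated data re-binned with 5, 10, and 20 scenarios. The span between the lowest and highest bin means grows with the number of bins, because the extreme bins cover more extreme quantiles of the input.

```python
import random

def bar_span(xs, ys, scenarios):
    """Max minus min of the per-bin output means (one tornado bar's span)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    per_bin = len(xs) // scenarios
    means = [sum(ys[i] for i in order[b * per_bin:(b + 1) * per_bin]) / per_bin
             for b in range(scenarios)]
    return max(means) - min(means)

random.seed(1)                                   # fixed seed for repeatability
xs = [random.gauss(0, 1) for _ in range(2000)]   # hypothetical input samples
ys = [3 * x + random.gauss(0, 1) for x in xs]    # hypothetical output

for k in (5, 10, 20):
    print(k, round(bar_span(xs, ys, k), 2))
# the printed spans increase as the number of scenarios increases
```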
Variation: number of iterations

With more iterations, from one simulation to the next you'll see less variability in Change in Output Mean, just as with any other output statistic. In other words, output statistics are more stable with more iterations. With fewer iterations, you'll see more variability in all your output statistics. However, from one simulation to the next, output statistics should vary only within normal statistical variability for the number of iterations.

Variation: choice of statistic

You can display a change in output percentile rather than a change in output mean. In this case, the computation is similar, but instead of an output mean for each bin @RISK computes an output percentile for each bin. For example, with 2500 iterations and 10 bins, if you select Change in 90th Percentile then @RISK will compute the 90th percentile of the output values within each of the 10 bins (within each group of 250 iterations sorted by input values), and the edges of the bar will be the smallest and largest of those 90th percentiles. The baseline of the output becomes the overall 90th percentile of the simulation.

Setting preferences for the double-sided tornado

To set the default number of bars for the double-sided tornado in Browse Results, Sensitivity Analysis, and Quick Reports, select Utilities » Application Settings » Sensitivity Defaults. If necessary, set Preferred Calculation Method to Change in Output Statistic. You can then change the number of bars and your preferred statistic. (The Change in Output Statistic tornado will never show more than 10 bars. If you set a maximum greater than 10, it applies only to the correlation and regression sensitivity tornado graphs. However, you can still use a worksheet function to retrieve the change in output statistic for lower-ranked inputs.)

To set the preferences for a particular Browse Results graph, click the tornado icon at the bottom of the graph window and select Settings.
See also: All Articles about Tornado Charts

Last edited: 2018-08-13

7.11. Variable Selection in Tornado Graphs

Applies to: @RISK for Excel 4.x–7.x, @RISK for Project 4.x

How does @RISK decide which variables to include when I create a tornado diagram with regression coefficients?

To choose inputs to include in the tornado diagram, @RISK uses a stepwise multiple linear regression procedure. By default, each variable is accepted or rejected for the regression procedure at the critical value of 3.29 in the F distribution. For a technical reference, please see "The Stepwise Regression Procedure" in Draper and Smith, Applied Regression Analysis (Wiley, 1966).

An example is attached. For comparison, the Sensitivity sheet shows regression coefficients computed by @RISK, and the Regression sheet shows regression coefficients computed by StatTools. The order of variables is different between the two, because the two products read and store data in different ways. However, both sheets show the same coefficients, because they're doing the same type of analysis.

To set the maximum number of tornado bars in @RISK 6.x/7.x, please see Tornado Graph — How to Set Defaults. In @RISK 5.x, use Utilities » Application Settings » Simulation Graph Defaults.

See also: All Articles about Tornado Charts

Additional keywords: Sensitivity analysis

Last edited: 2016-10-20

7.12. Combining Inputs in a Sensitivity Tornado

Applies to: @RISK 5.x–7.x

How can I aggregate multiple inputs in the tornado graph, so that I see the output's sensitivity to the combination instead of sensitivities to the individual inputs? I might be combining countries in a region, or I might want an NPV instead of individual cash flows, or ...

The RiskMakeInput( ) function lets you do exactly this.
If you already have a formula in your workbook that computes the aggregate you're interested in, just wrap it in a RiskMakeInput( ), like

=RiskMakeInput(formula, RiskName("name to appear in tornado"))

As an alternative, you could just create that formula in an empty cell. RiskMakeInput( ) tells @RISK that its contents should be treated as an @RISK input distribution for purposes of sensitivity analysis, and that RiskMakeInput( )'s precedents should be ignored in sensitivity analysis. Implications:

• Precedent tracing stops with the RiskMakeInput( ), effectively. @RISK looks at the cells that the formula in RiskMakeInput( ) refers to, and all precedents of those cells. Any distributions that are direct or indirect precedents of any RiskMakeInput( ) function are excluded from all sensitivity calculations for all outputs. Even a RiskMakeInput( ) among those precedents is excluded from sensitivity calculations.

• The RiskMakeInput( ) need not be a precedent of the output. For example, suppose you have =RiskMakeInput(A1+A2) in cell A3, and an @RISK output in cell A4. If the formula in A4 refers to A1 or A2 or to any of their precedents, then @RISK will treat the RiskMakeInput( ) in A3 as a precedent of the output in A4, even if the formula in A4 doesn't refer to A3 directly or indirectly. In this respect, @RISK treats RiskMakeInput( ) as a precedent even though Excel may not. Another way to look at it is that @RISK treats a RiskMakeInput( ) function as a precedent of an output if the two have any precedents in common.

• The RiskMakeInput( ) affects all sensitivity measures, including all graphs and the RiskSensitivity( ) and RiskSensitivityStatChange( ) worksheet functions.

For an example, in @RISK 6 or 7 click Help » Example Spreadsheets » Statistics/Probability » Using RiskMakeInput Function. In @RISK 5, click Help » Example Spreadsheets » RiskMakeInput.xls.

See also:

Last edited: 2015-06-21

7.13.
Excluding an Input from the Sensitivity Tornado

Applies to: @RISK 5.x–7.x

How can I tell @RISK not to include one or more inputs in tornado charts and other sensitivity results, including the spider graph and the RiskSensitivity( ) and RiskSensitivityStatChange( ) worksheet functions?

You might want to do this if you have two inputs that are very highly correlated. This creates multicollinearity, which adds a redundant bar to the Change in Output Statistic tornado and distorts the Regression Coefficients tornado.

The key is the RiskMakeInput( ) function. @RISK excludes all the precedents of a RiskMakeInput( ) from sensitivity analysis, whether or not that RiskMakeInput( ) is a precedent of any output. Thus, all you have to do to exclude P11 and J15 from sensitivity measurements is to put them in a simple RiskMakeInput( ) in a previously empty cell:

A nice feature of this approach is that you don't have to make any changes to the formulas in your actual model. Also, if you add or remove rows or move cells, Excel will automatically update the cell references, just as with any other formula. However, the RiskMakeInput( ) itself will now appear as an input in sensitivity functions and graphs. To prevent that from happening, multiply the included expression by 0, so that every iteration value is the same:

RiskMakeInput( ) will work as described here, whether Smart Sensitivity Analysis is enabled or disabled in Simulation Settings.

I don't want to recalculate results; I just want to suppress one or more bars of the tornado.

Right-click each bar you want to suppress, and click Hide Bar. If you want to bring the hidden bars back, right-click the graph and select Restore Hidden Bars. (The Hide command isn't available with the spider graph.)

See also:

Last edited: 2017-07-10

7.14. Missing Labels in Tornado Graphs

Applies to: @RISK for Excel 5.x–7.x

@RISK generated a tornado graph, but only some of the inputs are labeled.

By design, all tornado graphs are created at the same size.
If there are too many input labels to fit on the y axis, @RISK will show only every second (third, fourth, etc.) label. (This is also true with numeric labels, for instance in histograms.) To display all labels, simply make the window larger vertically.

See also: All Articles about Tornado Charts

Last edited: 2015-07-06

7.15. Tornado Graph — How to Set Defaults

Applies to: @RISK 6.x/7.x

How do I set the defaults for tornado graphs? I looked in Application Settings but I couldn't find that section.

To set defaults for the tornado graphs in Browse Results and Quick Reports, click Utilities » Application Settings » Sensitivity Defaults. Tornado Maximum # Bars can be any whole number from 1 to 16. However, a value greater than 10 will apply only to the correlation, regression, and contribution to variance tornado graphs. The Change in Output Statistic tornado never displays more than 10 bars, even if you specify a higher number. If the limit of 16 (or 10) is too small, you can still use a worksheet function to retrieve correlation coefficients, regression coefficients, change in output statistic, or contribution to variance for lower-ranked inputs.

See also: All Articles about Tornado Charts

Additional keywords: Number of bars in tornado

Last edited: 2017-10-24

7.16. Which Tornado Graph for Quick Reports?

Applies to: @RISK 6.x/7.x

My Quick Reports have three graphs: a histogram, a cumulative S-curve, and a change in output mean (tornado plot). I'd like that third graph to be correlation or regression sensitivities rather than change in output mean. How can I do it?

The format of the tornado graph in Quick Reports is controlled in Application Settings. In @RISK, click Utilities » Application Settings » Sensitivity Defaults. Change Preferred Calculation Method to Regression Coefficients, Regression Mapped Values, or Correlation Coefficients. Click OK, then Yes to the confirming prompt.
Any Quick Reports you generate after this will use the new format, both in the graph itself and in the table to the right.

Beginning with @RISK 7.0, as an alternative you can simply create a Custom Report. Before a simulation, click Simulation Settings » View » Automatically Generate Reports at End of Simulation; after a simulation, click the large Excel Reports icon in the ribbon. Either way, select Custom Reports in the list, then go to the Custom Reports Settings tab of that dialog. Click Sensitivity Graph, then Edit, and change the type of sensitivity.

Can I limit the number of bars that appear in the tornado charts in Quick Reports?

Yes, this setting is also in Application Settings » Sensitivity Defaults. It's called Tornado Maximum # Bars. See Tornado Graph — How to Set Defaults.

See also: All Articles about Tornado Charts

Last edited: 2015-10-02

7.17. Quick Report for Just One Output

Applies to: @RISK 5.x–7.x

I have quite a few outputs. When I generate Quick Reports, @RISK produces a separate worksheet for each of them. This takes a long time, and really I only want to report on just a few outputs. Is there a way to create Quick Reports just for one or a few selected outputs?

You can create a Quick Report for a single output from the Browse Results screen. While browsing that output, click the Edit and Export icon, next to the help icon at the bottom of the window. Quick Report (singular) is the first selection. If you want a Quick Report for another output, click on that output and then the Edit and Export icon.

TIP: You can use the Tab key to move the Browse Results window to the next output, and the next, and the next. Shift+Tab does the same, but in the opposite order.

Last edited: 2017-09-27

7.18. Interpreting Scenario Graphs

Applies to: @RISK 6.x/7.x

I clicked the "%" icon in Browse Results and selected a scenario. How do I interpret the numbers in the bars?

Let's work with this graph, which was produced by the first scenario in the attached workbook.
(The workbook has a fixed random number seed, so that you can run a simulation and get the same results we're using here.) "Scenario" is just a name for a subset of the iterations. In this graph, the title tells you that the subset is iterations where the Profit output is above its 75th percentile; in other words, it's the most profitable 25% of the iterations. Where do the numbers 83.84% and 0.99 in the first bar come from? They are two different measures of the median of the Revenue input in the filtered subset, versus the median of the Revenue input in the whole simulation. Looking at Browse Results for the Revenue input for the whole simulation, we see that the median is very close to $100,000, and the standard deviation is very close to $6,000. To find the median Revenue value in the scenario, we apply an iteration filter for Profit output greater than its 75th percentile. When we do that, the median of the Revenue input for that filtered subset is $105,923. The decimal measure in the graph is 0.99. It says that the median Revenue in the subset is 0.99 standard deviation above the median Revenue in the whole simulation. Let's check that. The median in the subset is $105,923, which is 5,923 above the $100,000 median in the whole simulation. $5,923 is about 0.99 of the $6,000 standard deviation of the Revenue input in the whole simulation, so that checks with the scenario graph. The percentage in the graph is 83.84%. It says that the median Revenue in the subset is at the 83.84 percentile of the median Revenue input in the whole simulation. Let's check that. The median in the subset is $105,923. If we disable filtering and type 105,923 in the right delimiter in the Browse Results window for revenue, the percentage shown to the right is 16.2%. 100% minus 16.2% is 83.8%, so the subset median of $105,923 is at about the 83.8 percentile of the Revenue distribution for the whole simulation, and that too agrees with the scenario graph. 
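Both checks are easy to reproduce outside @RISK. The Python sketch below is not part of @RISK, and it uses illustrative distribution parameters rather than values from the attached workbook. It builds synthetic Revenue, Cost, and Profit samples, filters the most profitable 25% of iterations, and recomputes the two scenario measures: the subset median expressed in whole-simulation standard deviations, and its percentile rank within the whole simulation.

```python
import random
import statistics

random.seed(42)

# Hypothetical stand-in for @RISK's iteration data:
# Revenue ~ Normal(100000, 6000), Cost ~ Normal(60000, 5000),
# Profit = Revenue - Cost.  (Illustrative parameters only.)
n_iter = 10_000
revenue = [random.gauss(100_000, 6_000) for _ in range(n_iter)]
cost = [random.gauss(60_000, 5_000) for _ in range(n_iter)]
profit = [r - c for r, c in zip(revenue, cost)]

# The "scenario" is the subset of iterations where Profit is above
# its 75th percentile.
threshold = statistics.quantiles(profit, n=100)[74]   # 75th percentile of Profit
subset = [r for r, p in zip(revenue, profit) if p > threshold]

med_all = statistics.median(revenue)
sd_all = statistics.stdev(revenue)
med_sub = statistics.median(subset)

# Decimal measure: subset median, in whole-simulation standard deviations.
z = (med_sub - med_all) / sd_all

# Percentage measure: percentile rank of the subset median
# within the whole simulation.
pct = 100 * sum(r <= med_sub for r in revenue) / n_iter

print(f"subset median is {z:.2f} SD above the overall median, "
      f"at the {pct:.1f} percentile")
```

With enough iterations, the two printed measures should land in the same neighborhood as the 0.99 and 83.84% discussed above, because Profit is strongly correlated with Revenue in this toy model.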
Both numbers in the bar are derived from the median of the Revenue input within the subset of iterations where the Profit output is above its 75th percentile within the whole simulation. The decimal 0.99 says that the median Revenue within the subset is 0.99 of a standard deviation above the median Revenue of the whole simulation, and the percentage 83.84% says that the median revenue within the subset equals the 83.84 percentile of Revenue within the whole simulation. With that under your belt, you can interpret the other bar. The median of Cost within the subset is 0.77 of a standard deviation below the median Cost of the whole simulation, and it's also equal to the 22.1 percentile of Cost in the whole simulation. That's fine for the graph. What about the numbers in the Output Scenarios window, if I click the "%" icon in the Results section of the @RISK ribbon? Those are exactly the same deal, though the dropdown boxes use slightly different words. Select Display Inputs ... using: All, and you'll see the same numbers we've just discussed. Last edited: 2018-09-26 7.19. Creating and Using Report Templates Applies to: @RISK for Excel 5.x–7.x New in @RISK 7.0: If you have @RISK 7.0 or newer, and you're just trying to customize the Quick Reports, take a look at the Custom Reports option in Excel Reports. Items there can be edited, deleted, and rearranged; you can also add further items. That may meet your needs, but if not then read on ... You can use RiskResultsGraph( ) and the @RISK Statistics Functions to place simulation results in any worksheet. They are filled in automatically when you run a simulation. However, the next time you run a simulation the results will be overwritten with new results. If you want to create a set of custom-formatted results that do not get overwritten, place them in a report template. To create a report template: 1. Within this workbook, create a new worksheet. 2. 
Give the sheet a name that begins with RiskTemplate_, such as RiskTemplate_Projections. (The underscore is required.) 3. Set up a worksheet the way you want your results, using any combination of @RISK and Excel functions. You can put any valid Excel formulas and @RISK functions in the template sheet, but these are especially useful in reports: □ To embed means, percentiles, or other statistical results, use the statistic functions. In @RISK, click Insert Function » Statistic Functions » Simulation Results. □ To create graphs of several types, use the RiskResultsGraph function. Click Insert Function » Other Functions » Miscellaneous; or click into an empty cell, type =RiskResultsGraph( including punctuation, and press Ctrl+A. The RiskResultsGraph function provides limited customizations for the supported graph types; please see the help text for details. (If you want to do more customization, use the RiskGraph object in Visual Basic. For more information, please click Help » Developer Kit (XDK). In the submenu, Automation Guide gives a brief introduction in the topic "Displaying Graphical Results of a Simulation"; @RISK XDK Reference documents all objects and methods in detail, and the Examples include several on creating graphs and reports. You need the Professional or Industrial Edition, release 6.2 or later, for the RiskGraph object.) 4. In Simulation Settings » View, check (tick) "Automatically generate reports at end of simulation". On the selection dialog that opens, check (tick) "Template Reports". Each time you run a simulation, @RISK will create a copy of any template sheets and will put that simulation's results in the copy. The original template, and any previous results created from the template, are undisturbed. @RISK includes an example showing how to use a report template. Click Help » Example Spreadsheets » Other @RISK Features and select RiskTemplate.xlsx. See also: Template Report Contains Formulas in Place of Numbers Last edited: 2015-08-07 7.20. 
Excel Themes in @RISK Graphs and Reports Applies to: @RISK 7.5 or newer How do I get @RISK graphs and reports to use Excel themes? @RISK's own windows are formatted by @RISK and don't use themes. When you use the Chart in Excel command to place a graph in an Excel worksheet, you can choose Excel format, or Image (Picture in some dialogs). Choose Excel format, and then the graph in the Excel sheet will update automatically when you change themes. Here are some hints for particular reports: Custom Reports: When choosing each custom report, choose Edit and change the format from Image to Excel Format. @RISK will remember this as a default, so you won't have to change this for the same report in other workbooks. Tables in Custom Reports always use Excel themes for fonts and colors; graphs in Custom Reports will use Excel themes if the graph is in Excel format. Quick Reports: Tables always use Excel themes for fonts and colors. Graphs are always images (static pictures) and will not respond to Excel theme changes. RiskResultsGraph: The fourth argument (Excel format) must be TRUE, and in addition you must set a System Registry key. Under HKEY_CURRENT_USER\Software\Palisade\@RISK for Excel\7.0\Application Settings\Reports, create a string value GraphThemeOrStandardColor if it doesn't already exist. If the data for that string value is Theme, and you've specified Excel format in RiskResultsGraph, then the generated graph will use Excel themes. If the string value is set to StandardColor (or the GraphThemeOrStandardColor string value doesn't exist), then the generated graph will use standard colors and will not respond to Excel theme changes. Even without that string value in the System Registry, if you selected Excel format in RiskResultsGraph then you can use all of Excel's graph editing tools on the generated graph. If you didn't select Excel format in RiskResultsGraph, then @RISK generates a static image and it can't be edited.
Graphs you create in VBA: Use the ChartInExcelFormatEx or ChartInExcelFormatEx2 method of the RiskGraph object. These new functions are not yet documented, but a simple example workbook is attached to this article. When writing your own code, use Visual Basic Editor's auto-complete to help you fill in the function arguments. Last edited: 2017-02-02 7.21. Cell References in Tornado Graphs (RiskResultsGraph) Applies to: @RISK 5.x/6.x I used RiskResultsGraph( ) to make a tornado graph in my worksheet. The bars were labeled with both the names of the inputs and the worksheet names and cell references. How can I tell RiskResultsGraph( ) to show just the name of each input, not its location? RiskResultsGraph( ) does place both input names and locations at the left of the bars in some releases of @RISK, and there's no way to change this. If this is a one-time need, the easy way is to use Browse Results for the output in question. In tornado graphs in the Browse Results window, only the input names appear, not the input locations. Perform any customizations you want, then right-click the graph and select Copy Graph. Click at the desired location in your worksheet and press Ctrl+V, or right-click and select Paste Special. If you want an automated solution, the Quick Reports don't show cell references in the tornado graphs. The same is true of the Custom Reports introduced in @RISK 7.0.0. Beginning in @RISK 6.2.0, you can use Visual Basic for Applications to create your tornado and place it in your worksheet. These tornado graphs label each bar with the name of the input, not its location, and you can do many types of customization. A very simple example is attached. • For an introduction to making @RISK graphs in VBA, click Help » Developer Kit (XDK) » Automation Guide and look at the section "Displaying Graphical Results of a Simulation". • You need to set references when you have code that automates @RISK. See "Setting Library References", earlier in the Automation Guide.
• You can place the code that generates the graph in a macro that @RISK will execute at the end of simulation, and register the macro in Simulation Settings » Macros. See also: All Articles about Tornado Charts Last edited: 2015-07-06 7.22. Custom Color Selection in Graphs Applies to: @RISK 6.2/6.3/7.x, Professional and Industrial Editions I right-click a graph and select the Curves tab. When I try to change the color, the Define Custom Color button is grayed out. How do I unlock it? That button is grayed out by design, but if you want a color different from the available colors you can create the graph in VBA. Please click Help » Developer Kit (XDK) » Automation Guide for a friendly introduction to controlling @RISK with VBA. The Automation Guide contains sample code for generating reports, though not for setting colors. For setting colors specifically, you need the CurveColor method of the RiskGraph object. It's in the @RISK XDK Reference in the Help » Developer Kit (XDK) menu, but probably it's easier just to look at an example. Please take a look at the attached file, which is a modified form of our example spreadsheet found at Help » Developer Kit (XDK) » Examples » RISK XDK – Creating Graphs 2.xlsx. In this example, on the Model worksheet, click Run Simulation and then Distribution Graphs of Outputs. All of the graphs are placed on the Graphs1 worksheet, in a 5×4 array. Press Alt+F11 to view the code, and look at the Graphs1 subroutine. Before calling the ImageToWorksheet method, use CurveColor to set the color of the curve. (CurveColor takes an index argument to let you use different colors when a graph contains more than one curve.) RGB is a built-in VBA function that combines red, green, and blue values, in that order, each from 0 to 255 inclusive, into a single color. Last edited: 2015-08-25 7.23. Multiple Browse Results Windows Applies to: @RISK 5.x–7.x Here are two techniques.
METHOD A: Change the regular callout window for Browse Results to a floating window by clicking the icon at the lower right; see attached screen shot. You can then click Browse Results again and select another output or input. METHOD B: Paste graphs into an Excel sheet. Chart in Excel creates a new worksheet for each graph, but you can place more than one graph — Browse Results or other graphs — in the same worksheet, as follows: 1. When you have the graph the way you want it, click Edit and Export (third icon in the row of small icons at the bottom) and select Copy Graph, or Copy Graph and Grid if you want the statistics grid. 2. Click into the worksheet and press Ctrl+V to paste the graph or graph and grid. (You don't need to close the Browse Results window first.) Caution with @RISK 5.x/6.x: The pasted graphs will be correct in the Excel worksheet, but if you try to convert them to another format, such as PDF, you may lose details. See Pasted Graph Loses Some Details. @RISK 7.x uses a different technique and does not have this issue. Last edited: 2017-03-30 7.24. "OnScreen Control" app causing display issues in @RISK Applies to: @RISK 7.6 Issue: Using the software causes the error "Run-time error '91': Object variable or With block variable not set". After testing on our end, we confirmed that this application causes display issues in @RISK 7.6, and crashes if the Browsing feature is enabled in the Browse Results window; those issues do not appear in version 8.x. To fix it, you have three options: 1. Upgrade to version 8. 2. Turn off the OnScreen Control app before running v7.6. 3. Run @RISK v7.6 as administrator (right-click and select Run as administrator) so that the OnScreen Control app doesn't interfere with @RISK's windowing, which then runs with higher privileges. Last Update: 2020-04-27 7.25. Setting DPI for Images Generated by @RISK Applies to: @RISK 6.x/7.x I need a particular DPI (dots per inch) setting for publication.
When I click Chart in Excel, the DPI doesn't seem to be one of the options. How can I set it? Really this question is not so much about customizing the DPI or PPI setting as it is about specifying the size of the image in pixels. Once you generate an image that's big in terms of the pixel width and height (and still sharp), you can change the DPI with a tool like Photoshop or the free Irfanview. If you don't want to install software, you can use a Web page like Change DPI of Image. The DPI setting will tell the printer or publisher how big the printed image will be. But of course most pieces of software have other ways of specifying the printed size, without changing DPI. So DPI may not be the primary concern here. The @RISK GUI (Graphical User Interface) doesn't have an option for specifying the sizes of images in pixels. This shouldn't be a big problem when handling one graph or a handful of them, since you can resize a picture in Excel manually and get one with a bigger pixel size that is as sharp as the original. This is generally true when you insert a raster/bitmap image into Excel—you resize it and it stays sharp, as if it were a vector image. Attached is a sample @RISK graph that we resized in Excel, by dragging the corner, to get an image of about 2800×1900 pixels. We then changed the DPI from 96 to 300 using the Web page mentioned above. You can see that the image is quite sharp. If I can't specify image size in @RISK, can I do it in VBA? Yes, you can do it in VBA (Visual Basic for Applications), if you have @RISK Professional or Industrial. Click Help » Developer Kit (XDK) » XDK Reference, and search for the ImageToFile method. The optional third and fourth arguments specify the width and height in pixels, with 600 × 400 as default. Last edited: 2017-02-22 7.26. More Than 10 Overlays? Applies to: @RISK 5 and 6 It seems that @RISK will only let me overlay 10 variables on my results graph. Is there any way to get more? No, not as overlays.
If you want more than 10 variables on one graph, you can do a Summary Trend graph to get key numbers from each distribution. Last edited: 2014-06-20 7.27. Automatic Overlays from Multiple Worksheets Applies to: @RISK for Excel 5.x–7.x I want to have several inputs or outputs overlaid on one graph, to be displayed automatically at the end of a simulation. How can I do it? You can embed overlay graphs in your worksheet with the RiskResultsGraph( ) function. The graphs will be updated automatically when you run a simulation, and the latest versions will be stored with your workbook even if you don't store simulation results. Please download the attached example (about 130 KB). To pull together multiple distributions in one overlaid graph, you need a contiguous set of cells (row or column), defined as an output range with a common name.
• The INDIRECT( ) functions in C3:C14 make the example more general by using worksheet names that are in worksheet cells rather than embedded in the formula. This inhibits Smart Sensitivity Analysis. Therefore, for your specific model, you probably want to replace the INDIRECT( ) functions with plain cell references, like =Sheet11!C45. If you do use INDIRECT( ) functions, you will want to disable Smart Sensitivity Analysis in Simulation Settings » Sampling, which has been done in this example. More about this issue is in the article Found invalid formula ... Continue without Smart Sensitivity Analysis? An alternative is available in @RISK Professional and Industrial releases 6.2 and newer, if you're willing to use Visual Basic for Applications. The GraphDistribution method takes an Array-type argument that lets you specify non-contiguous cells for overlays. The Automation Guide, in the Help » Developer Kit (XDK) menu, introduces you to VBA programming and gives a couple of examples of that function; complete documentation is in the @RISK XDK Reference in the same menu. Last edited: 2015-07-14 7.28. Sharing @RISK Graphs with Colleagues Who Don't Have @RISK Applies to: @RISK 7.x (Several options are also available in earlier @RISK releases.) How can I output an @RISK histogram and/or tornado graph so that someone without @RISK can see them? There are several possibilities, but the basic idea is that if you put a graph in the workbook, it is permanently there and independent of @RISK, so that a colleague can see it even if they don't have @RISK. The graph will be static, in that changing numbers in the workbook won't change the graph. Most graphs give you a choice of Excel format or a picture. An Excel-format graph can respond to themes, and you can edit its axes or titles or change colors. A picture is just that, a static image. Here are some methods to place graphs in your workbook: • Swap Out @RISK. 
In @RISK 7.0 and newer, you have the option to embed thumbnail graphs in the workbook. • Use VBA (requires @RISK 6.2 or newer, Professional or Industrial Edition). To get started, see the Automation Guide under Help » Developer Kit (XDK) in the @RISK menu. For an example, see Placing Graphs in an Existing Worksheet with VBA. • Use the RiskResultsGraph function. To access the function, click into an empty cell, type =RiskResultsGraph and press Ctrl+A. The function has many arguments, so use the scroll bar at the right to see all of them. (Only the first two arguments are required.) For more complete help text, click the link at lower left, Help on this function. • Create your graph in Define Distributions, Browse Results, or another menu selection. Click the Edit and Export icon, which is near the left end of the row of tiny icons at the bottom of the graph window. Select Chart in Excel. In the Chart Setup dialog, choose either Excel Chart or Picture. The picture will be more faithful to the graph, but the Excel Chart option lets you use Excel themes and edit the properties of the generated graph. • Create your graph as before, then click Edit and Export and either Copy Graph or Copy Graph and Grid. That places the graph on the Windows clipboard as a picture, and you can paste it with Ctrl+V anywhere you like—into an Excel sheet, email, Word document, etc. • If you want the graph as a separate file, rather than embedded in an Excel sheet, click Edit and Export » Save Image File. You can choose from several image formats: BMP, JPG, PNG, and EMF. See also: Sharing @RISK Models with Colleagues Who Don't Have @RISK Last edited: 2017-06-06 7.29. Creating Scatter Plots with Inputs from Multiple Sheets Applies to: @RISK 8.x How can I create a scatter plot in @RISK using inputs that appear in different sheets of the same workbook? 
In @RISK version 8, you can quickly create a scatter plot of two or more inputs by clicking Explore > Scatter Plot and then adding cells to a dialog box. Note that the dialog will be prefilled with cells that contain @RISK functions. There is another option to create scatter plots that allows you to choose inputs from different sheets. Click Explore > Results Summary to bring up a window that summarizes all the inputs and outputs in the model. Then you can select multiple inputs simultaneously from anywhere in the workbook and click the Explore button at the bottom of the window to create a scatter plot. The same method can be used to create other graph types, such as a summary box plot. Last edited: 2021-07-29 7.30. Reproducing the Data from Spider Graphs Applies to: @RISK 6.x–8.x @RISK has a graphing option known as a spider graph that shows how various inputs affect a given output. How can I reproduce this data in Excel? The attached workbook gives an example of how to do this. Here are the basic steps for the calculations that create the shape of the graph. 1. Run the simulation. 2. Use the RiskData function to extract the simulation samples. For more on this function, see Placing Iteration Data in Worksheet with RiskData(). 3. Use Excel sorting to order the samples. 4. Calculate the percentiles for each input distribution and assign a bin number to each sample depending on the number of scenarios of the spider (e.g., 10). 5. Finally, calculate the average for each bin in the samples and assign it to the Change in Output column, which is the Y axis of the Spider Graph. Last edited: 2021-03-04 8. Advanced Analyses in @RISK 8.1. Fix Distribution to Base Value When Not Stepping Applies to: @RISK 5.x–7.x In Advanced Sensitivity Analysis, on the Input Definition dialog, there is a check box, "Fix distribution to base value when not stepping". What does this mean?
You specify one output and one or more inputs in the Advanced Sensitivity Analysis dialog box. This check box matters only when you specify more than one input. When multiple inputs are specified, Advanced Sensitivity Analysis begins with a set of simulations to determine the impact of the first input on the output. In each of those simulations, the first input has a particular value, which is one of the steps that you specified on the Input Definition screen. So for this first set of simulations as a group, we say that @RISK is stepping the first input. Then Advanced Sensitivity Analysis continues with another set of simulations to determine the impact of the second input on the output. In this set, @RISK is stepping the second input. For each input, @RISK runs one simulation for each step you specified on the Input Definition screen for that input. After all the specified inputs have been stepped independently, @RISK prepares the sensitivity reports. The question is, while @RISK is stepping a given input, what are all the other inputs doing? There are two possibilities: • If you leave the box "Fix distribution to base value when not stepping" empty, then when @RISK is stepping another input this one will take on a different random value at each iteration, just as it does during a regular simulation. • If the box is checked (ticked), then when @RISK is stepping other inputs this one is held constant at its base value. What is the base value? Usually it is the expected value (mean) of the distribution. But if you specified a static value on the Define Distribution dialog or in a RiskStatic( ) property function within the distribution, then the base value is that specified static value. (The setting "Where RiskStatic is not defined, use" in Simulation Settings does not determine the base value of a distribution during Advanced Sensitivity Analysis.) The above applies only to inputs that are selected in the Advanced Sensitivity Analysis.
During the simulations for that analysis, any @RISK inputs that are not selected as inputs in the Advanced Sensitivity Analysis dialog will vary randomly, just as they do during a regular simulation. You do have the option of locking any of them to prevent them from varying, but then you are doing an analysis without taking into account the uncertainties that you programmed into your model. Last edited: 2015-07-14 8.2. Stressing Each Input in Its Own Simulation Applies to: @RISK 6.x/7.x, Professional and Industrial Editions I need to run a Stress Analysis in @RISK. In the Option dialog, I have a choice of "Stress Each Input in Its Own Simulation" or "Stress All Inputs in a Single Simulation". If I choose the first option, what are the other inputs doing while @RISK is running those separate simulations? Do the other inputs run the full range of their distributions, or are they fixed to single numbers? Either way, you get one simulation called the baseline, where all variables vary according to their distributions. • With "Stress Each Input in Its Own Simulation", you also get one simulation per designated stress input, where one input is stressed while the others vary according to their distributions. • With "Stress All Inputs in a Single Simulation", you get a total of two simulations, the baseline plus a second simulation where all inputs are stressed simultaneously. The two results are then compared. Take a look at the attached example. A stress analysis is run with two stress ranges: Revenue (cell C3) is restricted to the lower 5% of its distribution, and Cost (C4) is restricted to the upper 5% of its distribution. • With "Stress Each Input in Its Own Simulation", you get three simulations: 1. Baseline — Revenue varies through its full range, and so does Cost (not shown). 2. C3 0% to 5% — Revenue varies only through the lower 5% of its distribution, while Cost (not shown) varies through its full distribution. 3.
C4 95% to 100% — Cost (not shown) is restricted to the upper 5% of its distribution, while Revenue again varies through its full range. • With "Stress All Inputs in a Single Simulation", you get two simulations: 1. Baseline — Both inputs vary through their full range. (Only Revenue is shown.) 2. Stress Analysis — In this single simulation all inputs are restricted to their stress ranges. (Again, only Revenue is shown.) By the way, to prepare these graphs after running the Stress Analysis, we clicked on Revenue (cell C3) and then the Browse Results icon. Then we clicked the # icon in the row at the bottom to select each simulation in turn. Right-clicking on the graph and selecting Copy, we then pasted each graph into the worksheet. Last edited: 2015-07-14 9. @RISK Performance 9.1. How Long Did My Simulation Run? Applies to: @RISK 4.x–7.x Is there an @RISK function that will show the duration of a simulation? All releases of @RISK show the simulation run time in the Quick Reports as "simulation duration". In @RISK 6.x/7.x, place the function RiskSimulationInfo(2) in a worksheet cell, and after the simulation the cell will contain the simulation run time in seconds. In @RISK 4.x/5.x, there's no specific @RISK function, but you capture the simulation run time in your worksheet as follows: 1. Put =NOW( ) in one cell, such as E10. 2. Put this formula in another cell: At the end of simulation, that cell will contain the number of seconds that the simulation took. (The 24*60*60 converts fractional days to seconds by multiplying by the number of hours in a day, minutes in an hour, and seconds in a minute.) Additional keywords: SimulationInfo, simulation info Last edited: 2015-07-14 9.2. For Faster Simulations Disponible en español: Para simulaciones más rápidas Disponível em português: Para simulações mais rápidas Applies to: @RISK 5.x–8.x What can I do to speed up @RISK's performance? Here's our list of things you can do within @RISK, and things you can do outside @RISK. 
The ones that make the biggest difference are marked in bold face. How should I set up Excel? • For really big simulations, switch to 64-bit Excel. All 32-bit Excels are limited to 2 GB of RAM and virtual memory combined. 64-bit Excel doesn't have that memory limit. (You need at least @RISK 5.7 if you have 64-bit Excel. Please contact your Palisade sales manager if you need to upgrade.) Note: 64-bit Excel lets you run larger simulations, but it is not intrinsically faster than 32-bit Excel; please see Should I Install 64-bit Excel? for more information about the trade-offs. • Enable multi-threaded calculations (Excel 2007 and newer). In Excel 2010, File » Options » Advanced » Formulas » Enable multi-threaded calculations. In Excel 2007, click the round Office button and then Excel Options » Advanced » Formulas » Enable multi-threaded calculations. Note: Though this is a global Excel option, it can also be changed by opening a workbook where the option was set differently. Check the status of the option while your particular workbook is open. • If your computer is running Excel from a network, install Excel locally instead. This eliminates slow-down due to network traffic. (If you're running @RISK on a terminal server, this is not a problem because everything happens on the remote computer.) • To make Excel start faster, remove any unnecessary add-ins. □ In Excel 2010 and newer, click File » Options » Add-Ins. At the top of the right-hand panel, notice whether the unneeded add-ins are Excel add-ins or COM add-ins. Then at the bottom of the right-hand panel, after Manage select Excel or COM and click Go. □ In Excel 2007, click the Office button then Excel Options » Add-Ins. At the top of the right-hand panel, notice whether the unneeded add-ins are Excel add-ins or COM add-ins. Then at the bottom of the right-hand panel, after Manage select Excel or COM and click Go. □ In Excel 2003 or older, click Tools » Add-Ins.
(Only Excel add-ins will be displayed; COM add-ins will not be visible.) Remove the check mark from any add-ins that will not be needed during your @RISK session. The next time you load Excel and @RISK after doing this, any slow-down due to loading extra add-ins should be eliminated. • If you have Excel 2007, install the latest service pack, or upgrade to Office 2010 or later. Office 2007 Service Pack 1 improved Excel's speed and fixed some bugs; Service Pack 2 fixed further bugs and improved Excel's stability. After you install an Office 2007 Service Pack, run a repair by following these instructions: Repair of Excel or Project. • Follow Palisade's and Microsoft's suggestions in Getting Better Performance from Excel and Recommended Option Settings for Excel. Any hardware suggestions? • Add RAM, unless your computer already has plenty. Insufficient RAM is probably the biggest single bottleneck on a simulation. Watch your hard-drive usage light while a simulation is running. If it is constantly writing or reading information from disk during the simulation, you should consider increasing your system's memory. To estimate memory needs, see Memory Used by @RISK Simulations and Hardware Requirements or Recommendations. • Enable Turbo Boost, Turbo Core, Power Tune or similar if available on your computer. This is not overclocking (which we don't recommend). Turbo Boost and the others are technology built into some CPUs by Intel, AMD, and others to adjust processor speed dynamically, depending on your computing needs from moment to moment. Please consult your computer's documentation to learn whether you have this technology, how to find the current status, and how to enable it if it's not currently enabled. What can I do in Windows? • For larger simulations, you may want to override the default page file size. See Virtual Memory Settings. • The temporary folder (%TEMP% environment variable) should be on the local computer, not in a network location.
If you're not sure where it is, see Opening Your Temp Folder.

• Clean out your temporary folder. See Cleaning Your Temp Folder. (This may also solve some problems with Excel crashing.)

• Make sure you have plenty of free space on your disk. Applications, and Windows itself, get very slow without enough disk space. (Defragmenting your disk can't hurt, but in recent versions of Windows it's unlikely to have enough of an effect that you'd notice.)

• Close other applications and background services, such as the Windows Indexing Service. Other programs take CPU cycles from @RISK. Also, by taking up physical memory they may force Excel and @RISK to swap more information out to disk, which can really slow down a simulation.

• Tell your antivirus program not to scan .XLS or .XLSX files. (Use this setting with caution if you run .XLS files that come to you from someone else.)

How should I structure my @RISK model?

In our experience, poorly structured models are the most common cause of poor performance, so it's worth spending time to structure your model efficiently.

• If your RiskCompound( ) distributions contain only cell references, with the actual distributions in other cells, the simulation can run noticeably faster if you embed the actual severity distributions within RiskCompound( ). (This is not important for the frequency distributions, only the severity distributions.) The more RiskCompound( ) functions in your model, the more difference this will make; the same is true if you have large frequencies in even a small number of RiskCompound( ) functions. See Combining Probability and Impact (Frequency and Severity) for more on RiskCompound( ).

• Fix all invalid correlation matrices (non-self-consistent matrices). If your @RISK distributions reference any invalid matrices, you'll have to answer a pop-up every time you simulate, and @RISK will have to take time to find valid matrices every time.
The time to do this increases as a power of the number of rows in the matrix, so your simulation will take a lot of extra time if you have any medium-to-large correlation matrices that aren't valid. See How @RISK Tests a Correlation Matrix for Validity for how you can check matrix consistency, and How @RISK Adjusts an Invalid Correlation Matrix for how you can adjust an invalid matrix once and for all.

• Remove extraneous elements from your model:

□ Consider removing unnecessary graphs and tables from your model. These may take significant time to calculate and update.

□ Eliminate external links if possible, particularly links to a network resource.

• Eliminate linked pictures. If a workbook contains linked pictures, Excel's performance in updating cells can slow to a crawl. @RISK may appear to crash or hang, but actually it is just waiting for Excel to finish the cell updates.

• If you have @RISK functions inside Excel tables, move them outside. For details, please see Excel Tables and @RISK.

• Avoid unneeded INDIRECT, VLOOKUP, HLOOKUP, and similar functions. In our experience these are rather slow, and a model that contains many of them will definitely run slowly. VLOOKUP and HLOOKUP can be replaced with INDEX+MATCH combinations. There are great resources on the Web, and you'll find them with this Web search: excel "index function" "match function"

• Don't save simulation results in your workbook, or if you do, clear them before starting the simulation. Saved results cause Excel to take longer to recalculate each iteration; how much difference this makes depends on the size of the results. See Excel Files with @RISK Grow Too Large.

• Open only the workbook(s) that are part of the simulation. During a simulation, Excel recalculates all open workbooks in every iteration. If you have extraneous workbooks open, it can slow down your simulation unnecessarily.
• See also: Microsoft's article How to clean up an Excel workbook so that it uses less memory (applies to Excel 2013 and 2016).

What do you recommend for @RISK simulation settings?

• General tab: Set Multiple CPU Support to "Automatic" or "Enabled" if you have a dual-core, quad-core, or better processor. Starting with @RISK 5.5, this is available in all editions of @RISK, not just Industrial. If you have @RISK Professional or Standard, and you have a quad core or better, you will probably see significant speed improvements if you upgrade to Industrial. In rare situations, if you have a large number of CPUs, the overhead of parallel processing might exceed the time it saves. Or, with all those CPUs sharing a fixed amount of RAM, you may find that virtual memory gets used much more and the disk swaps slow down the simulation. In this case, reducing the number of CPUs available to @RISK may help. Unfortunately, there's no way to predict this; you just have to experiment after you've tried the other tips. For instructions, please see CPUs Used by @RISK 7.x or CPUs Used by @RISK 4.x–6.x.

Simulations with Microsoft Project cannot use multiple CPUs. If your simulation settings have Multiple CPU Support: Enabled, it is automatically changed to Disabled when you click Start Simulation on a Project simulation.

• View tab: Deselect Demo Mode. Uncheck Update Windows During Simulation. Uncheck Show Excel Recalculations.

• Sampling tab:

□ Set Sampling Type to "Latin Hypercube" (the default). Particularly if you are testing for convergence, this will make the simulation faster. Exception: if you select iterations in the millions, Latin Hypercube will slow down dramatically and Monte Carlo will be faster. However, it would be an extremely rare model that would need millions of iterations in Latin Hypercube.

□ Set Collect Distribution Samples to "None" or "Inputs Marked with Collect". For the implications, see Collecting Input Distributions in the article Out of Memory.
□ Disable Smart Sensitivity Analysis. This won't make the iterations any faster, but it can significantly speed up the start of the simulation. For the meaning of Smart Sensitivity Analysis and the implications of disabling it, see Precedent Checking (Smart Sensitivity Analysis).

□ Set Update Statistic Functions to "At the End of Each Simulation". (This is the default in @RISK 5.5 and above, but is not available in @RISK 5.0.) This will greatly increase the speed of your simulation if you have a lot of statistics functions such as RiskMean and RiskPercentile. Apart from the speed increase, there may be good logical reasons to choose this setting; however, your simulation results may differ from earlier versions of @RISK. Please see "No values to graph" Message / All Errors in Simulation Data.

• Convergence tab: Consider enabling convergence testing. If you are running more iterations than necessary, you're just wasting simulation time. On the other hand, convergence testing itself involves some minor overhead. Try convergence testing and see whether the simulation converges in significantly less time than you were spending before. If so, leave convergence testing turned on; otherwise, go back to your fixed number of iterations. (When you enable convergence testing, also set the number of iterations to "Auto" on the General tab and select "Latin Hypercube" on the Sampling tab.) With convergence monitoring, by default, the simulation will stop after 50,000 iterations even if not all outputs have converged. More Than 50,000 Iterations to Converge explains how you can override that limit.

What can the progress window tell me during simulation?

Take a look at the number of iterations per second. It should increase during the first part of the simulation and then stay steady, assuming no other heavyweight Windows programs start up. But sometimes, if Excel doesn't have focus, the number of iterations per second will gradually fall, as the simulation runs slower and slower.
In this case, give focus to Excel by clicking once in the title bar of the Excel window. You should see the number of iterations gradually rise to its former level. This doesn't always happen, and it's not clear exactly what interaction between Excel and Windows causes it when it does happen, but giving focus to Excel usually reverses a falling iteration rate. (If that doesn't work, try giving focus to the simulation progress window by clicking in its title bar.)

What about using @RISK with projects?

This applies to @RISK 6.x/7.x only, Professional and Industrial Editions.

• Upgrade to the latest @RISK if you have @RISK 6.0. The accelerated engine introduced in @RISK 6.1 makes many simulations with projects run dramatically faster, and there were further improvements in later versions.

• Use the accelerated engine. In Project » Project Settings » Simulation, ensure that the simulation engine is set to automatic; @RISK will then use the accelerated engine if your model is compatible with it. See the topic "Simulation Engine" in the @RISK help file for a list of fields that are compatible with the accelerated engine.

□ If you see that @RISK still uses the standard engine, your model contains features that are not compatible with the accelerated engine. Click the Check Engine button on the same dialog, and @RISK will list the problem features in your model. (Also see the topic "Check Engine Command" in the @RISK help file.) If you can change those without losing essential functionality, your simulation should run much faster.

□ If @RISK still uses the standard engine, click View » Simulation Settings and turn off "Demo Mode" and "Show Excel Recalculations".

• If you have experience with @RISK 4, you may have used probabilistic branching. This is intrinsically time consuming because of the changes that have to be made to the predecessor/successor relationships in each iteration, and reset prior to the next iteration.
In @RISK 6.x/7.x, these issues are magnified by the communication between Microsoft Excel and Microsoft Project. To incorporate risk events, consider a risk register rather than probabilistic branching. For examples, click Help » Example Spreadsheets » Project Management.

• If you have Project 2007, switch to Project 2010 or newer. Project recalculations are slowest in Project 2007; see Simulation Speed of @RISK with Microsoft Project. Project 2003 is fastest if you have @RISK 6.x, but @RISK 7.x requires Office 2007 or newer.

• On the Project Settings » Simulation tab, if you don't need the information for Calculate Critical Indices, Calculate Statistics for Probabilistic Gantt Chart, and Collect Timescaled Data, uncheck those boxes.

• On the Project Settings » Simulation tab, set Date Range for Simulation to "Activities After Current Project Date" or "Activities After Project Status Date". This will make your simulation run faster because @RISK won't simulate tasks that have already completed.

• Don't re-import .MPP files. You only need to import the .MPP file once; store the Excel workbook when @RISK prompts you. After that, don't open the .MPP file directly in @RISK. When you open the Excel workbook associated with your project, @RISK will automatically connect to the linked .MPP file and use any changes to update the workbook. This takes much less time than re-importing from scratch.

What settings do you recommend in Microsoft Project?

• If the project is on a network drive, copy it to your C: drive or another local drive (optimally, a local SSD) before opening it.

• Zero out margin spans.

• Set future constraints to ASAP.

• Remove all deadline dates.

• Check for negative slack and unstatused tasks, and correct any issues.

• Create a table that contains just the fields you will want to see in @RISK, and apply it before importing the project.

See also: For Faster Optimizations

Last edited: 2020-07-28

9.3. Simulation Speed of @RISK with Microsoft Project

Applies to:
@RISK 6.x/7.x, Professional and Industrial Editions
@RISK for Project, all releases

I recently upgraded Microsoft Project 2003 to a newer version, and my simulations seem to take longer to run. Do I need to change some setting?

Recalculation speed has changed between versions of Microsoft Project, and this impacts the run times of @RISK simulations. Why? Because for each iteration of a simulation @RISK must fully recalculate Microsoft Project. Recalculations are fastest in Microsoft Project 2003 and slowest in Microsoft Project 2007. Microsoft Project 2010 is an improvement over 2007, but is still substantially slower than Microsoft Project 2003. However, Project 2010 offers many new features over Project 2003, and Project 2003 can't support @RISK 7.x. If you have large projects in which simulation run time is an issue, use the fastest possible hardware configuration.

How do Excel and Project 2013 and 2016 compare to 2010?

Benchmarking Windows programs is problematic, because there are so many variables: not only different hardware but different Windows configurations, different programs running in the background, and so forth — not to mention different @RISK models. We ran tests with 10,000 iterations of our Parameter Entry Table example from Help » Example Spreadsheets. We used @RISK 7.5.2 in 32-bit Excel and Project 2010, 2013, and 2016, on 64-bit Windows 8, with a 2.8 GHz i7 chip and 8 GB of RAM. We offer our results as anecdotal evidence; they may or may not apply to your system, or your model.
And obviously the Parameter Entry Table example is a small one, only eight tasks, so any real project is going to take significantly longer to run.

With those caveats, here is what we found in that example with that system:

Average Times in Seconds

Excel and Project versions    2010 (32-bit)    2013/2016 (32-bit)
Standard Engine               255 s            199 s
Accelerated Engine            34 s             56 s

Multiple runs of one Excel/Project version showed little variation. Differences between Excel/Project 2013 and Excel/Project 2016 were not significant. The accelerated engine is available when @RISK distributions and outputs are in just a few commonly used fields of Project; the standard engine allows distributions and outputs in any Project field.

How does our test system compare to your system? Almost everyone has 64-bit Windows. There's more of a split between 32-bit and 64-bit Office, but the majority have 32-bit Office. Switching to 64-bit Office will not increase simulation speed for most @RISK models.

See also: For Faster Simulations

Last edited: 2018-03-05

9.4. CPUs Used by @RISK 7.x

Applies to: @RISK 7.x

(If you have an older @RISK, see CPUs Used by @RISK 4.x–6.x.)

How many CPUs (cores or processors) do @RISK and RISKOptimizer use?

When you click Start Simulation, by default @RISK estimates how long a simulation will take and uses one or more CPUs to complete the simulation as quickly as possible.

• @RISK 7.x Industrial will use anywhere between one core and all the cores in your computer, depending on its estimate of the tradeoff between the overhead of starting and managing multiple copies of Excel and the savings from parallel processing.

• @RISK 7.x Standard and Professional will use no more than two cores, no matter how many you have.

• Simulations and optimizations with RISKOptimizer 7.5 use multiple cores. This lets the optimization run multiple simulations in parallel, to make progress faster.

• Simulations and optimizations with RISKOptimizer 7.0 use only one CPU.
• Simulations with Project use only one CPU.

If for any reason you want to limit @RISK to only one core when simulating or optimizing this workbook, open Simulation Settings and, on the General tab, change Multiple CPU to Disabled.

@RISK recognizes as a "CPU" anything that Windows recognizes as a CPU. To find the number of CPUs in your computer, press Ctrl-Shift-Esc to open Task Manager, then select the Performance tab. Real CPUs should make a major improvement in the speed of large simulations, but hyperthreaded CPUs will give only a modest speed improvement. Multithreading, as opposed to multiple CPUs, is an Excel option, and you should generally turn it on in any edition of @RISK. See For Faster Simulations and Recommended Option Settings for Excel.

Can I limit the number used by @RISK, thus leaving some CPUs (cores) available for other programs? If @RISK decides to use only some cores, can I tell it to use more?

The default simulation setting of Multiple CPU — "Automatic" beginning with 7.5, "Enabled" in 7.0 — tells @RISK to decide the optimum number of CPUs. To tell @RISK to use only one CPU when simulating this workbook, go into Simulation Settings and change Multiple CPU Support to Disabled. To specify a number of CPUs greater than 1, the mechanism differs between @RISK 7.5 and @RISK 7.0.

Number of CPUs in @RISK 7.5 and newer:

Click Simulation Settings. On the General tab, look at the third setting, Multiple CPU Support. You have three options:

• The default is "Automatic" (equivalent to "Enabled" from earlier releases of @RISK). @RISK will decide how many cores to use: between 1 and the number in your computer in @RISK Industrial, or 1 or 2 cores in @RISK Professional and @RISK Standard.

• "Enabled" has a new meaning. You specify a number in the #CPUs box, and then @RISK will always create that number of copies of Excel, even if the number is greater than the number of cores on your computer.
If you use this setting, don't specify a number so high that the extra Excels bog down your computer. (Regardless of the number you specify, @RISK Professional and Standard won't create more than one "worker" Excel, for a total of two.)

• "Disabled" means that @RISK always uses just one core.

In earlier releases of @RISK, your setting for Multiple CPU Support applied only to simulations. Beginning with @RISK 7.5, it also applies to optimizations with RISKOptimizer.

The System Registry values RiskUseMultipleCores, ForceMultiCore, and NumCPU, and the Excel name _AtRisk_SimSetting_MaxCores, are no longer used in @RISK 7.5, and will be ignored if they are set.

Number of CPUs in @RISK 7.0:

On Excel's Formulas tab, click Name Manager. If the name RiskUseMultipleCores already exists, click it and click Edit; otherwise click New and enter that name. The value can be any of the following:

• A specific number of cores that you want @RISK to use. If you specify more than the computer has, @RISK will use as many as you have but won't display an error message. If you specify a number greater than 2 in @RISK Professional or Standard, @RISK will use two cores but won't display an error message.

• The keyword all.

• The keyword off (equivalent to 1).

• The keyword auto (tells @RISK to decide the optimum number of CPUs).

• An absolute cell reference with a leading equal sign, such as =$B$12. This lets you place the setting in the workbook in case you want to change it later without going through Name Manager, for instance if you're testing simulation speed with various numbers of cores.

If you define the name RiskUseMultipleCores in a workbook, it overrides the Multiple CPU setting (Enabled or Disabled) in Simulation Settings when that workbook is open. The System Registry values ForceMultiCore and NumCPU, and the Excel name _AtRisk_SimSetting_MaxCores, are no longer used in @RISK 7.0, and will be ignored if they are set.
Additional keywords: Number of cores, multiple cores, how many cores, how many CPUs

Last edited: 2017-10-05

9.5. CPUs Used by @RISK 4.x–6.x

Applies to:
@RISK 5.5, 5.7, and 6.x, all editions
@RISK 4.x and 5.0, Industrial Edition only
RISKOptimizer, releases 1.x and 5.x

(If you have @RISK 7, see CPUs Used by @RISK 7.x.)

What is the maximum number of CPUs (cores or processors) that @RISK and RISKOptimizer will use? Can I limit the number used by @RISK, thus leaving some CPUs (cores) available for other programs? Is there any other reason to limit the number of CPUs used?

@RISK recognizes as a "CPU" anything that Windows recognizes as a CPU. To find the number of CPUs in your computer, press Ctrl-Shift-Esc to open Task Manager, then select the Performance tab. Real CPUs should make a major improvement in the speed of large simulations, but hyperthreaded CPUs will give only a modest speed improvement. Multithreading, as opposed to multiple CPUs, is an Excel option, and you should generally turn it on in any edition of @RISK. See For Faster Simulations and Recommended Option Settings for Excel.

@RISK uses a heuristic to guess how long a simulation will take. If @RISK judges that the overhead of starting multiple copies of Excel would outweigh the time saved through parallel processing, it will use only one core even if you have enabled Multiple CPU. For much more about this, please see Multiple CPU — Only One CPU Runs.

With larger simulations:

• @RISK Industrial can use all CPUs that exist in your computer. You can enable or disable multiple CPUs on the first tab of the Simulation Settings dialog.

• @RISK Standard and @RISK Professional 5.5 and later can use up to two CPUs if present. You can enable or disable multiple CPUs on the first tab of the Simulation Settings dialog. (@RISK Standard and @RISK Professional 5.0 and earlier run all simulations with one CPU.)

• Simulations and optimizations with RISKOptimizer use only one CPU.
(There is significant overhead in communicating between the master CPU and the workers. In the great majority of optimizations that we have seen, this overhead would eliminate all or nearly all the savings from multiple CPUs.)

• Simulations with Project use only one CPU.

Optimum number of CPUs

Up to around four, more CPUs is almost always better. Beyond that, at some point you can actually have too many CPUs. You can reach a point where CPUs are starved for RAM and have to use virtual memory, which means relatively slow disk operations instead of fast operations in real memory. Or, if you have a lot of CPUs in a simulation, the overhead can swamp the savings and a simulation can actually take longer. To some extent, determining the optimum number is a matter of experimentation, because it depends on the size of your model, the Memory Used by @RISK Simulations, and the available RAM in your computer.

You can get an idea of an appropriate number of CPUs, based on your model and the amount of RAM in your computer. Follow the process in Memory Used by @RISK Simulations to determine how much memory your simulation needs. Take the amount of RAM in your computer, subtract what's used by Windows and other programs, and divide by the amount of RAM used in a single-CPU simulation. You don't want to use more CPUs than that, though it might still be more efficient to use fewer.

Limiting the number of CPUs used

You can do this via a System Registry key or by defining a special name in Excel. If you do both, in @RISK 6.2 or 6.3, @RISK will use the workbook name and ignore the Registry key. Either way, if your Simulation Settings have Multiple CPU set to Disabled, @RISK will use just one CPU.

Registry setting (all releases 4.x–6.x)

If you want to have @RISK Industrial use multiple CPUs, but not all the CPUs in your computer, you can do this by editing the System Registry:

1. With Excel not running, click Start » Run and enter the command REGEDIT. Click OK.

2. Navigate to the key HKEY_LOCAL_MACHINE\Software\Palisade, or HKEY_LOCAL_MACHINE\Software\WOW6432Node\Palisade in 64-bit Windows.

3. In the right-hand panel, right-click and select New » DWORD Value and type the name NumCPUs — note the "s" at the end.

4. Double-click NumCPUs, enter your desired maximum in the Value Data box, and click OK.

5. Select File » Exit to close the Registry Editor.

When you enable Multiple CPU in Simulation Settings, @RISK will not use more than the number of CPUs specified in the System Registry. (If you actually have fewer CPUs in your computer, @RISK will just use the ones it finds.)

Workbook setting (@RISK 6.2 and 6.3)

If you can't edit the System Registry or prefer not to, create a name _AtRisk_SimSetting_MaxCores.

• To create the name in Excel 2007 and later, click Formulas » Name Manager » New; in Excel 2003, Insert » Name » Define.

• The name begins with an underscore.

• The "Refers to" value must be preceded by an = sign, as shown in the illustration.

• If you actually have fewer CPUs in your computer, @RISK will just use the ones it finds.

• If you set a limit with NumCPUs in the System Registry (above), @RISK will honor a lower number specified in _AtRisk_SimSetting_MaxCores, but will ignore a higher number.

• If you have several workbooks open, and more than one of them defines this name, @RISK will use the lowest number.

Additional keywords: Number of cores, multiple cores, how many cores, how many CPUs

Last edited: 2018-10-26

9.6. Memory Used by @RISK Simulations

Applies to: @RISK for Excel 5.x–7.x

How much memory is used during a simulation?

@RISK saves the values of each output, each input (unless you have changed the default on the Sampling tab of Simulation Settings), and each cell referred to by a statistics function such as RiskMean( ) or RiskPtoX( ). The memory required is 8 bytes per value per iteration per simulation.
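The 8-bytes rule turns into a one-line estimate. Here is a minimal sketch in Python; the counts of tracked values and iterations are hypothetical placeholders, not measurements from any particular model:

```python
# Estimate of the raw simulation data @RISK stores, using the documented
# rule of 8 bytes per value per iteration per simulation.
# The example counts are hypothetical; substitute your own model's numbers.

def sim_data_bytes(tracked_values, iterations, simulations=1):
    """tracked_values = outputs + collected inputs + cells referenced
    by statistics functions such as RiskMean()."""
    return 8 * tracked_values * iterations * simulations

# Example: 200 tracked values, 50,000 iterations, one simulation
est = sim_data_bytes(200, 50_000)
print(f"{est / 2**20:.0f} MiB")  # prints "76 MiB"
```

Note that this covers only the iteration data; Excel itself, @RISK's own code, and the rest of your workbook need memory on top of this, which is why the measurement procedure below uses Task Manager rather than arithmetic alone.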
However, to avoid overflowing 32-bit Excel's limited memory space (below), @RISK pages data to disk as it needs additional memory for its own code and for data other than the iterations of simulation inputs and outputs.

To get an idea of the overall memory requirements for your simulation:

1. In Simulation Settings » General, change Multiple CPU to Disabled and run a simulation with the number of iterations unchanged.

2. When the @RISK progress window shows iterations being run, open Task Manager (Ctrl+Shift+Esc) and look in the Commit Size column to see how much memory Excel.exe is using.

□ In Windows 7, Vista, or XP, look at the Processes tab. If you don't see the Commit Size column, click View » Select Columns » Memory–Commit Size.

□ In Windows 8 or 10, look at the Details tab. If you don't see the Commit Size column, right-click any column head and click Select Columns » Commit Size.

3. You can then shut down the simulation with the "stop" button in the progress window.

When you re-enable Multiple CPU, the master CPU will use about this much memory, and each worker CPU will use somewhat less.

When I disable Smart Sensitivity Analysis, my simulation starts faster, but does it also reduce memory use?

Yes and no. After running a Smart Sensitivity Analysis, @RISK saves the results of the precedent tracing but frees the memory used for the trace, so there is no appreciable memory saving once the simulation starts. However, if your model is large and complicated enough, @RISK could run out of memory during the process of tracing precedents. In that case, turning off Smart Sensitivity Analysis will bypass precedent tracing and the associated out-of-memory condition.

I have heard that Excel has a memory limit of 2 GB. Does @RISK have such a limit?

Well, sort of. 64-bit Excels have effectively no limit to memory space. The part of @RISK that runs in the Excel process shares in this.
The part of @RISK that consists of separate executables, such as the model window and progress window, used to be subject to the 2 GB limit, but as of @RISK 7.5.2 those executables are Large Address Aware (next paragraph).

As for 32-bit Excel, it's complicated. Historically, every 32-bit process, including 32-bit Excel, was limited to 2 GB of address space. However, during 2017, updates to Excel 2013 and 2016 gave 32-bit Excel the ability to access 4 GB of memory space when running in 64-bit Windows, or 3 GB in 32-bit Windows. See Large Address Aware in Should I Install 64-bit Excel?

• Beginning with 7.5.1, the operations of @RISK that share 32-bit Excel's memory space are Large Address Aware.

• Beginning with 7.5.2, all parts of @RISK are Large Address Aware.

• All versions of @RISK will page data to disk during a simulation to avoid overrunning Excel's memory limit.

If you are using multiple processors, each Excel process has a separate memory limit, so in 32-bit Excel the overall simulation can use up to 2 GB (or 3 or 4 GB) times the number of processors. Add to that whatever is used by executables whose names start with Pal or Risk. If you want to limit the number of processors used by a simulation, please see CPUs Used by @RISK.

All the above is subject to additional constraints. Not all the RAM in your computer is available to Excel and @RISK: the operating system and other running applications need some as well. You should make certain that you've allocated enough virtual memory. On the Processes tab of Task Manager, you can see how much memory is in use by which processes.

Does @RISK take advantage of 64-bit Excel?

The great majority of simulations run just fine in 32-bit Excel and @RISK and do not see significant benefit from switching to a 64-bit platform. If your simulation generates gigabytes of data, and you have enough RAM to hold it all, you may see some benefit. Please see Should I Install 64-bit Excel? for more information.
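To see whether a multi-CPU run will fit in physical RAM, you can combine the per-process figure from Task Manager's Commit Size column with the number of Excel processes. A minimal sketch, assuming an illustrative 1.5 GB per process and 4 GB reserved for Windows and other applications (both numbers are assumptions, not measurements):

```python
# Back-of-the-envelope check of whether a multi-process simulation fits in
# physical RAM. The per-process size would come from Task Manager's Commit
# Size column; the figures below, including the 4 GB reserve for Windows
# and other applications, are illustrative assumptions.

def total_commit_gb(per_process_gb, n_processes):
    """Overall memory across the master Excel and its workers."""
    return per_process_gb * n_processes

def fits_in_ram(per_process_gb, n_processes, total_ram_gb, reserved_gb=4.0):
    """True if the Excel processes leave the reserved share of RAM free."""
    return total_commit_gb(per_process_gb, n_processes) <= total_ram_gb - reserved_gb

print(total_commit_gb(1.5, 4))   # 6.0 (GB across four Excel processes)
print(fits_in_ram(1.5, 4, 16))   # True
```

If the check fails, reduce the number of CPUs or add RAM, and remember that each 32-bit Excel process is separately capped at 2 GB (or 3 or 4 GB with Large Address Aware) regardless of how much RAM the machine has.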
See also: "Out of Memory" and "Not enough memory to run simulation" for techniques to reduce the memory used.

Last edited: 2018-02-12

9.7. GPU Computations to Speed up @RISK?

Applies to: @RISK 5.x–7.x

Does @RISK take advantage of CUDA functionality, using the GPU (graphics processing unit) in addition to the main CPU to increase simulation speed? As I understand it, GPUs are very good at parallel processing, which is what is needed to increase simulation speed.

CUDA is one type of GPGPU (general-purpose computation on graphics processing units), and is specific to NVIDIA GPUs; AMD GPUs use OpenCL, a cross-vendor open standard. In a typical simulation, most of the compute power is used not by @RISK but by Excel, in recalculating all open workbooks for each iteration. And as of this writing (July 2017), Excel versions up through Excel 2016 don't use GPGPU. @RISK 5.x–7.x do use multiple threads, to try to use all available CPU resources, but not GPGPU. There are few calculations within @RISK itself that could benefit from GPGPU. We will continue to re-evaluate this issue as technology advances.

Last edited: 2017-07-28

10. VBA Programming with @RISK

10.1. Automating Time Series Fitting in VBA

Applies to: @RISK 6.x/7.x, Industrial Edition

What are the VBA objects and methods for Time Series fitting? I'd like to automate my fitting process.

Unfortunately, there is no VBA interface in the @RISK XDK for Time Series. This may be added in a future release, but for now Time Series fitting can be done only through the user interface.

Last edited: 2016-10-03

10.2. Setting References in Visual Basic

Applies to:
@RISK 5.x–8.x (Professional and Industrial Editions)
Evolver 5.x–8.x
NeuralTools 5.x–8.x
PrecisionTree 5.x–8.x
StatTools 5.x–8.x

You can set up VBA macros (macros written in Visual Basic for Applications) to automate these programs or to access their object model without depending on worksheet functions.
To do this, you must tell the Visual Basic editor where to find the definitions of objects; this is known as setting references. Therefore, if your VBA code needs to access objects, properties, and methods that are part of Palisade software, you must set references to one version of whichever Palisade tool contains the objects you need. Typically this comes up when you want to control @RISK or another application, for instance by setting simulation options, running a simulation, or fitting a distribution. On the other hand, if you just want @RISK to execute your code before or after every iteration or simulation, and your code doesn't directly access any @RISK objects, you don't need to set references in VBA.

To set references:

1. In the Visual Basic editor, click Tools » References.

2. Remove check marks from any outdated libraries, such as AtRisk, RiskXL, Risk, and Risk5.

3. In the References window, select the appropriate item or items for your program and release number, as listed below. (You will see many Palisade entries. Select the ones listed in this article, and no others. Select only one version; you cannot have both versions 6 and 7 checked, for instance.)

4. Click OK to close the References window.

References are stored in the workbook when you click Save. When you double-click a workbook that has references set, or open such a workbook through File » Open in Excel, the indicated Palisade software will open automatically if it's not already running.

Release 8.x (using "8.x" as an abbreviation for 8.0, 8.1, or 8.5 as appropriate):

If you share a workbook with someone who has a different 8.x release number, the reference will adjust automatically on that person's computer. If they edit the workbook and send it back to you, the reference will again adjust automatically to match your computer. This works within 8.x versions, but between 5.x, 6.x, 7.x, and 8.x you must change the reference manually.

• For @RISK 8.x: both RiskXLA and Palisade_RISK_XDK8.
• For Evolver 8.x: both EvolverXLA and Palisade Evolver 8.x for Excel Developer Kit. • For NeuralTools 8.x: NeuralTools only (without "Palisade"). • For PrecisionTree 8.x: both PtreeXLA and Palisade PrecisionTree 8.x Object Library. • For StatTools 8.x: Palisade StatTools 8.x Object Library only. Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular. In @RISK, to access the Automation Guide, click Resources » Automating @RISK (XDK) » XDK Automation Guide. In other applications to access the Automation Guide, click Help » Developer Kit (XDK) » Automation Guide. Release 7.x (using "7.x" as an abbreviation for 7.0, 7.5, or 7.6 as appropriate): If you share a workbook with someone who has a different 7.x release number, the reference will adjust automatically on that person's computer. If they edit the workbook and send it back to you, the reference will again adjust automatically to match your computer. This works within 7.x versions, but between 5.x, 6.x, and 7.x you must change the reference manually. • For @RISK 7.x: both RiskXLA and Palisade @RISK 7.x for Excel Object Library. If you have RISK Industrial and you want to use the RISKOptimizer part of the object model, select Palisade RISKOptimizer 7.x for Excel Developer Kit also. • For Evolver 7.x: both EvolverXLA and Palisade Evolver 7.x for Excel Developer Kit. • For NeuralTools 7.x: NeuralTools only (without "Palisade"). • For PrecisionTree 7.x: both PtreeXLA and Palisade PrecisionTree 7.x Object Library. • For StatTools 7.x: Palisade StatTools 7.x Object Library only. Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular. 
To access an Automation Guide, click Help » Developer Kit (XDK) » Automation Guide. Release 6.x (using "6.x" as an abbreviation for 6.0, 6.1, 6.2, or 6.3 as appropriate): If you share a workbook with someone who has a different 6.x release number, the reference will adjust automatically on that person's computer. If they edit the workbook and send it back to you, the reference will again adjust automatically to match your computer. This works within 6.x versions, but between 5.x, 6.x, and 7.x you must change the reference manually. • For @RISK 6.x: both RiskXLA and Palisade @RISK 6.x for Excel Object Library. If you want to use the RISKOptimizer part of the object model, select Palisade RISKOptimizer 6.x for Excel Developer Kit also. • For Evolver 6.x: both EvolverXLA and Palisade Evolver 6.x for Excel Developer Kit. • For NeuralTools 6.x: NeuralTools (without "Palisade"). • For PrecisionTree 6.x: both PtreeXLA and Palisade PrecisionTree 6.x Object Library. • For StatTools 6.x: Palisade StatTools 6.x Object Library. Beginning with release 6.2, Automation Guides are included with the Professional and Industrial Editions of @RISK, Evolver, NeuralTools, and PrecisionTree. The Automation Guides introduce you to VBA programming in general and automating Palisade software in particular. To access an Automation Guide, click Help » Developer Kit (XDK) » Automation Guide. Release 5.x (using "5.x" as an abbreviation for 5.0, 5.5, or 5.7 as appropriate): • For @RISK 5.x: Palisade @RISK 5.x for Excel Object Library • For RISKOptimizer: Palisade RISKOptimizer 5.x for Excel Developer Kit. • For Evolver 5.x: Palisade Evolver 5.x for Excel Developer Kit. • For NeuralTools 5.5 and 5.7: NeuralTools (without "Palisade"). (There was no NeuralTools 5.0 automation interface.) • For PrecisionTree 5.x: Palisade PrecisionTree 5.x Object Library. • For StatTools 5.x: Palisade StatTools 5.x Object Library. See also: Using VBA to Change References to @RISK. last edited: 2017-02-08 10.3. 
Using VBA to Change References to @RISK Applies to: @RISK 5.x–7.x, Professional and Industrial Editions Evolver 5.x–7.x NeuralTools 5.x–7.x PrecisionTree 5.x–7.x StatTools 5.x–7.x We wrote a bunch of automation code for @RISK (Evolver, NeuralTools, PrecisionTree, or StatTools) release 5 or 6. Now we've upgraded to release 7, and all the references in all our workbooks need to be updated. Is there any kind of automated solution, or do we have to make a lot of mouse clicks in every single workbook? For the problem at hand, you could write a macro to delete the obsolete references and add the new ones; see the references below. You could put that macro in a separate workbook, then have it available for people to run when references in @RISK model workbooks need to be updated. The problem is that there are two prerequisites for executing such code: you need to tick "Trust access to the VBA project object model" in Excel's Trust Center settings, and you need to set a reference to Microsoft Visual Basic for Applications Extensibility. These can't be done programmatically and must be done by hand. There's significant risk with "Trust access to the VBA project object model". That lets workbook macro code do pretty much anything, and if you unknowingly download and open a malicious workbook you'll have a serious security breach on your hands. (See also Enable or disable macros in Office documents.) This kind of risk is one reason why we don't supply automated code to adjust the references. Are there any programming practices we can follow so that we're not in this position again, when we upgrade from version 7 to version 8? We have a couple of suggestions. One possibility is late binding, where you don't have references set in the workbooks but instead connect to the @RISK (Evolver, ...) object model at run time.
While this preserves maximum flexibility, you lose the benefit of Intellisense (auto-complete of properties, tool tips for function arguments, and so forth) during code development. To learn more about late binding, in your Palisade software click Help » Developer Kit (XDK) » Automation Guide. Look near the end for the topic "Demand Loading @RISK" ("Demand Loading Evolver", ...). If the workbooks have mostly the same macro code, another possibility is to move your macros to one workbook. In effect, you write your own add-in to @RISK. Then when references have to be updated you can do it only once, and redistribute the updated workbook. If you have enough commonalities, this will also reduce your maintenance burden — if any kind of problem is discovered in macro code, it can be fixed once, with no need to try to find all the workbooks that contain the problem code. Last edited: 2015-12-29 10.4. Sampling @RISK Distributions in VBA Code Applies to: @RISK 5.0 and newer, Professional and Industrial Editions How can I generate a random sample within a VBA macro or function? Use the Sample method with the Risk object. Here's an example: x = Risk.Sample("RiskBinomial(10,0.2)") The Sample method normally returns a numeric value, but if there's an error in the definition of the distribution then the method returns an error variant in the usual way for Excel. The sampled values are not the same numbers you would see from that function in a worksheet. They always use the Monte Carlo method, as opposed to Latin Hypercube; RiskCorrmat and RiskSeed are ignored. If you want to access simulation data, use members of the Risk.Simulation.Results object after the simulation finishes. To call @RISK functions from Visual Basic, you must set up a reference from Visual Basic Editor to @RISK via Tools » References in the editor. Setting References in Visual Basic gives the appropriate reference(s) and how to set them. 
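Putting the pieces of this article together, a minimal macro might look like the following sketch. It assumes the @RISK reference has already been set as described above; the sheet name "My Sheet", the loop bound, and the output column are purely illustrative choices, not part of the @RISK documentation:

```vb
Sub FillSamples()
    ' Draw 100 Monte Carlo samples from a binomial distribution whose
    ' parameters sit in cells A1 and B1 of "My Sheet", and write them
    ' into column D of the same sheet.
    Dim i As Long
    Dim x As Variant   ' Variant, because Risk.Sample can return an error variant
    For i = 1 To 100
        x = Risk.Sample("RiskBinomial('My Sheet'!A1,'My Sheet'!B1)")
        Worksheets("My Sheet").Cells(i, 4).Value = x
    Next i
End Sub
```

Remember that these samples always use the Monte Carlo method and ignore RiskCorrmat and RiskSeed, so a loop like this is suited to quick spot checks rather than reproducing simulation results.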
Please see the XDK or Developer Kit manual for details on the objects and methods mentioned in this article, as well as alternative methods. (Beginning with @RISK 6.2, start with the Automation Guide for a high-level introduction: Help » Developer Kit (XDK) » Automation Guide.) Am I restricted to just numeric arguments, or can I use cell references? Yes, you can use cell references: x = Risk.Sample("RiskBinomial(A1,B1)") The cell references must be in A1 format, not R1C1, and they are taken to refer to the active worksheet. If you don't want to worry about which sheet is active, specify the worksheet or use defined names: x = Risk.Sample("RiskBinomial('My Sheet'!A1,'My Sheet'!B1)") x = Risk.Sample("RiskBinomial(BinomialN,BinomialP)") I want to sample a RiskDiscrete with a long list of x and p. How can I use cell references? It follows the pattern of Cell References in Distributions. Here's an example: x = Risk.Sample("RiskDiscrete('My Sheet'!A1:A10,'My Sheet'!B1:B10)") As an alternative, in the worksheet you can define names for the arrays, and then use the names in the Risk.Sample function: x = Risk.Sample("RiskDiscrete(Xarray,Parray)") Can I write the sampled value to my workbook? Yes, just use Excel's Value property. You can apply it to a specific cell or to a defined range name: Range("B1").Value = Risk.Sample("RiskBinomial(A1,A2)") Range("myKeyLocation").Value = Risk.Sample("RiskBinomial(A1,A2)") If the Risk.Sample method returns an error such as #VALUE, that will be written to the worksheet. This will not register as a VBA error that interrupts execution of your macro. See also: • Generating Values from a Distribution includes several methods for generating sample values directly in an Excel sheet, without using Visual Basic. • Accessing Simulation Data in VBA Code explains how to use VBA to get the data from a particular input or output after a simulation. Last edited: 2018-02-28 10.5.
Automating @RISK Simulations in VBA Applies to: @RISK 6.x/7.x, Professional and Industrial Editions (@RISK Standard Edition does not support automation.) How can I write Visual Basic for Applications code to automate several independent simulations? I want to simulate the files one at a time, not all at once. This is easy to do with the @RISK and Excel object models. Just open the first workbook, call Risk.Simulation.Start, and close the workbook. An example is attached. It is set up for @RISK 7.x, but you can change the references in the Visual Basic Editor and run it in @RISK 6.x as well. The Automation Guide that was introduced in 6.2.0 is a good introduction to automating @RISK with VBA, but it's not intended to document the complete object model. For methods and properties not mentioned in the Automation Guide, consult the XDK Reference, which is also found in the @RISK help menu. Last edited: 2015-12-18 10.6. Accessing Simulation Data in VBA Code Applies to: @RISK 5.x–7.x Can I access @RISK worksheet functions in Visual Basic? To call @RISK functions from Visual Basic, you must set up a reference from Visual Basic Editor to @RISK via Tools » References in the editor. Please see Setting References in Visual Basic for the appropriate reference and how to set it. Please see the XDK or Developer Kit manual for details on the methods mentioned here, as well as alternative methods. (Beginning with @RISK 6.2, start with the Automation Guide for a high-level introduction: Help » Developer Kit (XDK) » Automation Guide.) How can I retrieve simulation data, in a way similar to the RiskData( ) worksheet function? Use the GetSampleData method to fill an array with the simulated data. Here's an example: numSamples = _ Risk.Simulation.Results.GetSimulatedOutput("MyOutput"). _ GetSampleData(sampleData, True) This fills the VBA array sampleData with all the data from the named output, and returns the number of samples.
(Although this example shows getting data from an output, you can also use GetSampleData with GetSimulatedInput.) How can I get the statistics of a simulated input or output, such as a simulated mean or percentile? Use Mean, Percentile, or a similar property of the RiskSimulatedResult object. Here's an example: MsgBox "The mean of MyOutput is " & _ Risk.Simulation.Results.GetSimulatedOutput("MyOutput").Mean (Again, you could also use this technique with GetSimulatedInput to get statistics of a simulated input.) See also: Sampling @RISK Distributions in VBA Code to get random numbers from an @RISK distribution without running a simulation. Last edited: 2018-02-28 10.7. Alternatives to =IF for Picking Distributions Applies to: @RISK 6.2 and newer, Professional and Industrial Editions I want to set up my worksheet so that I can use different distributions depending on a code. I've got a bunch of formulas like this: =IF(C47=1,RiskBinomial(D47,E47), IF(C47=2,RiskPert(F47,G47,H47), IF(C47=3,RiskLognorm(I47,J47), RiskTriang(K47,L47,M47)))) Is there any reason not to do it this way? Is there an alternative that might be more efficient? There are several reasons why IFs in the worksheet are not the best way to model a choice of distribution. You'll have spurious entries in your Model Window, your Simulation Data window and report, etc. You'll also see error values for each distribution in all the iterations where it's not selected. Also, having four times as many distributions will definitely slow down your simulation, but whether it will slow it down by enough to matter depends on how many distributions there are, what the rest of your model looks like, and how many iterations you've chosen. A quick-and-dirty possibility is to wrap such formulas inside a RiskMakeInput function. It's quick to do, though it does add another layer and it doesn't address the efficiency issue. But at least it gets rid of the spurious data collection.
For the formula above, a RiskMakeInput would look like this: =RiskMakeInput( IF(C47=1,RiskBinomial(D47,E47), IF(C47=2,RiskPert(F47,G47,H47), IF(C47=3,RiskLognorm(I47,J47), RiskTriang(K47,L47,M47)))) ) RiskMakeInput is very powerful and has many uses. For some examples, see Combining Inputs in a Sensitivity Tornado; Excluding an Input from the Sensitivity Tornado; Same Input Appears Twice in Tornado Graph. See also: All Articles about RiskMakeInput. Probably a cleaner approach is to move that logic into a macro, where it is executed once only. The attached example contains such a macro, linked from a button in the worksheet. For the sake of illustration, this worksheet is set up with a choice of seven distributions: triangular (RiskTriang and RiskTrigen), Pert (RiskPert), uniform (RiskUniform), normal (RiskNormal), log-normal (RiskLognorm), and Johnson (RiskJohnsonMoments). In column K, you specify which distribution to use for each risk. The parameters are in columns B through J, and cells N2:O2 give the row numbers to be handled by the macro. When you click the worksheet button, the macro looks at each entry in column K and writes the appropriate distribution and parameters in column L. The macro includes a RiskName property function referring to the risk name given in column A. If any of the cells in column K contain incorrect distribution names, the macro displays an error message; otherwise, it runs a simulation. If you want to run further simulations without changing distributions, click the Start Simulation button in @RISK or click the button in the worksheet, but you must use the worksheet button after changing any distributions in column K. Last edited: 2018-02-20 10.8. Distribution Functions as Arguments to User-Written Functions Applies to: @RISK 5.x and newer I have written a function in Visual Basic code, and I use that function in formulas in my Excel sheet.
When a simulation is running the function seems to work, but when a simulation is not running my worksheet displays #VALUE. What is wrong? In @RISK 5.0 and above, during a simulation the @RISK distribution functions return a single number of type Double. But when a simulation isn't running, the @RISK distribution functions return an array, of which the first element is the random number drawn by the function. (This change from 4.x was made to support the RiskTheo statistics functions, among other reasons.) Therefore, your own function needs to declare the argument as a Variant, not a Double, and it needs to test the type of the argument at run time. Please see the accompanying example. Last edited: 2015-08-12
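The fix described in this article can be sketched as follows. The function name and the doubling operation are placeholders invented for illustration; the key point, taken from the article, is declaring the argument as Variant and testing its type at run time:

```vb
Function DoubleIt(x As Variant) As Variant
    ' During a simulation, @RISK distribution functions return a Double;
    ' outside a simulation they return an array whose first element is
    ' the drawn random number. Handle both cases.
    If IsArray(x) Then
        DoubleIt = 2 * x(LBound(x))   ' first element of the returned array
    Else
        DoubleIt = 2 * x              ' plain Double during a simulation
    End If
End Function
```

Using LBound sidesteps the question of whether the returned array is zero- or one-based.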
Solar Radio Astronomy - CESRA
Acceleration of electrons in the solar wind by Langmuir waves produced by a decay cascade, by Catherine Krafft and Alexander Volokitin
It was recently reported that a significant part of the Langmuir waveforms observed by the STEREO satellite (Graham and Cairns, 2013) during type III solar radio bursts are likely consistent with the occurrence of electrostatic decay instabilities, when a Langmuir wave \(\mathcal{L}\) resonantly interacts with another Langmuir wave \(\mathcal{L}^{\prime}\) and an ion sound wave \(\mathcal{S}^{\prime}\) through the decay channel \(\mathcal{L}\rightarrow\mathcal{L}^{\prime}+\mathcal{S}^{\prime}\). Usually such wave-wave interactions occur in regions of the solar wind where the presence of electron beams can drive Langmuir turbulence at levels allowing waves \(\mathcal{L}\) to decay. Moreover, such solar wind plasmas can present long-wavelength randomly fluctuating density inhomogeneities or monotonic density gradients which can modify significantly the development of such resonant instabilities (Ryutov 1969; Kontar and Pecseli, 2002; Krafft et al. 2015). If some conditions are met, the waves can encounter a second decay cascade (SDC) according to \(\mathcal{L}^{\prime}\rightarrow\mathcal{L}^{\prime\prime}+\mathcal{S}^{\prime\prime}\). Analytical estimates and observations based on numerical simulations show that the Langmuir waves \(\mathcal{L}^{\prime\prime}\) produced by this SDC can accelerate beam particles up to velocities and kinetic energies exceeding respectively two times the beam drift velocity \(v_{b}\) and half the initial beam kinetic energy (Krafft and Volokitin, 2016).
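For reference, the matching conditions behind each decay step, and the standard weak-turbulence estimate of the resulting wavenumber shift, can be written as follows (this is textbook weak-turbulence material, not reproduced from the paper's own equations):

```latex
% Frequency and wavenumber matching for L -> L' + S'
\omega_{L} = \omega_{L'} + \omega_{S'}, \qquad k_{L} = k_{L'} + k_{S'},
% with the Langmuir and ion-sound dispersion relations
\omega_{L} \simeq \omega_{p}\left(1 + \tfrac{3}{2}k^{2}\lambda_{D}^{2}\right),
\qquad \omega_{S} \simeq |k_{S}|\, c_{s}.
% For a backscattered daughter wave (k_{L'} < 0) the matching gives
k_{L'} \simeq -k_{L} + \Delta k, \qquad
\Delta k \simeq \frac{2}{3}\,\frac{c_{s}}{v_{T}\,\lambda_{D}},
```

so each cascade lowers \(|k|\) and raises the phase velocity \(\omega_{p}/|k|\), which is why the \(\mathcal{L}^{\prime\prime}\) waves can resonate with electrons faster than the beam.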
Moreover, this process can be particularly efficient if beforehand scattering effects of waves on the background plasma inhomogeneities have already accelerated a sufficient amount of beam electrons up to the velocity range where the phase velocities of the \(\mathcal{L}^{\prime\prime}\) waves are lying. The simulation results are provided by a 1D numerical code based on a Hamiltonian model (Krafft et al. 2013) which allows us to derive all the self-consistent equations governing the dynamics of the Langmuir turbulence, the beam particles and the density fluctuations \(\delta n\). The model couples modified Zakharov equations including the beam current contribution to the Newton equations governing the motion of the beam electrons. Physical parameters typical of solar wind plasmas with average levels of density fluctuations reaching up to \(\Delta n=\langle (\delta n/n_0)^2 \rangle^{1/2}=5\%\) are used (\(n_{0}\) is the plasma average density) and, in particular, of type III solar bursts and foreshock regions (e.g. Ergun et al., 1998). Initially, \(1024-2048\) waves and around \(300000\) resonant beam particles are distributed within a periodic simulation box of size \(L=10000-30000\lambda_{D}\). Depending on the simulations, the space and time resolutions are around \(2-10\lambda_{D}\) and \(0.01-0.02\omega_{p}^{-1}\), respectively; \(\lambda_{D}\) and \(\omega_{p}\) are the electron Debye length and plasma frequency. Note that the variables used below are normalized according to \(\delta n/n_{0}\), \(\omega_{p}t\), \(k\lambda_{D}\), \(v/v_{T}\) and \(\left\vert E\right\vert^{2}/4\pi n_{0}T_{e}\); \(E\left(z,t\right)=\sum_{k}E_{k}(t)e^{ikz}\) is the slowly varying field envelope and \(E_{k}\) is the Fourier component of \(E\); \(z\) is the space coordinate and \(k\) is the wave vector; \(T_{e}\) is the electron temperature and \(v_{T}\) the electron thermal velocity; \(n_{b}\ll n_{0}\) is the beam density. Figure 1.
(Left panel) Space and time variations of the normalized Langmuir energy density \(\left\vert E\right\vert^{2}\). (Right panel) Variations with time and phase velocity \(\omega_{p}/k\) of the Fourier component \(\left\vert E_{k}\right\vert\) of the envelope of the Langmuir electric field. Main parameters are the following: \(n_{b}/n_{0}=5\times10^{-5}\), \(v_{b}/v_{T}=14\), \(\Delta n\). The evolution of the Langmuir turbulence is shown in Fig. 1, where the normalized Langmuir energy density \(\left\vert E\right\vert^{2}\) is presented as a function of the normalized time \(\omega_{p}t\) and space coordinate \(z/\lambda_{D}\). One observes that the Langmuir packets propagate with the group velocity \(v_{g}\) until \(\omega_{p}t\simeq50000\), where the Langmuir wave decay \(\mathcal{L}\rightarrow\mathcal{L}^{\prime}+\mathcal{S}^{\prime}\) starts (see e.g. near \(z/\lambda_{D}\simeq7500\)). Backscattered Langmuir waves \(\mathcal{L}^{\prime}\) with wavenumber \(k_{L^{\prime}}<0\) propagating with the inverse group velocity \(-v_{g}\) as well as ion acoustic waves \(\mathcal{S}^{\prime}\) with \(k_{S^{\prime}}>0\) appear as clear peaks in the corresponding spectra with expected values (not shown here). Later, near \(\omega_{p}t\simeq70000\), second cascades of the decay process, i.e. \(\mathcal{L}^{\prime}\rightarrow\mathcal{L}^{\prime\prime}+\mathcal{S}^{\prime\prime}\), occur near \(z/\lambda_{D}\simeq1500\) and \(z/\lambda_{D}\simeq6000\), producing Langmuir waves \(\mathcal{L}^{\prime\prime}\) with \(k_{L^{\prime\prime}}>0\) propagating with \(v_{g}>0\) and ion sound waves \(\mathcal{S}^{\prime\prime}\) with \(k_{S^{\prime\prime}}<0\). The right panel of Fig. 1 shows the distribution of the Fourier component \(\vert E_{k}\vert\) along the time and the wave phase velocity \(v_{\varphi}\simeq\omega_p/k\).
One observes that during a first stage the distribution is broadened around \(v_{b}=14v_{T}\) (\(v_{b}\) is the beam drift velocity), indicating that some part of the electric field intensity excited by the beam instability at \(v<v_{b}\) is transported to phase velocities higher than \(v_{b}\), as a consequence of the scattering of the beam-driven Langmuir waves on the density inhomogeneities. At later times, Langmuir energy is carried by waves with \(20v_{T}\lesssim v_{\varphi}\lesssim25v_{T}\) due to the SDC, as well as by waves with phase velocities \(u_{f}\lesssim v_{\varphi}<v_{b}\) above the beam front decelerating at speed \(u_{f}\), where the slope of \(f(v)\) remains positive during all the time evolution, even if asymptotically it tends to a quasi-plateau. In the velocity region \(v\gtrsim v_{b}\) where the beam velocity distribution presents a negative slope during all the time evolution as well as a significant density of electrons at the time of occurrence of the SDC, the waves \(\mathcal{L}^{\prime\prime}\) which have phase velocities \(v_{\varphi }\gtrsim v_{b}\) can damp and release part of their energy to particles and accelerate them. This process can be observed in Fig. 2 (lower panel), where the part of the beam velocity distribution where \(v>v_{b}\), so-called \(f_{t}(v)\), is presented for two times after the occurrence of the first decay cascade: \(\omega_{p}t_{1}\simeq59601\) and \(\omega_{p}t_{2}\simeq126720\). At time \(t_{1}\) the scattering effects are fulfilled since at least \(40000\omega_{p}^{-1}\) whereas the first cascade of the decay process is already almost saturated; one can see that a population of accelerated particles already exists, as one of the consequences of the scattering of Langmuir waves on the density inhomogeneities, of the random shift of the wave-particle resonance condition \(kv\simeq\omega_{p}\) and of the transport of wave energy to smaller \(k\). But at time \(t_{2},\) i.e. 
around \(66000\omega_{p}^{-1}\) after \(t_{1}\), \(f_{t}(v)\) is significantly modified: the electron density is decreased in the range \(v_{b}\lesssim v\lesssim18v_{T}\) whereas increased for \(v\gtrsim18v_{T}\), indicating that some particles have been accelerated; this can be explained taking into account that the SDC has started near \(\omega_{p}t\simeq70000\). At times above \(t_{2}\), \(f_{t}(v)\) stabilizes and presents a monotonic decreasing negative slope: the SDC process is saturated and \(f_{t}(v)\) evolves only slightly and slowly. Figure 2. (Upper panel) Time variation of the normalized densities of the accelerated beam electrons in various velocity ranges: \(n_{ac}/n_{b}=n(v>v_{ac})/n_{b}\) (upper solid line), \(n(v_{b}<v<v_{ac})/n_{b}\) (dashed line), \(n(v_{ac}<v<v_{ac}+2)/n_{b}\) (solid line 1), \(n(v_{ac}+2<v<v_{ac}+4)/n_{b}\) (solid line 2), \(n(v_{ac}+4<v<v_{ac}+6)/n_{b}\) (solid line 3), and \(n(v_{ac}+6<v<v_{ac}+10)/n_{b}\) (solid line 4). (Lower panel) The four above mentioned ranges of velocities are shown, as well as the normalized values of \(v_{b}\) and \(v_{ac}\), the initial \(f_{t}(v)\) (in dashed lines), and the distribution \(f_{t}(v)\) at two different times during the beam relaxation (solid lines); \(v\) is the velocity normalized by \(v_{T}\); \(v_{ac}\simeq15.7\). Parameters are the same as in Fig. 1. Let us illustrate how the normalized density \(n(v)/n_{b}\) of the beam electrons with velocities \(v\) lying within different ranges varies with time during the processes of scattering and of SDC. Fig. 2 (upper panel) shows \(n_{ac}/n_b=n(v>v_{ac})/n_b\) together with \(n(v_b<v<v_{ac})\), \(n(v_{ac}<v<v_{ac}+2)\), \(n(v_{ac}+2<v<v_{ac}+4)\), \(n(v_{ac}+4<v<v_{ac}+6)\) and \(n(v_{ac}+6<v<v_{ac}+10)\). Note that here, for simplicity purposes, \(v\) and \(v_{b}\) represent velocities normalized by \(v_{T}\).
One observes that: (i) Up to the time when the scattering process is saturated (\(\omega_{p}t\simeq20000\)), the velocities of almost all accelerated particles satisfy \(v_{ac}\lesssim v\lesssim v_{ac}+2\) (region labeled by “1” in the bottom panel of Fig. 2), as they have no time to reach higher velocities; (ii) These electrons are coming mostly from the velocity range \(v_{b}\lesssim v\lesssim v_{ac}\) which also feeds the distribution of decelerated electrons; (iii) When the scattering process is saturated (\(\omega_{p}t\gtrsim20000\)), some electrons present higher velocities (see the growth of the curve 2), due to the fact that scattered waves exist with phase velocities \(v_{ac}+2\lesssim v_{\varphi}\lesssim v_{ac}+4\) (i.e. \(17.5\lesssim v\lesssim19.5\), that is the region labeled by “2”; see also the lower panel of Fig. 1); (iv) When the second decay cascade starts (\(\omega_{p}t\simeq70000\)), the number of electrons with velocities \(v_{ac}<v<v_{ac}+2\) significantly reduces (curve 1) as well as that of electrons with \(v_{b}<v<v_{ac}\), which continues to decrease (dashed curve), meanwhile the densities of electrons with higher velocities significantly increase (curves 3 and 4 corresponding to velocity intervals “3” and “4”). This shows that the presence of a non negligible amount of accelerated electrons up to \((2-3)v_{b}\) is due to the SDC process. The simulation results show that electron acceleration observed at large times results from damping of Langmuir waves \(\mathcal{L}^{\prime\prime}\) coming from the second decay cascade. It is useful to discuss such observations on the basis of available theoretical arguments and, in particular, the competition between the various physical processes at work during the full time evolution.
The first one consists in the transfer of Langmuir wave energy to smaller \(k\)-scales due to the interactions of these waves with the plasma, that is, their scattering on the fluctuating density inhomogeneities (time scale \(\tau_{s}\)). Second, resonant wave-particle interactions govern the beam instability and the wave growth which can be described, in the frame of the weak turbulence theory for homogeneous plasmas, by quasilinear diffusion and spectral saturation (time scale \(\tau_{D}\)). Third, nonlinear wave-wave interactions occur in the form of resonant three-wave decay (time scale \(\tau_{2}\)), for which some analytical estimates can be provided by the turbulence theory; however, this concerns only limited aspects, as we know that decay is significantly affected by the presence of inhomogeneities constraining the interactions to be localized. The time ordering necessary for an efficient acceleration of electrons by the \(\mathcal{L}^{\prime\prime}\) waves has been determined. The conditions \(\tau_{s}<\tau_{D}\) and \(\tau_{2}\lesssim\tau_{D}\), which fit with the theoretical estimates (Krafft and Volokitin, 2016) and the simulation results, require that the scattering processes are already saturated when the diffusion of the beam particles due to the waves \(\mathcal{L}^{\prime\prime}\) takes place (\(\tau_{s}<\tau_{D}\)), which is in turn evolving on a time scale of the order or larger than the duration of the energy transfer from the waves \(\mathcal{L}^{\prime}\) to the waves \(\mathcal{L}^{\prime\prime}\) (\(\tau_{2}\lesssim\tau_{D}\)).
On the basis of these relations and the resonance conditions between the beam electrons and the Langmuir waves \(\mathcal{L}^{\prime\prime}\), one has determined that, for solar wind conditions typical of the type III solar bursts’ source regions and the Earth foreshock, electron beams with drift velocities in the range \(8v_{T}\lesssim v_{b}\lesssim35v_{T}\) can be the source of the acceleration process described here, by exciting Langmuir waves which in turn experience decay cascades and Landau damping. Numerical simulation results and analytical estimates have shown that the process of electron acceleration by daughter Langmuir waves \(\mathcal{L}^{\prime\prime}\) coming from the second cascade \(\mathcal{L}^{\prime}\rightarrow\mathcal{L}^{\prime\prime}+\mathcal{S}^{\prime\prime}\) of the electrostatic decay \(\mathcal{L}\rightarrow\mathcal{L}^{\prime}+\mathcal{S}^{\prime}\) of Langmuir waves \(\mathcal{L}\) driven by an electron beam can occur at a significant level in solar wind regions where density fluctuations may exist, if the beam drift velocities \(v_{b}\) satisfy roughly the condition \(8v_{T}\lesssim v_{b}\lesssim35v_{T}\). This range corresponds to the drift velocities of the beams actually existing in the solar wind regions of the foreshock and the type III solar bursts’ sources. Moreover, such process can be particularly efficient if beforehand scattering effects of Langmuir waves on the background plasma density inhomogeneities have already accelerated a sufficient amount of electrons. Then the Langmuir waves \(\mathcal{L}^{\prime\prime}\) produced by the second decay cascade can accelerate again these beam particles up to velocities and kinetic energies exceeding respectively two times the beam drift velocity \(v_{b}\) and half the initial beam energy. Note that the two acceleration processes at work, i.e.
due to the scattering of waves on the density inhomogeneities and to the second decay cascade, do not occur in the same range of time, as the first one is already saturated when the rise of the second one becomes significant. Additional info This nugget is based on the paper Krafft, C., & Volokitin, A. (2016), Electron Acceleration by Langmuir Waves Produced by a Decay Cascade, The Astrophysical Journal, 821(2), 99. DOI: 10.3847/0004-637X/821/2/99 Corresponding author: Catherine.krafft@lpp.polytechnique.fr This work was granted access to the HPC resource of IDRIS under the allocation 2013-i2013057017 made by GENCI. This work has been done within the LABEX Plas@par project, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme “Investissements d’avenir” under the reference ANR-11-IDEX-0004-02. C.K. acknowledges the “Programme National Soleil Terre” (PNST) and the Centre National d’Etudes Spatiales (CNES, France). Ergun, R.E., Larson, D., Lin, R.P., et al.: 1998, ApJ, 503, 435 Graham, D.B., and Cairns, I.H.: 2013, J. Geophys. Res., 118, 3968 Kontar, E.P., Pecseli, H.L.: 2002, Phys. Rev. E, 65, 066408 Krafft, C., Volokitin, A.S., Krasnoselskikh, V.V.: 2015, ApJ, 809, 176 Krafft, C., Volokitin, A.S., Krasnoselskikh, V.V.: 2013, ApJ, 778, 111 Ryutov, D.D.: 1969, Sov. Phys. JETP, 30, 131
Mastering Physics with Nickzom Academy Nickzom Academy offers an excellent platform for mastering physics. It provides comprehensive resources that simplify complex concepts and calculations. Understanding Fundamental Concepts 1. Newton’s Laws of Motion Nickzom Academy explains Newton’s Laws of Motion clearly. These laws are fundamental to understanding the behavior of objects. Key Points: • First Law: Objects remain at rest or in uniform motion unless acted upon by an external force. • Second Law: Force equals mass times acceleration (F = ma). • Third Law: For every action, there is an equal and opposite reaction. 2. Energy and Work Energy and work are central concepts in physics. Nickzom Academy breaks down these ideas into understandable parts. Concepts Covered: • Kinetic Energy: The energy of motion. • Potential Energy: The energy stored in an object. • Work Done: The product of force and distance (W = Fd). Simplifying Complex Calculations 1. Motion Equations Nickzom Academy provides tools to solve motion equations. These equations describe how objects move under various forces. Equations Include: • Velocity: The rate of change of displacement. • Acceleration: The rate of change of velocity. • Distance: Calculated using velocity and time. 2. Gravitation and Orbits Gravitation plays a critical role in the motion of celestial bodies. Nickzom Academy simplifies calculations related to gravitation. Key Concepts: • Gravitational Force: The force of attraction between two masses. • Orbital Motion: The path an object follows around another object due to gravity. • Escape Velocity: The minimum speed needed to escape a gravitational field. Practical Applications of Physics 1. Mechanics in Everyday Life Mechanics is a branch of physics with many real-world applications. Nickzom Academy shows how mechanics is used daily. • Automobile Dynamics: Understanding how cars accelerate and brake. • Construction: Calculating forces and stability in building structures. 
• Sports: Analyzing motion and energy in various sports activities. 2. Electricity and Magnetism Electricity and magnetism are critical to modern technology. Nickzom Academy provides a thorough understanding of these concepts. Key Areas: • Electric Circuits: Understanding current, voltage, and resistance. • Magnetic Fields: How magnets and electric currents interact. • Electromagnetic Induction: The process of generating electric current through magnetic fields. Interactive Learning and Practice 1. Step-by-Step Solutions Nickzom Academy offers step-by-step solutions to physics problems. This approach helps students grasp complex concepts easily. • Clarity: Each step is explained clearly. • Application: Learn how to apply concepts to solve problems. • Confidence: Builds confidence through guided practice. 2. Practice Quizzes Regular practice is crucial for mastering physics. Nickzom Academy provides quizzes that reinforce learning and test understanding. • Topic-Based Quizzes: Focus on specific areas of physics. • Instant Feedback: Receive immediate feedback on answers. • Progress Tracking: Monitor your improvement over time. Nickzom Academy is an invaluable resource for mastering physics. By providing clear explanations, simplifying complex calculations, and offering interactive learning tools, it equips students with the knowledge and confidence needed to excel in physics. Whether you are learning Newton's laws, understanding energy concepts, or solving gravitation problems, Nickzom Academy offers the resources and support you need. Embrace the power of Nickzom Academy to deepen your understanding of physics and achieve academic success.
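The mechanics and gravitation formulas covered above lend themselves to quick numerical checks. The sketch below (illustration only, not one of the Nickzom Academy tools) computes Earth's escape velocity from v = sqrt(2GM/R); the physical constants are standard SI values assumed here, not taken from the page.

```python
import math

# Assumed standard physical constants (SI units, not from the page)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass, radius):
    """Minimum speed needed to escape a body's gravitational field."""
    return math.sqrt(2 * G * mass / radius)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"Earth escape velocity: {v / 1000:.1f} km/s")  # roughly 11.2 km/s
```

The same pattern (plug constants into the formula, mind the units) works for the kinetic energy, work, and gravitational force expressions listed above.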
Make It True: Subtract 1 Digit Numbers Which number will make this true? 1. Read the given number sentence. 2. Figure out the number that will make the number sentence true. 3. Click the correct answer. EZSchool® is a federally registered and protected trademark. Copyright @ 1998-2024 Asha Dinesh
How to Synthesize the Radiation Pattern of an Antenna Array When we study and prototype a phased array antenna for high-speed and high-data-rate communication, we can save time and computational costs by using an antenna array factor. This way, we don't need to analyze the entire structure through a full 3D wave equation. Antenna Applications in IoT, IoS, SatCom, and 5G There is a common theme among today's trendy RF buzzwords, such as the internet of things (IoT), internet of space (IoS), satellite communication (SatCom), and 5G: the need for wireless communication that delivers a higher data rate, at operating frequencies and bandwidths much higher and wider than they used to be. When we send or receive informative signals carried over a 5G mobile network, where the expected operating frequency is much higher than that of a traditional mobile system, we inevitably suffer from significant electromagnetic wave attenuation, causing signal integrity issues. In order to make the electromagnetic wave travel a longer distance with the limited amount of power in a communication system, it is necessary to deploy a high-gain antenna that shapes the far-field radiation pattern like a very sharp, pencil-like beam. This enables us to reach longer distances for delivering uninterrupted information. Large dish antennas allow us to communicate over a long distance. Image in the public domain, via Wikimedia Commons. Aperture antennas, such as dish and horn antennas, would provide a high enough gain for these purposes. However, the very sharp far-field radiation pattern from a high-gain antenna has a very narrow angular scanning range, so the area visible to the electromagnetic wave is limited. To enhance the area covered for communication, the scanning capability can be extended by rotating the antenna mechanically with a gimbal.
However, aperture antennas require a substantial amount of space, volume-wise, to install, and may not be suitable for use in consumer electronics: you do not want to add a big dish antenna to your cellphone! A monopole antenna array showing beam scanning capability. An antenna array is, simply put, a group of antennas connected in a specific spacing and phase configuration. Arrays can overcome the obstacles mentioned above, and they can be conformal and miniaturized, depending on the antenna element type forming the array and on the material properties. It is important to choose a proper antenna element if miniaturization is a design factor. The design specification may decide what type of antenna element to deploy. The Benefit of Using an Array Factor Though the volume of an antenna array is smaller than that of an aperture-type antenna, its computational cost for simulation is still high compared to a single antenna study. Without running a full 3D model simulation over the entire structure, and without sacrificing the accuracy of the analysis too much, the far-field radiation pattern of the antenna array can still be estimated from the radiation pattern of a single antenna element by multiplying the array factor. The uniform array factor expression in a 3D model is defined as

\[
AF(\theta,\phi) =
\frac{\sin\left(\frac{n_x \left(2 \pi d_x \sin\theta \cos\phi + \alpha_x\right)}{2}\right)}{\sin\left(\frac{2 \pi d_x \sin\theta \cos\phi + \alpha_x}{2}\right)}
\frac{\sin\left(\frac{n_y \left(2 \pi d_y \sin\theta \sin\phi + \alpha_y\right)}{2}\right)}{\sin\left(\frac{2 \pi d_y \sin\theta \sin\phi + \alpha_y}{2}\right)}
\frac{\sin\left(\frac{n_z \left(2 \pi d_z \cos\theta + \alpha_z\right)}{2}\right)}{\sin\left(\frac{2 \pi d_z \cos\theta + \alpha_z}{2}\right)}
\]

where nx, ny, and nz are the number of array elements along the x-, y-, and z-axis, respectively. The terms dx, dy, and dz are the distances between array elements in terms of the wavelength used in a simulation. The terms alphax, alphay, and alphaz are the phase progressions in radians. In the above array factor expression, the input power is not normalized.
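The uniform array factor above is easy to implement for quick sanity checks. Here is a hedged Python/NumPy sketch; the function name and argument layout are chosen for this illustration and it is not COMSOL's built-in af3 function.

```python
import numpy as np

def array_factor(n, d, alpha, theta, phi):
    """Uniform 3-D array factor (linear magnitude) for the expression above.
    n: element counts, d: spacings in wavelengths, alpha: phase progressions
    in radians, each given as (x, y, z) triples; theta/phi in radians."""
    # Direction cosines multiplying 2*pi*d on each axis
    u = (np.sin(theta) * np.cos(phi),
         np.sin(theta) * np.sin(phi),
         np.cos(theta))
    af = 1.0
    for ni, di, ai, ui in zip(n, d, alpha, u):
        psi = 2 * np.pi * di * ui + ai
        den = np.sin(psi / 2)
        # sin(n*psi/2)/sin(psi/2) tends to n as psi -> 0
        af *= ni if abs(den) < 1e-12 else np.sin(ni * psi / 2) / den
    return af

# Broadside (theta = 0) maximum of an 8x8 planar array is nx*ny = 64
print(array_factor((8, 8, 1), (0.48, 0.48, 0), (0, 0, 0), 0.0, 0.0))
```

Multiplying this factor by the single-element pattern (or summing in dB, as done later in this post) gives the synthesized array pattern.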
If an antenna array is excited with a single input power distributed by a feed network, it needs to be scaled accordingly. One of the advantages of using the COMSOL Multiphysics® software is that you can type any kind of equation for the postprocessing expression. When the expression is complicated, it can be addressed using a simulation application or model methods. The user interface of an antenna array simulation application with an 8×8 virtual array, electric field distribution, and 3D far-field radiation pattern view. By multiplying the array factor equation by the antenna far-field gain variable, emw.gaindBEfar, you can compute the far-field gain of the antenna array. Array Factor Function in the RF Module Typing a long expression of an equation, or programming even a simple code using method functionality, could be a hindrance for a quick study. Fortunately, the RF Module, an add-on to COMSOL Multiphysics, offers an array factor postprocessing function. After a single antenna simulation with a far-field domain/calculation physics feature, the 3D uniform array factor function is accessible under Definitions > Functions from the postprocessing context menu for the plot expression as af3(nx, ny, nz, dx, dy, dz, alphax, alphay, alphaz) The definition of the input arguments is the same as in the above uniform array factor equation, and each argument affects the resulting plot as follows:
• Number of array elements: affects the antenna gain
• Distance between array elements: affects the antenna gain and the sidelobe level
• Phase progression: sets the main lobe steering direction
The effect of input parameters on the radiation pattern. The evaluation of a virtual 8×8 antenna array that has a main beam in the z-axis is expressed as emw.gaindBEfar + 20*log10(emw.af3(8, 8, 1, 0.48, 0.48, 0, 0, 0, 0)) + 10*log10(1/64) It is calculated in the dB scale, and the multiplication between the array factor and the single antenna gain is done by summation in the expression.
Input arguments of the array factor (argument: description, value, unit):
• nx: Number of elements along the x-axis, 8.00, dimensionless
• ny: Number of elements along the y-axis, 8.00, dimensionless
• nz: Number of elements along the z-axis, 1.00, dimensionless
• dx: Distance between array elements along the x-axis, 0.48, wavelengths
• dy: Distance between array elements along the y-axis, 0.48, wavelengths
• dz: Distance between array elements along the z-axis, 0, wavelengths
• alphax: Phase progression along the x-axis, 0, radians
• alphay: Phase progression along the y-axis, 0, radians
• alphaz: Phase progression along the z-axis, 0, radians
Array factor input arguments of a virtual 8×8 array antenna for the main beam along the z-axis. The above expression is made under the assumption that the antenna array is fed by a uniform distribution network with a single input power source. It is necessary to scale it by a factor of 10*log10(1/total number of elements). When nonzero phase progression values are used, the direction of the main beam, i.e. the maximum radiation, can be pointed toward a desirable direction. The distance between the array elements is 0.48 wavelengths. When the distance is between 0.45 and 0.5 wavelengths, a sidelobe level of approximately -12 to -15 dB is expected. The following equation defines the phase progression value as a function of the angle from the major axis, so you can easily point the scanning direction:

\[
\alpha_x = -kd\cos\theta = -\frac{2\pi d}{\lambda}\cos\theta
\]

where k is the wavenumber, d is the distance between antenna elements, and \(\theta\) is the angle from the axis. To generate the beam with the maximum direction at 60 degrees from the x-axis, alphax (in the array factor function) is set using this relation.
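The steering relation above is straightforward to evaluate numerically. This short Python sketch computes the phase progression for a beam steered 60 degrees from the x-axis, using the 0.48-wavelength element spacing from this post:

```python
import math

d = 0.48                   # element spacing in wavelengths (value used above)
theta = math.radians(60)   # desired angle from the x-axis

# alpha_x = -k*d*cos(theta) = -(2*pi*d/lambda)*cos(theta); with d expressed
# in wavelengths, the factor lambda cancels out of the expression.
alpha_x = -2 * math.pi * d * math.cos(theta)
print(f"alpha_x = {alpha_x:.4f} rad")  # -0.48*pi, about -1.508 rad
```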
The following polar plot compares three radiation patterns: • The gain of a single microstrip patch antenna • The pattern of a uniform array factor set to have the main lobe direction 60 degrees from the x-axis and 30 degrees from the z-axis • The synthesized gain of an 8×8 microstrip patch antenna array The single patch antenna gain, 8×8 uniform array factor, and 8×8 microstrip patch antenna array gain plotted in dB scale. The far-field gain pattern of a virtual 8×8 microstrip patch antenna array. The minimum range for the plot may change the visual sharpness of the main beam pattern. The Beauty of Postprocessing an Antenna Array Simulation The various postprocessing options in COMSOL Multiphysics enable you to efficiently study your antenna prototype. Using the Full harmonic dynamic data extension for the animation sequence type in the animation settings is one very useful way to examine the beam steering feasibility without running the simulation at every angular scanning point. Animation Settings window. Dynamic data extension is used to sweep the internal phase variable. For time-harmonic, frequency-domain simulations, the solution of dependent variables can be evaluated at an arbitrary angle (phase). The full harmonic dynamic data extension changes the internally defined “root.phase” variable from 0 to 2 pi while producing the animation. The following expression generates the animation of the 8×8 microstrip patch antenna array, scanning the 180-degree range from the z-axis to the positive x-axis via the negative x-axis. emw.gaindBEfar + 20*log10(emw.af3(8, 8, 1, 0.48, 0.48, 0, -2*pi*0.48*cos(phase+pi/2), 0, 0)) + 10*log10(1/64) The far-field gain pattern of the 8×8 microstrip patch antenna array. The main beam is moving along an axis. The scanning trajectory does not have to follow a line or a rectangular grid. 
The rotating main beam pattern around the z-axis of a 12×12 microstrip patch antenna array can be produced using the next expression: emw.gaindBEfar + 20*log10(emw.af3(12, 12, 1, 0.48, 0.48, 0, -2*pi*0.48*cos(pi/2-pi/8*cos(phase)), -2*pi*0.48*cos(pi/2-pi/8*sin(phase)), 0)) + 10*log10(1/144) The far-field gain pattern of a 12×12 microstrip patch antenna array. The main beam is moving along a circular orbit. The main beam is tilted pi/8 radians from the axis and rotating around the axis in the animation. Concluding Remarks The 3D full-wave simulation for a large antenna array system is memory intensive, increasing the computation time and cost. By using the asymptotic approach discussed here, multiplying the far-field postprocessing variable of a single antenna element with a uniform array factor, you can quickly estimate the radiation pattern of the antenna array. However, this approach does not address the field coupling among the array elements. Therefore, it is only applicable for fast prototype feasibility studies. The full-wave analysis over the entire array structure may be required to examine the gain and sidelobe level precisely. Additional Resources Check out these tutorial models to study antenna arrays via simulation: Comments (2) Robert Malkin April 4, 2019 Will something similar be available for acoustics in the future? Mads Herring Jensen April 30, 2019 COMSOL Employee Hi Robert, this is the first time we receive this request. Could you tell me what the application in acoustics would be? I think that you can most probably set something similar up with the existing tools and a few user-defined expressions.
Say Goodbye to Loops: Unleash the Power of Vectorization in Python for Faster Code Vectorization is the process of converting operations on individual scalar values, like adding two numbers, into operations on whole vectors or matrices, like adding arrays. It allows mathematical operations to be performed more efficiently by taking advantage of the vector processing capabilities of modern CPUs. The main benefit of vectorization over conventional loops is increased performance. Loops perform an operation iteratively on each element, which can be slow. Vectorized operations apply the operation to the entire vector at once, allowing the CPU to optimize and parallelize the computation. For example, adding two arrays with a loop would look like: a = [1, 2, 3] b = [4, 5, 6] c = [] for i in range(len(a)): c.append(a[i] + b[i]) The vectorized version with NumPy would be: import numpy as np a = np.array([1, 2, 3]) b = np.array([4, 5, 6]) c = a + b Vectorized operations are faster because they utilize the vector processing power of the CPU. Other benefits of vectorization include cleaner, more concise code and the ability to express complex mathematics compactly. In general, vectorizing your code makes it faster and more efficient. Vectorization with NumPy: NumPy is a fundamental Python library that provides support for large multi-dimensional arrays and matrices, along with mathematical functions that operate on these arrays. The most important feature we will benefit from is vectorization, which allows arithmetic operations on an entire array without writing any for loops. For example, if we have two arrays a and b: import numpy as np a = np.array([1, 2, 3]) b = np.array([4, 5, 6]) We can add them element-wise using: c = a + b # c = [5, 7, 9] This is much faster than using a for loop to iterate through each element and perform the addition.
Some common vectorized functions in NumPy include: • np.sum() - Sum of array elements • np.mean() - Mean of array elements • np.max() - Maximum element value • np.min() - Minimum element value • np.std() - Standard deviation The key benefit of vectorization is the performance gain from executing operations on entire arrays without writing slow Python loops. Element-wise Operations: One of the most common uses of NumPy's vectorization is to perform element-wise mathematical operations on arrays. This allows you to apply a computation, such as addition or logarithms, to entire arrays without writing any loops. For example, if you have two arrays a and b, you can add them together with a + b. This will add each corresponding element in the arrays and return a new array with the results. import numpy as np a = np.array([1, 2, 3]) b = np.array([4, 5, 6]) c = a + b # c = [5, 7, 9] This works for all basic mathematical operations like subtraction, multiplication, division, exponents, etc. NumPy overloaded these operators so they perform element-wise operations when used on NumPy arrays. Some common mathematical functions like sin, cos, log, exp also work element-wise when passed NumPy arrays. a = np.array([1, 2, 3]) np.sin(a) # [0.8415, 0.9093, 0.1411] Being able to avoid loops and vectorize math operations on entire arrays at once is one of the main advantages of using NumPy. It makes the code simpler and faster compared to implementing math operations iteratively with Python loops and lists.
For example: import numpy as np data = np.array([1, 2, 3, 4, 5]) data.sum() # Output: 15 data.min() # Output: 1 The aggregation functions like sum() and min() operate across the entire array, returning the single aggregated value. This is much faster than writing a for loop to iterate and calculate these values manually. Some other helpful aggregation functions in NumPy include: • np.mean() - Calculate the average / mean • np.median() - Find the median value • np.std() - Standard deviation • np.var() - Variance • np.prod() - Product of all elements • np.any() - Check if any value is True • np.all() - Check if all values are True These functions enable you to easily gain insights into your data for analysis and decision making. Vectorizing aggregation removes the need for slow and tedious loops in Python. Broadcasting allows element-wise operations to be performed on arrays of different shapes. For example, you can add a scalar to a vector, or a vector to a matrix, and NumPy will handle matching up elements based on standard broadcasting rules: • Arrays with the same shape are simply lined up and operate element-wise. • Arrays with different shapes are "broadcasted" to have compatible shapes according to NumPy's broadcasting rules: • The array with fewer dimensions is prepended with 1s to match the dimension of the other array. So a shape (5,) vector becomes a shape (1,5) 2D array when operating with a (3,5) 2D array. • For each dimension, the size of the output is the maximum of the input sizes in that dimension, where each input size must either match or be 1. So a (3,1) array operating with a (3,4) array results in a (3,4) output array. • The input arrays are virtually resized according to the output shape and then aligned for the element-wise operation. No copying of data is performed. Broadcasting removes the need to explicitly write loops to operate on arrays of different shapes. It allows vectorized operations to be generalized to a wider range of use cases.
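The rules above can be made concrete with a small example: adding a shape (4,) vector to a shape (3, 4) matrix.

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # shape (3, 4)
b = np.array([10, 20, 30, 40])    # shape (4,), broadcast to (1, 4) then (3, 4)

# No loop needed: b is virtually repeated along the first axis
c = a + b
print(c[0])  # [10 21 32 43]
```

The first row of a is [0, 1, 2, 3], so adding b gives [10, 21, 32, 43]; the same addition is applied to every row without any explicit iteration.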
Universal Functions: Universal functions (ufuncs) are NumPy functions that operate element-wise on arrays. They take an array as input, perform some mathematical operation on each element, and return a new array with the resulting values. Some of the most common ufuncs in NumPy include: • np.sin() - Calculates the sine for each element in the array. • np.cos() - Calculates the cosine for each element. • np.exp() - Calculates the exponential for each element. • np.log() - Calculates the natural logarithm for each element. • np.sqrt() - Calculates the square root for each element. Ufuncs can operate on arrays of any data type, not just float arrays. The input array will determine the data type for the output. For example: import numpy as np arr = np.array([1, 2, 3]) np.exp(arr) # Output [ 2.71828183 7.3890561 20.08553692] Here np.exp() is applied to each element in the input array, calculating the exponential for each integer value. Ufuncs are extremely fast and efficient because they are written in C, avoiding the overheads of Python loops. This makes them ideal for vectorizing code. Vectorizing Loops: One of the main use cases for vectorization is converting iterative Python loops into fast array operations. Loops are convenient for iterating over elements, but they are slow compared to vectorized operations. For example, let's say we wanted to add 1 to every element in an array. With a normal loop, we would write: import numpy as np arr = np.arange(10) for i in range(len(arr)): arr[i] += 1 This performs the addition operation one element at a time in a loop. With vectorization, we can perform the operation on the entire array simultaneously: arr = np.arange(10) arr += 1 This applies the addition to every element in the array at once, without needing to loop.
Some common examples of loops that can be vectorized: • Element-wise arithmetic (add, subtract, multiply, etc) • Aggregations (sum, mean, standard deviation, etc) • Filtering arrays based on conditions • Applying mathematical functions like sine, cosine, logarithms, etc Vectorizing loops provides huge performance gains because it utilizes the optimized C code inside NumPy instead of slow Python loops. It's one of the most effective ways to speed up mathematical code in Python. Performance Gains: Vectorized operations in NumPy can provide significant performance improvements compared to using Python loops. This is because NumPy vectorization utilizes the underlying C language and leverages optimized algorithms that take advantage of modern CPU architectures. Some key performance advantages of NumPy vectorization include: • Faster computations - Element-wise operations on NumPy arrays can be 10-100x faster than performing the equivalent Python loop. This is because the computations are handled in optimized C code rather than relatively slow Python interpretation. • Better memory locality - NumPy arrays are stored contiguously in memory, leading to better cache utilization and less memory access compared to Python lists. Looping often leads to unpredictable memory access patterns. • Parallelization - NumPy operations easily lend themselves to SIMD vectorization and multi-core parallelization. Python loops are difficult to parallelize efficiently. • Calling optimized libraries - NumPy delegates work to underlying high-performance libraries like Intel MKL and OpenBLAS for linear algebra operations. Python loops cannot take advantage of these optimized libraries. Various benchmarks have demonstrated order-of-magnitude performance gains from vectorization across domains like linear algebra, image processing, data analysis, and scientific computing.
The efficiency boost depends on factors like data size and operation complexity, but even simple element-wise operations tend to be significantly faster with NumPy. So by leveraging NumPy vectorization appropriately, it is possible to achieve much better computational performance compared to a pure Python loop-based approach. But it requires rethinking the implementation in a vectorized manner rather than simply translating line-by-line. The performance payoff can be well worth the transition for any numerically intensive Python application. Limitations of Vectorization: Vectorization is extremely fast and efficient for many use cases, but there are some scenarios where it may not be the best choice: • Iterative algorithms: Some algorithms require maintaining state or iterative updates. These cannot be easily vectorized and may be better implemented with a for loop. Examples include stochastic gradient descent for machine learning models. • Dynamic control flow: Vectorization works best when applying the same operation over all data. It lacks support for dynamic control flow compared to what you can do in a Python loop. • Memory constraints: NumPy operations apply to the entire arrays. For very large datasets that don't fit in memory, it may be better to process data in chunks with a loop. • Difficult to vectorize: Some functions and operations can be challenging to vectorize properly. At some point it may be easier to just use a loop instead of figuring out the vectorized equivalent. • Readability: Vectorized code can sometimes be more cryptic and less readable than an equivalent loop. Maintainability of code should also be considered. In general, vectorization works best for math-heavy code with arrays when you want high performance. For more complex algorithms and logic, standard Python loops may be easier to implement and maintain. It's best to profile performance to determine where vectorization provides the biggest gains for your specific code.
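A rough benchmark makes the performance discussion above concrete: summing a million elements with a Python loop versus NumPy's vectorized sum. Absolute timings vary by machine; only the relative ordering matters.

```python
import timeit
import numpy as np

arr = np.random.rand(1_000_000)

def loop_sum(a):
    """Sum with one Python-level iteration per element."""
    total = 0.0
    for x in a:
        total += x
    return total

# Time each approach a few times and compare
t_loop = timeit.timeit(lambda: loop_sum(arr), number=3)
t_vec = timeit.timeit(lambda: arr.sum(), number=3)
print(f"loop: {t_loop:.3f}s  vectorized: {t_vec:.3f}s")
```

On typical hardware the vectorized sum is faster by two orders of magnitude or more, which matches the 10-100x claim made earlier.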
Vectorization is a powerful technique for boosting the performance of numerical Python code by eliminating slow Python loops. As we've seen, libraries like NumPy provide fast vectorized operations that let you perform calculations on entire arrays without writing explicit for loops. Some of the key benefits of vectorization include: • Speed - Vectorized operations are typically much faster than loops, often by an order of magnitude or more depending on the size of your data. This makes code run faster with minimal extra effort. • Convenience - Vectorized functions and operations provided by NumPy and other libraries allow you to express mathematical operations on arrays intuitively and concisely. The code reads like math. • Parallelism - Vectorized operations are easily parallelized to take advantage of multiple CPU cores for further speed gains. While vectorization has limitations and won't be suitable for every situation, it should generally be preferred over loops when working with numerical data in Python. The performance gains are substantial, and vectorized code is often easier to read and maintain. So next time you find yourself writing repetitive loops to process NumPy arrays, pause and think - could this be done more efficiently using vectorization? Your code will likely be faster, require less memory, and be more concise and expressive if you use vectorization. The sooner you can build the habit of vectorizing, the sooner you'll start reaping the benefits in your own projects.
Merging Multiple Seasons of NCAA Div 1 Softball Statistics Activity that uses multiple seasons of NCAA Div 1 Softball batting statistics to practice merging data tables. Please note that these materials have not yet completed the required pedagogical and industry peer-reviews to become a published module on the SCORE Network. However, instructors are still welcome to use these materials if they are so inclined. Data for a particular sport is often stored across numerous locations. For example, in NCAA Division I Softball, batting statistics for each season are typically stored in separate tables. (See for example the statistics hosted by https://d1softball.com/.) Suppose we are interested in tracking the statistics of players across multiple seasons. A common way to prep the data to do this is to use join statements to merge each season's data into one table with one row per player (and columns associated with their different statistics for each season). This module looks at some simple batting stats over two seasons through the use of joining functions for a small subset of NCAA Division 1 Softball players' statistics for the 2021 and 2022 seasons. (This is only a small window of a much bigger dataset.) There are four data tables associated with this module. Two of them, batting2021 and batting2022, contain statistics for all Division 1 Softball players who played in the 2021 and/or 2022 season. The other two, batting2021_subset and batting2022_subset, represent non-random samples taken from the full data. These are displayed below. R is the number of runs scored by the player, H is the number of hits. Additional details for the full data (batting2021 and batting2022): The 2021 data set has 2131 rows with 22 columns. The 2022 data set has 2275 rows with the same 22 columns. Each row represents a Division 1 Softball player who played in the 2021 and/or 2022 season.
Variable: Description
• Player: Name of the softball player
• Team: Name of the team (school) for each player
• BA: Batting Average - The ratio of a player's total base hits to their total number of at-bats, indicating their ability to make successful hits.
• OBP: On-Base Percentage - The percentage of times a player reaches base, either through a hit, walk, or hit-by-pitch, out of their total plate appearances.
• SLG: Slugging Percentage - The measure of a player's power by calculating the total number of bases they accumulate per at-bat.
• OPS: On-Base Plus Slugging - The sum of a player's on-base percentage and slugging percentage, providing a comprehensive measure of their offensive contribution.
• GP: Games Played - The total number of games in which a player has participated.
• PA: Plate Appearances - The total number of times a player has come up to bat, including at-bats, walks, and hit-by-pitches.
• AB: At-Bats - The number of times a player has officially faced a pitcher and had an opportunity to hit.
• R: Runs Scored - The number of times a player has crossed home plate and scored a run.
• H: Hits - The total number of successful hits made by a player.
• 2B: Doubles - The number of hits resulting in the batter reaching second base.
• 3B: Triples - The number of hits resulting in the batter reaching third base.
• HR: Home Runs - The number of hits resulting in the batter scoring a run by hitting the ball out of the park.
• RBI: Runs Batted In - The number of runs a player has driven in with a hit or sacrifice.
• HBP: Hit by Pitch - The number of times a player has been struck by a pitched ball and awarded first base.
• BB: Walks - The number of times a player has received a base on balls (four balls) and been awarded first base.
• K: Strikeouts - The number of times a player has been called out after accumulating three strikes.
• SB: Stolen Bases - The number of times a player successfully advances to the next base without a hit, during a pitch while the ball is in play.
• CS: Caught Stealing - The number of times a player is thrown out while attempting to steal a base.

CSV format data files

Database (SQL) style
• softball_batting.duckdb: DuckDB database containing four tables: the full set of players for each season (batting2021 and batting2022) and the two subsets (batting2021_subset and batting2022_subset).
• softball_batting.sqlite: SQLite database containing four tables: the full set of players for each season (batting2021 and batting2022) and the two subsets (batting2021_subset and batting2022_subset).
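To make the join step concrete, here is a small SQL sketch using Python's built-in sqlite3 module. The table and column names (batting2021, batting2022, Player, Team, H) come from the description above; the in-memory database and the example rows are made up purely for illustration (in practice you would open the real softball_batting.sqlite file instead).

```python
import sqlite3

# Toy stand-in for softball_batting.sqlite: two seasons of batting rows.
# The example players and hit totals are invented for illustration only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE batting2021 (Player TEXT, Team TEXT, H INTEGER)")
con.execute("CREATE TABLE batting2022 (Player TEXT, Team TEXT, H INTEGER)")
con.executemany("INSERT INTO batting2021 VALUES (?, ?, ?)",
                [("A. Smith", "SLU", 40), ("B. Jones", "Clarkson", 25)])
con.executemany("INSERT INTO batting2022 VALUES (?, ?, ?)",
                [("A. Smith", "SLU", 55), ("C. Lee", "SLU", 30)])

# A LEFT JOIN keeps one row per 2021 player, with the 2022 hit total
# alongside (None when the player did not appear in 2022).
rows = con.execute("""
    SELECT a.Player, a.Team, a.H AS H_2021, b.H AS H_2022
    FROM batting2021 AS a
    LEFT JOIN batting2022 AS b
      ON a.Player = b.Player AND a.Team = b.Team
""").fetchall()
print(rows)
```

Other join types (inner, full outer) change which players survive the merge; the same pattern extends to all 22 columns per season.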
the corba fixed type

This module contains functions that give an interface to the CORBA fixed type. The type Fixed used below is defined as:

-record(fixed, {digits, scale, value}).

where digits is the total number of digits it consists of and scale is the number of fractional digits. The value field contains the actual Fixed value represented as an integer. The limitations of each field are:
• Digits - integer(), -1 < Digits < 32
• Scale - integer(), -1 < Scale =< Digits
• Value - integer(), range (31 digits): ±9999999999999999999999999999999

Since the Value part is represented by an integer, it is vital that the Digits and Scale values are correct. This also means that trailing zeros cannot be left out in some cases:
• fixed<5,3> eq. 03.140d eq. 3140
• fixed<3,2> eq. 3.14d eq. 314

Leading zeros can be left out.

For your convenience, this module exports functions which handle unary (-) and binary (+ - * /) operations legal for the Fixed type. Since a unary + has no effect, this module does not export such a function. Any of the binary operations may cause an overflow (i.e. more than 31 significant digits; leading and trailing zeros are not considered significant). If this is the case, the Digits and Scale values are adjusted and the Value truncated (no rounding performed). This behavior is compliant with the OMG CORBA specification. Each binary operation has the following upper bounds:
• Fixed1 + Fixed2 - fixed<max(d1-s1, d2-s2) + max(s1,s2) + 1, max(s1,s2)>
• Fixed1 - Fixed2 - fixed<max(d1-s1, d2-s2) + max(s1,s2) + 1, max(s1,s2)>
• Fixed1 * Fixed2 - fixed<d1+d2, s1+s2>
• Fixed1 / Fixed2 - fixed<(d1-s1+s2) + Sinf, Sinf>

A quotient may have an arbitrary number of decimal places, which is denoted by a scale of Sinf.

create(Digits, Scale, Value) -> Result
• Result = Fixed Type | {'EXCEPTION', #'BAD_PARAM'{}}

This function creates a new instance of a Fixed Type. If the limitations are not fulfilled (e.g. overflow) an exception is raised.
get_typecode(Fixed) -> Result • Result = TypeCode | {'EXCEPTION', #'BAD_PARAM'{}} Returns the TypeCode which represents the supplied Fixed type. If the parameter is not of the correct type, an exception is raised. add(Fixed1, Fixed2) -> Result • Result = Fixed1 + Fixed2 | {'EXCEPTION', #'BAD_PARAM'{}} Performs a Fixed type addition. If the parameters are not of the correct type, an exception is raised. subtract(Fixed1, Fixed2) -> Result • Result = Fixed1 - Fixed2 | {'EXCEPTION', #'BAD_PARAM'{}} Performs a Fixed type subtraction. If the parameters are not of the correct type, an exception is raised. multiply(Fixed1, Fixed2) -> Result • Result = Fixed1 * Fixed2 | {'EXCEPTION', #'BAD_PARAM'{}} Performs a Fixed type multiplication. If the parameters are not of the correct type, an exception is raised. divide(Fixed1, Fixed2) -> Result • Result = Fixed1 / Fixed2 | {'EXCEPTION', #'BAD_PARAM'{}} Performs a Fixed type division. If the parameters are not of the correct type, an exception is raised. unary_minus(Fixed) -> Result • Result = -Fixed | {'EXCEPTION', #'BAD_PARAM'{}} Negates the supplied Fixed type. If the parameter is not of the correct type, an exception is raised.
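To make the digits/scale/value representation concrete, here is a rough Python sketch of two of the operations above. This is an illustration only: it mirrors the upper bounds quoted for addition and multiplication, but it does not reproduce Orber's 31-digit overflow handling or truncation rules.

```python
from dataclasses import dataclass

# A rough Python analogue of the #fixed{digits, scale, value} record:
# `value` holds all the digits as one integer, and `scale` says how
# many of them are fractional.
@dataclass
class Fixed:
    digits: int
    scale: int
    value: int

def multiply(a, b):
    # Fixed1 * Fixed2 -> fixed<d1+d2, s1+s2>: the integer values
    # multiply directly and the scales add.
    return Fixed(a.digits + b.digits, a.scale + b.scale, a.value * b.value)

def add(a, b):
    # Fixed1 + Fixed2 -> fixed<max(d1-s1, d2-s2) + max(s1,s2) + 1, max(s1,s2)>:
    # align the scales, then add the integer values.
    s = max(a.scale, b.scale)
    va = a.value * 10 ** (s - a.scale)
    vb = b.value * 10 ** (s - b.scale)
    d = max(a.digits - a.scale, b.digits - b.scale) + s + 1
    return Fixed(d, s, va + vb)

# 3.14 is fixed<3,2> with value 314, as in the trailing-zero examples.
prod = multiply(Fixed(3, 2, 314), Fixed(1, 0, 2))   # 3.14 * 2 = 6.28
```

Run on fixed<3,2> value 314 times fixed<1,0> value 2, this yields fixed<4,2> with value 628, i.e. 6.28, consistent with the multiplication bound above.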
User-Defined Operations for Reduce and Scan

MPI_Op_create(MPI_User_function *function, int commute, MPI_Op *op)

MPI_OP_CREATE(FUNCTION, COMMUTE, OP, IERROR)
EXTERNAL FUNCTION
LOGICAL COMMUTE
INTEGER OP, IERROR

MPI_OP_CREATE binds a user-defined global operation to an op handle that can subsequently be used in MPI_REDUCE, MPI_ALLREDUCE, MPI_REDUCE_SCATTER, and MPI_SCAN. The user-defined operation is assumed to be associative. If commute = true, then the operation should be both commutative and associative. If commute = false, then the order of operations is fixed and is defined to be in ascending, process rank order, beginning with process zero. The order of evaluation can be changed, taking advantage of the associativity of the operation. If commute = true then the order of evaluation can be changed, taking advantage of commutativity and associativity.

function is the user-defined function, which must have the following four arguments: invec, inoutvec, len and datatype. The ANSI-C prototype for the function is the following.

typedef void MPI_User_function(
    void *invec,
    void *inoutvec,
    int *len,
    MPI_Datatype *datatype);

The Fortran declaration of the user-defined function appears below.

FUNCTION USER_FUNCTION(INVEC(*), INOUTVEC(*), LEN, TYPE)
<type> INVEC(LEN), INOUTVEC(LEN)
INTEGER LEN, TYPE

The datatype argument is a handle to the data type that was passed into the call to MPI_REDUCE. The user reduce function should be written such that the following holds: Let u[0], ... , u[len-1] be the len elements in the communication buffer described by the arguments invec, len and datatype when the function is invoked; let v[0], ... , v[len-1] be len elements in the communication buffer described by the arguments inoutvec, len and datatype when the function is invoked; let w[0], ...
, w[len-1] be len elements in the communication buffer described by the arguments inoutvec, len and datatype when the function returns; then w[i] = u[i] o v[i], for i = 0, ... , len-1, where o is the reduce operation that the function computes.

Informally, we can think of invec and inoutvec as arrays of len elements that function is combining. The result of the reduction over-writes values in inoutvec, hence the name. Each invocation of the function results in the pointwise evaluation of the reduce operator on len elements: i.e., the function returns in inoutvec[i] the value invec[i] o inoutvec[i], for i = 0, ... , len-1, where o is the combining operation computed by the function.

General datatypes may be passed to the user function. However, use of datatypes that are not contiguous is likely to lead to inefficiencies. No MPI communication function may be called inside the user function. MPI_ABORT may be called inside the function in case of an error.

Jack Dongarra Fri Sep 1 06:16:55 EDT 1995
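The pointwise contract described above can be illustrated without MPI at all. The following pure-Python sketch (an illustration only, not real MPI code) plays the role of a user function and folds per-process buffers in ascending rank order, beginning with process zero, as the text requires when commute = false.

```python
# A stand-in for MPI_User_function: combine `len` elements of invec
# into inoutvec pointwise, inoutvec[i] = invec[i] o inoutvec[i].
def user_sum(invec, inoutvec):
    for i in range(len(invec)):
        inoutvec[i] = invec[i] + inoutvec[i]

def reduce_in_rank_order(buffers, fn):
    # Fold per-process buffers beginning with process zero, in
    # ascending rank order -- the order MPI fixes for commute = false.
    acc = list(buffers[0])
    for buf in buffers[1:]:
        nxt = list(buf)
        fn(acc, nxt)          # after the call, nxt[i] == acc[i] o buf[i]
        acc = nxt
    return acc

ranks = [[1, 2], [10, 20], [100, 200]]         # one buffer per "process"
result = reduce_in_rank_order(ranks, user_sum)  # [111, 222]
```

For a commutative, associative operation like addition the fold order does not matter; for a non-commutative o, this ascending-rank order is exactly what makes the result well defined.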
Feature Selection using Statistical Tests

We have all been too engrossed with deep learning and machine learning algorithms, choosing between linear regression, logistic regression, or some other algorithm, that we have forgotten a basic tenet of feature selection. Feature Selection is the process of selecting the features which are relevant to a machine learning model. It means that you select only those attributes that have a significant effect on the model’s performance.

Consider the case when you go to the departmental store to buy grocery items. A product has a lot of information on it, i.e., product, category, expiry date, MRP, ingredients, and manufacturing details. All this information is the features of the product. Normally, you check the brand, MRP, and expiry date before buying a product. However, the ingredient and manufacturing section is not your concern. Therefore, brand, MRP, and expiry date are relevant features, and the ingredient and manufacturing details are irrelevant. This is how feature selection is done.

In the real world, a dataset can have thousands of features, and some of them may be redundant, some may be correlated, and some may be irrelevant for the model. In this scenario, if you use all the features, it will take a lot of time to train the model, and model accuracy will be reduced. Therefore, feature selection becomes important in model building. There are many other ways of feature selection, such as recursive feature elimination, genetic algorithms, and decision trees. However, I will show you the most basic and manual method: filtering with statistical tests, as used by data scientists.

Now that you have a basic understanding of feature selection, we will see how to implement various statistical tests on the data to select important features.

Learning Objectives
• Understand the importance of feature selection in machine learning model building.
• Learn the significance of statistical tests in identifying relevant features.
• Explore various statistical tests including Z-test, T-test, correlation test, ANOVA test, and Chi-square test.
• Gain hands-on experience in implementing statistical tests for feature selection using Python.
• Recognize the implications of statistical test results on model performance and interpretability.

This article was published as a part of the Data Science Blogathon.

Terminologies

Before going into the types of statistical tests and their implementation, it is necessary to understand the meanings of some terminologies.
• Hypothesis Testing: Hypothesis testing in statistics is a method to test the results of experiments or surveys to see if you have meaningful results. It is useful when you want to infer about a population based on a sample, or about the correlation between two or more samples.
• Null Hypothesis: This hypothesis states that there is no significant difference between sample and population or among different populations. It is denoted by H0. Ex: We assume that the means of 2 samples are equal.
• Alternate Hypothesis: The statement contrary to the null hypothesis comes under the alternate hypothesis. It is denoted by H1. Ex: We assume that the means of the 2 samples are unequal.
• Critical Value: It is a point on the scale of the test statistic beyond which the null hypothesis is rejected. The higher the critical value, the lower the probability of 2 samples belonging to the same distribution. The critical value for any test can be obtained from standard statistical tables.
• p-value: p-value stands for ‘probability value’; it tells how likely it is that a result occurred by chance alone. Basically, the p-value is used in hypothesis testing to help you support or reject the null hypothesis. The smaller the p-value, the stronger the evidence to reject the null hypothesis.
• Degree of freedom: The degree of freedom is the number of independent variables. This concept is used in calculating the t statistic and the chi-square statistic.
You may refer to statisticswho.com for more information regarding these terminologies.

Statistical Tests

A statistical test is a way to determine whether the data support the null hypothesis or the alternate hypothesis. It basically tells whether the sample and population, or two or more samples, have significant differences. You can use various descriptive stats such as mean, median, mode, range, or standard deviation for this purpose. However, we generally use the mean. Each test produces a statistic, from which a p-value is computed; if the p-value is greater than the significance level (commonly 0.05), you accept the null hypothesis, otherwise you reject it.

The procedure for implementing each statistical test will be as follows:
• We calculate the statistic value using the mathematical formula
• We then calculate the critical value using statistic tables
• With the help of the critical value, we calculate the p-value
• If p-value > 0.05 we accept the null hypothesis, else we reject it

Now that you have an understanding of feature selection and statistical tests, we can move towards the implementation of various statistical tests along with their meaning. Before that, I will show you the dataset; this dataset will be used to perform all tests. The dataset which I will be using is a loan prediction dataset taken from an Analytics Vidhya contest. You can also participate in the contest and download the dataset here.

First I imported all the necessary Python modules, and you can check out the data points here.

import numpy as np
import pandas as pd
import seaborn as sb
from numpy import sqrt, abs, round
import scipy.stats as stats
from scipy.stats import norm

There are many features in the dataset such as Gender, Dependents, Education, Applicant Income, Loan Amount, and Credit History. We will use these features and check whether one feature affects other features using several tests, i.e. Z-test, correlation test, ANOVA test, and Chi-square test.
A Z-test is used to compare the means of two given samples and infer whether they are from the same distribution or not. We do not implement a Z-test when the sample size is less than 30; you would prefer a T-test in such cases. A Z-test may be a one-sample Z-test or a two-sample Z-test. The one-sample Z-test determines whether the sample mean is statistically different from a known or hypothesized population mean. The two-sample Z-test compares 2 independent variables. We will implement a two-sample Z-test. The Z statistic (as computed in the code below) is

z = ((x̄1 − x̄2) − Δ) / √(s1²/n1 + s2²/n2)

Please note that we will implement a two-sample Z-test where one variable is categorical with two categories and the other variable is continuous. Here we will be using the Gender categorical variable and the ApplicantIncome continuous variable. Gender has 2 groups: male and female. Therefore the hypotheses will be:

Null Hypothesis: There is no significant difference between the mean income of males and females.
Alternate Hypothesis: There is a significant difference between the mean income of males and females.

First we calculate the mean of the male applicants' income, the mean of the female applicants' income, their standard deviations, and the number of male and female samples. The twoSampZ function then calculates the z statistic and p-value from these input parameters.

def twoSampZ(X1, X2, mudiff, sd1, sd2, n1, n2):
    pooledSE = sqrt(sd1**2/n1 + sd2**2/n2)
    z = ((X1 - X2) - mudiff)/pooledSE
    pval = 2*(1 - norm.cdf(abs(z)))
    return round(z,3), pval

z, p = twoSampZ(M_mean, F_mean, 0, M_std, F_std, no_of_M, no_of_F)
print('Z =', z, 'p =', p)

Z = 1.828
p = 0.06759726635832197

if p < 0.05:
    print("we reject null hypothesis")
else:
    print("we accept null hypothesis")

Since the p-value is greater than 0.05, we accept the null hypothesis. Therefore, we conclude that there is no significant difference between the income of males and females. A t-test is also used to compare the mean of two given samples like the Z-test.
However, it is implemented when the sample size is less than 30. It assumes a normal distribution of the sample. It can also be one-sample or two-sample. The degree of freedom is calculated by n − 1, where n is the number of samples. In linear regression, the T-test is commonly used to determine the significance of individual coefficients (i.e., slopes) in the regression model. The one-sample t statistic is

t = (x̄ − μ) / (s / √n)

Besides the simple T-test, there is also a paired T-test, which is used when the observations in one group are paired or matched with the observations in the other group. It is implemented the same way as the Z-test; the only condition is that the sample size should be less than 30. I have shown you the Z-test implementation. Now, you can try your hands on the sample T-test.

Correlation Test

A correlation test is a metric to evaluate the extent to which variables are associated with one another. Please note that the variables must be continuous to apply the correlation test. There are several methods for correlation tests, i.e. covariance, Pearson correlation coefficient, Spearman rank correlation coefficient, etc. We will use the Pearson correlation coefficient since, unlike covariance, it does not depend on the scale of the variables.

Pearson Correlation Coefficient

It is used to measure the linear correlation between 2 variables. It is denoted by r, defined as

r = cov(X, Y) / (σX σY)

Its values lie between -1 and 1. If the value of r is 0, it means there is no relationship between variables X and Y. If the value of r is between 0 and 1, it means there is a positive relation between X and Y, and their strength increases from 0 to 1. A positive relation means if the value of X increases, the value of Y also increases. If the value of r is between -1 and 0, it means there is a negative relation between X and Y, and their strength decreases from -1 to 0. A negative relation means if the value of X increases, the value of Y decreases. Here we will be using two continuous variables or features, Loan Amount and Applicant Income.
We will conclude whether there is a linear relation between Loan Amount and Applicant Income from the Pearson correlation coefficient value, and also draw the chart between them. There are some missing values in the LoanAmount column, so first we filled them with the mean value. Then we calculated the correlation coefficient value.

pcc = np.corrcoef(df.ApplicantIncome, df.LoanAmount)

[[1.         0.56562046]
 [0.56562046 1.        ]]

The values on the diagonal indicate the correlation of the features with themselves. 0.56562046 represents that there is some correlation between the two features. We can also draw the chart as follows:

ANOVA Test

ANOVA stands for Analysis of Variance. As the name suggests, it uses variance as its parameter to compare multiple independent groups. ANOVA can be one-way ANOVA or two-way ANOVA. One-way ANOVA is applied when there are three or more independent groups of a variable. We will implement the same in Python. The F statistic is the ratio of the variance between groups to the variance within groups:

F = (variance between groups) / (variance within groups)

Here we will be using the Dependents categorical variable and the ApplicantIncome continuous variable. Dependents has 4 groups: 0, 1, 2, 3+. Therefore the hypotheses will be:

Null Hypothesis: There is no significant difference between the mean income among different groups of dependents.
Alternate Hypothesis: There is a significant difference between the mean income among different groups of dependents.

First, we handled the missing values in the Dependents feature. After this, we created a data frame with the features Dependents and ApplicantIncome. Then with the help of the scipy.stats library we calculated the F statistic and p-value.
df_anova = df[['total_bill','day']]
grps = pd.unique(df.day.values)
d_data = {grp: df_anova['total_bill'][df_anova.day == grp] for grp in grps}
F, p = stats.f_oneway(d_data['Sun'], d_data['Sat'], d_data['Thur'], d_data['Fri'])
print('F = {}, p = {}'.format(F, p))

F = 5.955112389949444, p = 0.0005260114222572804

if p < 0.05:
    print("reject null hypothesis")
else:
    print("accept null hypothesis")

Since the p-value is less than 0.05, we reject the null hypothesis. Therefore, we conclude that there is a significant difference between the income of the several groups of Dependents.

Chi-Square Test

This test is applied when you have two categorical variables from a population. It is used to determine whether there is a significant association or relationship between the two variables. There are 2 types of chi-square tests: the chi-square goodness of fit test and the chi-square test for independence; we will implement the latter one. The degree of freedom in the chi-square test is calculated by (n-1)*(m-1), where n and m are the numbers of rows and columns respectively. It is denoted by:

χ² = Σ (Observed − Expected)² / Expected

We will be using two categorical features, Gender and Loan Status, and find whether there is an association between them using the chi-square test.

Null Hypothesis: There is no significant association between the Gender and Loan Status features.
Alternate Hypothesis: There is a significant association between the Gender and Loan Status features.

First, we retrieve the Gender and LoanStatus columns and form a matrix which is also called a contingency table.

Loan_Status    N    Y
Female        37   75
Male          33  339

Then, we calculate the observed and expected values using the above table.
Then we calculate the chi-square statistic and p-value using the following code:

from scipy.stats import chi2

alpha = 0.05
ddof = 1   # (rows-1)*(columns-1) = (2-1)*(2-1)
chi_square_statistic = sum([(o-e)**2./e for o, e in zip(observed, expected)])
p_value = 1 - chi2.cdf(x=chi_square_statistic, df=ddof)
print("chi-square statistic:-", chi_square_statistic)
print('Significance level: ', alpha)
print('Degree of Freedom: ', ddof)
print('p-value: ', p_value)

chi-square statistic:- 0.23697508750826923
Significance level:  0.05
Degree of Freedom:  1
p-value:  0.6263994534115932

if p_value <= alpha:
    print("Reject Null Hypothesis")
else:
    print("Accept Null Hypothesis")

Since the p-value is greater than 0.05, we accept the null hypothesis. We conclude that there is no significant association between the two features.

So in this tutorial, we have discussed various statistical tests and their importance in data analysis and feature selection. We have seen the application of statistical tests, i.e. Z-test, T-test, correlation test, ANOVA test, and Chi-square test, along with their implementation in Python. Besides these, there are various other statistical tests used by data scientists and statisticians. I encourage you to share some in the comments below!

Frequently Asked Questions

Q1. When to use T-test over Z-test?
Use a Z-test for large samples (n > 30) with known population standard deviation, and a T-test for small samples (n < 30) or unknown population standard deviation. The T-test is also suitable for large samples with unknown population standard deviation.

Q2. What is the difference between parametric and non-parametric tests?
Parametric tests make assumptions about the distribution of the data, such as normality (a Gaussian distribution), while non-parametric tests do not rely on specific distributional assumptions. Parametric tests typically require continuous data and are more powerful when assumptions are met, while non-parametric tests are more robust but less powerful, suitable for ordinal or non-normally distributed data.

Q3.
What is a classifier in data analysis?
A. A classifier in data analysis is a model or algorithm used to categorize data points into predefined classes or categories based on their features or attributes. It’s commonly used in machine learning for tasks like text categorization or image recognition.

Q4. What do you mean by statistical hypothesis testing?
A. Statistical hypothesis testing is a method used to make inferences about a population based on sample data. It involves formulating null and alternative hypotheses, selecting a significance level, and using statistical tests to determine the likelihood of observing the sample data if the null hypothesis were true.

Q5. What is the test statistic used in McNemar’s test?
A. The test statistic used in McNemar’s test is typically denoted as χ² (chi-square). It assesses the difference between the discordant pairs in a matched-pairs design, comparing the frequencies of disagreement between two dependent categorical variables.
October is Black Mathematician Month!

MaRDI is a Germany-wide initiative to address challenges in mathematical research. It is aimed mainly at mathematicians and seeks to change how we do mathematics using FAIR (Findable, Accessible, Interoperable, and Reusable) research data. This way, it fits very well with our own open-source philosophy. IMAGINARY actively participates in this initiative by assisting in communicating and...
Net Interest Margin Formula | Calculator (Excel Template)

Net Interest Margin Formula
• Investment Return = Interest received or return on investment
• Interest Paid = Interest paid on the debt
• Average Assets = (Assets at the start of the year + Assets at the end of the year) / 2

Net Interest Margin tells us how profitable the firm's investment decisions are relative to the interest it pays on its debt, rather than just how well it keeps up with its creditors. It is an important metric for checking financial stability and operational acumen. To get a better understanding of the concept, we will calculate the Net Interest Margin value using the above-mentioned formula.

Deven Corporation is in the oil trading business and takes a loan of $100,000 at an interest rate of 9% per annum, and they earn $125,000 at the end of the year. What is the Net Interest Margin?
• Net Interest Received or Return on Investment = 125,000 − 100,000 = $25,000
• Interest to be paid = 9%, i.e. $9,000
• Average Assets = 100,000

Using the formula, Net Interest Margin is calculated as:
• Net Interest Margin = (Net return on investment − Interest paid) / Average Assets
• Net Interest Margin = (25,000 − 9,000) / 100,000
• Net Interest Margin = 0.16 or 16%

Therefore, the net interest margin for Deven Corporation is 16%.

Explanation of Net Interest Margin Formula

The net interest margin formula is fundamental for evaluating a financial institution’s financial competence and stability, and it can effectively assess other companies as well. The net interest margin represents the variance between the interest received from assets and the interest paid on liabilities, relative to the average assets held. The term “net interest” accurately captures the resulting net amount gained or lost based on this calculation. Furthermore, we use the term “margin” to indicate the profit or loss generated in relation to our assets.
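The worked example above (a $100,000 loan at 9% per annum, with $125,000 earned at the end of the year) can be reproduced in a few lines of Python:

```python
# Figures taken from the Deven Corporation example above.
investment_return = 125_000 - 100_000   # net return on investment, $25,000
interest_paid = 0.09 * 100_000          # 9% of the $100,000 loan, $9,000
average_assets = 100_000

# NIM = (investment return - interest paid) / average assets
nim = (investment_return - interest_paid) / average_assets   # ≈ 0.16, i.e. 16%
```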
Net Interest Margin = (Investment Returns − Interest Paid) / Average Assets

Using average assets is crucial in calculating the net interest margin, as it provides a more comprehensive view of the actual interest paid and received throughout the year rather than solely considering the value of assets at the beginning of the year. Incorporating the average assets provides a broader perspective, encompassing all the assets over the year rather than focusing on a single point in time.

Significance and Use

Net Interest Margin has many uses and gives us an informative, forward-looking view:
• With Net Interest Margin, one can get a basic understanding of how much over and above its debt costs the company is earning, and thus how well it is doing on the money raised through debt.
• In the finance industry, where money is sourced from one place (retail or other institutions) and used for other banking or lending activities, the Net Interest Margin tells how much the company is making over the interest it is paying on all its assets.
• Individuals can use this tool to determine the difference between their investments and acquired debt.

Net Interest Margin Calculator

You can use the following Net Interest Margin formula as a calculator:

Net Interest Margin = (Investment Returns − Interest Paid) / Average Assets

Net Interest Margin Formula in Excel (With Excel Template)

Here, we will do an example of the Net Interest Margin formula in Excel. Calculating Net Interest Margin in Excel is easy and can take many variables, which can be difficult to calculate otherwise without a spreadsheet. You can easily calculate the Net Interest Margin using the formula in the template provided.

Deven Corporation has taken a loan of $100,000 at the rate of 9% per annum and makes 4 percent compounded quarterly. Find the Net Interest Margin.
• Assets involved (the loan amount) = B5 = 100,000
• Interest to be paid = B6 = 9%
• Interest to be paid in dollars = B5*B6
• Return earned per quarter = B7 = 4%
• Return earned annually = ((1+B7)^4 − 1) = B16 = 16.986%
• Return earned annually in dollars = B16*B5

To Calculate the Net Interest Margin

To calculate the result, subtract the interest paid from the annual return earned in dollars and then divide it by the assets (the amount involved here).
• NIM = (B19 − B11)/B5 = 0.07985856 = 8%

To calculate the Net Interest Margin value (NIM), we need to make sure we use the right function. You can also use this Excel illustration in Google Sheets, provided you specify the required functions and input.

This has been a guide to the Net Interest Margin formula. Here, we discuss its uses along with practical examples. We also provide a Net Interest Margin Calculator with a downloadable Excel template.
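The same spreadsheet arithmetic can be checked in plain Python, using the figures from the example above ($100,000 of assets, 9% annual interest paid, 4% return compounded quarterly):

```python
# Spreadsheet figures from the Excel example above.
assets = 100_000
interest_rate = 0.09        # annual interest paid on the loan
quarterly_return = 0.04     # return earned per quarter

# (1 + 4%)^4 - 1: annual return from quarterly compounding, ≈ 16.986%
annual_return = (1 + quarterly_return) ** 4 - 1

# NIM = (annual return in dollars - interest paid in dollars) / assets
nim = (annual_return * assets - interest_rate * assets) / assets
# nim ≈ 0.07985856, i.e. about 8%
```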
How do you find the values of the indicated functions?

Given $\tan \theta = 1.036$, find $\sec \theta$ and $\cos \theta$.

1 Answer

Use arctan (also written ${\tan}^{-1}$) to find out what the angle is. Then use that angle as the argument for $\sec \theta$ and $\cos \theta$.

To demonstrate:

${\tan}^{-1}\left(1.036\right) = {46.01}^{\circ}$

Used my TI-30X calculator.

$\sec \left({46.01}^{\circ}\right) = 1.44$

Used https://www.rapidtables.com/calc/math/trigonometry-calculator.html

$\cos \left({46.01}^{\circ}\right) = 0.694$

Used my TI-30X.

I hope this helps.
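The same calculator steps can be reproduced with Python's math module (which works in radians, so the angle is converted for display):

```python
import math

theta = math.atan(1.036)           # the angle whose tangent is 1.036
theta_deg = math.degrees(theta)    # ≈ 46.01 degrees
cos_theta = math.cos(theta)        # ≈ 0.694
sec_theta = 1 / cos_theta          # sec is the reciprocal of cos, ≈ 1.44
```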
Conditional Probability

Summarized from Coursera lecture Statistical Inference, section 03, Conditional probability.

1. Definition

Conditional probability refers to the probability conditional on some new information. The conditional probability of an event B given that A has already occurred is:

\[P(B \vert A) = \frac{P(B \cap A)}{P(A)}\]

Notice that if A and B are independent, then $P(B \vert A) = \frac{P(B \cap A)}{P(A)} = \frac{P(B)P(A)}{P(A)} = P(B)$

Extended to three events: $P(A,B,C) = P(A \vert B,C) \cdot P(B \vert C) \cdot P(C)$

2. Bayes’ rule (a.k.a Bayes’ law or Bayes’ theorem)

\[P(B \vert A) = \frac{P(A \vert B)P(B)}{P(A \vert B)P(B) + P(A \vert B^c)P(B^c)}\]

3. Diagnostic tests

• Let $+$ and $-$ be the events that the result of a diagnostic test is positive or negative respectively.
  □ $+$: h(x) = 1
  □ $-$: h(x) = 0
• Let $D$ and $D^c$ be the events that the subject of the test has or does not have the disease respectively.
• Thus
  □ TP: h(x) = 1 and y = 1, $+ \cap D$
  □ FP: h(x) = 1 and y = 0, $+ \cap D^c$
  □ TN: h(x) = 0 and y = 0, $- \cap D^c$
  □ FN: h(x) = 0 and y = 1, $- \cap D$
  □ P = TP + FN, $D$ => subject really has the disease
  □ N = TN + FP, $D^c$ => subject really has no disease
  □ TP + FP, $+$ => predicts that the subject has the disease
  □ TN + FN, $-$ => predicts that the subject has no disease
• The Sensitivity, or true positive rate (TPR), or Recall, is the probability that the test is positive given that the subject actually has the disease. (Of all the subjects that actually have the disease, how many did we correctly predict as having the disease?)
  □ $TPR = \frac{TP}{TP + FN} = \frac{TP}{P} = \frac{P(+ \cap D)}{P(D)} = P(+ \vert D)$
• The Specificity (SPC), or true negative rate, is the probability that the test is negative given that the subject does not have the disease.
  □ $SPC = \frac{TN}{TN + FP} = \frac{TN}{N} = \frac{P(- \cap D^c)}{P(D^c)} = P(- \vert D^c)$
• The Precision, or positive predictive value (PPV), is the probability that the subject has the disease given that the test is positive. (Of all the subjects whom we predict as having the disease, how many actually have it?)
  □ $PPV = \frac{TP}{TP + FP} = \frac{P(+ \cap D)}{P(+)} = P(D \vert +)$
• The negative predictive value (NPV) is the probability that the subject does not have the disease given that the test is negative.
  □ $NPV = \frac{TN}{TN + FN} = \frac{P(- \cap D^c)}{P(-)} = P(D^c \vert -)$
• The prevalence of the disease is the marginal probability of disease, $P(D)$.
  □ The marginal probability can be understood as follows: $+, -$ and $D, D^c$ form a 2x2 table whose cells hold the corresponding probabilities. Adding a "total" column or row to this table gives the table's margin, and $P(D)$ can be seen as the total of $P(+ \cap D)$ and $P(- \cap D)$.
• $F_1$ score
  □ $F_1 = \frac{2 \times Precision \times Recall}{Precision + Recall} = \frac{2TP}{2TP + FP + FN}$
• Diagnostic Likelihood Ratio of a positive test
  □ $DLR_{+} = \frac{P(+ \vert D)}{P(+ \vert D^c)} = \frac{TPR}{1-SPC}$
• Diagnostic Likelihood Ratio of a negative test
  □ $DLR_{-} = \frac{P(- \vert D)}{P(- \vert D^c)} = \frac{1-TPR}{SPC}$
• Likelihood Ratios
  □ Expanding $P(D \vert +)$ and $P(D^c \vert +)$ with Bayes' rule shows that they share the same denominator, so $\frac{P(D \vert +)}{P(D^c \vert +)} = \frac{P(+ \vert D)}{P(+ \vert D^c)} \times \frac{P(D)}{P(D^c)}$
    ☆ $\frac{P(D \vert +)}{P(D^c \vert +)}$ is a.k.a. the post-test odds of $D$
    ☆ $\frac{P(D)}{P(D^c)}$ is a.k.a. the pre-test odds of $D$
    ☆ Therefore $\text{post-test odds of } D = DLR_{+} \times \text{pre-test odds of } D$
  □ Likewise, $\frac{P(D \vert -)}{P(D^c \vert -)} = \frac{P(- \vert D)}{P(- \vert D^c)} \times \frac{P(D)}{P(D^c)}$
  □ The likelihood ratio should be read as follows:
    ☆ Suppose the prevalence of the disease is $P(D) = 1\%$ and $DLR_{+} > 1$. Then the post-test odds of $D$ must exceed the pre-test odds of 1/99, and therefore $P(D \vert +) > 1\%$. That is, given a positive test result, your probability of actually having the disease is higher than the baseline prevalence (1%): you are more likely to have the disease.
    ☆ Similarly, if $DLR_{-} < 1$, then given a negative test result, your probability of truly not having the disease, $P(D^c \vert -)$, is greater than 99%: you are more likely to be disease-free.
    ☆ More generally,
      ○ DLR > 1 indicates that the test result is associated with the presence of the disease.
      ○ DLR < 0.1 indicates that the test result is associated with the absence of the disease.

4. Exercise

A web site for home pregnancy tests cites the following: "When the subjects using the test were women who collected and tested their own samples, the overall sensitivity was 75%. Specificity was also low, in the range 52% to 75%." Assume the lower value for the specificity. Suppose 30% of women taking pregnancy tests are actually pregnant. What number is closest to the probability of pregnancy given a positive test?

Given $TPR = 75\%$, $SPC = 52\%$ and $P(D) = 30\%$, find $P(D \vert +)$:

\[\begin{aligned} P(D \vert +) &= \frac{P(+ \vert D)P(D)}{P(+ \vert D)P(D) + P(+ \vert D^c)P(D^c)} = \frac{TPR \cdot P(D)}{TPR \cdot P(D) + (1-SPC)(1-P(D))} \\ &= \frac{0.75 \times 0.3}{0.75 \times 0.3 + 0.48 \times 0.7} \approx 40\% \end{aligned}\]

5. Exercise 2 (draft)

Let $S=\{1,2,\cdots,n\}$. Suppose $A$ and $B$ are two random choices of subsets of $S$; they are independent, and each of them is equally likely to be any of the $2^n$ subsets of $S$ (including the empty set and $S$ itself). Prove that $P(A \subset B) = {(\frac{3}{4})}^n$.

$P$ is the uniform distribution. Let $X = \vert B \vert$, the size of the set $B$; this is a random variable on the sample space $\Omega = \{ \text{pairs of subsets of } S \}$.
Derive the formula

\[ P(A \subset B) = \sum_{i=0}^{n} P(A \subset B \vert X=i) P(X=i) \]

$S$ has $2^n$ subsets in total, of which $\tbinom{n}{i}$ have size $i$, so:

\[ P(X=i) = \frac{\tbinom{n}{i}}{2^n} \]

Let $Y = \vert A \vert$. Given $X=i$, the size-$j$ subsets of $B$ number $\tbinom{i}{j}$ out of the $2^n$ equally likely choices of $A$, so

\[\begin{aligned} P(A \subset B, Y=j \vert X=i) &= \frac{\tbinom{i}{j}}{2^n} \\ P(A \subset B \vert X=i) &= \sum_{j=0}^{i}\frac{\tbinom{i}{j}}{2^n} = \frac{2^i}{2^n} \end{aligned}\]

\[\begin{aligned} P(A \subset B) &= \sum_{i=0}^{n} P(A \subset B \vert X=i) P(X=i) \\ &= \sum_{i=0}^{n} \frac{2^i}{2^n} \cdot \frac{\tbinom{n}{i}}{2^n} = \sum_{i=0}^{n} \frac{2^i \tbinom{n}{i}}{4^n} \end{aligned}\]

By the binomial theorem, $(1+x)^n = \sum_{k=0}^{n} \tbinom{n}{k} x^k$, so setting $x = 2$ gives

\[ \sum_{i=0}^{n} 2^i \tbinom{n}{i} = 3^n \]

and therefore

\[ P(A \subset B) = \frac{3^n}{4^n} = \left(\frac{3}{4}\right)^n \]
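The identity $P(A \subset B) = (3/4)^n$ can also be confirmed by brute-force enumeration for small $n$; a minimal sketch (the bitmask encoding of subsets is illustrative):

```python
from fractions import Fraction
from itertools import product

def prob_subset(n):
    """P(A ⊆ B) for A, B drawn independently and uniformly from the
    subsets of {1..n}, found by enumerating all pairs of bitmasks."""
    hits = sum(1 for a, b in product(range(1 << n), repeat=2)
               if a & ~b == 0)      # a ⊆ b  <=>  a has no bit outside b
    return Fraction(hits, 4 ** n)

# Check the closed form for n = 1..5.
for n in range(1, 6):
    assert prob_subset(n) == Fraction(3, 4) ** n
```

This also exposes the intuition behind the formula: each element independently lands in one of the four cells of the 2x2 membership table for $(A, B)$, and three of the four cells are compatible with $A \subseteq B$.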
{"url":"https://listcomp.com/math/2014/09/08/conditional-probability","timestamp":"2024-11-08T05:55:11Z","content_type":"text/html","content_length":"25020","record_id":"<urn:uuid:00192412-4607-49b9-a0dc-5c70553a8681>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00855.warc.gz"}
Direct FEM Simulation (DFS)

1. The incompressible Navier-Stokes Equations (NSE) are discretized directly, without applying any filter. Thus, the method does not approximate any Large Eddy Simulation (LES) filtered solution, but is instead an approximation of a weak solution, satisfying the weak form of the NSE.

2. For this method, we have a posteriori error estimates of quantities of interest with respect to a weak solution, which form the basis for our adaptive mesh refinement algorithm. The a posteriori error estimates are based on the solution of an associated adjoint problem with a goal quantity (such as a drag coefficient) as data.

3. We model turbulent boundary layers by a slip boundary condition, which is a good approximation when the skin friction stress is small; this gives enormous savings in computational cost because a very thin boundary layer does not have to be resolved.
{"url":"http://www.fenics-hpc.org/home/a-homepage-section/","timestamp":"2024-11-05T07:32:48Z","content_type":"text/html","content_length":"75725","record_id":"<urn:uuid:ec3a83a8-b054-462d-8a29-4eb6922f8fd6>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00682.warc.gz"}
This class implements a narrow-phase collision detection algorithm. More...

#include <include/reactphysics3d/collision/narrowphase/GJK/GJKAlgorithm.h>

enum class GJKResult { SEPARATED, COLLIDE_IN_MARGIN, INTERPENETRATE }

GJKAlgorithm() = default
~GJKAlgorithm() = default
GJKAlgorithm(const GJKAlgorithm &algorithm) = delete
    Deleted copy-constructor.
GJKAlgorithm &operator=(const GJKAlgorithm &algorithm) = delete
    Deleted assignment operator.
void testCollision(NarrowPhaseInfoBatch &narrowPhaseInfoBatch, uint32 batchStartIndex, uint32 batchNbItems, Array<GJKResult> &gjkResults)
    Compute a contact info if the two bounding volumes collide.

This class implements a narrow-phase collision detection algorithm. The algorithm uses the ISA-GJK algorithm. This implementation is based on the implementation discussed in the book "Collision Detection in Interactive 3D Environments" by Gino van den Bergen. The method implements the hybrid technique for calculating the penetration depth. The two objects are enlarged with a small margin. If the objects intersect only within their margins, the penetration depth is quickly computed using the GJK algorithm on the original objects (without margins). If the original objects (without margins) intersect, we exit GJK and run the SAT algorithm to get contacts and collision data.

◆ testCollision()

void GJKAlgorithm::testCollision(NarrowPhaseInfoBatch &narrowPhaseInfoBatch, uint32 batchStartIndex, uint32 batchNbItems, Array<GJKResult> &gjkResults)

Compute a contact info if the two bounding volumes collide.

This method implements the hybrid technique for computing the penetration depth by running the GJK algorithm on the original objects (without margins). If the shapes intersect only in their margins, the method computes the penetration depth and contact points (of the enlarged objects).
If the original objects (without margins) intersect, we call the computePenetrationDepthForEnlargedObjects() method, which runs the GJK algorithm on the enlarged objects to obtain a simplex polytope that contains the origin; we then give that simplex polytope to the EPA algorithm, which computes the correct penetration depth and contact points between the enlarged objects.

The documentation for this class was generated from the following files:
• include/reactphysics3d/collision/narrowphase/GJK/GJKAlgorithm.h
• src/collision/narrowphase/GJK/GJKAlgorithm.cpp
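The class above only exposes the batched C++ entry point. As an illustration of the underlying idea (not reactphysics3d's implementation), here is a minimal boolean GJK intersection test for 2D convex point sets, without margins, penetration depth, EPA/SAT fallback, or handling of degenerate simplices:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def neg(a): return (-a[0], -a[1])
def dot(a, b): return a[0] * b[0] + a[1] * b[1]

def triple(a, b, c):
    # Vector triple product (a x b) x c = b*(c.a) - a*(c.b); in 2D it gives
    # a direction perpendicular to an edge, pointing toward the origin.
    ca, cb = dot(c, a), dot(c, b)
    return (b[0] * ca - a[0] * cb, b[1] * ca - a[1] * cb)

def support(A, B, d):
    # Farthest point of the Minkowski difference A - B in direction d.
    pa = max(A, key=lambda p: dot(p, d))
    pb = max(B, key=lambda p: -dot(p, d))
    return sub(pa, pb)

def gjk_intersect(A, B):
    """Boolean intersection test for two convex 2D point sets (a sketch)."""
    d = (1.0, 0.0)                      # arbitrary initial direction
    s = [support(A, B, d)]              # the simplex (1 to 3 points)
    d = neg(s[0])                       # search toward the origin
    while True:
        if dot(d, d) == 0:
            return True                 # origin lies on the current simplex
        p = support(A, B, d)
        if dot(p, d) <= 0:
            return False                # origin is outside the support plane
        s.append(p)
        if len(s) == 2:                 # line-segment case
            b, a = s                    # a is the newest point
            ab, ao = sub(b, a), neg(a)
            d = triple(ab, ao, ab)      # perpendicular to ab, toward origin
        else:                           # triangle case
            c, b, a = s                 # a is the newest point
            ab, ac, ao = sub(b, a), sub(c, a), neg(a)
            ab_perp = triple(ac, ab, ab)
            ac_perp = triple(ab, ac, ac)
            if dot(ab_perp, ao) > 0:    # origin beyond edge ab
                s, d = [b, a], ab_perp
            elif dot(ac_perp, ao) > 0:  # origin beyond edge ac
                s, d = [c, a], ac_perp
            else:
                return True             # origin enclosed by the triangle
```

For example, two overlapping axis-aligned squares test as intersecting, while disjoint squares do not; touching shapes report no intersection here because of the `<= 0` exit, which is one of the edge cases a production implementation (with margins, as above) handles differently.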
{"url":"https://reactphysics3d.com/documentation/classreactphysics3d_1_1_g_j_k_algorithm.html","timestamp":"2024-11-14T18:34:14Z","content_type":"application/xhtml+xml","content_length":"13377","record_id":"<urn:uuid:e71e1e0f-757c-4298-84c4-b869890d2540>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00357.warc.gz"}
ACM10060 Applications of Differential Equations Assignment Sample Ireland

Differential equations have a wide range of applications in the real world, from predicting the weather to modeling financial markets. In fact, many physical and mathematical problems can be solved more easily using differential equations than by any other method. Some of the most common applications of differential equations include:

• Modeling the movement of objects (such as fluids or particles) over time
• Predicting changes in populations over time
• Determining the equilibrium state of a system (such as chemical reactions or economic systems)
• Designing and analyzing experiments.

In this assignment, we will investigate the use of differential equations to model population growth. The population of a particular species of frog in Ireland has been decreasing at an alarming rate in recent years. Scientists have been trying to determine the cause of the decline and develop a plan to stop it.

Buy Assignment Answers of ACM10060 Applications of Differential Equations Course

In this course, there are many types of assignments given to students, such as individual assignments, group-based assignments, reports, case studies, final year projects, skills demonstrations, learner records, and other solutions provided by us. We also provide Group Project Presentations for Irish students. In this section, we describe some of these tasks:

Assignment Task 1: Construct intermediate linear and nonlinear mathematical models, based on concepts such as dimensional analysis and the continuum hypothesis.

Intermediate linear and nonlinear mathematical models can be constructed through the use of dimensional analysis and the continuum hypothesis. Dimensional analysis is a powerful technique for constructing models that are both parsimonious and accurate, while the continuum hypothesis allows for the construction of models that are more accurate yet still tractable.
Both techniques have been used extensively in the physical sciences, and their application to mathematical models has yielded great success. It is hoped that their use will become more widespread in other areas of mathematics in the future.

Assignment Task 2: Solve differential equations analytically, using methods such as:

Partial fraction decomposition.

Partial fraction decomposition is a useful technique for solving differential equations analytically. Here's how it works:

1. Rearrange the differential equation so that a rational function has to be integrated.
2. Decompose that rational function into a sum of simpler fractions.
3. Integrate each simple fraction and combine the results to get the solution to the original differential equation.

Here's an example: x'(t) = x² − 4.

1. Separating the variables gives dx/(x² − 4) = dt.
2. Decomposing the rational function gives 1/(x² − 4) = (1/4)[1/(x − 2) − 1/(x + 2)].
3. Integrating both sides gives (1/4) ln|(x − 2)/(x + 2)| = t + C, which can then be solved for x(t).

Separation of variables.

We can solve differential equations analytically by using methods such as the separation of variables. This method involves rearranging the differential equation so that each variable appears on only one side, after which both sides can be integrated independently.

Here's an example: suppose we have the following differential equation:

y' = 2xy

We can separate the variables in this equation to get:

dy/y = 2x dx (1)

and integrate both sides of Eq. (1) to get:

ln|y| = x² + C (2)

Exponentiating Eq. (2) gives us:

y = Ae^(x²)

which is the solution to the original differential equation (with A = e^C an arbitrary constant).

Chain rule.

The chain rule is used throughout the analytic solution of differential equations: if y = f(u) and u = g(x), then dy/dx = f'(u)g'(x). It is what makes substitution methods work, and it is also how candidate solutions are verified.

For example, consider the differential equation y' − 2y = 0, whose general solution is y = Ce^(2x). Applying the chain rule, d[Ce^(2x)]/dx = 2Ce^(2x) = 2y, which confirms that the proposed function really does satisfy the equation.

Nonlinear mappings.

Nonlinear mappings (substitutions) can be used to solve differential equations analytically. For example, the logistic equation can be solved using a nonlinear mapping. The logistic equation is

y' = ry(1 − y/K)

Applying the substitution u = 1/y transforms it into the linear equation

u' = −ru + r/K

whose solution is u = 1/K + Ce^(−rt). Mapping back gives

y = 1 / (1/K + Ce^(−rt))

This is the solution to the original differential equation.

Characteristic equation method.

The characteristic equation method is a powerful tool for solving linear differential equations with constant coefficients. To use this approach, you first need to find the roots of the characteristic equation. Once you have done that, you can use those roots to build the general solution of the original equation.

Here's an example:

x'' + 2x' − x = 0

Substituting x = e^(rt) gives the characteristic equation:

r² + 2r − 1 = 0

Now, we can use the quadratic formula to find the roots of this equation:

r = (−b ± √(b² − 4ac))/2a = (−2 ± √(4 + 4))/2 = −1 ± √2

So, the roots of the characteristic equation are −1 + √2 and −1 − √2. We can now use these roots to write down the general solution of the original differential equation:

x(t) = C₁e^((−1+√2)t) + C₂e^((−1−√2)t)

Integrating factor method.

The integrating factor method is a powerful tool for solving first-order linear differential equations analytically. It works by finding a function μ(x) that turns the left-hand side of the equation into the derivative of a product. Once the integrating factor is found, the solution can be obtained by integrating both sides and solving for y.

Here's an example:

y' + 2y = 4

The integrating factor is μ(x) = e^(2x). Multiplying through by μ(x) gives:

(ye^(2x))' = 4e^(2x)

Integrating both sides and solving for y:

ye^(2x) = 2e^(2x) + C

y = 2 + Ce^(−2x)

So, the solution to the original differential equation is y = 2 + Ce^(−2x).

Phase-plane analysis: Critical points; separatrices; linearisation near critical points.

Phase-plane analysis is a powerful tool for studying differential equations. It involves examining the behaviour of the equation near critical points (points where the derivative vanishes). This can be done by looking at the shape of the phase-plane diagram near the critical point, and by studying the stability of solutions around the critical point.

Separatrices are curves in the phase plane that divide regions of qualitatively different solutions, such as stable and unstable ones. Linearisation is a technique that can be used to approximate solutions near a critical point: the equation is replaced by its linear approximation there and then solved. This can be helpful in determining whether a critical point is stable or not. All of these techniques can be used to help analyse differential equations.

Matrix methods.

Matrix methods are a powerful way to solve systems of linear differential equations, and they work by transforming the system into an equivalent algebraic (eigenvalue) problem. This algebraic problem can then be solved using standard techniques.

One advantage of matrix methods is that they often lead to closed-form solutions, which means that the exact answer can be found in terms of explicit mathematical expressions. This can be a big advantage when trying to analyse the solution or when trying to compare different solutions.
Matrix methods also have the advantage of being relatively easy to use, and this makes them a popular choice for solving difficult differential equations. Assignment Task 3: Analyse the properties of the solutions and describe the meaning of the solutions for the phenomena studied. Applications may include: One-dimensional mechanical systems (linear and nonlinear). In a one-dimensional mechanical system, the solutions to the differential equation represent the motion of the system. The nature of the solution (e.g. stable or unstable) can tell you a lot about how the system will behave. For example, if the solution is stable, then the system will remain in equilibrium after being disturbed. If the solution is unstable, however, then the system will quickly move away from equilibrium after being disturbed. This can be helpful in predicting how a system will behave under different conditions. The falling skydiver. In the falling skydiver problem, the differential equation models the motion of a skydiver as she falls through the air. The solution to the equation determines how the skydiver will move and provides information about her speed and trajectory. The solutions to the differential equation are always stable, and this means that the skydiver will always return to equilibrium after being disturbed. This is an important feature of the equation, and it helps to make sure that the skydiver is safe during her descent. In addition, the solutions can be used to calculate key properties of the skydiver’s motion, such as her maximum speed and final destination. These calculations can be helpful in designing a safe landing strategy for the skydiver. Nonlinear motion of a projectile. The differential equation for the motion of a projectile is nonlinear, and this can make it difficult to solve. However, by using matrix methods, it is often possible to find a closed-form solution to the equation. 
Once the solution is found, it can be used to predict the motion of the projectile under a variety of conditions. This can be helpful in designing artillery systems or in understanding the physics of projectile motion. It should be noted that nonlinear equations can be very difficult to solve, and so care should be taken when using them. In particular, it is important to check that the solutions are correct and that they accurately represent the physics of the problem. Resonant systems with external forcing. A resonant system is one that oscillates at a particular frequency when subjected to an external force. By understanding the nature of the solutions to the differential equation, it is often possible to predict the frequency of oscillation for a given system. For example, if the solution is stable, then the system will oscillate at a fixed frequency. If the solution is unstable, however, then the system will oscillate at a variety of different frequencies. This can be helpful in designing systems that are tuned to a specific frequency. It should be noted that resonance can be a dangerous phenomenon, so it is important to understand the properties of the solutions before using them in practice. Resonant systems can easily become unstable and may cause damage or injury if not handled properly. Nonlinear high-dimensional models such as the prey-predator model. The differential equation for the prey-predator model is high-dimensional, and this can make it difficult to solve. However, by using matrix methods, it is often possible to find a closed-form solution to the equation. Once the solution is found, it can be used to predict the behavior of the predator-prey system under a variety of conditions. This can be helpful in understanding the dynamics of the system and in designing management strategies for controlling its population size. It should be noted that high-dimensional equations can be very difficult to solve, and so care should be taken when using them. 
In particular, it is important to check that the solutions are correct and that they accurately represent the physics of the problem. Population models: The effect of harvesting; the tragedy of the commons. Population models are used to understand the dynamics of populations over time. By understanding the nature of the solutions to the differential equation, it is often possible to predict the effects of harvesting or other interventions on the population. For example, if the solution is stable, then the population will rebound after being harvested. If the solution is unstable, however, then the population may not rebound and may even decline in size. This can be helpful in designing management strategies for controlling a population’s size. It should be noted that population models can be complex and difficult to solve. In particular, it is important to check that the solutions are correct and that they accurately represent the physics of the problem. Incorrect solutions can lead to inaccurate predictions and may even be dangerous in some cases. The famous Lorenz 3D atmospheric model leads to chaotic orbits. The Lorenz 3D atmospheric model is a famous nonlinear equation that leads to chaotic orbits. By understanding the nature of the solutions to the differential equation, it is possible to predict the behavior of the system under a variety of conditions. It should be noted that chaotic systems can be very difficult to predict and so care should be taken when using them. In particular, it is important to check that the solutions are correct and that they accurately represent the physics of the problem. Incorrect solutions can lead to inaccurate predictions and may even be dangerous in some cases. The Brusselator and other chemical clocks. The Brusselator is a famous chemical clock that can be used to model the dynamics of chemical reactions. 
By understanding the nature of the solutions to the differential equation, it is possible to predict the behavior of the system under a variety of conditions.

It should be noted that chemical clocks can be complex and difficult to solve. In particular, it is important to check that the solutions are correct and that they accurately represent the physics of the problem. Incorrect solutions can lead to inaccurate predictions and may even be dangerous in some cases.

Get the best assistance in assignment solutions from us

Struggling with your assignments? Let our professional team help you out! We offer high-quality assistance with all types of academic papers. You can get help with your assignments, essays, research papers, and more. We guarantee high-quality, affordable services that will help you improve your academic performance. You can ask us for help with your assignment, or pay someone to write it for you, without any worries. We are here to help you! Our online essay writer service is one of the best in the industry. We guarantee affordable services that will help you get the grades you need. Contact us today and let us help you.
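As a complement to the analytic techniques in Task 2, a quick numerical integration is a useful way to sanity-check a closed-form solution. The sketch below integrates the logistic equation (mentioned under nonlinear mappings) with a classic fourth-order Runge-Kutta scheme and compares the result with the known closed form; the parameters r, K and y0 are illustrative, not taken from the assignment:

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classic fourth-order Runge-Kutta for the scalar ODE y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Logistic equation y' = r*y*(1 - y/K) with hypothetical parameters.
r, K, y0 = 1.0, 10.0, 1.0

def exact(t):
    # Closed form y = K / (1 + C*exp(-r*t)), with C = (K - y0)/y0
    # fixed by the initial condition y(0) = y0.
    return K / (1 + (K - y0) / y0 * math.exp(-r * t))

numeric = rk4(lambda t, y: r * y * (1 - y / K), y0, 0.0, 5.0, 500)
assert abs(numeric - exact(5.0)) < 1e-6
```

If the analytic and numeric answers disagree by more than the expected discretization error, the closed-form solution (or its constants) is worth rechecking.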
{"url":"https://www.irelandassignments.ie/samples/acm10060-applications-of-differential-equations-assignment-ireland/","timestamp":"2024-11-05T04:23:37Z","content_type":"text/html","content_length":"180399","record_id":"<urn:uuid:650017ad-6081-4e3a-96d8-eccb0146976c>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00432.warc.gz"}
Federation of Card Games | FCG

Open Face Cards is played with a deck of cards in four suits, i.e., spades ♠, hearts ♥, clubs ♣ and diamonds ♦, with the two jokers removed. There are 13 cards (3, 4, 5, 6, 7, 8, 9, 10, J, Q, K, A, 2, one card each) in each suit, making a 52-card deck in total. The game is for two or three players. Each player gets 17 cards over five rounds in a hand (the number of cards dealt in each round: 5, 3, 3, 3, 3), and chooses 13 of them to make legal combinations to compare with the other players. Scores are then calculated for each player.

Ⅱ. Basic rules

1. Rules for Card Dealing and Setting

In the first round, each player gets five cards and shall set them in order on the table as required; then each player gets three cards in each of the next four rounds. After the deal in each round, each player shall choose two of the three cards, abandoning the third, and set the two cards on the table as required. After the five rounds of dealing, each player has 13 cards set in order on the table as required. During dealing and setting, players are not allowed to change the positions of cards on the table after each round's setting.

2. Playing Rules

Each player shall set their 13 cards into three hands: 3 cards in the front hand, 5 cards in the middle hand, and 5 cards in the back hand, and shall make the ranking of the three hands satisfy the rule: front hand < middle hand ≤ back hand.

3. Combinations

(1) High Card: five cards of different numbers that are not in sequence and not all of the same suit. In the front hand, it refers to a combination consisting of three cards of different numbers. Comparison rule: compare the card of the highest number (top card) first, then the card of the second highest number, and so on. Ranking of numbers from low to high: 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K, A

(2) One Pair: two cards of the same number plus three cards of different numbers.
In the front hand, it refers to a combination consisting of two cards of the same number plus one single card. When comparing combinations, the higher pair counts. If the pairs are the same, the top card among the remaining cards counts, following the rules for comparing single cards.

(3) Two Pairs: two pairs plus one single card. When comparing combinations, the higher of the two pairs counts. If it is the same, compare the other pair; if both pairs are the same, compare the single card.

(4) Three of a Kind: three cards of the same number plus two cards of different numbers. In the front hand, it refers to a combination consisting of three cards of the same number. When comparing combinations, the strength of the three equal cards counts, following the rules for comparing single cards.

(5) Straight: five cards of sequential numbers (the five cards in a straight may have different suits). When comparing combinations, the top card in a straight counts. 10-J-Q-K-A is the highest straight, while A-2-3-4-5 is the lowest. Note that straights run no higher than 10-J-Q-K-A; for example, J-Q-K-A-2 is not a straight. If two or more straights have the same numbers, they are considered equal.

(6) Flush: five cards of the same suit (the five cards in a flush may have non-sequential numbers). When comparing combinations, the top card in a flush counts; if the top cards in two or more flushes have the same number, the card of the second highest number counts, and so on. If all five cards in two flushes have the same numbers but different suits, the flushes are considered equal.

(7) Full House: three cards of the same number plus a pair. When comparing combinations, the strength of the three equal cards counts.

(8) Four of a Kind: four cards of the same number plus a single card. When comparing combinations, the strength of the four equal cards counts.
(9) Straight Flush: five cards of sequential numbers and the same suit. When comparing combinations, the top card in each straight flush counts; if the top cards in two straight flushes have the same number, the two straight flushes are considered equal.

(10) Royal Flush: 10-J-Q-K-A of the same suit. Royal flushes of any suit are considered equal.

4. Strength of Combinations

(1) Strength comparison between combinations consisting of three cards (front hand)

Only High Card, One Pair and Three of a Kind are combinations consisting of three cards. Rules for comparing the same type of combination are as described above; rules for comparing different types are: Three of a Kind > One Pair > High Card

(2) Strength comparison between combinations consisting of five cards (middle hand, back hand)

Rules for comparing the strength of the same type of combination are as described above; rules for comparing different types are: Royal Flush > Straight Flush > Four of a Kind > Full House > Flush > Straight > Three of a Kind > Two Pairs > One Pair > High Card

(3) Strength comparison between combinations consisting of three cards and those consisting of five cards

As the playing rules require the front hand < the middle hand, it is necessary to compare the strength of combinations consisting of three cards against those consisting of five cards. The rules for this comparison are consistent with those mentioned above.

5. Strength Comparison and Scoring

Each player shall face up his three hands on the table after setting, and compare the strength of each hand (front hand, middle hand, back hand) with the other players.

(1) In each comparison, the winner gets +1 point and the loser gets -1 point. If two hands have the same strength, each party gets 0 points.

(2) If the hands set by a player fail to meet the rule front hand < middle hand ≤ back hand, he is scored -12 basis points in the round, while each of the two other players gets +6 basis points.
Then the two other players continue the process of strength comparison.

(3) If the hands set by a player meet the rules, and the front hand contains a pair of Qs, a pair of Ks, a pair of As or Three of a Kind, those hands are called a Fan. In addition to the points in each round, the player with a Fan is scored according to the following rules:

+2 points for the player whose front hand has a pair of Qs, -1 point for each of the two other players;
+4 points for the player whose front hand has a pair of Ks, -2 points for each of the two other players;
+6 points for the player whose front hand has a pair of As, -3 points for each of the two other players.

Note: a Fan applies only to the hand in which it occurs and does not affect the next hand.

Ⅲ. Process

1. At the start of a round, the staff deals cards in order after the three players are seated and agree to start. Players then take cards, abandon cards and set cards according to the process, and shall make sure that all cards are placed in the required positions on the table.

2. After a hand is over, the staff records the outcome and final scores and, once unanimously approved by the three players, collects the cards for that hand and starts a new hand.

3. After a round is over, the staff calculates the total scores of the three teams that met at the same table in that round. The players shall confirm by signature before leaving and wait for the next round of the competition.

4. During each round of the competition, players shall not leave without a reason.

Ⅳ. Duplicate System

1. Three players of one team will meet six players of the other two teams at the same time, with seats staggered. In the same hand, players from the three teams meet at three tables. If players A1, A2 and A3 are teammates, the seating for the three tables is: A1, B1, C1; B2, C2, A2; and C3, A3, B3.

2.
In the same hand of the same round, identical hands are dealt at each table, ensuring that players from the same team, in the same seat at each of the three tables, receive the same hands; this eliminates the influence of the deal on the results. Examining the different approaches taken by different players on the same hand reflects the diversity of the game and the technical differences between players.

Ⅴ. Penalty Explanation

1. During the competition, only information conveyed by the cards themselves during normal play may be passed. Players are not allowed to exchange information about their hands with teammates through language, gestures, movements, eye contact or any other means. Players may take advantage of information given away by opponents, but deliberately sending false signals in order to mislead opponents is not allowed. The chief referee has the right to disqualify a player who commits a serious violation.

2. Players are not allowed to change their cards after placing them on the table or abandoning them. The referee has the right to give a warning, or to rule immediately that the offending player failed to set his hands according to the rules (-12 basis points).

3. After a hand, no discussion or dispute is allowed. If there is any question, please defer to the judgment of the referee.
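The Fan bonuses for the front hand can be sketched directly from the table above. This is an illustrative fragment, not official FCG software: the rank encoding is hypothetical, and the Three of a Kind Fan is omitted because the rules above do not attach a specific bonus to it:

```python
from collections import Counter

def fan_bonus(front_ranks):
    """(bonus for the player, penalty for each opponent) for a Fan in
    the front hand.  front_ranks is a list of three rank strings, e.g.
    ["Q", "Q", "7"].  Only the pair bonuses stated in the rules are
    scored: QQ -> (+2, -1), KK -> (+4, -2), AA -> (+6, -3)."""
    table = {"Q": (2, -1), "K": (4, -2), "A": (6, -3)}
    for rank, n in Counter(front_ranks).items():
        if n == 2 and rank in table:
            return table[rank]
    return (0, 0)
```

For example, a front hand of Q-Q-7 yields (+2, -1): the player gains 2 points and each of the two opponents loses 1.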
{"url":"https://fcgofficial.com/zh/rules/open-face-cards","timestamp":"2024-11-14T12:03:48Z","content_type":"text/html","content_length":"536488","record_id":"<urn:uuid:8844763d-dedc-44fe-bba1-aaa82ea3bbb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00124.warc.gz"}
roundConstantRadius( string string [string string] , [append=boolean], [constructionHistory=boolean], [name=string], [object=boolean], [radiuss=linear], [side=[string, int]], [sidea=int], [sideb=int] )

Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.

roundConstantRadius is undoable, queryable, and editable.

This command generates constant-radius NURBS fillets and NURBS corner surfaces for matching edge pairs on NURBS surfaces. An edge pair is a matching pair of surface isoparms or trim edges. The command can handle more than one edge pair at a time. It can also handle compound edges, where an edge pair is composed of more than two surfaces; use the "-sa" and "-sb" flags in this case.

The results from this command are three surface var groups plus the name of the new roundConstantRadius dependency node, if history was on. The 1st var group contains trimmed copies of the original surfaces. The 2nd var group contains the new NURBS fillet surfaces. The 3rd var group contains the new NURBS corners (if any).

A simple example of an edge pair is an edge of a NURBS cube, where two faces of the cube meet. The command generates a NURBS fillet at the edge and trims back the faces. Another example is a NURBS cylinder with a planar trim surface cap: the command creates a NURBS fillet where the cap meets the cylinder and trims back both the cap and the cylinder. Another example involves all 12 edges of a NURBS cube: NURBS fillets are created wherever one face meets another, and NURBS corners are created wherever three edges meet at a corner.

Return value: string[] (resulting NURBS surfaces' names and node name). In query mode, return type is based on queried flag.
Flags: append, constructionHistory, name, object, radiuss, side, sidea, sideb

append (a) — boolean
  If true, then an edge pair is being added to an existing round dependency node. Default is false. When this flag is true, an existing round dependency node must be specified. See example below.

constructionHistory (ch) — boolean
  Turn the construction history on or off.

name (n) — string
  Sets the name of the newly-created node. If it contains a namespace path, the new node will be created under the specified namespace; if the namespace does not exist, it will be created.

object (o) — boolean
  Create the result, or just the dependency node.

radiuss (rad) — linear
  Use this flag to specify radius. This overrides the "r/radius" flag. If only one "rad" flag is used, it is applied to all edge pairs. If more than one "rad" flag is used, the number of "rad" flags must equal the number of edge pairs. For example, for four edge pairs, zero, one or four "rad" flags must be specified.

side (s) — [string, int]
  Use this flag for compound edges. It replaces the sidea/sideb flags and is compatible with Python. The first argument must be either "a" or "b". The same number of "a" values as "b" values must be specified. If no sides are specified with the "side" flag (or sidea/sideb flags), the edges are assumed to be in pairs. See also the examples below. For example, two faces of a cube meet at an edge pair. Suppose one of the faces is then split in two pieces at the middle of the edge, so that there is one face on side "A" and two pieces on side "B". In this case the flag combination -side "a" 1 -side "b" 2 would be used, and the edges must be specified in the corresponding order:

  // MEL
  roundConstantRadius -side "a" 1 -side "b" 2 isoA isoB1 isoB2;
  # Python
  maya.cmds.roundConstantRadius( 'isoA', 'isoB1', 'isoB2', side=[("a",1), ("b",2)] )

sidea (sa) — int
  Use this flag for compound edges in conjunction with the "-sb" flag. This flag is not intended for use from Python; see the "side" flag instead. The same number of "-sa" flags as "-sb" flags must be specified. If no "-sa" or "-sb" flags are specified, the edges are assumed to be in pairs. See also the examples below. For the split-face example above, the flag combination -sidea 1 -sideb 2 would be used, and the edges must be specified in the corresponding order:

  roundConstantRadius -sidea 1 -sideb 2 isoA isoB1 isoB2;

sideb (sb) — int
  Use this flag for compound edges in conjunction with the "-sa" flag. See the description of the "-sa" flag. This flag is not intended for use from Python; see the "side" flag instead.

Python examples:

import maya.cmds as cmds

# This rounds four edges of a cube with radius 0.9. Because a single
# radius is specified, it is used for all edges. The edges must
# be specified in matching pairs if no "sidea" or "sideb" flags
# are used.
cube = cmds.nurbsCube(w=5, lr=1, hr=1, d=3, ch=0)
sides = cmds.listRelatives(cube[0], c=True)
rnd = cmds.roundConstantRadius(
    (sides[0] + ".v[0]"), (sides[2] + ".v[1]"),
    (sides[0] + ".u[1]"), (sides[4] + ".v[1]"),
    (sides[0] + ".v[1]"), (sides[3] + ".u[1]"),
    (sides[0] + ".u[0]"), (sides[5] + ".u[1]"),
    rad=0.9)

# This adds a pair of isoparms to an existing round operation,
# named rnd[3] (from the previous example).
cmds.roundConstantRadius((sides[3] + '.v[0]'), (sides[5] + '.v[1]'),
                         rnd[3], append=True, rad=0.8)

# This rounds 6 edges of a cube with different radius values.
# The first four edges have radius 0.9 and the others have radius 1.1.
# In this case the edges are specified in matching pairs
# since no "sidea" or "sideb" flags are used.
cube = cmds.nurbsCube(w=5, lr=1, hr=1, d=3, ch=0)
sides = cmds.listRelatives(cube[0], c=True)
cmds.roundConstantRadius(
    (sides[0] + ".v[0]"), (sides[2] + ".v[1]"),
    (sides[0] + ".u[1]"), (sides[4] + ".v[1]"),
    (sides[0] + ".v[1]"), (sides[3] + ".u[1]"),
    (sides[0] + ".u[0]"), (sides[5] + ".u[1]"),
    (sides[3] + ".v[0]"), (sides[5] + ".v[1]"),
    (sides[2] + ".u[1]"), (sides[4] + ".u[0]"),
    rad=[0.9, 0.9, 0.9, 0.9, 1.1, 1.1])

# This rounds a 2-to-1 compound edge. The "side" flag indicates
# that there are two edges on side A, and one on side B.
# The edges must be specified in the corresponding order.
pln1 = cmds.nurbsPlane(w=5, ch=0, ax=(0, 1, 0))
pln2 = cmds.nurbsPlane(p=(2.5, 2.5, 1.25), ax=(1, 0, 0), w=2.5, lr=2,
                       d=3, u=1, v=1, ch=0)
pln3 = cmds.nurbsPlane(p=(2.5, 2.5, -1.25), ax=(1, 0, 0), w=2.5, lr=2,
                       d=3, u=1, v=1, ch=0)
pln4 = cmds.nurbsPlane(p=(0, 2.5, -2.5), ax=(0, 0, 1), w=5, lr=1,
                       d=3, u=1, v=1, ch=0)
cmds.roundConstantRadius(
    (pln2[0] + '.v[0]'), (pln3[0] + '.v[0]'),
    (pln1[0] + '.u[1]'), (pln3[0] + '.u[1]'),
    (pln4[0] + '.u[1]'),
    rad=0.9, side=[('a', 2), ('b', 1), ('a', 1), ('b', 1)])
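The counting rule for the "rad" flag (zero, one, or exactly one value per edge pair) can be checked before calling the command. This helper is not part of maya.cmds; it is only an illustrative pre-flight check written in plain Python:

```python
def valid_radius_count(num_edge_pairs, radii):
    """True if the number of radius values satisfies the documented rule:
    zero (use the default radius), one (applied to all edge pairs), or
    exactly one radius per edge pair."""
    return len(radii) in (0, 1, num_edge_pairs)
```

For the four-edge-pair example above, `[]`, `[0.9]`, or a list of four radii would pass; a list of two would be rejected by the command.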
{"url":"https://help.autodesk.com/cloudhelp/2023/ENU/Maya-Tech-Docs/CommandsPython/roundConstantRadius.html","timestamp":"2024-11-03T02:51:49Z","content_type":"text/html","content_length":"17595","record_id":"<urn:uuid:d266951d-cccf-4ba9-91ff-c60df7199b1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00857.warc.gz"}
Partitioning Hückel–London currents into cycle contributions
Chemistry, 3(4):1138–1156, October, 2021. doi: 10.3390/chemistry3040083

Abstract: Ring-current maps give a direct pictorial representation of molecular aromaticity. They can be computed at levels ranging from empirical to full ab initio and DFT. For benzenoid hydrocarbons, Hückel–London (HL) theory gives a remarkably good qualitative picture of overall current patterns, and a useful basis for their interpretation. This paper describes an implementation of Aihara’s algorithm for computing HL currents for a benzenoid (for example) by partitioning total current into its constituent cycle currents. The Aihara approach can be used as an alternative way of calculating Hückel–London current maps, but more significantly as a tool for analysing other empirical models of induced current based on conjugated circuits. We outline an application where examination of cycle contributions to HL total current led to a simple graph-theoretical approach for cycle currents, which gives a better approximation to the HL currents for Kekulean benzenoids than any of the existing conjugated-circuit models, and unlike these models it also gives predictions of the HL currents in non-Kekulean benzenoids that are of similar quality.

BibTeX entry fields:
title = {Partitioning {Hückel}–{London} currents into cycle contributions},
volume = {3},
issn = {2624-8549},
url = {https://www.mdpi.com/2624-8549/3/4/83},
doi = {10.3390/chemistry3040083},
abstract = {Ring-current maps give a direct pictorial representation of molecular aromaticity. They can be computed at levels ranging from empirical to full ab initio and DFT. For benzenoid hydrocarbons, Hückel–London (HL) theory gives a remarkably good qualitative picture of overall current patterns, and a useful basis for their interpretation. This paper describes an implementation of Aihara’s algorithm for computing HL currents for a benzenoid (for example) by partitioning total current into its constituent cycle currents. The Aihara approach can be used as an alternative way of calculating Hückel–London current maps, but more significantly as a tool for analysing other empirical models of induced current based on conjugated circuits. We outline an application where examination of cycle contributions to HL total current led to a simple graph-theoretical approach for cycle currents, which gives a better approximation to the HL currents for Kekulean benzenoids than any of the existing conjugated-circuit models, and unlike these models it also gives predictions of the HL currents in non-Kekulean benzenoids that are of similar quality.},
language = {en},
number = {4},
urldate = {2022-04-20},
journal = {Chemistry},
author = {Myrvold, Wendy and Fowler, Patrick W. and Clarke, Joseph},
month = oct,
year = {2021},
pages = {1138--1156},
{"url":"https://bibbase.org/network/publication/myrvold-fowler-clarke-partitioninghckellondoncurrentsintocyclecontributions-2021","timestamp":"2024-11-07T15:57:00Z","content_type":"text/html","content_length":"15863","record_id":"<urn:uuid:f55ada9c-ceec-4675-ae05-0a4427b9ca7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00630.warc.gz"}
Maxwell/Square Mile Converter | Kody Tools

1 Maxwell/Square Mile = 3.8610215854245e-15 Tesla
1 Maxwell/Square Mile = 3.8610215854245e-12 Millitesla
1 Maxwell/Square Mile = 3.8610215854245e-9 Microtesla
1 Maxwell/Square Mile = 3.8610215854245e-11 Gauss
1 Maxwell/Square Mile = 0.0000038610215854245 Gamma
1 Maxwell/Square Mile = 3.8610215854245e-15 Weber/Square Meter
1 Maxwell/Square Mile = 3.8610215854245e-9 Weber/Square Kilometer
1 Maxwell/Square Mile = 3.8610215854245e-17 Weber/Square Decimeter
1 Maxwell/Square Mile = 3.8610215854245e-19 Weber/Square Centimeter
1 Maxwell/Square Mile = 3.8610215854245e-21 Weber/Square Millimeter
1 Maxwell/Square Mile = 3.8610215854245e-27 Weber/Square Micrometer
1 Maxwell/Square Mile = 3.8610215854245e-33 Weber/Square Nanometer
1 Maxwell/Square Mile = 1e-8 Weber/Square Mile
1 Maxwell/Square Mile = 3.228305785124e-15 Weber/Square Yard
1 Maxwell/Square Mile = 3.5870064279155e-16 Weber/Square Foot
1 Maxwell/Square Mile = 2.4909766860524e-18 Weber/Square Inch
1 Maxwell/Square Mile = 3.8610215854245e-12 Milliweber/Square Meter
1 Maxwell/Square Mile = 0.0000038610215854245 Milliweber/Square Kilometer
1 Maxwell/Square Mile = 3.8610215854245e-14 Milliweber/Square Decimeter
1 Maxwell/Square Mile = 3.8610215854245e-16 Milliweber/Square Centimeter
1 Maxwell/Square Mile = 3.8610215854245e-18 Milliweber/Square Millimeter
1 Maxwell/Square Mile = 3.8610215854245e-24 Milliweber/Square Micrometer
1 Maxwell/Square Mile = 3.8610215854245e-30 Milliweber/Square Nanometer
1 Maxwell/Square Mile = 0.00001 Milliweber/Square Mile
1 Maxwell/Square Mile = 3.228305785124e-12 Milliweber/Square Yard
1 Maxwell/Square Mile = 3.5870064279155e-13 Milliweber/Square Foot
1 Maxwell/Square Mile = 2.4909766860524e-15 Milliweber/Square Inch
1 Maxwell/Square Mile = 3.8610215854245e-9 Microweber/Square Meter
1 Maxwell/Square Mile = 0.0038610215854245 Microweber/Square Kilometer
1 Maxwell/Square Mile = 3.8610215854245e-11 Microweber/Square Decimeter
1 Maxwell/Square Mile = 3.8610215854245e-13 Microweber/Square Centimeter
1 Maxwell/Square Mile = 3.8610215854245e-15 Microweber/Square Millimeter
1 Maxwell/Square Mile = 3.8610215854245e-21 Microweber/Square Micrometer
1 Maxwell/Square Mile = 3.8610215854245e-27 Microweber/Square Nanometer
1 Maxwell/Square Mile = 0.01 Microweber/Square Mile
1 Maxwell/Square Mile = 3.228305785124e-9 Microweber/Square Yard
1 Maxwell/Square Mile = 3.5870064279155e-10 Microweber/Square Foot
1 Maxwell/Square Mile = 2.4909766860524e-12 Microweber/Square Inch
1 Maxwell/Square Mile = 3.0725033535238e-8 Unit Pole/Square Meter
1 Maxwell/Square Mile = 0.030725033535238 Unit Pole/Square Kilometer
1 Maxwell/Square Mile = 3.0725033535238e-10 Unit Pole/Square Decimeter
1 Maxwell/Square Mile = 3.0725033535238e-12 Unit Pole/Square Centimeter
1 Maxwell/Square Mile = 3.0725033535238e-14 Unit Pole/Square Millimeter
1 Maxwell/Square Mile = 3.0725033535238e-20 Unit Pole/Square Micrometer
1 Maxwell/Square Mile = 3.0725033535238e-26 Unit Pole/Square Nanometer
1 Maxwell/Square Mile = 0.079577471545942 Unit Pole/Square Mile
1 Maxwell/Square Mile = 2.569004117573e-8 Unit Pole/Square Yard
1 Maxwell/Square Mile = 2.8544490195256e-9 Unit Pole/Square Foot
1 Maxwell/Square Mile = 1.9822562635594e-11 Unit Pole/Square Inch
1 Maxwell/Square Mile = 3.8610215854245e-7 Line/Square Meter
1 Maxwell/Square Mile = 0.38610215854245 Line/Square Kilometer
1 Maxwell/Square Mile = 3.8610215854245e-9 Line/Square Decimeter
1 Maxwell/Square Mile = 3.8610215854245e-11 Line/Square Centimeter
1 Maxwell/Square Mile = 3.8610215854245e-13 Line/Square Millimeter
1 Maxwell/Square Mile = 3.8610215854245e-19 Line/Square Micrometer
1 Maxwell/Square Mile = 3.8610215854245e-25 Line/Square Nanometer
1 Maxwell/Square Mile = 1 Line/Square Mile
1 Maxwell/Square Mile = 3.228305785124e-7 Line/Square Yard
1 Maxwell/Square Mile = 3.5870064279155e-8 Line/Square Foot
1 Maxwell/Square Mile = 2.4909766860524e-10 Line/Square Inch
1 Maxwell/Square Mile = 3.8610215854245e-7 Maxwell/Square Meter
1 Maxwell/Square Mile = 0.38610215854245 Maxwell/Square Kilometer
1 Maxwell/Square Mile = 3.8610215854245e-9 Maxwell/Square Decimeter
1 Maxwell/Square Mile = 3.8610215854245e-11 Maxwell/Square Centimeter
1 Maxwell/Square Mile = 3.8610215854245e-13 Maxwell/Square Millimeter
1 Maxwell/Square Mile = 3.8610215854245e-19 Maxwell/Square Micrometer
1 Maxwell/Square Mile = 3.8610215854245e-25 Maxwell/Square Nanometer
1 Maxwell/Square Mile = 3.228305785124e-7 Maxwell/Square Yard
1 Maxwell/Square Mile = 3.5870064279155e-8 Maxwell/Square Foot
1 Maxwell/Square Mile = 2.4909766860524e-10 Maxwell/Square Inch
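All of these factors follow from two definitions: 1 maxwell = 1e-8 weber, and 1 international mile = 1609.344 metres (so one square mile is 2589988.110336 m²). A short sketch that recomputes a few table entries (the function name is my own):

```python
import math

MAXWELL_IN_WEBER = 1e-8
MILE_IN_METRES = 1609.344
SQ_MILE_IN_SQ_METRES = MILE_IN_METRES ** 2  # 2589988.110336 m^2


def maxwell_per_sq_mile_to(unit_in_wb_per_m2):
    """Convert 1 Mx/mi^2 into a flux-density unit given in Wb/m^2."""
    return (MAXWELL_IN_WEBER / SQ_MILE_IN_SQ_METRES) / unit_in_wb_per_m2


tesla = maxwell_per_sq_mile_to(1.0)    # 1 T = 1 Wb/m^2
gauss = maxwell_per_sq_mile_to(1e-4)   # 1 G = 1e-4 Wb/m^2
gamma = maxwell_per_sq_mile_to(1e-9)   # 1 gamma = 1 nT
```

`math.isclose` absorbs the rounding in the published 14-digit values when checking the results against the table.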
{"url":"https://www.kodytools.com/units/magdensity/from/mxpmi2","timestamp":"2024-11-04T21:56:28Z","content_type":"text/html","content_length":"121407","record_id":"<urn:uuid:b3c762c2-716b-41fd-b289-2feefaf580c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00441.warc.gz"}