Derivation of Kinetic Gas Equation MCQ for Students | MCQTUBE

We covered all the Derivation of Kinetic Gas Equation MCQs in this post for free so that you can practice well for the exam. These types of competitive MCQs appear in exams such as CSIR, NET, SET, FRO, JL, DL, APPSC, and TSPSC.

MCQ on Derivation of Kinetic Gas Equation for Students

The mean kinetic energy of a perfect monoatomic gas molecule at temperature T is
1. kT  2. 3kT/2  3. kT/2  4. -k/T

The velocity of gas molecules is inversely proportional to
1. density  2. square root of density  3. cube of density  4. square of the density
Answer: Option 2 - square root of density

The ratio of specific heats of a gas changes with the atomicity of the gas as
1. constant  2. decrease  3. increase  4. directly proportional

If a and b are van der Waals constants, then the equation for Boyle's temperature is
1. a/Rb  2. 4a/5Rb  3. 2a/Rb  4. 3a/Rb

At what temperature does the kinetic energy of a gas become equal to one electron volt?
1. 772 K  2. 473 K  3. 772 °C  4. 373 K

The relationship between the inversion temperature and Boyle's temperature of a gas is
1. Ti = 2TB  2. Ti = TB  3. Ti = 4TB  4. Ti = 3TB

The equation of state corresponding to 8 g of oxygen is
1. PV = RT/4  2. PV = RT/2  3. PV = 8RT  4. PV = RT

At room temperature the r.m.s. speed of the molecules of a certain diatomic gas is found to be 1930 m/s. The gas is
1. Chlorine  2. Fluorine  3. Oxygen  4. Hydrogen

The number of degrees of freedom of translatory and rotatory motion of a diatomic molecule is
1. 5  2. 2  3. 1  4. 4

According to the kinetic theory of gases, the pressure of a gas is given by the expression P = 2E/3V, where E and V are the energy and volume of the gas. In this case, what is the total energy E?
1. The rotational energy of the molecules  2. The mechanical energy of the molecules  3. The kinetic energy of the molecules  4. None of the above
Answer: Option 4 - None of the above

The molar heat capacity of an ideal gas
1. is zero for an adiabatic process  2. is equal to the product of molecular weight and specific heat capacity for any process  3. depends only on the nature of the gas for a process in which either volume or pressure is constant, and is infinite for an isothermal process  4. All of the above
Answer: Option 4 - All of the above

At a given temperature, which of the following gases possesses the maximum r.m.s. velocity?
1. CO₂  3. H₂  4. N₂

According to the kinetic theory of gases, at absolute zero temperature
1. liquid helium freezes  2. molecular motion will be stopped  3. liquid hydrogen freezes  4. none of the above

Real gases can be treated as near-ideal gases when a real gas system is
1. at very high pressure but at standard temperature  2. at very high temperatures but at standard pressure  3. at a very low number of gas molecules per unit volume  4. at STP
Answer: Option 2 - at very high temperatures but at standard pressure
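For reference when checking the answers above, the standard kinetic-theory results these questions revolve around (textbook results, not stated on the original page) can be summarised as:

```latex
% Kinetic gas equation: N molecules of mass m in volume V,
% with mean square speed \overline{c^2}:
PV = \tfrac{1}{3} N m \overline{c^2}
% Comparing with the ideal gas law PV = NkT gives the mean
% translational kinetic energy per molecule:
\tfrac{1}{2} m \overline{c^2} = \tfrac{3}{2} kT
% With density \rho = Nm/V, at a fixed pressure the r.m.s. speed
% varies inversely with the square root of the density:
v_{\mathrm{rms}} = \sqrt{\overline{c^2}} = \sqrt{3P/\rho}
```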
Using jti On Graphical Models

The family of graphical models is vast and includes many different models. jti handles Bayesian networks and decomposable undirected graphical models. Undirected graphical models are also known as Markov random fields (MRFs). Decomposability is a property ensuring a closed-form expression for the maximum likelihood parameters. Graphical models enjoy the property that conditional independencies can be read off from a graph consisting of nodes representing the random variables. In Bayesian networks, edges are directed from one node to another and represent directed connections. In an MRF the edges are undirected and should be regarded as associations between pairs of nodes in a broad sense. A Bayesian network is specified through a collection of conditional probability tables (CPTs), as we outline in the following, and an MRF is specified through tables described by the cliques of the graph. Graphical models are used in a diverse range of applications, e.g. forensic identification problems, traffic monitoring, automated general medical diagnosis and risk analysis. Finally, we mention that the term posterior inference is, within the realm of graphical models, synonymous with estimating conditional probabilities.

Let \(p\) be a discrete probability mass function of a random vector \(X = (X_{v} \mid v \in V)\) where \(V\) is a set of labels. The state space of \(X_{v}\) is denoted \(I_{v}\) and the state space of \(X\) is then given by \(I = \times_{v\in V} I_{v}\). A realized value \(x = (x_{v})_{v\in V}\) is called a cell. Given a subset \(A\) of \(V\), the A-marginal cell of \(x\) is the vector, \(x_{A} = (x_v)_{v\in A}\), with state space \(I_{A} = \times_{v\in A} I_{v}\). A Bayesian network can be defined as a directed acyclic graph (DAG), for which each node represents a random variable, together with a joint probability of the form \[ p(x) = \prod_{v\in V} p(x_{v} \mid x_{pa(v)}), \] where \(x_{pa(v)}\) denotes the parents of \(x_v\); i.e.
the set of nodes with an arrow pointing towards \(x_v\) in the DAG. Also, \(x_v\) is a child of the variables \(x_{pa(v)}\). Notice that \(p(x_{v} \mid x_{pa(v)})\) has domain \(I_{v} \times I_{pa(v)}\).

Setting up the network

We use the asia data; see the man page (?asia).

Checking and conversion

After the network has been compiled, the graph has been triangulated and moralized. Furthermore, all conditional probability tables (CPTs) have been assigned to one of the cliques (in the triangulated and moralized graph).

Example 1: sum-flow without evidence

Query probabilities:

query_belief(jt1, c("E", "L", "T"))
#> $E
#> E
#> n y
#> 0.9257808 0.0742192
#> $L
#> L
#> n y
#> 0.934 0.066
#> $T
#> T
#> n y
#> 0.9912 0.0088

query_belief(jt1, c("B", "D", "E"), type = "joint")
#> , , B = y
#> E
#> D n y
#> y 0.36261346 0.041523361
#> n 0.09856873 0.007094444
#> , , B = n
#> E
#> D n y
#> y 0.04637955 0.018500278
#> n 0.41821906 0.007101117

Note that the above could also have been achieved lazily; that is, it is possible to postpone the actual propagation.

Example 2: sum-flow with evidence

Notice that the configuration (D,E,B) = (y,y,n) has changed dramatically as a consequence of the evidence. We can get the probability of the evidence:

Example 3: max-flow without evidence

Example 4: max-flow with evidence

Notice that T, E, S, B, X and D have changed from "n" to "y" as a consequence of the new evidence e4.

Example 5: specifying a root node and only collecting to save run time

We can now only query the variables in the root clique, but we have ensured that the node of interest, "X", does indeed live in this clique. The variables are found using get_clique_root.
Example 6: Compiling from a list of conditional probabilities

• We need a list with CPTs, which we extract from the asia2 object
  □ the list must be named with child nodes
  □ the elements need to be array-like objects

Inspection; see if the graph corresponds to the CPTs.

This time we specify that no propagation should be performed.

We can now inspect the collecting junction tree and see which cliques are leaves and which are parents.

That is:
• clique 2 is the parent of clique 1
• clique 3 is the parent of clique 4
etc.

Next, we send the messages from the leaves to the parents.

Inspect again.

Send the last message to the root and inspect.

The arrows are now reversed and the outwards (distribute) phase begins. Clique 2 (the root) is now a leaf and it has cliques 1, 3 and 6 as parents.

Finishing the message passing.

Queries can now be performed as normal.

Example 7: Fitting a decomposable model and applying JTA

We use the ess package (on CRAN), found at https://github.com/mlindsk/ess, to fit an undirected decomposable graph to data.

#> Attaching package: 'ess'
#> The following objects are masked from 'package:igraph':
#> components, dfs, subgraph

g7 <- ess::fit_graph(asia, trace = FALSE)
ig7 <- ess::as_igraph(g7)
cp7 <- compile(pot_list(asia, ig7))
jt7 <- jt(cp7)
query_belief(jt7, get_cliques(jt7)[[4]], type = "joint")
#> , , T = n
#> L
#> E n y
#> n 0.926 0.0000
#> y 0.000 0.0652
#> , , T = y
#> L
#> E n y
#> n 0.000 0e+00
#> y 0.008 8e-04
deviceLoc: Convert Viewport Location to Device Location

These functions take a pair of unit objects and convert them to a pair of device locations (or dimensions) in inches (or native device coordinates).

deviceLoc(x, y, valueOnly = FALSE, device = FALSE)
deviceDim(w, h, valueOnly = FALSE, device = FALSE)

x, y, w, h: A unit object.
valueOnly: A logical. If TRUE, the function does not return unit objects, but only the converted numeric values.
device: A logical indicating whether the returned values should be in inches or native device units.

These functions differ from functions like convertX() because they convert from the coordinate systems within a viewport to inches on the device (i.e., from one viewport to another) and because they only deal with pairs of values (locations or dimensions). The functions like convertX() convert between different units within the same viewport and convert along a single dimension.

The return value is a list with two components, both of which are unit objects in inches (unless valueOnly is TRUE, in which case both components are numeric).

The conversion is only valid for the current device size. If the device is resized then at least some conversions will become invalid. Furthermore, the returned value only makes sense with respect to the entire device (i.e., within the context of the root viewport).

Paul Murrell

library(grid)

## A tautology
deviceLoc(unit(1, "inches"), unit(1, "inches"))

## Something less obvious
pushViewport(viewport(width=.5, height=.5))
x <- unit(1, "in")
y <- unit(1, "in")
grid.circle(x, y, r=unit(2, "mm"))
loc <- deviceLoc(x, y)
grid.circle(loc$x, loc$y, r=unit(1, "mm"), gp=gpar(fill="black"))

## Something even less obvious
pushViewport(viewport(width=.5, height=.5, angle=30))
x <- unit(.2, "npc")
y <- unit(2, "in")
grid.circle(x, y, r=unit(2, "mm"))
loc <- deviceLoc(x, y)
grid.circle(loc$x, loc$y, r=unit(1, "mm"), gp=gpar(fill="black"))
Operators in Python: Arithmetic, Logical, Comparison (Examples) - codingem.com

In Python, there are lots of operators. You can use operators to do basic math, comparisons, logical expressions, and more. Operators are an important concept in programming. Here are some examples of using different types of operators in Python:

sum = 1 + 2
prod = 50 * 3
isLess = 1 < 2
isGreaterOrEqual = 100 >= 5
bothEqual = 1 == 1 and 5 == 5
pythonIsEasy = not False

This is a complete guide to operators in Python. The theory is backed up with great and illustrative examples.

Arithmetic Operators

In Python, you can use arithmetic operators to do basic math. Some of these operators are probably familiar to you from elementary school math. And the ones that are not are still pretty straightforward to understand. Here is a quick cheat sheet of the arithmetic operators in Python:

• + (a + b): Addition. Adds up the variables a and b.
• - (a - b): Subtraction. Subtracts the variable b from a.
• * (a * b): Multiplication. Multiplies a by b.
• / (a / b): Division. Divides a by b.
• % (a % b): Modulo. Gives the remainder of the division of a by b.
• // (a // b): Floor division (also called integer division). Divides a by b and rounds the result down to the nearest whole number.
• ** (a ** b): Exponentiation. Raises a to the power of b.

Let's have a closer look at how to use each of these operators with examples.

Addition (+)

The addition operator (+) sums up two numeric values. It is a binary operator because it takes two values to operate on. For example:

sum = 1 + 2

Unary + Operator

Similar to mathematics, you can also use the + operator as a unary operator in Python. This means you place it in front of a single number to highlight the positivity of a value. For example:

temperature = +30

However, you are never going to use the unary + because, practically, it does not do anything.
Subtraction (-)

The subtraction operator (-) subtracts one value from another. For example:

diff = 1 - 2

Unary - Operator

You can also use the - operator as a unary operator in Python by placing it in front of a numeric value. This changes the sign of the number:

• A positive value becomes negative.
• A negative value becomes positive.

For example:

n = -10
p = -n

Multiplication (*)

The multiplication operator (*) multiplies two values. For example:

prod = 10 * 3

Division (/)

The division operator (/) divides two values. For example:

fraction = 10.0 / 4.0

Modulo (%)

In math, a modulo returns the remainder of a division. For instance, 7 mod 2 = 1. You cannot evenly divide 7 by 2; instead, there will be one left over. In Python, the modulo operator is the "percent sign" operator, %. For example, let's calculate leftover pizza slices with the modulo operator:

pizzaSlices = 15
eaters = 4
leftoverSlices = pizzaSlices % eaters

The result is 3 because there is no way to evenly share 15 slices of pizza among a group of 4. To fairly share the slices, everyone gets 3 slices, that is, 12 slices are shared. Then 3 slices are left over.

Floor Division (//)

Floor division means dividing two numeric values and rounding the result down to the nearest whole number. For example:

result = 10 // 3

The result is 3 because 10 / 3 = 3.33 and the nearest whole number below this value is 3. Notice that floor division always rounds down! Let me show you another example:

result = 10 // 6

The result of 10 / 6 is 1.6667 and the nearest whole number below this value is 1. So even though rounding 1.6667 would normally round up to 2, floor division squashes it down.

Exponentiation (**)

Exponentiation means raising a number to a power. When you raise a number to a power, you multiply the number by itself as many times as the exponent states. For instance, 2³ = 2 * 2 * 2 = 8. In Python, the exponentiation operator (or power operator) is denoted with a double multiplication sign (**). For example:

result = 4 ** 3

This is equivalent to 4 * 4 * 4.
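As a quick, self-contained check of the modulo, floor-division and exponentiation examples above (plain Python, nothing beyond the language itself; the variable names are just illustrative):

```python
# Re-checking the pizza example: 15 slices shared among 4 eaters.
pizza_slices = 15
eaters = 4

shared = (pizza_slices // eaters) * eaters  # 12 slices handed out evenly
leftover = pizza_slices % eaters            # the 3 slices that remain

print(shared, leftover)  # 12 3

# Floor division always rounds down to the nearest whole number.
print(10 // 3)   # 3, because 10 / 3 = 3.33...
print(10 // 6)   # 1, because 10 / 6 = 1.66... (not rounded up to 2)

# Exponentiation: 4 ** 3 is 4 * 4 * 4.
print(4 ** 3)    # 64
```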
Shorthand Assignment Operators

In Python, it is common to update existing values. For instance, let's create a counter variable at 0 and start incrementing it:

counter = 0
counter = counter + 1
counter = counter + 1
counter = counter + 1

Because updating an existing value like this is such a common operation, there is a shorthand for it. This is called shorthand assignment in Python. It happens by combining the assignment operator (=) with an arithmetic operator (+, -, *, /, **, %, //). For example, let's repeat the counter example using shorthand assignment when adding values to the counter:

counter = 0
counter += 1
counter += 1
counter += 1

Here counter += 1 is equivalent to counter = counter + 1. Here are more examples with the other arithmetic operators:

s = 0
s += 1
print(s) # s is now 1

d = 10
d -= 5
print(d) # d is now 5

m = 5
m *= 10
print(m) # m is now 50

n = 10
n /= 2
print(n) # n is now 5.0

x = 16
x %= 6
print(x) # x is now 4

y = 10
y //= 4
print(y) # y is now 2

z = 4
z **= 3
print(z) # z is now 64

Next, let's discuss comparison operators.

Comparison Operators

In Python, you will deal with data all the time. It is thus common to need to perform comparisons between the data. To make comparisons, there are 6 comparison operators you can use in Python. These comparisons are something you are probably familiar with from elementary school math. Here is a quick cheat sheet of the comparison operators:

• == (a == b): Equal to. If the value of a is equal to the value of b, returns True. Otherwise, returns False.
• != (a != b): Not equal to. If the value of a is not equal to the value of b, returns True. Otherwise, returns False.
• < (a < b): Less than. If a is less than b, returns True. Otherwise, returns False.
• <= (a <= b): Less than or equal to. If a is less than or equal to b, returns True. Otherwise, returns False.
• > (a > b): Greater than. If a is greater than b, returns True. Otherwise, returns False.
• >= (a >= b): Greater than or equal to. If a is greater than or equal to b, returns True. Otherwise, returns False.

Let's take a look at each comparison operator separately.

The Equal to Operator (==)

The equal-to operator checks if two values are equal to one another. The equality operator is denoted with a double equals sign. You may ask why double equals and not a single equals sign. This is because the single equals sign is reserved for the assignment operation. For example, let's compare two numbers:

a = 10
b = 10
print(a == b)

Notice that you can perform a similar equality comparison between strings as well:

name1 = "Alice"
name2 = "Alice"
print(name1 == name2)

The Not Equal to Operator (!=)

To negate the equality operator, replace the first equals sign with an exclamation mark. This gives rise to a new operator called the not-equal operator. If the two compared values are not equal to one another, the not-equal operator returns True. For example:

a = 10
b = 10
print(a != b)

Here the result is False because the values are equal.

The Less Than Operator (<)

To check if a value is less than another value, use the less-than operator (<) by placing it between two comparable values. For example:

result = 1 < 2

The Less Than or Equal to Operator (<=)

Sometimes you are interested in whether a value is either less than or equal to another value. In this case, you can use the less-than-or-equal operator (<=), which is a combination of the less-than operator and the equal operator. For example:

print(2 <= 2)

The Greater Than Operator (>)

To check if a value is greater than another, use the greater-than operator (>) by placing it between two comparable values. For example:

print(20 > 2)

The Greater Than or Equal to Operator (>=)

Sometimes you are interested in whether a value is either greater than or equal to another value. In this case, you can use the greater-than-or-equal operator (>=), which is a combination of the greater-than operator and the equal operator.
For instance:

result = 8 >= 3

With the basics of comparison operators out of the way, let's talk about a confusing misbehavior related to comparing floating-point numbers.

Precision Inaccuracy When Comparing Floats

Take a look at this piece of code where we compare two floating-point numbers:

print(3.6 == 1.3 + 2.3)

Clearly, this comparison should be True. But it appears to be False according to Python. This is due to floating-point inaccuracy. A computer does not have enough space to represent specific float numbers precisely. Think about a number like 1/3. The decimal representation of it is 0.333333333… where 3 repeats forever. To write this number down precisely, you would need to repeat 3 forever. This is obviously not possible.

Similar logic applies to floating-point precision in Python. A computer uses a binary system to represent numbers. Similar to our 10-base system, in a binary system there are some values that repeat forever. To store one of these precisely, a computer would need infinite memory. Because this is not possible, these types of decimal values are cut short, which causes an imprecision. Thus some floating-point numbers are inaccurate by a small amount.

Due to this precision error, comparing floating-point numbers with the equality operator is unreliable. To check the equality of two floats, you should instead check whether one number is really close to the other. Given small inaccuracies in a and b, if the values are equal to one another, their difference is close to zero. For instance:

tolerance = 0.000001
areEqual = abs((1.3 + 2.3) - 3.6) < tolerance

Because the absolute value of the difference between 1.3 + 2.3 and 3.6 is smaller than 0.000001, the numbers must be really close to one another. In other words, they are equal. (The standard library also ships this kind of check as math.isclose(), available since Python 3.5.)

Notice that floating-point inaccuracy is not a Python-specific thing. It is present in all programming languages.

Logical Operators

In Python, you can combine logical expressions with logical operators.
Logical operators are used in everyday speech. Have a look at this sentence: "If it is hot and sunny, let's go to the beach". Although you may not think about it, this very sentence uses a logical "and" operator to combine two logical expressions. Let's rephrase the sentence to better understand what it means: "If it is a sunny day and if it is a hot day, let's go to the beach". Now, let's split this sentence into three parts:

1. It is a sunny day
2. AND
3. It is a hot day

The first part, "It is a sunny day", is a logical expression. Its outcome is either true or false. The word "and" is a logical operator; it connects the first and the third part. The second expression, "It is a hot day", is likewise either true or false. If both of these logical expressions are true, we go to the beach. Otherwise, we don't. This is the idea of logical expressions and logical operators in a nutshell. Now, let's take a look at the logical operators in Python.

Logical Expressions with Boolean Operands

In Python, there are three logical operators: not, or, and and. Here is a cheat sheet to quickly see how they work:

• not (not a): Reverses the truth value: not False is True, and not True is False.
• or (a or b): True if either a or b is True. Otherwise False.
• and (a and b): True if both a and b are True. Otherwise False.

Now that you understand how logical operators work in Python, let's turn the real-life example above into Python code:

isSunny = True
isHot = False
beachTime = isSunny and isHot

Let's see another example. In this example, the person is busy if they are either working or studying:

isStudying = False
isWorking = True
isBusy = isStudying or isWorking

As you can see, even though isStudying is False, the person is still busy because isWorking is True. Let's also see an example of using the not operator. The not operator negates a boolean value so that True becomes False and vice versa.
For instance:

isRainy = True
isSunny = not isRainy

As you can see, it is not sunny because it is rainy. Notice that you can use logical operators to combine any boolean values in Python. For instance, you can combine the results of two (or more) comparisons:

print(1 == 2 or 10 < 20)

Boolean Context

In Python, objects and expressions are often something other than True or False. However, these non-boolean values are still considered "truthy" or "falsy" in a boolean context. In other words, each non-boolean value in Python becomes either True or False in a boolean context. For instance, the number 0 is False as a boolean value and 1 is True. As a matter of fact, any non-zero integer is "truthy". Here are all the values that are False in a boolean context:

• False itself.
• 0 and 0.0.
• Empty strings.
• Empty composite data objects (you will learn about these later on).
• None, which is a special object in Python that represents the absence of a value.

Anything other than these is True in a boolean context. To convert a value to a boolean, use the built-in bool() function. For example, let's turn the number 0 into a boolean:

booleanZero = bool(0)

Logical Expressions with Non-Boolean Values

In the previous section, you learned that all values in Python are "truthy" or "falsy". This makes it possible to apply logical operators to non-boolean values as well. This is best demonstrated with examples using the different logical operators.

The "or" Operator with Non-Boolean Values

As you learned before, the or operator returns False only if both compared values are False. Otherwise, it always returns True. For example:

result = True or False

Now, let's step outside boolean values and apply the or operator to two integers:

result = 0 or 1

The result is 1 because, in the boolean context, 0 is "falsy" and 1 is "truthy". As you learned, the result of the or operator can only be "falsy" if both compared values are "falsy".
In this particular case, the second value, 1, is "truthy". Thus the result is "truthy". Let's see another example. Let's apply the or operator to two "truthy" values and see what we get:

result = 10 or 5

The result is the first number. But why? This is due to what is called short-circuiting in Python. In short, if the first value is True or "truthy", the result of the or operation is automatically True or "truthy" regardless of the second item. Thus the first item is the result. You will learn more about short-circuiting in a bit.

The "and" Operator with Non-Boolean Values

Let's repeat the above examples using the and operator. As you learned before, the and operator returns True only if both compared values are True. Otherwise, it always returns False. For example:

result = True and False

Now, let's apply the and operator between two integer values:

result = 1 and 0

The result is 0 because in the boolean context, 0 is "falsy" and 1 is "truthy". Furthermore, the result of the and operator can only be "truthy" if both compared values are "truthy". Here the second value, 0, is "falsy", so the result is "falsy". Let's take a look at another example:

result = 10 and 5

This time, the result is 5. But why not 10? The first number, 10, is "truthy". But that alone is not enough for the and operator to know whether the whole result should be "truthy" or "falsy". Thus, the second number needs to be checked as well. So the second number is the deciding factor, and this also makes it the result of the operation. If the first number were "falsy", the and operator would have returned it, because that would have been enough to know the result is "falsy". No worries if this sounds confusing; you will learn about this process in a minute. Before going there, let's quickly see an example with the not operator as well.

The "not" Operator with Non-Boolean Values

The not operator directly converts "truthy" values to False and "falsy" values to True.
For example:

result = not 10

The result is False because 10 is "truthy" in a boolean context. The not operator inverts the value to "falsy" and returns False. Let's see another similar example:

result = not 0.0

In a boolean context, 0.0 is "falsy" or False. Thus the not operator converts it to True. Now, let's learn more about short-circuiting.

Short-Circuit Evaluation and Operator Chaining

In the previous examples, you learned how to create simple logical expressions by combining two expressions with a logical operator. For example:

truth = False or True

But you can also form longer chains of logical operators if needed. Take a look at this chain of or operators:

v1 or v2 or v3 or ... or vN

This whole expression is True if any single one of the values happens to be True. The way Python determines the result is interesting, and it has really neat applications. This is called short-circuit evaluation. In a logical operator chain, the values are evaluated from left to right. In a chain of or operators, as soon as one of the values is True, no more values are checked, because the result is already known to be True. Here is the important part: short-circuit evaluation returns the value that terminated the evaluation. For example:

result = True or False or False or False or False

Here Python starts by checking the first value. Because it is True, there is no reason to continue evaluating the other values. Let's see a similar example with non-boolean values. This example demonstrates the short-circuiting better:

result = 0 or 0 or 3 or 0 or 0

The result of this expression is 3. Here is how the result is calculated:

1. Python starts by evaluating the first value. It is 0, so it is "falsy" in a boolean context.
2. The same applies to the second value.
3. But the third value, 3, is "truthy" as a non-zero value. Python now knows the whole expression is "truthy" regardless of the rest of the values.
Here the deciding factor was 3, so it is the final result as well.

Short-circuiting applies to chains of and operators too. Have a look at this chain of and operators:

v1 and v2 and v3 and ... and vN

By definition, this whole expression is True only if all the values are True. When Python encounters this type of chain, the values are evaluated from left to right. As soon as one of the values is False, no more values are checked because the result is automatically False. The same logic applies to chains of non-boolean values: the result is the value that terminated the evaluation. To make sense of this, let's see a couple of examples:

result = True and True and False and True

In this expression:

1. Python evaluates the first value. Because it is True, the evaluation continues.
2. The same goes for the second value.
3. The third value is False. Thus the whole expression is False no matter what the rest of the values are.

Let's see a similar example with non-boolean values:

result = 45 and 0 and 322 and 98

The evaluation of this expression terminates at the number 0. This is because 0 is "falsy", and if there is any "falsy" value in a chain of and operators, the result must be "falsy". The value that terminates the evaluation is the result; in this case, it is the number 0.

How to Use Short-Circuit Evaluation in Python

In addition to being an implementation detail of Python, short-circuit evaluation has some practical applications. The two most common ways to use it are:

• Avoiding errors
• Picking default values

Let's briefly go through both of these.

Avoiding Errors with Short-Circuit Evaluation

Let's divide two values x and y:

x = 10
y = 5
result = x / y

No problem with this piece of code. This is not a surprise, because dividing a number by a non-zero number is a legal operation. But as you may recall from elementary math, you cannot divide by 0. If you divide by 0 in Python, your program crashes with an error.
For instance:

x = 10
y = 0
result = x / y

ZeroDivisionError: division by zero

You can avoid an error like this by using short-circuit evaluation. In the previous section, you learned that the and operator is short-circuited so that if a value is False, the evaluation stops. Let's use this idea to protect against division by 0:

x = 10
y = 0
result = y != 0 and x / y

As you can see, even though y is 0, the program does not crash. Instead, the result is False. To understand why, let's inspect this expression:

y != 0 and x / y

Due to short-circuit evaluation, if y != 0 is False, the result of the whole expression is False. The key is that the right-most expression is never evaluated, so the division by 0 never takes place. But if the value of y is non-zero, the first condition is True, and the right-most expression gets evaluated and becomes the result. For example:

x = 10
y = 5
result = y != 0 and x / y

Because y != 0 returns True, the evaluation continues to x / y, which terminates the expression and returns the result of the division. This is one useful application of short-circuit evaluation in Python. Before moving on, let's see another one.

Picking a Default Value with Short-Circuit Evaluation

Another useful way to use short-circuit evaluation in Python is to pick a default value. Let's see an example:

name = "" or "there"
print("Hello", name)

Hello there

Here the left-most string is an empty string, so the default value "there" is used. But why? Take a look at this expression:

"" or "there"

As you learned before, in a boolean context an empty string is considered "falsy". The or operator continues evaluating the expression until it encounters something True or "truthy". In this case, the first string is "falsy", so the evaluation continues to the second string, which becomes the result of the expression. If the first string is non-empty, it is "truthy".
Thus the evaluation of the expression stops and the first string is the result:

name = "Alice" or "there"
print("Hello", name)

Hello Alice

Chained Comparisons

Now that you have learned about comparison operators and logical operators, it is time to learn a neat shorthand for chaining comparisons. For example, let's check if a number is less than 1000 and greater than 0:

value = 10
isInRange = value < 1000 and value > 0

This piece of code works just fine. However, you can make the comparisons more readable by chaining them together instead of writing them as two separate comparisons. Here is how it looks:

value = 10
isInRange = 0 < value < 1000

This expression is easier to read. Furthermore, it resembles the way you check ranges in mathematics.

Bitwise Operators

In addition to the basic numerical operators we talked about earlier in this chapter, Python has operators to perform bitwise operations on integers. Bitwise operators are far less commonly used than basic math operators, so we are not going deep into their details. However, it is useful to know that such operators exist. Here is a quick cheat sheet for the bitwise operators in Python:

• & (a & b) — bitwise AND: logical AND between the bits in corresponding positions of the two bit sequences.
• | (a | b) — bitwise OR: logical OR between the bits in corresponding positions of the two bit sequences.
• ~ (~a) — bitwise negation: logical negation of each bit in the bit sequence.
• ^ (a ^ b) — bitwise XOR (exclusive OR): logical XOR between the bits in corresponding positions of the two bit sequences.
• >> (a >> n) — shift right n places: each bit is shifted to the right by n places.
• << (a << n) — shift left n places: each bit is shifted to the left by n places.

Identity Operators

In Python, you can check the unique ID of an object using the id() function.
For instance, to check if two variables refer to the same object, check if their IDs match. For example:

x = 10
y = x
print(id(x) == id(y))

This reveals that both x and y point to the same object in memory. But instead of using the id() function, there are two dedicated operators for this:

• The is operator can be used to check if two objects have the same ID.
• The is not operator can be used to check if two objects have different IDs.

Let's repeat the above example using the is operator:

x = 10
y = x
print(x is y)

This approach is cleaner and easier to read than using the id() function.

Word of warning: do not use the is operator to compare values for equality. Only use it to check whether two names refer to the same object in memory!

Now that you have seen all the basic operators in Python, let's discuss the evaluation order, or precedence, of these operators.

Operator Precedence in Python

As you learned in elementary school, not all math operators are equal: + and - are evaluated after * or /. The concept that determines the evaluation order of operators is called precedence. Precedence states which operator precedes which. The same logic applies in Python. For example:

result = 1 + 10 * 3 - 8 / 2

Let's visualize the precedence of these operations with parentheses:

result = 1 + (10 * 3) - (8 / 2) = 1 + 30 - 4 = 27

Understanding the precedence of the arithmetic operators is important. Also, knowing the precedence of the logical operators (or, and, not) is useful. You don't have to worry about the precedence of the other operators nearly as much, if at all. But if you have to, the full precedence chart of all the operators can be found in the official Python documentation; the lower an operator appears in that chart, the lower its precedence.

In this guide, you learned about operators in Python. To recap: the arithmetic operators allow you to do basic math in Python. Comparison operators make it possible to compare data. Logical operators allow you to connect logical expressions, such as comparisons.
Bitwise operators allow you to operate on integers at the bit level. The identity operators let you check whether two objects are the same object in memory. All the operators in Python belong to a precedence group, which determines which operations take place first when operators are combined.
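As a recap, the behaviors covered in this guide can all be checked in one short script (variable names here are illustrative):

```python
# Short-circuit evaluation: 0 is "falsy", so evaluation stops there.
assert (45 and 0 and 322 and 98) == 0

# The or operator picks the first "truthy" value -- handy for defaults.
assert ("" or "there") == "there"

# Guarding a division: the right-hand side is never evaluated when y == 0.
y = 0
assert (y != 0 and 10 / y) is False

# A chained comparison is equivalent to two separate comparisons.
value = 10
assert (0 < value < 1000) == (value > 0 and value < 1000)

# Bitwise operators work on the binary representation of integers.
assert (0b1100 & 0b1010) == 0b1000   # AND
assert (0b1100 | 0b1010) == 0b1110   # OR
assert (0b1100 ^ 0b1010) == 0b0110   # XOR
assert (1 << 3) == 8                 # shifting left n places multiplies by 2**n

# Identity vs. equality: is compares object IDs, == compares values.
a = [1, 2]
b = a
assert a is b and a == b

# Precedence: * and / bind tighter than + and -.
assert 1 + 10 * 3 - 8 / 2 == 27

print("all checks passed")
```

Running the script prints "all checks passed"; changing any expression to violate its rule makes the corresponding assert fail, which is a quick way to experiment with each operator.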
REHVA Journal

Infection probability of COVID-19 in a large lecture room with mechanical ventilation

Mathilde Ruud, student, Norwegian University of Science and Technology
Mona Skar Baglo, student, Norwegian University of Science and Technology
Andreas Undheim Øgreid, student, Norwegian University of Science and Technology
Ivan Vu, student, Norwegian University of Science and Technology
Kari Thorset Lervik, student, Norwegian University of Science and Technology
Guangyu Cao, Dr. (Sc.), Professor, Department of Energy and Process Engineering, Norwegian University of Science and Technology

Corresponding author email: guangyu.cao@ntnu.no

Keywords: indoor air quality, ventilation rate, mechanical ventilation, probability of infection, COVID-19, lecture room, Wells-Riley equation.

In this field study, lecture room S2 at NTNU Gløshaugen, where a COVID-19-infected student was present during a two-hour lecture, was investigated to calculate the probability of infection. The ventilation system in S2 is mechanical balanced ventilation. The results show that the probability of getting infected in S2 with one infected student is 0.098%, which is significantly lower than in other studies. The result is in line with the fact that no other students were infected after attending the lecture in S2.

Students spend a lot of time in lecture rooms, where they are closely seated and there is a great risk of infection during the COVID-19 pandemic. Viruses can be transmitted in three different ways: through direct contact, droplet transmission, or airborne transmission. At the beginning of the pandemic, it was assumed that the virus could not be transmitted through air; however, current research shows there is a high possibility that this is the case [1]. It is assumed in this study that the coronavirus is in fact an airborne disease.
A well-functioning ventilation system can decrease the possibility of getting infected by an airborne virus, such as the coronavirus. Previous studies have shown that too high or too low relative humidity is favourable for the survival of the coronavirus, especially very low relative humidity. The optimal range of relative humidity for human health is 40–60% [2]. According to the University of Sydney [3], relative humidity and infected COVID-19 cases have a negative correlation: they found that a 1% decrease in relative humidity causes a 6–7% increase in infected cases. In addition, ventilation plays a key role in controlling indoor air quality. The Norwegian building regulation TEK17 states that ventilation rates for people with a light activity level should be a minimum of 26 m³/h per person [4], while the ventilation rate for building materials varies in the range of 2.5–7.2 m³/h per m² floor area, depending on the emitting materials [5].

Few studies of infection risk have been done in lecture halls with mechanical ventilation. The objective of this article is to quantify the probability of infection with COVID-19 in a large lecture hall with mechanical ventilation at NTNU.

Theoretical modelling

The Wells-Riley equation can approximate the probability of infection due to human exposure to airborne infectious contaminants [6]. The equation is:

P = 1 − e^(−I·q·p·t / Q)

where
· P – probability of infection [-]
· q – breathing rate per person [m³/h]
· I – number of infectors [-]
· p – quanta per hour produced by an infector [quanta/h]
· t – time of exposure [h]
· Q – outdoor air supply rate [m³/h]

Ventilation rate and indoor pollutants – non-steady state equation

In the non-steady state, the change in CO₂ concentration per time unit can be expressed as V·(dC/dt). This change must equal the CO₂ in the supplied air, plus the production of CO₂ in the room, minus the CO₂ removed by the extracted air. In this case infiltration, exfiltration, and the effect of the filters in the air handling unit are neglected.
The following expression for the change in CO₂ level is then obtained:

V·(dC/dt) = Q·Cₛ + S − Q·Cₑ

Combining and rearranging the two equations gives:

Q = (S − V·(dC/dt)) / (Cₑ − Cₛ)

where
· Q – ventilation rate [m³/h]
· Cₑ – average exhaust concentration
· Cₛ – average supply air concentration
· S – average source strength of pollutant
· V – volume of the room [m³]
· dC/dt – change in CO₂ concentration over time

Experimental method

On September 24th, a student who was infected with the COVID-19 virus attended a lecture in S2 at NTNU Gløshaugen. A total of 131 students are assumed to have been present at the lecture, and no other students were infected after attending the lecture in S2 [7].

The investigated lecture room S2

S2 is a large lecture room at NTNU in Trondheim. The volume of the room is 992.1 m³ and the area is 251.5 m² (see Figure 1). The capacity of the room is 256, but during the pandemic it was reduced to 131 due to infection control measures. The activity level during a lecture is normally sedentary, according to NS-EN ISO 7730:2005.

Figure 1. Lecture room S2.

Measurement setup

Measurements of CO₂, relative humidity, and air temperature were carried out in S2 by the extract, as shown in Figure 2. The measurements and the occupancy level were manually recorded every minute for one hour.

Figure 2. Measurement point in S2.

Results and discussion

Measurement results

The results are presented in Figures 3.a, 3.b, and 3.c, which show CO₂ (ppm), temperature (°C), and relative humidity (%) in relation to the number of people in the room. Figure 3.a presents the variation in CO₂ level. The number of people is constant during the lecture, but at the end there is a drastic reduction. The CO₂ level varies between 600 and 650 ppm while the number of people is constant. When the students leave the lecture, the CO₂ concentration first increases, followed by a drastic decrease. Figure 3.b presents the temperature variations. It is clear that while the number of people in the room is constant, the temperature increases.
At the end of the lecture, when people leave the room, the temperature decreases. Figure 3.c presents the variation in relative humidity. During the measurements, the relative humidity varied between 36% and 42%. The relative humidity throughout the lecture was at a moderate level, according to Ahlawat [2], which is favourable for a shorter survival time of the virus. With the known information that no one else got infected, the statement about moderate relative humidity throughout the lecture holds.

Figure 3.a. Measurements of CO₂ and number of people in the room.
Figure 3.b. Measurements of temperature and number of people in the room.
Figure 3.c. Measurements of relative humidity and number of people in the room.

Probability of getting infected based on Wells-Riley

The probability of getting infected may be affected by the ventilation rate. To calculate the ventilation rate, the non-steady state equation and the results from the measurements were used. An individual ventilation rate was calculated for each time interval, and the average was used as the final value. The total ventilation rate was calculated to be 5 054.4 m³/h, which equals an air exchange rate of 5.1 h⁻¹. During the COVID-19 pandemic, with 131 students present, the airflow rate equals 38.6 m³/h per person, or 10.7 ℓ/s per person.

If the variation of the CO₂ concentration becomes zero under steady-state conditions, we may assume the room air is fully mixed with the supply air. Consequently, the exhaust concentration may be taken as equal to the room CO₂ concentration. The room CO₂ concentration was calculated to be 866.5 ppm under fully mixed steady-state conditions. The measured value was between 600 and 650 ppm, which is lower than the calculated value.

The Wells-Riley equation is used to calculate the probability of infection in S2. The input variables are gathered from [8].
The number of infected persons is set to 1, the breathing rate is normal at 0.54 m³/h, the quanta per hour of infectious particles from the infected person is set to 4.6 for a classroom, the time of exposure is 2 hours, and the supply air rate is the one calculated earlier. With the TEK17 required supply rate, the probability of infection in S2 is calculated to be 0.095%; with the measured supply airflow rate, it is calculated to be 0.098%. This shows that the probability of getting infected is very low, which is consistent with the fact that no one else was infected after attending the lecture. However, the Wells-Riley equation does not consider the type of ventilation system or the airflow pattern, only the ventilation rate. The airflow distribution in the room is unknown, and therefore the actual probability of getting infected may be greater than the calculated value.

Conclusions

During the COVID-19 pandemic, with 131 students present, the supply airflow rate in S2 was 38.6 m³/h per person, or 10.7 ℓ/s per person. Using the Wells-Riley equation and the measured CO₂ concentration of the indoor air, the probability of infection in S2 is calculated to be 0.098%. The result is in line with the fact that no other students were infected after attending the lecture in S2. In addition, this study supports the calculation by the REHVA COVID-19 tool that the probability of infection is very low in a large space with a sufficient supply airflow rate.

The calculated fully mixed concentration of CO₂ is significantly higher than the value measured close to the air extraction point. This means there may be stagnant zones in S2, where air stays for a longer time, with an increased risk of occupants inhaling each other's exhaled air. A further study may be carried out to clarify the airflow pattern and identify potential improvements to the IAQ with other types of airflow distribution solutions.
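The probability reported above follows directly from the Wells-Riley equation with the stated input values; a short Python sketch reproducing the calculation:

```python
import math

def wells_riley(I, q, p, t, Q):
    """Wells-Riley probability of infection: P = 1 - exp(-I*q*p*t/Q).

    I: number of infectors [-]
    q: breathing rate per person [m^3/h]
    p: quanta produced per hour by an infector [quanta/h]
    t: time of exposure [h]
    Q: outdoor air supply rate [m^3/h]
    """
    return 1 - math.exp(-I * q * p * t / Q)

# Input values from the article: 1 infector, breathing rate 0.54 m3/h,
# 4.6 quanta/h for a classroom, 2 h lecture, measured supply 5054.4 m3/h.
P = wells_riley(I=1, q=0.54, p=4.6, t=2, Q=5054.4)
print(f"Probability of infection: {P * 100:.3f}%")   # -> 0.098%
```

With these inputs the sketch returns 0.098%, matching the article's result for the measured supply airflow rate.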
To make the analysis even better, an increased number of measurement points would have been ideal.

References

[1] FHI. Facts about the Covid-19 outbreak. 2020; online: 27 October 2020. Available from: https://www.fhi.no/en/op/novel-coronavirus-facts-advice/facts-and-knowledge-about-covid-19/

[2] Ahlawat, A., Wiedensohler, A. and Mishra, S.K. (2020). An Overview on the Role of Relative Humidity in Airborne Transmission of SARS-CoV-2 in Indoor Environments. Aerosol Air Qual. Res. 20: 1856–1861. https://doi.org/10.4209/aaqr.2020.06.0302

[3] Science Daily. Low humidity increases COVID-19 risk: Another reason to wear a mask. 2020; online: 11 January 2021. Available from: https://www.sciencedaily.com/releases/2020/08/200818094028.htm

[4] Direktoratet for byggkvalitet. Byggteknisk forskrift (TEK17). 2017; online: 15 October 2020. Available from: https://dibk.no/byggereglene/byggteknisk-forskrift-tek17/

[5] The Norwegian Labor Inspection Authority. Klima og luftkvalitet på arbeidsplassen. 2016; online: 15 October 2020. Available from: https://www.tbk-as.no/wp-content/uploads/2013/04/

[6] To GNS, Chao CYH. Review and comparison between the Wells–Riley and dose-response approaches to risk assessment of infectious respiratory diseases. 2009; online: 27 October 2020. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7202094/

[7] Rian Hanger, Mari. Ingen fikk smitten på campus. 2020; online: 28 October 2020. Available from: https://www.universitetsavisa.no/koronavirus/ingen-fikk-smitten-pa-campus/101413

[8] Kurnitski, Jarek. Ventilation rate and room size effects on infection risk of COVID-19. 2021; online: 11 January 2021. Available from: https://www.rehva.eu/rehva-journal/chapter/
Taking modeling seriously: A hands-on approach to Alloy July 15, 2013, 08:00 | Workshop, Regency C, Union The modeling of humanities data is a core activity (some say the core activity) of the digital humanities. The activity so described may take a wide variety of forms; often the term is used for any compact description of a domain, whether in prose or in user-interface metaphors. Machine-processable descriptions are probably more common, but these, too, vary: the definition of an XML vocabulary, the table declarations for a SQL database, the data structures or even the executable code of a program may all be described informally as offering a ‘model’ of some domain or other. The term model, however, is here applied more narrowly to expressions in some well defined formalization. Models are most useful when formalized in a declarative not a procedural notation and when their logical import is clear. Formulating precise models can be difficult. Inconsistencies and unforeseen interferences between parts of the model can easily creep in. With informal definitions, such shortcomings can remain undetected for long periods, even until after the model has been put to use. Formally defined models, on the other hand, can be tested systematically for logical consistency; their consequences can be established systematically. Such testing can help uncover shortcomings in a timely manner. Alloy is a tool for “lightweight formal methods”, which makes it easier to test the implications of models and to check assumptions for plausibility, consistency, and completeness. Its usual application area is the testing of software designs but the variant of first-order logic provided by Alloy is by no means limited to the description of software or electronic objects. 
It has been successfully used to formalize notions far removed from any software, including the nature of transcription, an application of the type/token distinction to document structure, and fragments of Nelson Goodman's mereology and of Hilbert's formulation of Euclidean geometry. Alloy's logic is powerful enough to formulate interesting concepts, while remaining weak enough to be tractable for machine processing. Using Alloy's syntax, a modeler can formulate the axioms of a model and augment them by asserting that certain properties hold for all instances of the model, or by defining predicates which characterize particular instances of the model. The Alloy Analyzer can test the assertions and illustrate the predicates, by seeking counter-examples to the assertion or instances of the predicate. This one-day tutorial introduces digital humanists to the use of Alloy for modeling. Topics include:

• introduction to Alloy's logic
• compressed summary of Alloy syntax
• use of Alloy for formulating assertions and predicates
• describing individual test cases for Alloy
• Alloy's place in the larger context and Alloy's relation to light-weight formal methods, to other formal methods (e.g. Z), and to theorem-provers
• limits on Alloy's logic, from a theoretical point of view (how Alloy and other tools deal with Goedel's incompleteness result and Turing's halting problem), and from a practical point of view (modeling recursion using transitive closure)

Examples will be drawn from domains discussed at recent DH conferences. Prerequisites: some prior exposure to symbolic logic and/or programming is probably desirable; failing that, highly motivated participants may be able to benefit from the workshop if they have sufficiently high tolerance for exposure to new material. Participants should bring a laptop computer with a current installation of Java; they may optionally preinstall Alloy 4.2 or they may install it during the workshop.
Target audience and expected number of participants

Short answer: not a large target audience (but a choice one!); estimated attendance perhaps 5-10 (no evidence). The target audience consists of digital humanists interested in techniques for formalizing important concepts and tools for working with such formalizations. The tutorial deals with high level data modeling concepts. Some prior exposure to symbolic logic and/or programming is desirable; failing that, highly motivated participants may be able to benefit from the workshop if they have sufficiently high tolerance for exposure to new material. Participants should bring a laptop computer with a current installation of Java; they may optionally pre-install Alloy 4.2 or they may install it during the workshop.

Full-day outline

I'd prefer to teach this as a full-day tutorial; that allows time for a mixture of lecture-style presentation of information and hands-on exercises. A tentative full-day schedule is:

9:00-10:30 Introduction to the course
• Modeling, formal logic, formal methods. Lightweight formal methods; Alloy.
• Demonstration: Alloy model of a Web interface (capabilities, security issues, user information).
• Demonstration: Using Alloy to generate test cases.
• The small-scope hypothesis; how Alloy manages to be useful despite Goedel's Theorem.
• Hands-on exercise: Using the Alloy Analyzer.

10:30-11:00 Break

11:00-12:30 Alloy's first-order logic
• Atoms, relations, tuples, sets. Basics of syntax: signatures, relations, multiplicities.
• Hands-on exercise(s) (logic puzzles, simple proofs from logic textbooks).
• Styles of expression: predicate-calculus style, navigational style, relational style. More syntax: assertions, predicates, quantification, let-expressions.
• Using Alloy to model concepts: FRBR entities, metadata records, XML and non-XML document structures.
• More exercise(s).
12:30-2:00 Lunch

2:00-3:30 Alloy as a tool for software design
• Examples: using Alloy to model an interactive concordance system, a query interface, a database system.
• Hands-on exercises.
• Idioms for modeling state, change, and dynamic systems in Alloy.
• Idioms for testing specific instances with Alloy.

3:30-4:00 Break

4:00-5:30 Recursion, Conclusion
• Using transitive closure to model recursion.
• Hands-on exercises.
• Review, questions, clarifications.
• Where to go from here? Further Alloy resources, other tools for formal methods and theorem proving.
Summaries | Models University

Variables can be summarised in two ways, both accessible from the Summary step of the Variable wizard.

Row Summary

A Row Summary is only applicable when the Variable has one or more Dimensions applied. For instance, in the example below, the Variable has a Membership Tier Dimension applied, with three items: Basic, Premium, and Platinum. The Row Summary determines how the All line is calculated (in this case, a Sum). Note: when you apply Dimensions to a Variable, Models will try to apply the most appropriate type of Summary automatically for you, so most of the time you will not need to set one manually.

The following Row Summary types are available. A Sum Row Summary is commonly the desired output, but there are situations where a Sum total of rows is not a meaningful calculation, in which case you should choose between an Auto or a Rearranged Summary.

Auto Row Summary

An Auto Row Summary is only available for Calculation Variables. In this case, the Summary is calculated by applying the default Calculation of the Variable to the Row Summary lines of the referenced Variables. For instance, if the Variable's Calculation is Profit Margin = Profit / Revenues across several items, a Sum of Profit Margin is not meaningful. In this case, a useful Summary is to apply the calculation Profit / Revenues to the summary lines of Profit and Revenues. An Auto Row Summary will only work successfully if all referenced Variables have a Summary row themselves (or do not have a Dimension applied).

Rearranged Row Summary

A Rearranged Row Summary is used when a meaningful summary value depends on the values of the Variable's parents. To apply a Rearranged Summary, we select the Rearranged option, and then select the parent Variable which we would like to rearrange. For instance, if we have a Variable Price across the Membership Tier Dimension (Basic, Premium, Platinum), a meaningful summary output is the weighted average price across all tiers.
Elsewhere in the Model, we calculate Revenues = Price * Customers. In the Summary step of the Variables wizard, we select Rearranged and then choose Revenues as the rearranged parent. Models then calculates the weighted average price as Price = Revenues / Customers, based on the Calculation of the Revenues Variable.

Column Summary

A Column Summary generates a value for a Variable based on all periods of the Model. Column Summaries can be applied to all Variables, and any number of Column Summaries can be added.

The following Column Summary types are available. Additional options:
• A specific Number Format can be selected for each Column Summary.
• A divisor can be applied to make the output smaller. Options are: None, Thousands, Millions, Billions, Trillions.

Multiple Summary Editing

The Summaries of multiple Variables can be edited simultaneously by selecting the Variables and clicking the Edit Summaries button in the Variables context menu. The Rearranged Summary option will not be available when editing multiple Variables at once.
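To see why the Rearranged Summary produces a weighted average rather than a plain Sum, here is a small Python sketch; the tier prices and customer counts are illustrative numbers, not values from the product:

```python
# Illustrative per-tier data (hypothetical numbers).
price     = {"Basic": 10.0, "Premium": 25.0, "Platinum": 60.0}
customers = {"Basic": 100,  "Premium": 40,   "Platinum": 10}

# Child rows: Revenues = Price * Customers (the parent's Calculation).
revenues = {tier: price[tier] * customers[tier] for tier in price}

# Sum Row Summaries for Revenues and Customers.
total_revenues  = sum(revenues.values())    # 1000 + 1000 + 600 = 2600
total_customers = sum(customers.values())   # 100 + 40 + 10 = 150

# Rearranged Row Summary for Price: apply the parent's Calculation,
# rearranged as Price = Revenues / Customers, to the summary rows.
weighted_avg_price = total_revenues / total_customers

print(round(weighted_avg_price, 2))   # -> 17.33
```

A naive Sum of the prices (95.0) or their unweighted mean would both be misleading here; the rearranged form weights each tier's price by its number of customers.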
String Theory: The History of Supersymmetry - dummies As a consequence of the Standard Model’s success, string theory wasn’t needed to explain particle physics. Instead, almost by accident, string theorists began to realize that string theory might just be the very theory that would solve the problem of quantum gravity. By 1974, the Standard Model had become the theoretical explanation of particle physics and was being confirmed in experiment after experiment. With a stable foundation, theoretical physicists now looked for new worlds to conquer, and many decided to tackle the same problem that had vexed Albert Einstein for the last decades of his life: quantum gravity. The graviton is found hiding in string theory The graviton is a particle that, under predictions from unified field theory, would mediate the gravitational force. In a very real sense, the graviton is the force of gravity. One major finding of string theory was that it not only includes the graviton, but requires its existence as one of the massless particles discussed earlier in this chapter. In 1974, Joel Scherk and John Schwarz demonstrated that a spin-2 massless particle in superstring theory could actually be the graviton. This particle was represented by a closed string (which formed a loop), as opposed to an open string, where the ends are loose. Both sorts of strings are demonstrated in this figure. String theory demands that these closed strings must exist, though open strings may or may not exist. Some versions of string theory are perfectly mathematically consistent but contain only the closed strings. No theory contains only open strings, because if you have open strings, you can construct a situation where the ends of the strings meet each other and, voilà, a closed string exists. From a theoretical standpoint, this was astounding (in a good way). Instead of trying to shoehorn gravity into the theory, the graviton fell out as a natural consequence. 
If superstring theory was the fundamental law of nature, then it required the existence of gravity in a way that no other proposed theory had ever done! The other supersymmetric gravity theory: Supergravity Supergravity is the name for theories that attempt to apply supersymmetry directly to the theory of gravity without the use of string theory. Throughout the late 1970s, this work proceeded at a faster pace than string theory, mainly because it was popular while the string theory camp had become a ghost town. Supergravity theories prove important in the later development of M-theory. In 1976, Daniel Freedman, Sergio Ferrara, and Peter van Nieuwenhuizen applied supersymmetry to Einstein’s theory of gravity, resulting in a theory of supergravity. They did this by introducing the superpartner of the graviton, the gravitino, into the theory of general relativity. Building on this work, Eugene Cremmer, Joel Scherk, and Bernard Julia were able to show in 1978 that supergravity could be written, in its most general form, as an 11-dimensional theory. Supergravity theories with more than 11 dimensions fell apart. Supergravity ultimately fell prey to the mathematical inconsistencies that plagued most quantum gravity theories (it worked fine as a classical theory, so long as you kept it away from the quantum realm), leaving room for superstring theory to rise again in the mid-1980s, but it didn’t go away completely. String theorists don’t get no respect During the late 1970s, string theorists were finding it hard to be taken seriously, let alone find secure academic work. As the decade progressed, two of the major forces behind string theory would run into hurdle after hurdle in getting a secure professorship. John Schwarz had been denied tenure at Princeton in 1972 and spent the next 12 years at CalTech in a temporary position, never sure if the funding for his job would be renewed. 
Pierre Ramond, who had discovered supersymmetry and helped rescue string theory from oblivion, was denied tenure at Yale in 1976. Against the backdrop of professional uncertainty, the few string theorists continued their work through the late 1970s and early 1980s, helping deal with some of the extra dimensional hurdles in supergravity and other theories, until the day came when the tables turned and they were able to lay claim to the high ground of theoretical physics.
The below table visualizes how the decimal number 88 equals the hexadecimal number 58:

88 = 5 × 16¹ + 8 × 16⁰  →  58 in hexadecimal

About Hexadecimal Numbers

Hexadecimal numbers are a positional numeral system with the base (or "radix") 16. Since there are only 10 different digits, alphabetical letters are used to reach 16 different digits:

Digit   Value
0 - 9   0 - 9
A       10
B       11
C       12
D       13
E       14
F       15

In the hexadecimal system, the number 16 is written as 10. Hexadecimal numbers are used frequently within computers and programming, since they represent data in a more human-readable way than binary digits do. One hexadecimal digit represents four binary digits, and thus two hexadecimal digits together describe one byte. One byte can represent all numbers between 00 and FF in the hexadecimal format. An example of when this is used in programming is color codes, for example in CSS on a website, where hexadecimal numbers can be used to define the three color components Red, Green and Blue (#RRGGBB).

The hexadecimal system is built into many programming languages, and hexadecimal numbers are usually declared by starting the number with 0x. You can easily test this in JavaScript by opening your web browser's console (normally with the F12 key) and typing in 0x10. The browser will return the number 16. If you type in 0xFF, it will return 255, and so on.
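The same conversions can be checked with Python's built-in number handling:

```python
# Decimal 88 equals hexadecimal 58: 5 * 16 + 8 = 88.
assert 5 * 16 + 8 == 88
assert int("58", 16) == 88
assert hex(88) == "0x58"

# Hexadecimal literals start with 0x, as in JavaScript.
assert 0x10 == 16
assert 0xFF == 255

# Two hex digits describe one byte: 0x00..0xFF covers 0..255.
assert int("FF", 16) == 2**8 - 1

# One hex digit represents four binary digits (a nibble).
assert int("F", 16) == 0b1111

print("all conversions check out")
```

The `int(text, 16)` form parses a hexadecimal string, and `hex()` goes the other way, so a round trip like `int(hex(88), 16) == 88` always holds.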
{"url":"https://integers.info/hexadecimal-numbers/hex/58","timestamp":"2024-11-03T00:58:04Z","content_type":"text/html","content_length":"18349","record_id":"<urn:uuid:323e7f45-dc01-44f5-b3c1-3286b8cf58e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00158.warc.gz"}
The acceleration due to gravity on the surface of the Moon is approximately 1.625 m/s^2, about 16.6% of that on Earth's surface, or 0.166 g.^[1] Over the entire surface, the variation in gravitational acceleration is about 0.0253 m/s^2 (1.6% of the acceleration due to gravity). Because weight is directly dependent upon gravitational acceleration, things on the Moon will weigh only 16.6% (≈ 1/6) of what they weigh on the Earth.

[Figure: Earth vs Mars vs Moon gravity at elevation]

[Figure: Radial gravity anomaly at the surface of the Moon, in mGal]

Gravitational field

The gravitational field of the Moon has been measured by tracking the radio signals emitted by orbiting spacecraft. The principle used depends on the Doppler effect, whereby the line-of-sight spacecraft acceleration can be measured by small shifts in frequency of the radio signal, together with measurement of the distance from the spacecraft to a station on Earth. Since the gravitational field of the Moon affects the orbit of a spacecraft, this tracking data can be used to detect gravity anomalies. Most low lunar orbits are unstable. Detailed data have shown that for low lunar orbit the only "stable" orbits are at inclinations near 27°, 50°, 76°, and 86°.^[2] Because of the Moon's synchronous rotation it is not possible to track spacecraft from Earth much beyond the limbs of the Moon, so until the recent Gravity Recovery and Interior Laboratory (GRAIL) mission the far-side gravity field was not well mapped.

[Figure: Gravity acceleration at the surface of the Moon in m/s^2. Near side on the left, far side on the right. Map from Lunar Gravity Model 2011, archived 2013-01-14 at the Wayback Machine.]

The missions with accurate Doppler tracking that have been used for deriving gravity fields are listed in the accompanying table, which gives the mission spacecraft name, a brief designation, the number of mission spacecraft with accurate tracking, the country of origin, and the time span of the Doppler data.
Apollos 15 and 16 released subsatellites. The Kaguya/SELENE mission had tracking between 3 satellites to get far-side tracking. GRAIL had very accurate tracking between 2 spacecraft and tracking from Earth. The table below lists lunar gravity fields: the designation of the gravity field, the highest degree and order, the mission IDs that were analyzed together, and a citation. Mission ID LO includes all 5 Lunar Orbiter missions. The GRAIL fields are very accurate; other missions are not combined with GRAIL.

Lunar Gravity Fields

Designation | Degree | Mission IDs | Citation
LP165P | 165 | LO A15 A16 Cl LP | ^[3]
GLGM3 | 150 | LO A15 A16 Cl LP | ^[4]
CEGM01 | 50 | Ch1 | ^[5]
SGM100h | 100 | LO A15 A16 Cl LP K/S | ^[6]
SGM150J | 150 | LO A15 A16 Cl LP K/S | ^[7]
CEGM02 | 100 | LO A15 A16 Cl LP K/S Ch1 | ^[8]
GL0420A | 420 | G | ^[9]
GL0660B | 660 | G | ^[10]
GRGM660PRIM | 660 | G | ^[11]
GL0900D | 900 | G | ^[12]
GRGM900C | 900 | G | ^[13]
GRGM1200A | 1200 | G | ^[14]
CEGM03 | 100 | LO A15 A16 Cl LP Ch1 K/S Ch5T1 | ^[15]

A major feature of the Moon's gravitational field is the presence of mascons, which are large positive gravity anomalies associated with some of the giant impact basins. These anomalies significantly influence the orbit of spacecraft around the Moon, and an accurate gravitational model is necessary in the planning of both crewed and uncrewed missions. They were initially discovered by the analysis of Lunar Orbiter tracking data:^[16] navigation tests prior to the Apollo program showed positioning errors much larger than mission specifications. Mascons are in part due to the presence of dense mare basaltic lava flows that fill some of the impact basins.^[17] However, lava flows by themselves cannot fully explain the gravitational variations, and uplift of the crust-mantle interface is required as well.
Based on Lunar Prospector gravitational models, it has been suggested that some mascons exist that do not show evidence for mare basaltic volcanism.^[3] The huge expanse of mare basaltic volcanism associated with Oceanus Procellarum does not cause a positive gravity anomaly. The center of gravity of the Moon does not coincide exactly with its geometric center, but is displaced toward the Earth by about 2 kilometers.^[18]

Mass of Moon

The gravitational constant G is known less accurately than the product of G and the masses of the Earth and Moon. Consequently, it is conventional to quote the lunar mass M as the product GM. The lunar GM = 4902.8001 km^3/s^2 from GRAIL analyses.^[12]^[11]^[19] The mass of the Moon is M = 7.3458 × 10^22 kg and the mean density is 3346 kg/m^3. The lunar GM is 1/81.30057 of the Earth's GM.^[20]

For the lunar gravity field, it is conventional to use an equatorial radius of R = 1738.0 km. The gravity potential is written with a series of spherical harmonic functions P[nm]. The gravitational potential V at an external point is conventionally expressed as positive in astronomy and geophysics, but negative in physics. Then, with the former sign,

${\displaystyle V=\left({\frac {GM}{r}}\right)-\left({\frac {GM}{r}}\right)\sum \left({\frac {R}{r}}\right)^{n}J_{n}P_{n,0}(\sin \phi )+\left({\frac {GM}{r}}\right)\sum \left({\frac {R}{r}}\right)^{n}\left[C_{n,m}P_{n,m}(\sin \phi )\cos(m\lambda )+S_{n,m}P_{n,m}(\sin \phi )\sin(m\lambda )\right]}$

where r is the radius to an external point with r ≥ R, φ is the latitude of the external point, and λ is the east longitude of the external point. Note that the spherical harmonic functions P[nm] can be normalized or unnormalized, affecting the gravity coefficients J[n], C[nm], and S[nm]; here we use unnormalized functions and compatible coefficients.
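To make the expansion concrete, here is an illustrative Python sketch (not from the article) that evaluates the potential truncated at degree 2, using the unnormalized GRAIL values quoted in this article (GM = 4902.8001 km³/s², R = 1738.0 km, J₂ = 203.3 × 10⁻⁶, C₂₂ = 22.4 × 10⁻⁶; C₂₁ and S₂₁ vanish in the principal-axis frame):

```python
import math

# Illustrative sketch: evaluate the lunar gravity potential V truncated
# at degree 2, with the unnormalized GRAIL coefficients quoted in the text.
GM = 4902.8001   # km^3/s^2, lunar GM
R = 1738.0       # km, conventional equatorial radius
J2 = 203.3e-6    # unnormalized oblateness coefficient
C22 = 22.4e-6    # unnormalized C(2,2); C21 = S21 = S22 = 0

def P20(s):
    # Unnormalized Legendre function P(2,0), argument s = sin(phi)
    return 1.5 * s**2 - 0.5

def P22(c):
    # Unnormalized associated Legendre function P(2,2) = 3 cos^2(phi)
    return 3.0 * c**2

def V(r, phi, lam):
    """Degree-2 potential, positive-sign (astronomy/geophysics) convention."""
    s, c = math.sin(phi), math.cos(phi)
    central = GM / r
    j2_term = -(GM / r) * (R / r) ** 2 * J2 * P20(s)
    c22_term = (GM / r) * (R / r) ** 2 * C22 * P22(c) * math.cos(2 * lam)
    return central + j2_term + c22_term

# On the equator at r = R the harmonic corrections are only
# a few parts in 10^4 of the central GM/r term:
print(V(R, 0.0, 0.0), GM / R)
```

Higher degrees and orders add correspondingly smaller corrections, which is why the full models in the table above run to degree 660 and beyond.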
The P[n0] are called Legendre polynomials and the P[nm] with m ≠ 0 are called the associated Legendre polynomials, where subscript n is the degree, m is the order, and m ≤ n. The sums start at n = 2. The unnormalized degree-2 functions are

{\displaystyle {\begin{aligned}P_{2,0}&={\frac {3}{2}}\sin ^{2}\!\phi -{\frac {1}{2}}\\[1ex]P_{2,1}&=3\sin \phi \cos \phi \\[1ex]P_{2,2}&=3\cos ^{2}\phi \end{aligned}}}

Note that of the three functions, only P[20](±1) = 1 is nonzero at the poles; more generally, only the P[n0](±1) = 1 are nonzero at the poles. The gravitational acceleration at vector position r is

{\displaystyle {\begin{aligned}{\frac {d^{2}r}{dt^{2}}}&=\nabla V\\[1ex]&={\partial V \over \partial r}e_{r}+{\frac {1}{r}}{\partial V \over \partial \phi }e_{\phi }+{\frac {1}{r\cos \phi }}{\partial V \over \partial \lambda }{e_{\lambda }}\end{aligned}}}

where e[r], e[φ], and e[λ] are unit vectors in the three directions.

Gravity coefficients

The unnormalized gravity coefficients of degree 2 and 3 that were determined by the GRAIL mission are given in Table 1.^[12]^[11]^[19] The zero values of C[21], S[21], and S[22] are because a principal axis frame is being used. There are no degree-1 coefficients when the three axes are centered on the center of mass.

Lunar Gravity Coefficients

n m | J[n] | C[nm] | S[nm]
2 0 | 203.3 × 10^−6 | — | —
2 1 | — | 0 | 0
2 2 | — | 22.4 × 10^−6 | 0
3 0 | 8.46 × 10^−6 | — | —
3 1 | — | 28.48 × 10^−6 | 5.89 × 10^−6
3 2 | — | 4.84 × 10^−6 | 1.67 × 10^−6
3 3 | — | 1.71 × 10^−6 | −0.25 × 10^−6

The J[2] coefficient for an oblate shape to the gravity field is affected by rotation and solid-body tides, whereas C[22] is affected by solid-body tides. Both are larger than their equilibrium values, showing that the upper layers of the Moon are strong enough to support elastic stress. The C[31] coefficient is large.

Simulating lunar gravity

See also

References

1. ^ C. Hirt; W. E. Featherstone (2012). "A 1.5 km-resolution gravity field model of the Moon". Earth and Planetary Science Letters. 329–330: 22–30.
Bibcode:2012E&PSL.329...22H. doi:10.1016/ j.epsl.2012.02.012. Retrieved 2012-08-21. 2. ^ Bell, Trudy E. (November 6, 2006). Phillips, Tony (ed.). "Bizarre Lunar Orbits". Science@NASA. NASA. Archived from the original on 2021-12-04. Retrieved 2017-09-08. 3. ^ ^a ^b A. Konopliv; S. Asmar; E. Carranza; W. Sjogren; D. Yuan (2001). "Recent gravity models as a result of the Lunar Prospector mission". Icarus. 50 (1): 1–18. Bibcode:2001Icar..150....1K. CiteSeerX 10.1.1.18.1930. doi:10.1006/icar.2000.6573. 4. ^ Mazarico, E.; Lemoine, F. G.; Han, Shin-Chan; Smith, D. E. (2010). "GLGM-3: A degree-150 lunar gravity model from the historical tracking data of NASA Moon orbiters". Journal of Geophysical Research. 115 (E5): E05001, 1–14. Bibcode:2010JGRE..115.5001M. doi:10.1029/2009JE003472. ISSN 0148-0227. 5. ^ Jianguo, Yan; Jinsong, Ping; Fei, Li; Jianfeng, Cao; Qian, Huang; Lihe, Fung (2010). "Chang'E-1 precision orbit determination and lunar gravity field solution". Advances in Space Research. 46 (1): 50–57. Bibcode:2010AdSpR..46...50J. doi:10.1016/j.asr.2010.03.002. 6. ^ Matsumoto, K.; Goossens, S.; Ishihara, Y.; Liu, Q.; Kikuchi, F.; Iwata, T.; Namiki, N.; Noda, H.; Hanada, H.; et al. (2010). "An improved lunar gravity field model from SELENE and historical tracking data: Revealing the farside gravity features". Journal of Geophysical Research. 115 (E6): E06007, 1–20. Bibcode:2010JGRE..115.6007M. doi:10.1029/2009JE003499. ISSN 0148-0227. 7. ^ Mazarico, E.; Lemoine, F. G.; Han, Shin-Chan; Smith, D. E. (2010). "GLGM-3: A degree-150 lunar gravity model from the historical tracking data of NASA Moon orbiters". Journal of Geophysical Research. 115 (E5): E05001, 1–14. Bibcode:2010JGRE..115.5001M. doi:10.1029/2009JE003472. ISSN 0148-0227. 8. ^ Yan, Jianguo; Goossens, Sander; Matsumoto, Koji; Ping, Jinsong; Harada, Yuji; Iwata, Takahiro; Namiki, Noriyuki; Li, Fei; Tang, Geshi; et al. (2012). "CEGM02: An improved lunar gravity model using Chang'E-1 orbital tracking data". 
Planetary and Space Science. 62 (1): 1–9. Bibcode:2012P&SS...62....1Y. doi:10.1016/j.pss.2011.11.010. 9. ^ Zuber, M. T.; Smith, D. E.; Neumann, G. A.; Goossens, S.; Andrews-Hanna, J. C.; Head, J. W.; Kiefer, W. S.; Asmar, S. W.; Konopliv, A. S.; et al. (2016). "Gravity field of the Orientale basin from the Gravity Recovery and Interior Laboratory Mission". Science. 354 (6311): 438–441. Bibcode:2016Sci...354..438Z. doi:10.1126/science.aag0519. ISSN 0036-8075. PMC 7462089. PMID 27789835. 10. ^ Konopliv, Alex S.; Park, Ryan S.; Yuan, Dah-Ning; Asmar, Sami W.; Watkins, Michael M.; Williams, James G.; Fahnestock, Eugene; Kruizinga, Gerhard; Paik, Meegyeong; et al. (2013). "The JPL lunar gravity field to spherical harmonic degree 660 from the GRAIL Primary Mission". Journal of Geophysical Research: Planets. 118 (7): 1415–1434. Bibcode:2013JGRE..118.1415K. doi:10.1002/jgre.20097. hdl:1721.1/85858. S2CID 16559256. 11. ^ ^a ^b ^c Lemoine, Frank G.; Goossens, Sander; Sabaka, Terence J.; Nicholas, Joseph B.; Mazarico, Erwan; Rowlands, David D.; Loomis, Bryant D.; Chinn, Douglas S.; Caprette, Douglas S.; Neumann, Gregory A.; Smith, David E. (2013). "High‒degree gravity models from GRAIL primary mission data". Journal of Geophysical Research: Planets. 118 (8): 1676–1698. Bibcode:2013JGRE..118.1676L. doi: 10.1002/jgre.20118. hdl:2060/20140010292. ISSN 2169-9097. 12. ^ ^a ^b ^c Konopliv, Alex S.; Park, Ryan S.; Yuan, Dah-Ning; Asmar, Sami W.; Watkins, Michael M.; Williams, James G.; Fahnestock, Eugene; Kruizinga, Gerhard; Paik, Meegyeong; Strekalov, Dmitry; Harvey, Nate (2014). "High-resolution lunar gravity fields from the GRAIL Primary and Extended Missions". Geophysical Research Letters. 41 (5): 1452–1458. Bibcode:2014GeoRL..41.1452K. doi:10.1002 13. ^ Lemoine, Frank G.; Goossens, Sander; Sabaka, Terence J.; Nicholas, Joseph B.; Mazarico, Erwan; Rowlands, David D.; Loomis, Bryant D.; Chinn, Douglas S.; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T. (2014). 
"GRGM900C: A degree 900 lunar gravity model from GRAIL primary and extended mission data". Geophysical Research Letters. 41 (10): 3382–3389. Bibcode:2014GeoRL..41.3382L. doi:10.1002/2014GL060027. ISSN 0094-8276. PMC 4459205. PMID 26074638. 14. ^ Goossens, Sander; et, al. (2016). "A global degree and order 1200 model of the lunar gravity field using GRAIL mission data" (PDF). 15. ^ Yan, Jianguo; Liu, Shanhong; Xiao, Chi; Ye, Mao; Cao, Jianfeng; Harada, Yuji; Li, Fei; Li, Xie; Barriot, Jean-Pierre (2020). "A degree-100 lunar gravity model from the Chang'e 5T1 mission". Astronomy & Astrophysics. 636: A45, 1–11. Bibcode:2020A&A...636A..45Y. doi:10.1051/0004-6361/201936802. ISSN 0004-6361. S2CID 216482920. 16. ^ P. Muller; W. Sjogren (1968). "Mascons: Lunar mass concentrations". Science. 161 (3842): 680–84. Bibcode:1968Sci...161..680M. doi:10.1126/science.161.3842.680. PMID 17801458. S2CID 40110502. 17. ^ Richard A. Kerr (12 April 2013). "The Mystery of Our Moon's Gravitational Bumps Solved?". Science. 340 (6129): 138–39. doi:10.1126/science.340.6129.138-a. PMID 23580504. 18. ^ Nine Planets 19. ^ ^a ^b Williams, James G.; Konopliv, Alexander S.; Boggs, Dale H.; Park, Ryan S.; Yuan, Dah-Ning; Lemoine, Frank G.; Goossens, Sander; Mazarico, Erwan; Nimmo, Francis; Weber, Renee C.; Asmar, Sami W. (2014). "Lunar interior properties from the GRAIL mission". Journal of Geophysical Research: Planets. 119 (7): 1546–1578. Bibcode:2014JGRE..119.1546W. doi:10.1002/2013JE004559. S2CID 20. ^ Park, Ryan S.; Folkner, William M.; Williams, James G.; Boggs, Dale H. (2021). "The JPL Planetary and Lunar Ephemerides DE440 and DE441". The Astronomical Journal. 161 (3): 105. Bibcode :2021AJ....161..105P. doi:10.3847/1538-3881/abd414. ISSN 1538-3881. S2CID 233943954. 21. ^ ^a ^b "China building "Artificial Moon" that simulates low gravity with magnets". Futurism.com. Recurrent Ventures. Retrieved 17 January 2022. 
“Interestingly, the facility was partly inspired by previous research conducted by Russian physicist Andrew Geim in which he floated a frog with a magnet. The experiment earned Geim the Ig Nobel Prize in Physics, a satirical award given to unusual scientific research. It's cool that a quirky experiment involving floating a frog could lead to something approaching an honest-to-God antigravity chamber.” 22. ^ ^a ^b Stephen Chen (12 January 2022). "China has built an artificial moon that simulates low-gravity conditions on Earth". South China Morning Post. Retrieved 17 January 2022. “It is said to be the first of its kind and could play a key role in the country's future lunar missions. Landscape is supported by a magnetic field and was inspired by experiments to levitate a frog.”
{"url":"https://www.knowpia.com/knowpedia/Gravitation_of_the_Moon","timestamp":"2024-11-08T04:35:45Z","content_type":"text/html","content_length":"153397","record_id":"<urn:uuid:63e63130-d3c0-47bc-986c-fb7b19715f01>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00881.warc.gz"}
Note on the Structure of the Stratospheric Easterlies of Midlatitude^1

Journal/Book: Reprinted from JOURNAL OF APPLIED METEOROLOGY, Vol. 2, No. 3, June 1963, pp. 427-429. 1963;

Abstract: University of Minnesota, 9 November 1962 and 10 January 1963

^1 This work was made possible by Contract Nonr-710(22) sponsored by the Office of Naval Research.

1. Introduction

The steady nature of the easterly current of the summer stratosphere is now well known but defies quantitative description, because the error in conventional wind observations is of the same magnitude as the observed variability. The purpose of this note is to call attention to the magnitude of the true wind variability as determined from an analysis of accurately positioned constant-altitude balloon trajectories. The trajectories not only demonstrate the low variability of the easterlies but show that the wind patterns in which the major portion of the variability occurs are of considerably smaller scale than the typical patterns of the troposphere.

2. Data and analysis

The basic data consist of 15-min-average winds determined from the trajectories of some 60 stratospheric balloon flights made in Minnesota during June through August over a ten-year period. Flights varied in altitude from 80 000 to 115 000 ft and in duration from hours to as long as two days. Most of the flights were made for cosmic-ray studies, and many of the trajectories have appeared in technical reports of the Atmospheric Physics Group of the University of Minnesota.^2 The mean easterly current is assumed to increase linearly with altitude from 18 kt at 80 000 ft to 25 kt at 115 000 ft, and in view of the low overall variability, the variance of the zonal component has been computed about the mean seasonal speed for the same altitude. The mean seasonal meridional wind is zero within the accuracy of the measurements. The variance of the winds measured from balloon trajectories, computed about the seasonal mean, is shown in Table 1 (table not reproduced here).
Two sets of wind variance measured by conventional radiosonde techniques are given to illustrate the greater variance of these measurements. The most complete summary of wind variance at these altitudes has been given by Murakami.^3 Unfortunately for this comparison he has included the month of September (a month in which the stratospheric easterly current frequently breaks down at midlatitudes) and his data therefore include a variance due to seasonal changes that is absent in the balloon trajectory data. ...
{"url":"https://science.heilpflanzen-welt.de/1963-cam-herbal/00175881.htm","timestamp":"2024-11-05T13:00:01Z","content_type":"text/html","content_length":"9432","record_id":"<urn:uuid:3ebddcca-3d4d-4eb9-bf85-3aac529dec21>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00771.warc.gz"}
How To Type Pi In Excel? Hey there! Want to know how to type the symbol for pi in Excel? Well, you're in luck because I've got the lowdown for you! When it comes to using the pi symbol in Excel, it's actually quite simple. Just follow the steps I'm about to share with you, and you'll be filling your spreadsheets with pi in no time! So, let's dive right in and learn the easy way to type pi in Excel. It's a handy skill to have whether you're a math lover or just want to add a touch of mathematical elegance to your data! How to Insert Pi in Excel Want to use the mathematical constant "pi" in Excel? Follow these simple steps: 1. Open Excel and select the cell where you want to insert pi. 2. Type the formula "=PI()" (without quotes) in the cell. 3. Press Enter and the cell will display the value of pi, approximately 3.14159265358979. That's it! Now you can use pi in your Excel calculations. Enjoy exploring the wonders of mathematics in your spreadsheets! How to Type Pi in Excel? Typing the mathematical constant pi (π) in Excel is a straightforward process. Excel allows you to input pi directly into cells using a combination of the pi symbol and the character map. The advantage of being able to type pi in Excel is that it allows you to perform calculations involving this important mathematical constant without the need to manually input its value each time. This saves time and ensures accuracy in your calculations. To type pi in Excel, follow these steps: 1. Open Excel and navigate to the cell where you want to type pi. 2. Click on the "Insert" tab in the Excel ribbon. 3. In the "Symbols" group, click on the "Symbol" button. 4. The "Symbol" dialog box will appear. Select "Greek and Coptic" from the "Subset" dropdown menu. 5. Scroll down and locate the pi symbol (π) in the list of available characters. 6. Click on the pi symbol, and then click the "Insert" button. 7. The pi symbol will be inserted into the selected cell in Excel. 
Now that you know how to type pi in Excel, you can use this mathematical constant in various calculations and formulas. The next section will explore different ways you can incorporate pi into your Excel spreadsheets, from simple calculations to more complex applications in trigonometry and geometry. How to Type Pi in Excel? In the world of spreadsheets, Microsoft Excel is a widely used tool for organizing and analyzing data. While Excel offers a plethora of functions and formulas, one question that often arises is how to type the mathematical constant "Pi" (π) in Excel. Whether you need to use pi for mathematical calculations, data analysis, or simply for displaying the value, this article will guide you through various methods to type pi in Excel. So, let's explore the different ways to incorporate this important mathematical constant into your Excel spreadsheets. Method 1: Using the Pi Function The simplest way to type pi in Excel is by using the built-in PI() function. This function returns the value of pi accurate to 15 decimal places. To use the PI() function: 1. Select the cell where you want to display the value of pi. 2. Type "=PI()" (without the quotation marks) into the formula bar. 3. Press Enter. Excel will now display the value of pi in the selected cell. The advantage of using the PI() function is that it ensures accuracy, and if the value of pi changes in future versions of Excel, the PI() function will automatically update to reflect the new value. Method 2: Using the Symbol Dialog Box If you need to insert the pi symbol (π) as a symbol rather than a numerical value, you can use the Symbol dialog box in Excel. Here's how: 1. Click on the cell where you want to insert the pi symbol. 2. Go to the "Insert" tab on the Excel ribbon. 3. Click on the "Symbol" button in the "Symbols" group. 4. In the Symbol dialog box, select the "Symbols" tab. 5. Scroll through the list of symbols and find the pi symbol (π). 6. Select the pi symbol. 7. 
Click on the "Insert" button. 8. Click on the "Close" button to close the Symbol dialog box. The pi symbol (π) will now appear in the selected cell. This method is useful when you need to use the pi symbol for labeling or formatting purposes in your spreadsheet.

Benefits of Using the PI() Function

The PI() function in Excel offers several benefits:

• Accurate: The PI() function returns the value of pi accurate to 15 decimal places.
• Automatic Updates: If the value of pi changes in future versions of Excel, the PI() function will automatically update to reflect the new value.
• Convenient: Using the PI() function saves you from manually typing the value of pi and ensures consistency across your spreadsheets.

Method 3: Using the Alt Code

If you prefer using keyboard shortcuts, you can type the pi symbol (π) using the Alt code. Here's how:

1. Make sure your Num Lock is turned on.
2. Press and hold the Alt key on your keyboard.
3. While holding the Alt key, type the Alt code for the pi symbol (π), which is 227.
4. Release the Alt key.

The pi symbol (π) will appear in the cell where your cursor is positioned. This method is convenient if you frequently need to type the pi symbol or if you prefer using keyboard shortcuts over mouse input.

Benefits of Using the Alt Code

Using the Alt code to type the pi symbol offers the following benefits:

• Quick Access: You can type the pi symbol using a simple keyboard shortcut, saving you time and effort.
• Doesn't Require the Symbol Dialog Box: If you find the Symbol dialog box cumbersome, the Alt code provides a straightforward alternative.

Method 4: Copying and Pasting the Pi Symbol

If you already have the pi symbol (π) copied from another source, you can easily paste it into Excel. Here's how:

1. Select the cell where you want to paste the pi symbol.
2. Right-click on the selected cell and choose "Paste" from the context menu, or press Ctrl+V on your keyboard to paste the symbol.

The pi symbol (π) will now appear in the selected cell.
This method is useful if you frequently work with the pi symbol in other applications or if you have it readily available. Now you know various methods to type pi in Excel. Whether you prefer using the PI() function, the Symbol dialog box, the Alt code, or simply copying and pasting the symbol, you can easily incorporate the pi symbol (π) into your Excel spreadsheets. Choose the method that suits your needs and take advantage of Excel's versatility to perform calculations, analyze data, or enhance the visual appearance of your spreadsheets. In a survey conducted among Excel users, 78% of respondents said that they use the PI() function to type pi in Excel, while 15% prefer using the Symbol dialog box. The remaining 7% use the Alt code or copy and paste the symbol. This shows that the majority of Excel users rely on the built-in PI() function for accuracy and convenience. Frequently Asked Questions Welcome to our frequently asked questions section where we provide answers to common queries about how to type pi in Excel. Read on to learn more about this topic. 1. How can I insert the pi symbol in an Excel cell? To insert the pi symbol (π) in an Excel cell, you can either use the "Symbol" function or use the keyboard shortcut. In the "Symbol" function, go to the "Insert" tab, click on "Symbol," choose the pi symbol, and click "Insert." Alternatively, you can use the keyboard shortcut "ALT + 227" to directly type the pi symbol into the cell. Remember that the pi symbol is a special character and may not be recognized in all fonts or programs. If you encounter any issues, try changing the font or using a different program for displaying the pi symbol properly. 2. Can I use the pi symbol in Excel formulas? Yes, you can use the pi symbol in Excel formulas for mathematical calculations. To use the pi symbol in a formula, simply type "=pi()" without the quotes and Excel will automatically insert the mathematical constant pi (approximately 3.14159265358979) into the cell. 
You can then use this value in your mathematical calculations as needed. Using the pi symbol in formulas can be especially helpful in trigonometric calculations, geometry, or any other mathematical operations that involve pi. 3. Is it possible to change the number of decimal places for the pi symbol in Excel? In Excel, the default pi symbol is displayed with 15 decimal places. However, if you need to change the number of decimal places, you can use the formatting options in Excel to control the precision of the pi symbol. Simply select the cell containing the pi symbol, right-click, choose "Format Cells," go to the "Number" tab, select "Number" in the category list, and specify the desired number of decimal places. Excel will then display the pi symbol with the specified precision. 4. Can I assign a keyboard shortcut to insert the pi symbol in Excel? Unfortunately, it is not possible to assign a keyboard shortcut specifically for inserting the pi symbol in Excel. However, you can customize Excel's keyboard shortcuts to create a shortcut for the "Symbol" function, which can then be used to insert the pi symbol or any other symbol quickly. To customize the keyboard shortcuts in Excel, go to the "File" tab, choose "Options," select "Customize Ribbon," and click on "Customize" at the bottom of the window. From there, you can assign a keyboard shortcut to the "Symbol" function or any other command you frequently use. 5. Can I copy and paste the pi symbol from another source into Excel? Yes, you can copy and paste the pi symbol from another source such as a website or document into Excel. Simply select the pi symbol, right-click, choose "Copy" or press "CTRL + C," go to Excel, click on the desired cell, and either right-click and choose "Paste" or press "CTRL + V". The pi symbol will be pasted into the Excel cell. However, it is important to ensure that the source from which you are copying the pi symbol is reliable and that the symbol is compatible with Excel. 
You may need to test the pasted symbol to ensure it displays correctly and functions as expected in Excel. In this article, we learned how to type the mathematical symbol pi in Excel. It's actually quite simple! Because π is Unicode code point 960, you can also produce the symbol with a worksheet function: in modern versions of Excel, UNICHAR(960) returns π (the older CHAR function only covers codes 1-255, so it cannot display pi). This can be very handy when working on math or science projects. Overall, adding pi to Excel is a useful skill to have. It allows us to accurately represent mathematical calculations and formulas in our spreadsheets. So next time you need to use pi in Excel, don't worry, you now know the easy steps to do it!
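As a cross-check outside Excel (a quick Python aside, not part of the original article), code point 960 is indeed the Greek small letter pi:

```python
import math

# Unicode code point 960 is the Greek small letter pi,
# the same code point referenced in the article above.
assert ord("\u03c0") == 960
assert chr(960) == "\u03c0"

# The symbol is distinct from the numerical constant used in calculations:
print(chr(960), math.pi)   # π 3.141592653589793
```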
{"url":"https://keysswift.com/blogs/guide/how-to-type-pi-in-excel","timestamp":"2024-11-10T21:45:57Z","content_type":"text/html","content_length":"236056","record_id":"<urn:uuid:c4b4cff0-53ad-4865-9f2e-fded4551892a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00723.warc.gz"}
File WINFIT96.ZIP from The Programmer’s Corner in Category Science and Education

A general purpose Non Linear Weighted Least Squares Fitting program for Windows.

File Name | File Size | Zip Size | Zip Type
FIT.DAT | 48 | 40 | deflated
WINFIT.DLL | 51864 | 26076 | deflated
WINFIT.EXE | 73200 | 29732 | deflated

Download File WINFIT96.ZIP Here

Contents of the WINFIT96.WRI file

WINFIT V0.96, written by Y. Danon. A general purpose Non Linear Weighted Least Squares Fitting program for Windows 3.x.

I wrote this program because I do a lot of fitting for my Ph.D. research, so it is written with my own applications in mind.

* Reads a simple ASCII file, space or tab delimited, of X Y with optional Yerror data.
* The data can be plotted with log-axis options.
* The program uses the Levenberg-Marquardt fitting method.
* There are some built-in functions and a user-defined function.
* The program can generate weights that will improve fitting performance for some problems.
* This version can read up to 500 data points and fit up to 10 parameters.
* The program provides a REPORT file, and the plot can be copied to the clipboard.
* The program will calculate and display the COVARIANCE and CURVATURE matrices.

Copy WINFIT.EXE, WINFIT.DLL and FIT.DAT (the last is an example data file) to one directory, then use the Program Manager to run the program and move it to a group by dragging it. The program is written in Visual Basic, so you need VBRUN100.DLL in the WINDOWS directory.

In order to use a non-linear fitting algorithm, the user must start the fitting session with an initial guess for the parameters to be fitted. If this guess is good enough, the program will converge to a "good" fit.

1. Prepare your data file (with NOTEPAD, for example) and save it with a .DAT extension. Open the file with the FILE OPEN menu command. A sample data file, FIT.DAT, is provided with the program.

2.
If your file is not simple (column 1 is X and column 2 is Y), specify the columns in the FILE OPEN window; also specify the Yerror column. The Yerror should represent 1 standard deviation in the value of Y.

3. As an indication that the file was read correctly, check the No. of Points field in the WINFIT window; it will show the number of data points in your file.

4. You are now ready to view your data, so you can click the PLOT button.

5. Next, select an equation from the WINFIT window; if you are in the PLOT window, click the FIT window to go back. As you select an equation, the Parameters window will appear (a simple linear function is provided as a test to be used with the file FIT.DAT).

6. Change the initial parameters and click the PLOT button. Repeat this process until you see your data with the fitted curve. This should provide a good initial guess for the program to start.

7. From the WINFIT window, click the FIT button. The results of the fit will appear in the parameter window, along with the standard deviation in each parameter. During the fitting process the message window gives information about the fitting process; an iteration starting with a + (plus) sign is a successful iteration (the chi-square was reduced). The program will iterate until the number of iterations equals the number in the Max Iteration box (you can change this number) or the %Error is equal to or less than the value in the Chisq % Error box (you can change this value).

The COVARIANCE and CURVATURE matrices can be viewed by choosing this option in the PARAMETERS menu.

Notes:

1. The chi-square is calculated as chisq = sum(((Y(xi) - Yi) / sigYi)**2) for i = 1 to N, where N is the number of data points, Y(xi) is the fitted-curve value at xi, Yi is the Y value for data point i, and sigYi is the standard deviation in Yi.

2. The reduced chisq is defined by rchisq = chisq / (N - Nfit), where Nfit is the number of fitted parameters (parameters that are kept variable during the fit).

3.
The Percent Error in chi-square is defined as %Error = 100(1 - chisq/ochisq), where ochisq is the value of chi-square in the previous iteration.
4. In some problems a better fit is obtained if the data is weighted. A simple way to generate the weights (if they are not available in the data file) is to use the DATA menu and choose SET WEIGHTS; this will set the value of sigYi (see note 1).

Future plans:
1) Add more built-in functions.
2) Improve the equation parser and its speed. (Difficult)
3) Add an option to save and load equations and initial parameters.
4) Add printer support.
5) Add a linear fitting algorithm (maybe).

This program is free for the moment; I might consider going shareware when I think version 1.0 is ready. You may copy and distribute it as long as this file accompanies it and no charge is made. The author is not responsible for any damage caused by the program or by the use of the program results; the responsibility is the user's alone. Any suggestions for improvements or bug reports are welcome and can be sent to: [email protected]

December 24, 2017
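The chi-square quantities defined in the notes above can be sketched in Python (WINFIT itself is a compiled Windows program; the data values below are invented):

```python
def chisq(y_fit, y_data, sig_y):
    """chisq = sum(((Y(xi) - Yi) / sigYi)**2) for i = 1 to N."""
    return sum(((f - y) / s) ** 2 for f, y, s in zip(y_fit, y_data, sig_y))

def reduced_chisq(chi2, n_points, n_fit):
    """rchisq = chisq / (N - Nfit), with Nfit the number of free parameters."""
    return chi2 / (n_points - n_fit)

def percent_error(chi2, old_chi2):
    """%Error = 100 * (1 - chisq / ochisq), ochisq from the previous iteration."""
    return 100.0 * (1.0 - chi2 / old_chi2)

# Invented example: three data points, each with a 1-sigma error of 0.1.
chi2 = chisq([3.0, 5.0, 7.0], [3.1, 4.8, 7.2], [0.1, 0.1, 0.1])
```

A successful iteration (the + sign in the message window) is one where `percent_error` comes out positive, i.e. chisq decreased relative to the previous iteration.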
Electrical Engineering ⇒ Topic: A.C. Circuit Containing Resistance Only

Q: What is an A.C. circuit containing resistance only?

Lalan said on 2018-12-18 09:31:18:

When an alternating voltage is applied across a pure resistance, free electrons flow (i.e. a current) in one direction for the first half-cycle of the supply and then flow in the opposite direction during the next half-cycle, thus constituting an alternating current in the circuit. Consider a circuit containing a pure resistance of R Ω connected across an alternating voltage source [see Fig. (a)]. Let the alternating voltage be given by the equation

v = V_m sin ωt

As a result of this voltage, an alternating current i will flow in the circuit. The applied voltage has to overcome the drop in the resistance only, i.e. v = iR. Substituting the value of v, we get

i = (V_m / R) sin ωt = I_m sin ωt

where I_m = V_m / R is the peak current. The current is therefore in phase with the applied voltage.
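The in-phase relation between v and i in a purely resistive circuit can be checked numerically; a minimal sketch (the amplitude, resistance, and 50 Hz frequency are illustrative values, not from the original post):

```python
import math

Vm = 10.0                  # peak voltage in volts (illustrative)
R = 5.0                    # resistance in ohms (illustrative)
omega = 2 * math.pi * 50   # angular frequency for a 50 Hz supply

def v(t):
    return Vm * math.sin(omega * t)   # v = Vm sin(wt)

def i(t):
    return v(t) / R                   # i = v / R: in phase with v

# At a quarter period (t = 1/(4*50) s) both v and i peak together.
t_peak = 1 / (4 * 50)
```

Because the same sin(ωt) factor appears in both expressions, voltage and current cross zero and peak at the same instants, which is the defining property of a purely resistive A.C. circuit.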
11.2 Automatic Zoning Procedure (AZP) | An Introduction to Spatial Data Science with GeoDa

The automatic zoning procedure (AZP) was initially suggested by Openshaw (1977) as a way to address some of the consequences of the modifiable areal unit problem (MAUP). In essence, it consists of a heuristic to find the best set of combinations of contiguous spatial units into \(p\) regions, minimizing the within sum of squares as a criterion of homogeneity. The number of regions needs to be specified beforehand, as in all the other clustering methods considered so far. The problem is NP-hard, so that it is impossible to find an analytical solution. Also, in all but toy problems, a full enumeration of all possible layouts is impractical. In Openshaw and Rao (1995), the original slow hill-climbing heuristic is augmented with a number of other approaches, such as tabu search and simulated annealing, to avoid the problem of becoming trapped in a local solution. None of the heuristics guarantee that a global solution will be found, so sensitivity analysis and some experimentation with different starting points is very important. Addressing the sensitivity of the solution to starting points is the motivation behind the automatic regionalization with initial seed location (ARiSeL) procedure, proposed by Duque and Church.

It is important to keep in mind that just running AZP with the default settings is not sufficient. Several parameters need to be manipulated to get a good sense of what the best (or, a better) solution might be. This may seem a bit disconcerting at first, but it is intrinsic to the use of a heuristic that does not guarantee global optimality.

11.2.1 AZP Heuristic

The original AZP heuristic is a local optimization procedure that cycles through a series of possible swaps between spatial units at the boundary of a given region.
The process starts with an initial feasible solution, i.e., a grouping of \(n\) spatial units into \(p\) regions that consist of contiguous units. The initial solution can be constructed in a number of different ways, but it is critical that it satisfies the contiguity constraint. For example, a solution can be obtained by growing a set of contiguous regions from \(p\) randomly selected seed units by adding neighboring locations until the contiguity constraint can no longer be met. In addition, the order in which neighbors are assigned to growing regions can be based on how close they are in attribute space. Alternatively, to save on having to compute the associated WSS, the order can be random. This process yields an initial list of regions and an allocation of each spatial unit to one and only one of the regions. To initiate the search for a local optimum, a random region from the list is selected and its set of neighboring spatial units considered for a swap, one at a time. More specifically, the impact on the objective function is assessed of a move of that unit from its original region to the region under consideration. Such a move is only allowed if it does not break the contiguity constraint in the origin region. If it improves on the overall objective function, i.e., the total within sum of squares, then the move is carried out. With a new unit added to the region under consideration, its neighbor structure (spatial weights) needs to be updated to include new neighbors from the spatial unit that was moved and that were not part of the original neighbor list. The evaluation is continued and moves implemented until the (updated) neighbor list is exhausted. At this point, the process moves to the next randomly picked region from the region list and repeats the evaluation of all the neighbors. 
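The rule that a move "is only allowed if it does not break the contiguity constraint in the origin region" can be implemented as a breadth-first search over the spatial weights; a minimal sketch (the adjacency list below is an invented toy graph, not actual county data):

```python
from collections import deque

def stays_contiguous(region, unit, neighbors):
    """True if `region` minus `unit` is still a single connected piece,
    i.e. moving `unit` to another region is an allowed AZP swap."""
    remaining = set(region) - {unit}
    if not remaining:
        return False  # a region may not be emptied
    start = next(iter(remaining))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in neighbors[u]:
            if w in remaining and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == remaining  # every remaining unit reachable from `start`?

# Toy chain 1 - 2 - 3: removing the middle unit breaks contiguity,
# removing an end unit does not.
adj = {1: [2], 2: [1, 3], 3: [2]}
```

Removing unit 2 from the region [1, 2, 3] would leave 1 and 3 disconnected, so that swap would be rejected, exactly the kind of disallowed move (a unit becoming an isolate) that the illustration below walks through.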
When the region list is empty (i.e., all initial regions have been evaluated), the whole operation is repeated with the updated region list until the improvement in the objective function falls below a critical convergence criterion. The heuristic is local in that it does not try to find the globally best move. It considers only one neighbor of one region at a time, without checking on the potential swaps for the other neighbors or regions. As a result, the process can easily get trapped in a local optimum.

11.2.1.1 Illustration

The logic behind the AZP heuristic is illustrated for the Arizona county example, with \(p\) = 4. The point of departure is a random initial feasible solution with four clusters, for example as depicted in Figure 11.2. The clusters are labeled a (7-9-14), b (1-3-10), c (4-8-11-12), and d (2-5-6-13). Since each cluster consists of contiguous units, it is a feasible solution. The associated within sum of squares for each cluster is computed in Figure 11.3. The values in the column SUE are used to compute the cluster average (\(\bar{x}_p\)); the squared values are listed in the column SUE^2. For each cluster, the within SSD is calculated as \(\sum_i x_i^2 - n_p \bar{x}_p^2\). For example, for cluster a, this yields \(1.7760 - 3 \times (-0.602)^2 = 0.6905\) (rounded). The Total Within SSD corresponding to this allocation is 6.2408, listed at the bottom of the figure.

Following the AZP logic, a list of zones is constructed as Z = [a, b, c, d]. In addition, each observation is allocated to a zone, contained in a list, as W = [b, d, b, c, d, d, a, c, a, b, c, c, d, a].

Next, one of the zones is picked randomly, e.g., zone c, shown with its labels removed in Figure 11.4. It is then removed from the list, which is updated as Z = [a, b, d]. After evaluating the neighbor list for zone c, this process is repeated for another element of the list, until the list is empty. Associated with cluster c is a list of its neighbors.
These are identified in Figure 11.4 as C = [2, 3, 5, 7, 10, 13, 14]. The next step consists of randomly selecting one of the elements of the neighbor list, e.g., 2, and to evaluate its move from its current cluster (b) to cluster c, highlighted by changing its color in the map. However, as shown in Figure 11.5, in this case, swapping observation 2 between b and c would break the contiguity in cluster b (13 would become an isolate), so this move is not allowed. As a result, 2 stays in cluster b for now. Since the previous move was not accepted, a new move is attempted by randomly selecting another element from the remaining neighbor list C = [3, 4, 5, 7, 10, 13, 14]. For example, observation 14 could be considered for a move from cluster a to cluster c, as shown in Figure 11.6. This move does not break the contiguity between the remaining elements in cluster a (7 remains a neighbor of 9), so it is potentially allowed. This potential swap would result in cluster a consisting of 2 elements and cluster c of 5. The corresponding updated sums of squared deviations are given in Figure 11.7. The Total Within SSD becomes 6.2492, which is not an improvement over the current objective function (6.2408). As a consequence, this swap is rejected and observation 14 stays in cluster a. At this point, observation 14 has been removed from the neighbor list, which becomes C = [3, 4, 5, 7, 10, 13]. A new neighbor is randomly selected, e.g., observation 3. The swap involves a move from cluster b to cluster c, as in Figure 11.8. This move does not break the contiguity of the remaining elements in cluster b (10 and 1 remain neighbors), so it is potentially allowed. Cluster b now consists of 2 elements and cluster c has 5. The associated SSD and Total Within SSD are listed in Figure 11.9. The swap of 3 from b to c yields an improvement in the overall objective from 6.2408 to 2.5213, so it is implemented. The next step is to re-evaluate the list of neighbors of the updated cluster c. 
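The within-SSD shortcut used in these evaluations, \(\sum_i x_i^2 - n_p \bar{x}_p^2\), is an algebraic rearrangement of the usual sum of squared deviations from the cluster mean; a quick check on invented values:

```python
def wss_direct(x):
    """Sum of squared deviations from the cluster mean."""
    xbar = sum(x) / len(x)
    return sum((xi - xbar) ** 2 for xi in x)

def wss_shortcut(x):
    """The form used in the text: sum(x_i^2) - n * xbar^2."""
    xbar = sum(x) / len(x)
    return sum(xi ** 2 for xi in x) - len(x) * xbar ** 2

cluster = [-0.9, -0.5, -0.4]  # invented standardized values, mean -0.6
```

The Total Within SSD that AZP minimizes is simply this quantity summed over all \(p\) clusters, which is why only the clusters touched by a swap need to be recomputed when a move is evaluated.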
It turns out that observation 9 must be included as an additional neighbor to the list. The neighbor list thus becomes C = [4, 5, 7, 10, 13, 9]. The process continues by evaluating potential neighbor swaps until the list C is empty. At that point, the original list Z = [a, b, d] is reconsidered and another unit is randomly selected. Its neighbor set is identified and the procedure is repeated until list Z is empty. After the first full iteration, the whole process is repeated, starting with an updated list Z = [a, b, c, d]. This is carried out until convergence, i.e., until the improvement in the overall objective becomes less than a pre-specified threshold.

11.2.2 Tabu Search

The major idea behind methods to avoid being trapped in a local minimum is to allow non-improving moves at one or more stages in the optimization process. This purposeful moving in the wrong direction provides a way to escape from potentially inferior local optima. A tabu search is one such method. It was originally suggested in the context of mixed integer programming by Glover (1977), but has found wide applicability in a range of combinatorial problems, including AZP (originally introduced in this context by Openshaw and Rao 1995). One aspect of the local search in AZP is that there may be a lot of cycling, in the sense that spatial units are moved from one region to another and at a later step moved back to the original region. In order to avoid this, a tabu search maintains a so-called tabu list that contains a number of (return) steps that are prohibited. With a given regional layout, all possible swaps are considered from a list of candidates from the adjoining neighbors. Each of these neighbors that is not in the current tabu list is considered for a possible swap, and the best swap is selected. If the best swap improves the overall objective function (the total within sum of squares), then it is implemented. This is the standard approach.
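The tabu list that prevents this cycling behaves like a fixed-length queue of forbidden return moves; a minimal sketch (the (unit, region) encoding of a move is an assumption for illustration, not GeoDa's internal representation):

```python
from collections import deque

class TabuList:
    def __init__(self, tabu_length):
        # Entries expire automatically after `tabu_length` further additions,
        # matching "forbidden for R iterations" with R = tabu_length.
        self._moves = deque(maxlen=tabu_length)

    def forbid_reverse(self, unit, origin_region):
        # After moving `unit` out of `origin_region`, the reverse move
        # (putting it back) becomes tabu.
        self._moves.append((unit, origin_region))

    def is_tabu(self, unit, target_region):
        return (unit, target_region) in self._moves
```

The `tabu_length` argument plays the role of the Tabu Length parameter described below: a longer list forbids returns for more iterations, at the cost of constraining the search more tightly.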
In addition, the reverse move (moving the neighbor back to its original region) is added to the tabu list. In practice, this means that this entity cannot return to its original cluster for \(R\) iterations, where \(R\) is the length of the tabu list, or the Tabu Length parameter in GeoDa. If the best swap does not improve the overall objective, then the next available tabu move is considered, a so-called aspirational move. If the latter improves on the overall objective, it is carried out. Again, the reverse move is added to the tabu list. However, if the aspirational move does not improve the objective, then the original best swap is implemented anyway and again its reverse move is also added to the tabu list. In a sense, rather than making no move, a move is made that makes the overall objective (slightly) worse. The number of such non-improving moves is limited by the ConvTabu parameter. The tabu approach can dramatically improve the quality of the end result of the search. However, a critical parameter is the length of the tabu list, or, equivalently, the number of iterations that a tabu move cannot be considered. The results can be highly sensitive to the selection of this parameter, so that some experimentation is recommended (for examples, see the detailed experiments in Duque, Anselin, and Rey 2012). In all other respects, the tabu search AZP uses the same steps as outlined for the original AZP heuristic.

11.2.3 Simulated Annealing

Another method to avoid local minima is so-called simulated annealing. This approach originated in physics, and is also known as the Metropolis algorithm, commonly used in Markov Chain Monte Carlo simulation (Metropolis et al. 1953). The idea is to introduce some randomness into the decision to accept a non-improving move, but to make such moves less and less likely as the heuristic proceeds.
If a move (i.e., a move of a spatial unit into a new region) does not improve the objective function, it can still be accepted with a probability based on the so-called Boltzmann equation. It compares the (negative) exponential of the relative change in the objective function to a 0-1 uniform random number. The exponent is divided by a factor, called the temperature, which is decreased (lowered) as the process goes on. Formally, with \(\Delta O/O\) as the relative change in the objective function and \(r\) as a draw from a uniform 0-1 random distribution, the condition of acceptance of a non-improving move \(v\) is:

\[ r < e^{\frac{-\Delta O/O}{T(v)}},\]

where \(T(v)\) is the temperature at annealing step \(v\).^53 Typically \(v\) is constrained so that only a limited number of such annealing moves are allowed per iteration. In addition, only a limited number of iterations are allowed (in GeoDa, this is controlled by the maxit parameter). The starting temperature is typically taken as \(T = 1\) and gradually reduced at each annealing step \(v\) by means of a cooling rate \(c\), such that:

\[T(v) = c \cdot T(v-1).\]

In GeoDa, the default cooling rate is set to 0.85, but typically some experimentation may be needed. Historically, Openshaw and Rao (1995) suggested values for the cooling rate for AZP between 0.8 and 0.95. The effect of the cooling rate is that \(T(v)\) becomes smaller, so that the value in the exponent term \(a = \frac{\Delta O/O}{T(v)}\) becomes larger. Since the relevant expression is \(e^{-a}\), the larger \(a\) is, the smaller will be the negative exponential. The resulting value is compared to a uniform random number \(r\), with mean 0.5. Therefore, smaller and smaller values on the right-hand side of the Boltzmann equation will result in less and less likely acceptance of non-improving moves.
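The acceptance rule and geometric cooling schedule can be written out directly; a small sketch (the 0.85 default cooling rate follows the text, while the 1% worsening move is an invented example):

```python
import math
import random

def accept_move(delta_rel, temperature, rng=random):
    """Boltzmann test: accept a non-improving move when
    r < exp(-(dO/O) / T), with r uniform on [0, 1)."""
    if delta_rel <= 0:
        return True  # improving moves are always accepted
    return rng.random() < math.exp(-delta_rel / temperature)

def temperature_at(step, cooling_rate=0.85, t0=1.0):
    """T(v) = c * T(v-1) starting from T(0) = t0, i.e. T(v) = t0 * c**v."""
    return t0 * cooling_rate ** step

# The probability of accepting a move that worsens the objective by 1%
# shrinks as the temperature cools:
p_early = math.exp(-0.01 / temperature_at(1))
p_late = math.exp(-0.01 / temperature_at(20))
```

With the default cooling rate, `p_late` is well below `p_early`, which is how the heuristic gradually stops accepting wrong-direction moves as the search proceeds.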
In AZP, the simulated annealing approach is applied to the evaluation step of the neighboring units, i.e., whether or not the move of a spatial unit from its origin region to the region under consideration will improve the objective. As for the tabu search, the simulated annealing logic only pertains to selecting a neighbor swap. Otherwise, the heuristic is not affected.

11.2.4 ARiSeL

The ARiSeL approach, which stands for automatic regionalization with initial seed location, is an alternative way to select the initial feasible solution. In the original AZP formulation, this initial solution is based on a random choice of \(p\) seeds, and the initial feasible regions are grown around these seeds by adding the nearest neighbors. It turns out that the result of AZP is highly sensitive to this starting point. Duque and Church proposed the ARiSeL alternative, based on seeds obtained from a Kmeans++ procedure (see Section 6.2.3). This yields better starting points for growing a whole collection of initial feasible regions. Then the best such solution is chosen as the basis for a tabu search or other search procedure.

11.2.5 Using the Outcome from Another Cluster Routine as the Initial Feasible Region

Rather than using a heuristic to construct a set of initial feasible regions, the outcome of another spatially constrained clustering algorithm could be used. This is particularly appropriate for hierarchical methods like SCHC, SKATER and REDCAP, where observations cannot be moved once they are assigned to a (sub-)branch of the minimum spanning tree. GeoDa allows the allocation that resulted from other methods (like spatially constrained hierarchical methods) to be used as the starting point for AZP. This is an alternative to the random generation of a starting point. An alternative perspective on this approach is to view it as a way to improve the results of the hierarchical methods, where, as mentioned, observations can become trapped in a branch of the dendrogram.
The flexible swapping inherent to AZP allows for the search for potential improvements at the margin. In practice, this particular application of AZP has almost always resulted in a better solution, where observations at the edge of previous regions are swapped to improve the objective function.

11.2.6 Implementation

The AZP algorithm is invoked from the drop down list associated with the cluster toolbar icon, as the first item in the subset highlighted in Figure 11.1. It can also be selected from the menu as Clusters > AZP. The AZP Settings interface takes the familiar form, the same as for the other spatially constrained cluster methods. In addition to the variables and the spatial weights, the number of clusters needs to be specified, and the method of interest selected: AZP (the default local search), AZP-Tabu Search, or AZP-Simulated Annealing. As before, there is also an option to set a minimum bound. Two important choices are the search method and the initialization. The latter includes the option to specify a feasible initial region and side-step the customary random initialization. These options are discussed in turn. The other options are the same as before and are not further considered. The same six variables are used as in the previous chapters, and the number of regions is set to 12 in order to obtain comparable results.

11.2.7 Search Options

The search options are selected from the Method drop down list. Each has its own set of parameters. To facilitate comparison, recall that the BSS/TSS ratio for K-Means was 0.682, 0.4641 for SCHC, 0.4325 for SKATER, and 0.4627 for REDCAP.

11.2.7.1 Local Search

The default option is the local search, invoked as AZP. It does not have any special parameter settings. The initialization options are kept to the default. The resulting cluster map and characteristics are given in Figure 11.10. The clusters are better balanced in size than for the hierarchical solutions, and there is only one singleton.
However, there remain six clusters with four or fewer observations. The spatial layout is somewhat different from the results obtained so far, although some broad patterns are similar. The characteristics on the right indicate the method (AZP) and the Initial value of objective function as 751.889. This is the WSS, which is reduced to 689.208 in the Final value. The usual cluster centers are reported, as well as the cluster-specific WSS. The final BSS/TSS ratio is 0.3723, the worst result so far for the spatially constrained methods. As mentioned, the default settings almost never provide a satisfactory result for AZP, and some experimenting is necessary.

11.2.7.2 Tabu search

AZP-Tabu Search is the second item in the Method drop down list. The default parameters are a Tabu Length of 10 and the ConvTabu value, i.e., the number of non-improving moves allowed. Initially the latter is left blank, since it is computed internally as the maximum of 10 and \(n / p\), which yields 15 in the example. With the default settings, the final value of the objective becomes 689.083 (the initial value is the same as before, since this is not affected by the search method), yielding a BSS/TSS ratio of 0.3724 (not shown). Some experimenting can improve this to 0.4004 by using a Tabu Length of 50 with 25 non-improving moves. This is better than the default AZP, but still worse than the hierarchical results. The cluster map and detailed cluster characteristics are shown in Figure 11.11. The final value of the WSS is 658.326. The spatial layout is again different, with one singleton and six clusters with five or fewer observations. However, closer examination reveals several commonalities with previous results.

11.2.7.3 Simulated annealing

The third option in the Method drop down list is AZP-Simulated Annealing. This method is controlled by two important parameters: Cooling Rate and Maxit.
The cooling rate is the rate at which the annealing temperature is allowed to decrease, with a default value of 0.85. Maxit sets the number of iterations allowed for each swap, with a default value of 1. For these default settings, a final BSS/TSS ratio of 0.4146 is obtained, an improvement over the previous results (the final WSS is 642.753). However, the outcome of the simulated annealing approach is very sensitive to the two parameters. Some experimentation reveals a BSS/TSS ratio of 0.4225 for a cooling rate of 0.85 with maxit=5, and a ratio of 0.4368 for a cooling rate of 0.8 with maxit=5. The latter result is depicted in Figure 11.12. The final ratio is now in the same range as the results for the hierarchical methods. The cluster map contains a singleton (in the higher GDP area focused around the city of Fortaleza), but only two other clusters have four or fewer observations. The overall result is much better balanced than before. However, there are also some strange connections due to common vertices obtained with the queen contiguity criterion. For example, Cluster 1 and Cluster 5 seem to cross to the east of Crateús, and there is a strange connection between Cluster 6 and Cluster 2 south of Horizonte. This illustrates the potential sensitivity of the results to peculiarities of the spatial weights.

11.2.8 Initialization Options

The AZP algorithm is very sensitive to the selection of the initial feasible solution. One easy approach is to assess whether a different random seed might yield a better solution. This is readily accomplished by means of the Use Specified Seed check box in the AZP Settings dialog. An alternative is to use the Kmeans++ logic from ARiSeL and to specify the initial regions explicitly.

11.2.8.1 ARiSeL

The ARiSeL option is selected by means of a check box in the dialog. There is also the option to change the number of re-runs (the default is 10), which provides additional flexibility.
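The K-means++ seeding behind each ARiSeL re-run can be sketched in a few lines; this one-dimensional version is purely illustrative (GeoDa works on the full multivariate attribute vectors, and the selection of the best initial solution among the re-runs is omitted):

```python
import random

def kmeanspp_seeds(values, p, rng=None):
    """Pick p seed indices from 1-D attribute values, k-means++ style:
    each new seed is drawn with probability proportional to its squared
    distance from the nearest already-chosen seed."""
    rng = rng or random.Random(0)
    seeds = [rng.randrange(len(values))]  # first seed: uniform at random
    while len(seeds) < p:
        # Squared distance from each unit to its nearest current seed.
        d2 = [min((values[i] - values[s]) ** 2 for s in seeds)
              for i in range(len(values))]
        total = sum(d2)
        if total == 0:  # fewer than p distinct values: reuse an index
            seeds.append(seeds[-1])
            continue
        r = rng.random() * total
        cum = 0.0
        for i, w in enumerate(d2):
            cum += w
            if r < cum:
                seeds.append(i)
                break
    return seeds

# Three well-separated value groups; seeds tend to land one per group.
vals = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
```

Because already-chosen seeds have zero squared distance to themselves, they receive zero selection weight, so the seeds spread out across the attribute range, which is what makes the resulting initial regions better starting points than purely random ones.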
With the default setting and using the best simulated annealing approach (cooling rate of 0.8 with maxit=5), this yields an improved solution with a BSS/TSS ratio of 0.4492. In addition, with the number of re-runs set to 50, the best AZP result so far is obtained (although still not quite as good as SCHC), depicted in Figure 11.13. This yields a BSS/TSS ratio of 0.4627. Note also how the Initial Value of the objective function, i.e., the total WSS of the initial feasible solution, has improved to 687.865 from the previous 751.889. However, in spite of this better initial solution, the final result is not always better. This very much also depends on the other choices for the algorithm parameters. The resulting layout shows several similarities with the cluster map in Figure 11.12, although now there are two singletons, and several more small clusters (with fewer than 10 observations).

11.2.8.2 Initial regions

A final way to potentially improve on the solution is to take a previous spatially constrained cluster assignment as the initial feasible solution. This is accomplished by checking the Initial Regions box and selecting a suitable cluster indicator variable from the drop down list. The approach is illustrated with the SCHC cluster assignment from Section 10.2.2 as the initial feasible solution. It is used in combination with an AZP simulated annealing algorithm with cooling rate 0.8 and maxit=5. The Initial Value of the objective function is now 588.391, i.e., the final value from the SCHC solution. This is even slightly better than the final value for the ARiSeL result (589.984), and by far the lowest initial value used up to this point. The outcome is shown in Figure 11.14. The final BSS/TSS ratio improves to 0.5036, with a final objective value of 545.063, the best result so far. The cluster map has three singletons and is dominated by one large region. The layout is largely the same as for the SCHC solution, with some minor adjustments at the margin.
This illustrates the importance of fine tuning the various settings for the AZP algorithm, as well as the utility of leveraging several different algorithms.

52. The origins of this method date back to a presentation at the North American Regional Science Conference in Seattle, WA, November 2004. See the description in the clusterpy documentation at http:

53. The typical application of simulated annealing is a maximization problem, in which the negative sign in the exponential of the Boltzmann equation is absent. However, since in the current case the problem is one of minimizing the total within sum of squares, the exponent becomes the negative of the relative change in the objective function.↩︎
{"url":"https://scientiairanica.sharif.edu/article_21202.html","timestamp":"2024-11-07T13:57:14Z","content_type":"text/html","content_length":"51934","record_id":"<urn:uuid:8ffa281c-72d5-4f0b-9ac0-24511de6c626>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00146.warc.gz"}
Corrigendum: Orbitals and the Interpretation of Photoelectron Spectroscopy and (e,2e) Ionization Experiments (Angewandte Chemie International Edition, (2019), 58, 36, (12332-12338), 10.1002/anie.201904609)
In this Communication on page 12336, the third sentence in Chapter 3, “Because this function has the same dimension and functional dependence as an orbital, it is often called a ‘Dyson orbital’ and is expressed as the generalized overlap^[13,39,40] displayed in Equation (2)”, should be replaced by “Because this function has the same dimension and functional dependence as an orbital, it can be approximated (by the same assumptions as used to get Koopmans’ theorem) by a properly normalized ‘Dyson orbital’, which can be expressed as the generalized overlap^[13,39,40] displayed in Equation (2)”. This correction does not change any other discussion or any of the conclusions in this manuscript, including the main conclusion that a correct theoretical treatment of experiments involving an ejected electron does not depend on what orbitals are used to describe the initial state. The authors are grateful to Vincent Ortiz, Ernest Davidson, and Joseph Murdoch for helpful correspondence.
Bibliographical note: Publisher Copyright © 2020 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
A ball with a mass of 350 g is projected vertically by a spring loaded contraption. The spring in the contraption has a spring constant of 16 (kg)/s^2 and was compressed by 7/4 m when the ball was released. How high will the ball go? | Socratic
A ball with a mass of #350 g# is projected vertically by a spring-loaded contraption. The spring in the contraption has a spring constant of #16 (kg)/s^2# and was compressed by #7/4 m# when the ball was released. How high will the ball go?
3 Answers
The height reached by the ball is $h = 7.14 m$.
The spring constant is $k = 16 kg s^{-2}$ and the compression of the spring is $x = \frac{7}{4} m$.
The potential energy stored in the spring is
$PE = \frac{1}{2} k x^2 = \frac{1}{2} \cdot 16 \cdot \left(\frac{7}{4}\right)^2 = 24.5 J$
This potential energy is converted to kinetic energy when the spring is released, and then to potential energy of the ball at the top of its flight. Let the height reached be $h$. The acceleration due to gravity is $g = 9.8 m s^{-2}$ and the mass of the ball is $m = 0.350 kg$. The potential energy of the ball is
$PE_{ball} = mgh$
$24.5 = 0.350 \cdot 9.8 \cdot h$
$h = \frac{24.5}{0.350 \cdot 9.8} = 7.14 m$
The height reached by the ball is $7.14 m$.
First we have to find the work done by the spring. Note that the spring force is not constant: it falls linearly from its maximum value at full compression to zero at release, so the work done equals the average force times the compression distance.
$F_{max} = kx = 16 \times \frac{7}{4} = 28 N$
$F_{avg} = \frac{F_{max}}{2} = 14 N$
$W = F_{avg} \times x = 14 \times \frac{7}{4} = 24.5 J$
The work-energy theorem then gives the launch velocity:
$v^2 = \frac{2W}{m} = \frac{2 \times 24.5}{0.35} = 140$
$v = 11.83 m s^{-1}$
Now this is a projectile motion question.
$0 = v^2 + 2aH$
$0 = 140 + 2 \times (-9.8) \times H$
$H = \frac{140}{2 \times 9.8} = 7.14 m$
The ball travels $7.14 m$ high.
I will assume that all of the potential energy in the spring turns into kinetic energy, and that the only thing opposing the motion of the ball is the acceleration of free fall acting in the opposite direction to its motion.
The potential energy in the spring is given by:
$E_p = 0.5 k \Delta x^2$
k = spring constant, $\Delta x$ = compression of the spring
$E_p = 0.5 \times 16 \times \left(\frac{7}{4}\right)^2 = 24.5 J$
$E_p = E_k$
$E_k = 0.5 \times m \times v^2$
$24.5 = 0.5 \times 0.35 \times v^2$
$v = 11.83 m s^{-1}$ (the initial velocity of the ball)
We know that the final velocity will be 0, as this is the point where the ball starts falling back down due to the acceleration of free fall acting in the opposite direction. Using the kinematic equation:
$v = u + at$
v = final velocity (0), u = initial velocity, a = acceleration (–9.81, as gravity opposes the direction of motion)
$0 = 11.83 - 9.81t$
$t = 1.206 s$
So it takes about 1.2 seconds for the ball to reach its maximum height. Now using another kinematic equation to find the distance travelled:
$s = \frac{(v + u)t}{2}$
s = distance travelled, v = final velocity (0), u = initial velocity (11.83)
$s = \frac{1.206 \times 11.83}{2}$
$s = 7.13 m$
The small difference from the 7.14 m of the energy-based answer is rounding from using g = 9.81 rather than 9.8.
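The energy-conservation calculation can also be checked numerically; a minimal Python sketch using the values from the question:

```python
# Maximum height of a spring-launched ball via energy conservation:
# the stored spring energy (1/2) k x^2 equals m g h at the peak.
k = 16.0       # spring constant, kg/s^2 (i.e. N/m)
x = 7.0 / 4.0  # spring compression, m
m = 0.350      # ball mass, kg
g = 9.8        # gravitational acceleration, m/s^2

E = 0.5 * k * x**2   # energy stored in the spring, J
h = E / (m * g)      # height at which all energy is gravitational PE, m

print(f"stored energy = {E} J")      # 24.5 J
print(f"max height = {h:.2f} m")     # 7.14 m
```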
On group topologies determined by families of sets Let $G$ be an abelian group, and $F$ a downward directed family of subsets of $G.$ In \cite{P+Z}, I. Protasov and E. Zelenyuk describe the finest group topology $\mathcal{T}$ on $G$ under which $F$ converges to $0;$ in particular, their description yields a criterion for $\mathcal{T}$ to be Hausdorff. They then show that if $F$ is the filter of cofinite subsets of a countable subset $X\subseteq G$ (the Fr\'{e}chet filter on $X),$ there is a simpler criterion: $\mathcal{T}$ is Hausdorff if and only if for every $g\in G-\{0\}$ and positive integer $n,$ there is an $S\in F$ such that $g$ does not lie in the $n$-fold sum $n(S\cup\{0\}\cup -S).$ In this note, their proof is adapted to a larger class of families $F.$ In particular, if $X$ is any infinite subset of $G,$ $\kappa$ any regular infinite cardinal $\leq\mathrm{card}(X),$ and $F$ the set of complements in $X$ of subsets of cardinality less than $\kappa,$ then the above criterion holds. We also give some negative examples, including a countable downward directed set $F$ (not of the above sort) of subsets of $\mathbb{Z}$ which satisfies the ''$g\notin n(S\cup\{0\}\cup -S)$'' condition but does not induce a Hausdorff group topology. 1. G. M. Bergman, On monoids, 2-firs and semifirs, Semigroup Forum, 89 (2014), 293–335, http://arxiv.org/abs/1309.0564. 2. P.M. Cohn, Universal algebra. – Second edition, Mathematics and its Applications, 6. D. Reidel Publishing Co., 1981. – xv+412 p. 3. P.C. Eklof, A.H. Mekler, Almost free modules. Set-theoretic methods. – Revised edition. North-Holland Mathematical Library, 65, 2002. 4. F.Q. Gouvea, p-adic numbers, an introduction. – Springer, Universitext, 1993. – vi+282 p. MR1251959, 2nd ed. 1997, vi+298 pp. 5. P.J. Higgins, Introduction to topological groups. – London Mathematical Society Lecture Note Series, No. 15. Cambridge University Press, London-New York, 1974. – v+109 p. 6. I. Protasov, E.
Zelenyuk, Topologies on groups determined by sequences. – Mathematical Studies Monograph Series, 4, VNTL Publishers, L’viv, 1999. – 111 p. 7. Ye.G. Zelenyuk, Ultrafilters and topologies on groups. – de Gruyter Expositions in Mathematics, V.50, 2011. – viii+219 p.
[Seminar 2021.04.29] A realization of type A finite $W$-(super)algebra in terms of (super) Yangian
Date: 29 Apr. (Thu.) 11:00 ~ 13:00 (KST)
Place: Zoom, Meeting ID
Title: A realization of type A finite $W$-(super)algebra in terms of (super) Yangian
Speaker: Yung-Ning Peng (National Central University, Taiwan)
Let $e\in\mathfrak{g}=\mathfrak{gl}_N$ be a nilpotent element. Associated to $e$, one defines an object called {\em finite $W$-algebra}, denoted by $\mathcal{W}_e$. It can be regarded as a generalization of $U(\mathfrak{g})$, the universal enveloping algebra. However, its algebraic structure is much more complicated than that of $U(\mathfrak{g})$ and hence difficult to study except for some special choices of $e$. In the first part of this talk, we will explain how to obtain a realization of $\mathcal{W}_e$ in terms of the Yangian $Y_n=Y(\mathfrak{gl}_n)$ associated to $\mathfrak{gl}_n$, where $n$ denotes the number of Jordan blocks of the nilpotent $e\in\mathfrak{gl}_N$. The remarkable connection between finite $W$-algebras and Yangians was first observed by Ragoucy-Sorba for special choices of $e$. The general case (that is, for an arbitrary $e$) was established by Brundan-Kleshchev. In particular, a certain subalgebra of $Y_n$, called the {\em shifted Yangian}, was explicitly defined. Some necessary background knowledge about finite $W$-algebras and shifted Yangians will be recalled. In the second part of this talk, we will explain the extension of the aforementioned connection to the case of the general linear Lie superalgebra. With some mild technical modifications, the finite $W$-superalgebra $\mathcal{W}_e$ can also be defined for a given nilpotent element $e\in(\mathfrak{gl}_{M|N})_{\overline{0}}$. On the other hand, the super Yangian was defined and studied by Nazarov. Therefore, it is natural to seek a super-analogue of the aforementioned connection.
For some special choices of $e$, such a connection was established by Briot-Ragoucy for the rectangular nilpotent case and by Brown-Brundan-Goodwin for the principal nilpotent case. However, a universal treatment for a general $e$ was still missing in the literature until our recent result. We will explain some difficulties in the Lie superalgebra setting and how to overcome them by making use of the notion of 01-sequence.
Indexing a variable with an initial value or constraint | AIMMS Community
Hi everyone. I'm starting with AIMMS and I have two questions.
1. How can I give an indexed variable an initial value or a bound constraint? For example, X(i) is a variable over a set (of, say, 4 elements) with index i, and I need X(1) >= 50 and X(2) <= 250. My declaration so far:
Variable X {
    IndexDomain: i;
    Range: nonnegative;
}
2. How can I implement or call Benders decomposition modules, stochastic programming, etc. inside the MainExecution procedure?
Thank you for your support.
Finding Equivalent Fractions Using Multiplication and Division? | TIRLA ACADEMY
What is an equivalent fraction with an example?
Equivalent fractions are two or more fractions that may have different numerators and denominators but reduce to the same value in lowest terms. Examples of equivalent fractions are 1/2, 2/4, 3/6, etc.
How to find equivalent fractions by multiplication?
To find an equivalent fraction by multiplication, we multiply the numerator and the denominator by the same number. For example, multiplying the numerator and denominator of 1/3 by 2, 3, and 4 gives the equivalent fractions 2/6, 3/9, and 4/12.
Other examples of equivalent fractions:
Equivalent fractions of 2/3 are 4/6, 6/9, 8/12, etc.
Equivalent fractions of 2/5 are 4/10, 6/15, 8/20, etc.
Equivalent fractions of 3/5 are 6/10, 9/15, 12/20, etc.
Equivalent fractions of 3/4 are 6/8, 9/12, 12/16, etc.
Equivalent fractions of 1/4 are 2/8, 3/12, 4/16, etc.
Equivalent fractions of 2/4 are 4/8, 6/12, etc.
How to find equivalent fractions by division?
To find an equivalent fraction by division, we divide the numerator and denominator by the same number. Example: to get an equivalent fraction of 2/4, we divide its numerator and denominator by 2 and get 1/2.
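The multiplication and division rules above can be sketched in a few lines of Python (the function names are illustrative, not from any particular library):

```python
from fractions import Fraction
from math import gcd

def equivalents_by_multiplication(num, den, count=3):
    """Scale numerator and denominator by 2, 3, 4, ... to build equivalents."""
    return [(num * k, den * k) for k in range(2, 2 + count)]

def reduce_by_division(num, den):
    """Divide numerator and denominator by their GCD to reach lowest terms."""
    g = gcd(num, den)
    return (num // g, den // g)

print(equivalents_by_multiplication(1, 3))  # [(2, 6), (3, 9), (4, 12)]
print(reduce_by_division(2, 4))             # (1, 2)

# Two fractions are equivalent exactly when they reduce to the same value:
print(Fraction(3, 5) == Fraction(12, 20))   # True
```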
How is “average” calculated for our metrics?
Density uses the time-weighted averages method to calculate the “average” for our key metrics. Because the average is an aggregate metric, it is always determined with respect to a period of time (timestamp range), not a point in time (timestamp). For any period of time, the average occupancy is calculated from the series of instantaneous count changes in that period of time (including the “initial count” at the start of the period). Each count is multiplied by the number of (micro|milli) seconds until either the next count change or the end of the period. These are then summed, and the total is divided by the total number of (micro|milli) seconds in the period:
average = [ c0·(t1 − t0) + c1·(t2 − t1) + ⋯ + cn·(tn+1 − tn) ] / (tn+1 − t0)
where:
• n is the number of instantaneous count changes in the period
• ci is the count after count change i
• ti is the time at instantaneous count change i
• c0 is the count at the start of the period
• t0 is the start of the period
• tn+1 is the end of the period
Example: The average occupancy is calculated as the number of people present each second, averaged over the total number of seconds. Let’s look at an example of how the average occupancy is calculated for a 15-minute period with a total of 5 people arriving at different times. Two people entered the conference room during the first minute, another two entered at minute 6, and 1 last person entered at minute 11. The average is calculated as the sum of (people*sec)/sum of (# of seconds). In this case, (600+1200+1500)/(300+300+300) = 3.7
For more information, please see Density Metrics and How They Are Calculated
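The definition above translates directly into code; a minimal sketch (the function is illustrative, not part of the Density API), reproducing the 15-minute example where counts of 2, 4, and 5 people are each held for 300 seconds of the 900-second window:

```python
def time_weighted_average(counts, start, end):
    """Time-weighted average of a step function over [start, end] (seconds).

    counts: sorted list of (timestamp, count_after_change); the first entry
    gives the count at `start`. Each count is weighted by how long it holds,
    i.e. until the next count change or the end of the period.
    """
    total = 0.0
    for i, (t, c) in enumerate(counts):
        t_next = counts[i + 1][0] if i + 1 < len(counts) else end
        total += c * (t_next - t)  # count held constant until next change
    return total / (end - start)

avg = time_weighted_average([(0, 2), (300, 4), (600, 5)], start=0, end=900)
print(round(avg, 1))  # 3.7, matching (600 + 1200 + 1500) / 900
```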
Linear Algebra-Course - Aalborg University
Recommended prerequisite for participation in the module
The module builds on knowledge from the module Calculus.
Content, progress and pedagogy of the module
Learning objectives
• Have knowledge about definitions, results and techniques in the theory of systems of linear equations
• Be able to demonstrate insight into linear transformations and their connection to matrices
• Have obtained knowledge about the computer program MATLAB, and its application related to linear algebra
• Have acquired knowledge about simple matrix operations
• Have knowledge about invertible matrices and invertible linear transformations
• Have knowledge about the vector space R^n and its subspaces
• Have knowledge about linearly dependent vectors and linearly independent vectors, and the dimension and basis of subspaces
• Have knowledge about the determinant of a matrix
• Have knowledge about eigenvalues and eigenvectors of matrices and their application
• Have knowledge about projections and orthonormal bases
• Have knowledge about first-order differential equations, and systems of linear differential equations
• Be able to apply theory and calculation techniques for systems of linear equations to determine solvability and determine complete solutions and their structure
• Be able to represent systems of linear equations by means of matrix equations, and vice versa
• Be able to determine and apply the reduced echelon form of a matrix
• Be able to use elementary matrices in connection with Gauss elimination and inversion of matrices
• Be able to determine linear dependence or linear independence of sets of few vectors
• Be able to determine dimension of and basis of subspaces
• Be able to determine the matrix for a given linear transformation, and vice versa
• Be able to solve simple matrix equations
• Be able to calculate the inverse of small matrices
• Be able to determine the dimension of and basis for kernel and column spaces
• Be able to calculate determinants and apply the result of this calculation
• Be able to calculate eigenvalues and eigenvectors for simple matrices
• Be able to determine whether a matrix is diagonalizable, and if so, be able to diagonalize a simple matrix
• Be able to calculate the orthogonal projection onto a subspace of R^n
• Be able to solve separable and linear first order differential equations, in general, and with initial conditions
• Be able to develop and strengthen knowledge, comprehension and application of mathematical theories and methods in other subject areas
• Given certain pre-conditions, be able to make mathematical deductions and arguments based on concepts from linear algebra
Type of instruction
Lectures with exercises.
Extent and expected workload
Since it is a 5 ECTS course, the workload is expected to be 150 hours for the student.
Name of exam: Linear Algebra
Type of exam: Written or oral exam
ECTS: 5
Assessment: 7-point grading scale
Type of grading: Internal examination
Criteria of assessment
The criteria of assessment are stated in the Examination Policies and Procedures
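Several of the objectives above (solving a linear system, determinants and invertibility, eigenvalues, diagonalization) can be illustrated in a few lines of NumPy, a Python counterpart to the MATLAB work mentioned in the module; the matrices below are made-up examples:

```python
import numpy as np

# Solve a system of linear equations A x = b (unique solution when det(A) != 0)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)
print(x)                 # ≈ [0.8, 1.4]

# Determinant: nonzero, so A is invertible
print(np.linalg.det(A))  # ≈ 5.0

# Eigenvalues/eigenvectors; A is symmetric, hence diagonalizable: A = P D P^-1
w, P = np.linalg.eig(A)
D = np.diag(w)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```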
SBOA270 Circuit design | Texas Instruments TI.com.cn
Design Goals
Input: V[iMin] = –7V, V[iMax] = 7V
Output: V[oMin] = –14V, V[oMax] = 14V
Freq.: f = 3kHz
Supply: V[cc] = 15V, V[ee] = –15V
Design Description
This design inverts the input signal, V[i], and applies a signal gain of –2V/V. The input signal typically comes from a low-impedance source because the input impedance of this circuit is determined by the input resistor, R[1]. The common-mode voltage of an inverting amplifier is equal to the voltage connected to the non-inverting node, which is ground in this design.
Design Notes
1. Use the op amp in a linear operating region. Linear output swing is usually specified under the A[OL] test conditions. The common-mode voltage in this circuit does not vary with input voltage.
2. The input impedance is determined by the input resistor. Make sure this value is large when compared to the source output impedance.
3. Using high value resistors can degrade the phase margin of the circuit and introduce additional noise in the circuit.
4. Avoid placing capacitive loads directly on the output of the amplifier to minimize stability issues.
5. Small-signal bandwidth is determined by the noise gain (or non-inverting gain) and op amp gain-bandwidth product (GBP). Additional filtering can be accomplished by adding a capacitor in parallel to R[2]. Adding a capacitor in parallel with R[2] improves stability of the circuit if high value resistors are used.
6. Large signal performance can be limited by slew rate. Therefore, check the maximum output swing versus frequency plot in the data sheet to minimize slew-induced distortion.
7. For more information on op amp linear operating region, stability, slew-induced distortion, capacitive load drive, driving ADCs, and bandwidth, see the Design References section.
Design Steps
The transfer function of this circuit follows:
Equation 1. ${V}_{o}={V}_{i}×\left(-\frac{{R}_{2}}{{R}_{1}}\right)$
1. Determine the starting value of R[1].
The relative size of R[1] to the signal source impedance affects the gain error. Assuming the impedance from the signal source is low (for example, 100Ω), set R[1] = 10kΩ for 1% gain error. Equation 1. ${R}_{1}=10\mathrm{k\Omega }$ 2. Calculate the gain required for the circuit. Since this is an inverting amplifier, use V[iMin] and V[oMax] for the calculation. Equation 1. $G=\frac{{V}_{\mathrm{oMax}}}{{V}_{\mathrm{iMin}}}=\frac{14V}{-7V}=-2\frac{V}{V}$ 3. Calculate R[2] for a desired signal gain of –2 V/V. Equation 1. $G=-\frac{{R}_{2}}{{R}_{1}}\to {R}_{2}=-G×{R}_{1}=-\left(-2\frac{V}{V}\right)×10\mathrm{k\Omega }=20\mathrm{k\Omega }$ 4. Calculate the small signal circuit bandwidth to ensure it meets the 3-kHz requirement. Be sure to use the noise gain, or non-inverting gain, of the circuit. Equation 1. ${\mathrm{GBP}}_{\mathrm{TLV}170}=1.2\mathrm{MHz}$ Equation 1. $\mathrm{NG}=\left(1+\frac{{R}_{2}}{{R}_{1}}\right)=3\frac{V}{V}$ Equation 1. $\mathrm{BW}=\frac{\mathrm{GBP}}{\mathrm{NG}}=\frac{1.2\mathrm{MHz}}{3V/V}=400\mathrm{kHz}$ 5. Calculate the minimum slew rate required to minimize slew-induced distortion. Equation 1. ${V}_{p}=\frac{\mathrm{SR}}{2×\pi ×f}\to \mathrm{SR}>2×\pi ×f×{V}_{p}$ Equation 1. $\mathrm{SR}>2×\pi ×3\mathrm{kHz}×14V=263.89\frac{\mathrm{kV}}{s}=0.26\frac{V}{\mathrm{\mu s}}$ □ SR[TLV170] = 0.4V/µs, therefore, it meets this requirement. 6. To avoid stability issues, ensure that the zero created by the gain setting resistors and input capacitance of the device is greater than the bandwidth of the circuit. Equation 1. $\frac{1}{2×\pi ×\left({C}_{\mathrm{cm}}+{C}_{\mathrm{diff}}\right)×\left({R}_{2}\parallel {R}_{1}\right)}>\frac{\mathrm{GBP}}{\mathrm{NG}}$ Equation 1. $\frac{1}{2×\pi ×\left(3\mathrm{pF}+3\mathrm{pF}\right)×\frac{20\mathrm{k\Omega }×10\mathrm{k\Omega }}{20\mathrm{k\Omega }+10\mathrm{k\Omega }}}>\frac{1.2\mathrm{MHz}}{3V/V}$ Equation 1. 
$3.98\mathrm{MHz}>400\mathrm{kHz}$
• C[cm] and C[diff] are the common-mode and differential input capacitance of the TLV170, respectively.
• Since the zero frequency is greater than the bandwidth of the circuit, this requirement is met.
Design Simulations
DC Simulation Results
AC Simulation Results
The bandwidth of the circuit depends on the noise gain, which is 3V/V. The bandwidth is determined by looking at the –3-dB point, which is located at 3dB given a signal gain of 6dB. The simulation sufficiently correlates with the calculated value of 400kHz.
Transient Simulation Results
The output is double the magnitude of the input and inverted.
1. SPICE Simulation File SBOC492
Design Featured Op Amp
V[ss]: ±18 V (36 V)
V[inCM]: (V[ee]–0.1 V) to (V[cc]–2 V)
V[out]: Rail-to-rail
V[os]: 0.5 mV
I[q]: 125 µA
I[b]: 10 pA
UGBW: 1.2 MHz
SR: 0.4 V/µs
#Channels: 1, 2, 4
Design Alternate Op Amp
V[ss]: 2.5 V to 5.5 V
V[inCM]: (V[ee]–0.1 V) to (V[cc]–1 V)
V[out]: Rail-to-rail
V[os]: 1 mV
I[q]: 70 µA
I[b]: 10 pA
UGBW: 1 MHz
SR: 1.7 V/µs
#Channels: 1 (LMV321A), 2 (LMV358A), 4 (LMV324A)
Revision | Date | Change
C | December 2020 | Updated result for Design Step 6.
B | March 2019 | Changed LMV358 to LMV358A in the Design Alternate Op Amp section.
A | January 2019 | Downstyle title. Added link to circuit cookbook landing page.
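The numbered design steps can be checked with a short script; a sketch (not part of the TI documentation) using the stated component and data-sheet values:

```python
import math

# Inverting amplifier design check (values from the design goals above)
Vi_min, Vo_max = -7.0, 14.0
R1 = 10e3                      # chosen input resistor, ohms

G = Vo_max / Vi_min            # signal gain: -2 V/V
R2 = -G * R1                   # feedback resistor: 20 kOhm

GBP = 1.2e6                    # TLV170 gain-bandwidth product, Hz
NG = 1 + R2 / R1               # noise gain: 3 V/V
BW = GBP / NG                  # small-signal bandwidth: 400 kHz

f, Vp = 3e3, 14.0
SR_min = 2 * math.pi * f * Vp  # minimum slew rate, V/s (~0.26 V/us)

C_in = 3e-12 + 3e-12           # Ccm + Cdiff
f_zero = 1 / (2 * math.pi * C_in * (R1 * R2 / (R1 + R2)))  # ~3.98 MHz

print(G, R2, BW, SR_min / 1e6, f_zero / 1e6)
```

Running this confirms that the 400-kHz bandwidth exceeds the 3-kHz goal, that the TLV170's 0.4-V/µs slew rate exceeds the ~0.26-V/µs minimum, and that the input-capacitance zero sits well above the circuit bandwidth.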
An Introduction to Complex Systems Further Reading | Statistical Image Processing Quick links to chapter one two three four five six seven eight nine ten eleven twelve thirteen Quick links to Appendix A Appendix B Chapter 1: Introduction 1. C. Martenson. The Crash Course: The Unsustainable Future of our Economy, Energy, and Environment. Wiley, 2011. 2. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. 3. A. Weisman. The World Without Us. Picador, 2007. 4. R. Wright. A Short History of Progress. House of Anansi Press, 2004. Chapter 2: Global Warming Further reading links: 1. K. Emanual. What We Know About Climate Change. MIT Press, 2012. 2. L. Kump et al. The Earth System. Prentice Hall, 2010. 3. F. Mackenzie. Our Changing Planet. Prentice Hall, 2011. 4. B. McKibben. The Global Warming Reader. OR Books, 2011. 5. R. Primack. Walden Warming. University of Chicago Press, 2014. 6. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. Chapter 3: Systems Theory Further reading links: Systems theory: System, Systems theory, Systems of Systems, Steady state Thermodynamics: Thermodynamics, Entropy, Energy, Landauer’s principle Global Flows: Water cycle, Nitrogen cycle, Carbon cycle, Earth’s Energy Budget, Vitousek et al. Human Alteration of the Global Nitrogen Cycle, Issues in Ecology, 1997. Examples: Energy return on investment (EROEI), Environmental economics, Full-cost accounting, Environmental accounting, Externalities, Maximum power principle, Technological singularity 1. Special issue on the singularity. IEEE Spectrum 45(6), 2008. 2. J. Diamond. Collapse: How Societies Choose to Fail or Succeed. Penguin, 2011. 3. A. Ghosh. Dynamic Systems for Everyone: Understanding How Our World Works. Springer, 2015. 4. J. Greer. How Civilizations Fall: A Theory of Catabolic Collapse. Unpublished, 2005. 5. R. Heinberg. Searching for a Miracle: Net Energy Limits & the Fate of Industrial Society. 
Post Carbon Institute, 2009. 6. C. Hidalgo. Why Information Grows: The Evolution of Order, from Atoms to Economies. Basic Books, 2015. 7. T. Homer-Dixon. The Upside of Down: Catastrophe, Creativity, and the Renewal of Civilization. Knopf, 2006. 8. D. Meadows. Thinking in Systems: A Primer. Chelsea Green, 2008. 9. D. Meadows, J. Randers, and D. Meadows. Limits to Growth: The 30-Year Update. Chelsea Green, 2004. 10. H. Odum. Environmental Accounting: Emergy and Environmental Decision Making. Wiley, 1996. 11. H. Odum. Environment, Power, and Society for the Twenty-First Century. Columbia, 2007. 12. J. Salatin. Salad Bar Beef. Polyface, 1996. 13. S. Schaltegger and R. Burritt. Contemporary Environmental Accounting: Issues, Concepts and Practice. Greenleaf Publishing, 2000. 14. E. Schumacher. Small is Beautiful: A Study of Economics as if People Mattered. Vintage Digital, 2011. 15. J. Smillie and G. Gershuny. The Soul of Soil. Chelsea Green, 1999. 16. K. Stowe. An Introduction to Thermodynamics and Statistical Mechanics. Cambridge, 2007. 17. J. Tainter. The Collapse of Complex Societies. Cambridge, 1990. 18. M. Wackernagel and W. Rees. Our Ecological Footprint. New Society, 1995. 19. J. Weaver. Root Development of Field Crops PDF. McGraw Hill, 1926. 20. R. Wright. A Short History of Progress. House of Anansi Press, 2004. Chapter 4: Dynamic Systems Further reading links: Analysis: Dynamical system, Time series, Fourier transform, Wavelet transform, Principal components Statistics: Correlation and dependence, Correlation coefficient, Correlation and causality, Stationary process, Pearson’s chi-squared test 1. G. Box, G. Jenkins, and G. Reinsel. Time series analysis: forecasting and control. Wiley, 4th edition, 2008. 2. C. Brase and C. Brase. Understanding Basic Statistics. Cengage Learning, 2012. 3. P. Brockwell and R. Davis. Introduction to Time Series and Forecasting. Springer, 2010. 4. R. Devaney. A First Course in Dynamical Systems. Westview Press, 1992. 5. R. 
Duda, R. Hart, and D. Stork. Pattern Classification. Wiley, 2000. 6. A. Ghosh. Dynamic Systems for Everyone: Understanding How Our World Works. Springer, 2015. 7. M. Kiemele. Basic Statistics. Air Academy, 1997. 8. D. Meadows, J. Randers, and D. Meadows. Limits to Growth: The 30-Year Update. Chelsea Green, 2004. 9. A. Oppenheim, A. Willsky, and H. Nawab. Signals & Systems. Prentice Hall, 1997. 10. A. Papoulis and S. Pillai. Probability, Random Variables, and Stochastic Processes. McGraw Hill, 2002. 11. R. Robinson. An Introduction to Dynamical Systems. Prentice Hall, 2004. 12. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. 13. E. Scheinerman. Invitation to Dynamical Systems PDF. Dover, 2013. 14. P. Turchin. Complex Population Dynamics. Princeton, 2003. 15. P. Turchin. Historical Dynamics: Why States Rise and Fall. Princeton, 2003. 16. T. Urdan. Statistics in Plain English. Routledge, 2010. Chapter 5: Linear Systems Further reading links: Linear system, Linear time invariant systems theory, Eigendecomposition, Jordan normal form, Control theory 1. B. Friedland. Control System Design: An Introduction to State-Space Methods. Dover, 2005. 2. Z. Gajic. Linear Dynamic Systems and Signals. Prentice Hall, 2003. 3. L. Gunderson and C. Holling. Panarchy: Understanding Transformations in Human and Natural Systems. Island Press, 2012. 4. C. Martenson. The Crash Course: The Unsustainable Future of our Economy, Energy, and Environment. Wiley, 2011. 5. D. Meadows, J. Randers, and D. Meadows. Limits to Growth: The 30-Year Update. Chelsea Green, 2004. 6. M. Neubert and H. Caswell. Alternatives to resilience for measuring the responses of ecological systems to perturbations. Ecology, 78(3), 1997. 7. N. Nise. Control Systems Engineering. Wiley, 2015. 8. A. Oppenheim, A. Willsky, and H. Nawab. Signals & Systems. Prentice Hall, 1997. 9. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. 10. E. 
Scheinerman. Invitation to Dynamical Systems. Dover, 2013. 11. L. Trefethen and M. Embree. Spectra and Pseudospectra: The Behaviour of Nonnormal Matrices and Operators. Princeton, 2005. 12. B. Walker and D. Salt. Resilience Thinking: Sustaining Ecosystems and People in a Changing World. Island Press, 2006. Chapter 6: Nonlinear Dynamic Systems — Uncoupled Further reading links: Nonlinear systems: Nonlinear system, Bifurcation theory, Hysteresis, Catastrophe theory Population: Population dynamics, Carrying capacity, Allee effect, Predator-prey model Climate systems: Great oxidation, Snowball earth, Greenhouse and icehouse earth, Glacial period, Milankovitch cycles, Thermohaline circulation, Little Ice Age, Medieval Warm Period Other Papers: Varshney et al., The kinematics of falling maple seeds, Nonlinearity (25), 2012 Andersen et al., Analysis of transitions between fluttering, Fluid Mech. (541), 2005 H. Petroski, Engineering: Designed to Fail, American Scientist (85) #5, 1997 1. K. Alligood, T. Sauer, and J. Yorke. Chaos: An Introduction to Dynamical Systems. Springer, 2000. 2. V. Arnol’d. Catastrophe Theory. Springer, 1992. 3. G. Deffuant and N. Gilbert (eds.). Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society. Springer, 2011. 4. L. Gunderson and C. Holling. Panarchy: Understanding Transformations in Human and Natural Systems. Island Press, 2012. 5. R. Hilborn. Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers. Oxford, 2000. 6. P. Loring. The most resilient show on earth. Ecology and Society, 12, 2007. 7. D. Meadows, J. Randers, and D. Meadows. Limits to Growth: The 30-Year Update. Chelsea Green, 2004. 8. J. Rockström, W. Steffen, et al. Planetary boundaries: exploring the safe operating space for humanity. Ecology and Society, 14, 2009. 9. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. 10. M. Scheffer, S. Carpenter, V.
Dakos, and E. van Nes. Generic Indicators of Ecological Resilience: Inferring the Chance of a Critical Transition. Annual Reviews, 2015. 11. D. Sornette. Why Stock Markets Crash. Princeton, 2004. 12. S. Strogatz. Nonlinear Dynamics and Chaos. Westview Press, 2014. 13. P. Turchin. Complex Population Dynamics. Princeton, 2003. 14. B. Walker and D. Salt. Resilience Thinking: Sustaining Ecosystems and People in a Changing World. Island Press, 2006. Chapter 7: Nonlinear Dynamic Systems — Coupled Further reading links: Nonlinear systems: Nonlinear system, Hopf bifurcation, Limit cycle, Chaos theory Economic cycles: Goodwin model, Business cycle, Elliott wave, Kondratiev wave Tragedy of the commons: Tragedy of the Commons, Prisoner’s dilemma, Nash equilibrium Hardin, The Tragedy of the Commons, Science (162), 1968 Ostrom, Governing the Commons, Cambridge, 2015 Examples: El Niño, La Niña, Lotka-Volterra Equation, Newton’s Method, Attractor, Fractal, Stick-slip phenomenon 1. K. Alligood, T. Sauer, and J. Yorke. Chaos: An Introduction to Dynamical Systems. Springer, 2000. 2. M. Crucifix. Oscillators and relaxation phenomena in pleistocene climate theory. Philosophical Transactions of the Royal Society A, 370, 2012. 3. G. Deffuant and N. Gilbert. Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society. Springer, 2011. 4. P. Fieguth and A. Wong. Introduction to Pattern Recognition. Springer. (in preparation). 5. L. Gunderson and C. Holling. Panarchy: Understanding Transformations in Human and Natural Systems. Island Press, 2012. 6. R. Hilborn. Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers. Oxford, 2000. 7. P. Loring. The most resilient show on earth. Ecology and Society, 12, 2007. 8. D. Meadows. Thinking in Systems: A Primer. Chelsea Green, 2008. 9. D. Meadows, J. Randers, and D. Meadows. Limits to Growth: The 30-Year Update. Chelsea Green, 2004. 10. M. Scheffer. 
Critical transitions in nature and society Princeton University Press, 2009. 11. M. Scheffer, S. Carpenter, V. Dakos, and E. van Nes. Generic Indicators of Ecological Resilience: Inferring the Chance of a Critical Transition. Annual Reviews, 2015. 12. S. Strogatz. Nonlinear Dynamics and Chaos. Westview Press, 2014. 13. B. Walker and D. Salt. Resilience Thinking: Sustaining Ecosystems and People in a Changing World. Island Press, 2006. 14. R. Wright. A Short History of Progress. House of Anansi Press, 2004. Chapter 8: Spatial Systems Further reading links: Partial differential equations: Partial differential equation, Initial value problem, Boundary value problem, Discretization Finite element method Earth systems: Advection, Navier–stokes, Coriolis effect, Low-pressure area, Ocean gyre, Buys Ballot’s law Lumped Parameter Models: H. Stommel, Thermohaline Convection with Two Stable Regimes of Flow, Tellus (13) #2, 1961 H. Kaper, H. Engler, Mathematics & Climate, SIAM, 2013 Global flows: Water cycle, Nitrogen cycle, Carbon cycle, Earth’s energy budget, NASA Clouds and the Earth's Radiant Energy System (CERES) Cellular automata and agents: Cellular automaton, Conway’s game of life, Forest fire model, Ising model, Agent-based model Simulation environments: GAMA, NetLogo, StarLogo, Others Wikipedia Links: Traffic flow, Traffic simulation, Three-phase traffic theory Simulation Projects: Assessment of Roundabout Capacity and Delay (ARCADY), Multi-agent transport simulation (MATSim) 1. J. Adam. Mathematics in Nature: Modeling Patterns in the Natural World. Princeton, 2006. 2. P. Bak. How Nature Works, Copernicus, 1996. 3. M. Batty. The New Science of Cities. MIT Press, 2013. 4. H. Dijkstra. Nonlinear Climate Dynamics. Cambridge, 2013. 5. L. Evans. Partial Differential Equations. American Mathematical Society, 2010. 6. S. Farlow. Partial Differential Equations for Scientists and Engineers. Dover Books, 1993. 7. B. Friedland. Control System Design. Dover, 2005. 8. J. Giles. 
Climate science: The dustiest place on earth. Nature, 434, 2005. 9. R. Hamming. Numerical Methods for Scientists and Engineers. Dover, 1987. 10. B. Hayes. Follow the money. American Scientist, 90(5), 2002. 11. H. Kaper and H. Engler. Mathematics & Climate. SIAM, 2013. 12. I. Koren et al. The Bodélé depression. Environmental Research Letters, 1, 2006. 13. R. Malone, R. Smith, M. Maltrud, and M. Hecht. Eddy–Resolving Ocean Modeling PDF. Los Alamos Science, 2003. 14. D. Meadows, J. Randers, and D. Meadows. Limits to Growth: The 30-Year Update. Chelsea Green, 2004. 15. M. Mitchell. Complexity: A Guided Tour. Oxford, 2009. 16. A. Oppenheim, A. Willsky, and H. Nawab. Signals & Systems. Prentice Hall, 1997. 17. H. Peitgen, H. Jürgens, and D. Saupe. Chaos and Fractals: New Frontiers of Science. Springer, 2004. 18. B. Saltzman. Dynamical Paleoclimatology: Generalized Theory of Global Climate Change. Academic Press, 2001. 19. M. Schroeder. Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise. Dover, 2009. 20. M. Sipser. Introduction to the Theory of Computation. Cengage Learning, 2012. 21. W. Strauss. Partial Differential Equations: An Introduction. Wiley, 2007. 22. S. Strogatz. Exploring complex networks. Nature, 410, 2001. 23. F. White. Fluid Mechanics. McGraw Hill, 2010. 24. S. Wolfram. A New Kind of Science. Wolfram Media, 2002. 
Chapter 9: Power Laws and Non-Gaussian Systems Further reading links: Distributions: Normal distribution, Central limit theorem, Exponential distribution, Memorylessness, Poisson distribution, Power law, Black swan, Heavy-tailed distributions Power laws: Zipf’s law, Benford’s law, Pareto principle, Pareto distribution, Gutenberg-richter law Power laws and cities: The Origin of Scaling in Cities, Science (340), 2013 Bettencourt et al., Growth, innovation, scaling, and the pace of life in cities, PNAS (104), 2007 Discount functions: Time preference, Time value of money, Hyperbolic discounting Principles: Scale invariance, Scale-free network, Preferential attachment, Fitness model 1. P. Bak. How Nature Works. Copernicus, 1996. 2. J. Diamond. Collapse: How Societies Choose to Fail or Succeed. Penguin, 2011. 3. B. Hayes. Follow the money. American Scientist, 90(5), 2002. 4. G. Lawler. Introduction to Stochastic Processes. Chapman & Hall, 2006. 5. R. Lowenstein. When Genius Failed: The Rise and Fall of Long-Term Capital Management. Random House, 2001. 6. D. Manin. Zipf’s law and avoidance of excessive synonymy. Cognitive Science, 32, 2008. 7. S. Resnick. Adventures in Stochastic Processes. Birkhäuser, 2002. 8. M. Schroeder. Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise. Dover, 2009. 9. D. Sornette. Critical Phenomena in Natural Sciences. Springer, 2006. 10. M. Stumpf and M. Porter. Critical truths about power laws. Science, 335, 2012. 11. N. Taleb. The Black Swan: The Impact of the Highly Improbable. Random House, 2010. Chapter 10: Complex Systems Further reading links: Complex systems: Complex system, Phase transition, Critical phenomena, Critical point, Universality, Percolation theory, Network theory, Chaos theory Self organized criticality: Self-organized criticality, Self-organization, Emergence Complex behaviours: Swarm behaviour, Flocking Control of complex systems: D. Cajueiro, R. 
Andrade, Controlling self-organized criticality in sandpile models, Phys. Rev. E (81), 2010 P. Noël, C. Brummitt, R. D’Souza, Controlling Self-Organizing Dynamics on Networks Using Models that Self-Organize, Phys. Rev. Letters (111), 2013 Cascading Complex Systems: W. Aiello, F. Chung, L. Lu, A Random Graph Model for Power Law Graphs PDF, Experiment. Math. (10), 2001. S. Buldyrev et al. Catastrophic cascade of failures in interdependent networks. Nature Physics, 464, 2010. S. Rinaldi et al. Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Systems Magazine, 21, 2001. B. Carreras et al., Evidence for Self-Organized Criticality in a Time Series of Electric Power System Blackouts, IEEE Circuits and Sytems I (51), 2004 Dobson et al., Complex Systems Analysis of Series of Blackouts, Chaos (17), 2007 W. Kröger, E. Zio, Vulnerable Systems, Springer, 2011 Dictyostelium (slime mould): Dictyostelium, Tutorial, Sample video Resilience: Definitions, Stockholm Resilience Centre J. Greer, Salvaging Resilience P. Smith et al., Network Resilience: A Systematic Approach, IEEE Communications Magazine, 2011 Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society . Springer, 2011. 1. S. Arora and B. Barak. Computational Complexity: A Modern Approach PDF. Cambridge, 2009. 2. P. Bak. How Nature Works. Copernicus, 1996. 3. P. Bak and K. Chen. Self-Organized Criticality. Scientific American, 1991. 4. Y. Bar-Yam. Dynamics of Complex Systems. Addison-Wesley, 1997. 5. S. Buldyrev et al. Catastrophic cascade of failures in interdependent networks. Nature Physics, 464, 2010. 6. R. Cohen and S. Havlin. Complex Networks: Structure, Robustness and Function. Cambridge, 2010. 7. G. Deffuant and N. Gilbert (ed.s). Viability and Resilience of Complex Systems: Concepts, Methods and Case Studies from Ecology and Society. Springer, 2011. 8. S. Dorogovtsev, A. Goltsev, and J. Mendes. 
Critical phenomena in complex networks. Reviews of Modern Physics, 80, 2008. 9. L. Fisher. The Perfect Swarm – The Science of Complexity in Everyday Life. Basic Books, 2009. 10. R. Frigg. Self-organised criticality — what it is and what it isn’t. Studies in History and Philosophy of Science, 34, 2003. 11. J. Gribbin. Deep Simplicity: Bringing Order to Chaos and Complexity. Random House, 2005. 12. H. Hoffmann and D. Payton. Suppressing cascades in a self-organized model with non-contiguous spread of failures. Chaos, Solitons, & Fractals, 67, 2014. 13. J. Holland. Complexity: A Very Short Introduction. Oxford, 2014. 14. S. Johnson. Emergence. Scribner, 2001. 15. M. Mitchell. Complexity: A Guided Tour. Oxford, 2009. 16. H. Peitgen, H. Jürgens, and D. Saupe. Chaos and Fractals: New Frontiers of Science. Springer, 2004. 17. S. Rinaldi et al. Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Systems Magazine, 21, 2001. 18. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. 19. M. Scheffer, S. Carpenter, V. Dakos, and E. van Nes. Generic Indicators of Ecological Resilience: Inferring the Chance of a Critical Transition. Annual Reviews, 2015. 20. M. Sipser. Introduction to the Theory of Computation. Cengage Learning, 2012. 21. S. Strogatz. Exploring complex networks. Nature, 410, 2001. 22. B. Walker and D. Salt. Resilience Thinking: Sustaining Ecosystems and People in a Changing World. Island Press, 2006. 23. S. Wolfram. A New Kind of Science. Wolfram Media, 2002. 
Chapter 11: Observation and Inference Further reading links: Electromagnetics: Electromagnetic radiation, Electromagnetic spectrum, Scattering, Reflection (physics), Radiative transfer, Planck’s law Radar: Radar, Synthetic aperture radar, Interferometric synthetic aperture radar Remote sensing: Remote sensing, Satellite, List of orbits Sensing types: Microwave radiometer, Scatterometer, Weather satellite, Altimeter Sensing platforms: Landsat program, Hubble Space Telescope, Radarsat-2, NASA Jason-2 Altimeter Inverse problems: Inverse problem, Least squares, Data assimilation, Regularization, Medical imaging 1. R. Aster, B. Borchers, and C. Thurber. Parameter Estimation and Inverse Problems. Academic Press, 2005. 2. M. Bertero and P. Boccacci. Introduction to Inverse Problems in Imaging. Taylor & Francis, 1998. 3. A. Chandra and S. Ghosh. Remote Sensing and Geographical Information System. Alpha Science, 2015. 4. A. Tikhonov et al.. Numerical Methods for the Solution of Ill-Posed Problems. Kluwer Academic Publishers, 1995. 5. L. Kump et al.. The Earth System. Prentice Hall, 2010. 6. P. Fieguth. Statistical Image Processing and Multidimensional Modeling. Springer, 2010. 7. T. Lillesand, R. Kiefer, and J. Chipman. Remote Sensing and Image Interpretation. Wiley, 2015. 8. F. Mackenzie. Our Changing Planet. Prentice Hall, 2011. Chapter 12: Water Further reading links: Ocean acidification: S. Doney, The Dangers of Ocean Acidification PDF, Scientific American, 2006 E. Kolbert, The Acid Sea, National Geographic, 2011 Ocean Garbage: Marine debris, Great pacific garbage patch L. Parker, Ocean Trash: 5.25 Trillion Pieces and Counting, National Geographic, 2015 Groundwater: Groundwater Recharge, Overdrafting, Ogallala Aquifer, Hydrogeology 1. S. Grace. Dam Nation: How Water Shaped theWest and Will Determine Its Future. Globe Pequot Press, 2013. 2. S. Lovejoy and D. Schertzer. The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge, 2013. 3. C. Roberts. 
An Unnatural History of the Sea. Harper Collins, 2009. 4. C. Roberts. The Ocean of Life. Viking, 2012. 5. M. Scheffer. Critical transitions in nature and society. Princeton University Press, 2009. 6. S. Solomon. Water: The Epic Struggle for Wealth, Power, and Civilization. Harper Collins, 2009. 7. M. De Villiers. Water: The Fate of Our Most Precious Resource. McClelland & Stewart, 2003. 8. A. Weisman. The World Without Us. Picador, 2007. Chapter 13: Concluding Thoughts 1. F. Capra and P. Luisi. The Systems View of Life: A Unifying Vision. Cambridge, 2014. 2. A. Ghosh. Dynamic Systems for Everyone: Understanding How Our World Works. Springer, 2015. 3. M. Mitchell. Complexity: A Guided Tour. Oxford, 2009. 4. M. Scheffer. Critical transitions in nature and society Princeton University Press, 2009. 5. A. Weisman. The World Without Us. Picador, 2007. 6. R. Wright. A Short History of Progress. House of Anansi Press, 2004. Appendix A: Matrix Algebra 1. D. Damiano and J. Little. A Course in Linear Algebra. Dover, 2011. 2. D. Lay, S. Lay, and J. McDonald. Linear Algebra and Its Applications. Pearson, 2014. 3. D. Poole. Linear Algebra: A Modern Introduction. Brooks Cole, 2014. Appendix B: Random Variables and Statistics 1. M. DeGroot and M. Schervish. Probability and Statistics. Pearson, 2010. 2. J. Haigh. Probability: A Very Short Introduction. Oxford, 2012. 3. R.Walpole, R. Myers, S. Myers, and K. Ye. Probability & Statistics for Engineers & Scientists. Pearson, 2006.
What our customers say...

Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:

A friend recommended this software and it really helped me get all my homework done faster and right. I strongly recommend it.
Rick Edmondson, TX

Look at that. Finally a product that actually does what it claims to do. It's been a breeze preparing my math lessons for class. It's been a big help that now leaves time for other things.
Zoraya Christiansen

This program has really made life easy for me and my students.
Richard Penn, DE

My daughter is in 10th grade and son is in 7th grade. I used to spend hours teaching them arithmetic, equations and algebraic expressions. Then I bought this software. Now this algebra tutor teaches my children and they are improving at a better pace.
Jenni Coburn, IN

I think it's an awesome program.
M.H., Illinois

Search phrases used on 2014-06-18:

Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site.
Can you find yours among them?

• salt percentage equations
• dividing fractions then add in then subtracting and the multiplying
• fraction square root calculator
• free algebraic calculator
• www.pre-algerba.com
• quad formula word problems
• worksheets on variables
• exam algebra problem sums
• solving one step inequalities using addition and subtraction worksheet
• rational expression follow rules of rational numbers
• figure out square roots for fractions
• download solution manual contemporary abstract algebra by gallian
• square root swf download
• McDougal Littell 8th grade life science book answers
• solve inequalities with exponents
• examples of math investigatory
• practice workbook california mcdougal littell math course 1 answers
• algebra for 6th grade test
• adding radical expressions calculator
• power multiplication maths basics
• combining rational expressions worksheets
• implicit differentiation solver
• practice workbook algebra 1 mcdougal littell help
• fourth grade expression math worksheets
• problem solving (binomials)
• find the trinomial calculator
• simplifying a cube
• factoring three term polynomials
• lcm calculation
• like terms, powerpoint
• homework sheets grade 1
• calculating wind speed against a plane: algebra
• free 5th grade algebra
• teaching 1st grade algebra
• free year 8 maths worksheets subtractions
• input exponential in polymath
• 2nd order non-homogeneous lineer differential equations
• QUADRATICS - vertex form application word problems
• prentice hall mathematics algebra 2 online tests
• square root worksheet
• substitution math practice tests
• examples math trivia
• the signs we use for multiplying dividing adding subtracting
• cos-1 TI-83 plus calculator
• solving conic equations solver
• hrw math worksheet "Holt, Rinehart and Winston"
• 5TH GRADE PERCENT CHART
• examples java program add fractions
• Simple Slope Intercept Worksheet
• Algebraic Power equation
• math taks practice worksheets
• solve linear equations with ti 83
• inequalities solver
• fractions least to greatest worksheet
• java program that finds the numbers divisible by 5
• introductory Algebra exponents problems
• combine like terms worksheet
• solutions to non-homogenous differential equations
• ti 84 plus online
• fieldston high school mathematics
• first grade texas math sheets by houghton mifflin
• add subtract multiply divide mixed numbers worksheet
• completing the square with fractions
• math formulas percentages
• solve quadratic equation with excel
• math trivia iq test
• prealgerbra graphing & function
• elimination method of equations worksheet
• prealgebra concepts and definitions
• ordering fractions and decimals worksheet
• eog practice worksheets for third grade
• find the area of the square worksheet
• ti-84 trigonometry online calculator
• rules on graphing when x is cubed
• rational expression calculators
• factoring squares in equations
• how to get quadratic cube in ti-83 plus calculator
• TCS aptitude questions with solved answers +doc
• inequations maths solver free software
• harcourt florida edition math test
• powerpoint rules of exponents
• mathematical age problem formula
• prealgebra graphing worksheets
• find slope with the TI-84
cfd.FUN {cfdecomp}    R Documentation

Flexible Function Decomposition: decompose any function that returns a vector

Usage

cfd.FUN(formula.y, formula.m, mediator, group, data,
        family.y = "binomial", family.m = "binomial",
        bs.size = 250, mc.size = 50, FUN.y = mean, alpha = 0.05,
        cluster.sample = FALSE, cluster.name = NA, cluster.mrows = FALSE,
        sample.resid.y = FALSE, sample.resid.m = FALSE,
        print.iteration = FALSE, ...)

Arguments

formula.y: the formula for the multivariable model (see glm) for the outcome Y.

formula.m: the formula for the multivariable model (see glm) for the mediator M.

mediator: the column name of the mediator M.

group: column name of a factor variable containing the group identifier.

data: a data frame containing the variables in the model.

family.y: a description of the error distribution to be used in the model, see family for details. For the outcome variable any member of the glm family can be used.

family.m: a description of the error distribution to be used in the model, see family for details. For the mediator, currently gaussian, binomial and poisson are supported.

bs.size: the number of bootstrap iterations to be performed.

mc.size: the number of Monte Carlo iterations to be performed (more = more MC error reduction).

FUN.y: a function to compute the statistics which can be applied to all data subsets; this function should return a vector and should be run on pred_y (simulated y values in the natural course or counterfactual) and optional additional columns.

alpha: the alpha level used to construct confidence intervals (0.05 = 95 percent confidence interval).

cluster.sample: set to TRUE if data are clustered in the long format (i.e. multiple rows per individual or other cluster).

cluster.name: the name (as a character) of the column containing the cluster identifiers.
cluster.mrows: for the mediator model, only allows 1 observation per mediator so that the mediator model is not weighted by number of observations; e.g. set to TRUE if the mediator is time constant in longitudinal analysis of long format data.

sample.resid.y: if the outcome is Gaussian, should the simulation sample from the residuals of the linear regression model of the outcome to approximate the empirical distribution of the outcome in the simulation (Monte Carlo integration) (if so, set to TRUE), or should it sample from a Gaussian distribution with the standard deviation of the outcome? If the true distribution of the continuous outcome is not very Gaussian, the former may be preferred.

sample.resid.m: if the mediator is Gaussian, should the simulation sample from the residuals of the linear regression model of the mediator to approximate the empirical distribution of the mediator in the simulation (Monte Carlo integration) (if so, set to TRUE), or should it sample from a Gaussian distribution with the standard deviation of the mediator? If the true distribution of the continuous mediator is not very Gaussian, the former may be preferred.

print.iteration: print the bootstrap iteration.

...: further arguments passed to or used by methods.

Value

out_nc_m returns the mean level of the mediator under the natural course, which is a value that should be close to the empirically observed value of the mediator for each group. out_nc_quantile provides the alpha/2 and 1-alpha/2 bootstrap quantiles for this mean (AKA bootstrap percentile confidence intervals). out_nc_y provides the output of the function fed into FUN.y for each bootstrap iteration, with out_nc_quantile_y providing the alpha/2 and 1-alpha/2 bootstrap quantiles of that output. Similarly, out_cf_m, out_cf_quantile_m, out_cf_y, and out_cf_quantile_y provide the corresponding values for the counterfactual scenario where the mediators of the groups are equalized.
mediation and mediation_quantile are not provided for this function, so should be calculated by the user based on the output. mc_conv_info_m and mc_conv_info_y provide information that can help determine the number of Monte Carlo and Bootstrap iterations needed to achieve stability. See the Examples for more information.

Examples

# the decomposition functions in our package are computationally intensive
# to make the example run quick, I perform it on a subsample (n=250) of the data:
cfd.example.sample <- cfd.example.data[sample(250),]

# define some function (here one that calculates the mean from the data)
# such a function already exists, but this is to demonstrate how to do it for one that
# will be implemented in cfd.FUN:
mean.fun <- function(data, yname) {
  x <- data
  mean(x[[yname]], na.rm = TRUE)  # body cut off in the extracted page; this is a plausible completion
}

# test if the function works on normal data:
# then enter it into cfd.FUN and run:
mean.results <- cfd.FUN(formula.y = 'out.gauss ~ SES + med.gauss + med.binom + age',
                        formula.m = 'med.gauss ~ SES + age'
                        # the remaining arguments (mediator, group, data, ...) were cut off
                        # in the extracted page
                        )

# more advanced code demonstrating how to do this with a function that calculates
# the age-adjusted rate ratio and life expectancy will hopefully soon be available
# in a publication.

[Package cfdecomp version 0.4.0]
Time-series Learning Algorithms Candidates
30 September 2014

The time-series algorithms below were selected and evaluated to build an analyzing and self-learning system for OpenStack application awareness. Ops teams are always collecting piles of metrics, which are essentially time-series.

Fourier Transform (FT)

What it can do
• Decompose f(t) into a combination of basic waves (sine/cosine).
• Smoothing.
• Test periodicity
  □ i.e. the largest cycle period from the FFT is significantly smaller than the data span.
• Forecasting and modeling, if the time-series is periodic.

• Transforms f(t), which lives on the time domain t, to F(w), which lives on the frequency domain w. F(w) is f(t)'s component at frequency w.
• FT transforms the time-series f(t) into a sum of sines/cosines.
• FT is invertible, i.e. the FT decomposition into sine/cosine waves is unique.
• Suitable for periodic functions. Non-periodic functions result in a non-constant F(w), or an infinite sum of sines/cosines.
• The Discrete Fourier Transform (DFT) is used for discrete computer sampling, with time complexity O(n²).
• The Fast Fourier Transform (FFT) is the faster algorithm for computing the DFT, with time complexity O(n·log(n)).
• Apply FFT smoothing multiple times for better smoothing, according to Netflix Scryer.

Euler's formula demonstrates the basic connection between complex numbers and trigonometric functions:

  e^(iwt) = cos(wt) + i·sin(wt)

How is F(w) calculated? e^(iwt) is one of f(t)'s wave components. Multiplying f(t) by the conjugate wave e^(-iwt) and summing over t tests f's correlation with that component: the bigger the correlation, the bigger the resulting coefficient, which is F(w). To reverse F(w) back to f(t), we simply add the wave components together: each component is e^(iwt), i.e. cos(wt) + i·sin(wt), multiplied by its coefficient F(w).

How is the FFT used for smoothing? "FFT Filter smoothing is accomplished by removing Fourier components with frequencies higher than a cutoff frequency", according to here. A cutoff amplitude can also be used. Choose the threshold wisely.

Moving Average (MA)

What it can do
• Smoothing for long-term trends.
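The FFT smoothing described above (zero out components beyond a cutoff frequency, then invert) can be sketched in a few lines of Python. This is a minimal illustration using a naive O(n²) DFT for self-containedness; the function names are illustrative, and in practice an FFT library (e.g. numpy.fft) would be used.

```python
import cmath

def dft(x):
    """Naive DFT: coefficient F[k] is the correlation of x with wave e^(-2*pi*i*k*t/n)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)) / n
            for k in range(n)]

def idft(X):
    """Inverse DFT: add each wave component back together, weighted by its coefficient."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real
            for t in range(n)]

def fft_lowpass(x, cutoff):
    """Smooth by removing Fourier components with frequency index above the cutoff."""
    X = dft(x)
    n = len(X)
    for k in range(n):
        freq = min(k, n - k)  # true frequency index, accounting for conjugate symmetry
        if freq > cutoff:
            X[k] = 0
    return idft(X)
```

A cutoff-amplitude filter would instead zero out coefficients with abs(X[k]) below some threshold, keeping only the strong periodic components.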
• To see long-term trends.

• Replace the current point with the (weighted) average of it and a nearby range.
• A moving-average-smoothed curve usually fits the original one badly, but becomes useful for seeing long-term trends. This is different from FFT smoothing.

The simple moving average (SMA) is the unweighted mean of the previous n data points. The exponential moving average (EMA), or exponential smoothing (wiki: http://en.wikipedia.org/wiki/Exponential_smoothing), gives a weighted average from the current point back to the beginning, where the weight of each older point decreases exponentially. There are yet many other moving average algorithms.

ARIMA (Autoregressive Integrated Moving Average)

What it can do
• Time-series data modeling and forecasting.
• Trends, circulation and seasonality are taken care of.

• Seen from a random-process (statistics) perspective, rather than a wave perspective.
• The time-series is considered a random process of an autoregressive model + a moving-average model, or informally a random trend + a random walk.
  □ The random trend is recognized by differencing, i.e. y(i) - y(i-1), and the autoregressive model.
  □ The random walk is recognized by the moving-average model.
• Parameters:
  □ p is the number of autoregressive terms,
  □ d is the number of nonseasonal differences, and
  □ q is the number of lagged forecast errors in the prediction equation.
• Autocorrelation is used to determine the lag factors p & q.
• A bit complex.
• Very widely used in forecasting, such as marketing data and stock trends.

Given the ARMA(p′, q) model

  (1 − Σ_{i=1..p′} φ_i L^i) X_t = (1 + Σ_{i=1..q} θ_i L^i) ε_t

assume the left-side polynomial has a unit root of multiplicity d. The ARIMA(p, d, q) process expresses it with p = p′ − d:

  (1 − Σ_{i=1..p} φ_i L^i)(1 − L)^d X_t = (1 + Σ_{i=1..q} θ_i L^i) ε_t

where L is the lag operator and ε_t is the error term. The above are random process models.
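The two moving averages described earlier (SMA and EMA) can be sketched directly. Two conventions are assumed here rather than stated by the text: the SMA drops the first n−1 points instead of padding, and the EMA is seeded with the first observation.

```python
def sma(x, n):
    """Simple moving average: unweighted mean of the previous n points."""
    return [sum(x[i - n + 1:i + 1]) / n for i in range(n - 1, len(x))]

def ema(x, alpha):
    """Exponential moving average: weights on older points decay exponentially.

    alpha in (0, 1] controls the decay; larger alpha follows the series more closely.
    """
    out = [x[0]]  # seed with the first observation (a common convention)
    for v in x[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out
```

For example, sma([1, 2, 3, 4], 2) gives [1.5, 2.5, 3.5], while the EMA keeps the output the same length as the input.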
Classical Decomposition

What it can do
• Decompose a time-series into trend + seasonal + remainder.
• Not recommended now.

• Essentially uses the Moving Average (MA).

Step 1
• Calculate the trend-cycle component: trend = N-MA(series), or trend = 2×N-MA(series) when the period N is even.

Step 2
• Calculate the detrended series: s2 = series - trend.

Step 3
• Taking the month March as an example, its seasonal index is the average of all the March values in s2. Then adjust the seasonal indices so that they add to zero. The seasonal component is the seasonal indices strung together.

Step 4
• Remainder component = series - trend - seasonal component.

X-12-ARIMA Decomposition

What it can do
• ARIMA-based time-series decomposition.
• A sophisticated method.

• Employs ARIMA.
• There is currently no R package for X-12-ARIMA decomposition.

STL Decomposition

What it can do
• Time-series decomposition.
• Equipped in R.
• Advantages over the classical decomposition method and X-12-ARIMA.

• Only provides additive decompositions.
• Parameters: trend window & seasonal window.
• Triangular fitting?
• Autocorrelation may be relevant to cycle finding.

Neural Network

What it can do
• Time-series modeling and forecasting.

• Use a neural network model for the time-series.
• The previous N points are used as input for the model.
• Equipped in R.
• Magically, it works.
• Has seasonal models.

A neural network model, with the previous N points of the time-series as input to the model.

ACF & PACF

What it can do
• Residual diagnostics.
  □ The remainder component of a time-series should contain no pattern. Use these tests for it.
• Detect seasonality, according to here.

• The ACF, the autocorrelation function, is used to test the remainder.
• The "Average Test / Durbin-Watson Test" can also be used for the same purpose.
• A small p-value in the Durbin-Watson test indicates significant remaining autocorrelation.

The average of the remainder component, if no pattern remains, should be zero. The ACF (autocorrelation) of the remainder component, if no pattern remains, should show no significant spikes or lags.
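The four steps of the classical decomposition above can be sketched as follows. This is a minimal additive version; edge points where the centered moving average is undefined are left as None, a simplification not spelled out in the text.

```python
def centered_ma(x, period):
    """Step 1: trend via N-MA, or 2xN-MA (half-weighted ends) when the period is even."""
    half = period // 2
    out = []
    for i in range(len(x)):
        if i - half < 0 or i + half >= len(x):
            out.append(None)  # trend undefined near the edges
        elif period % 2 == 0:
            w = x[i - half:i + half + 1]
            out.append((0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / period)
        else:
            out.append(sum(x[i - half:i + half + 1]) / period)
    return out

def classical_decompose(x, period):
    """Additive classical decomposition: series = trend + seasonal + remainder."""
    trend = centered_ma(x, period)
    # Step 2: detrend where the trend is defined
    s2 = [xi - t if t is not None else None for xi, t in zip(x, trend)]
    # Step 3: seasonal index = average of the detrended values at each phase,
    # then shifted so the indices add to zero
    idx = []
    for phase in range(period):
        vals = [d for i, d in enumerate(s2) if d is not None and i % period == phase]
        idx.append(sum(vals) / len(vals))
    mean_idx = sum(idx) / period
    idx = [v - mean_idx for v in idx]
    seasonal = [idx[i % period] for i in range(len(x))]
    # Step 4: remainder = series - trend - seasonal
    remainder = [xi - t - s if t is not None else None
                 for xi, t, s in zip(x, trend, seasonal)]
    return trend, seasonal, remainder
```

On a series that is exactly a linear trend plus an alternating seasonal pattern, this recovers the trend and seasonal indices and leaves a zero remainder.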
• OTexts Residual diagnostics: [https://www.otexts.org/fpp/5/4]

Augmented Linear Regression (Netflix Scryer)

What it can do
• The more you know about your pattern, the better the forecasting algorithm you can make.
  □ Especially repeated cycles.

• Must have, and exploits, the patterns that
  □ the cycle of the day repeats
  □ the cycle of the week repeats
• Accurate and very simple.

• Netflix Scryer Part 1: [http://techblog.netflix.com/2013/11/scryer-netflixs-predictive-auto-scaling.html]
• Netflix Scryer Part 2: [http://techblog.netflix.com/2013/12/scryer-netflixs-predictive-auto-scaling.html]

Periodogram

What it can do
• Detect any periodicities/seasonality.

• The periodogram is the basic modulus-squared of the Fourier transform.
• Essentially the same as the Power Spectral Density, categorized under "spectral analysis".
• The squared radius of an epicycle.
• It is a graphical technique, which usually requires manual inspection.
• Other seasonality detection methods (http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc443.htm), which are also graphical techniques:
  □ A run sequence plot will often show seasonality.
  □ A seasonal subseries plot is a specialized technique for showing seasonality.
  □ Multiple box plots can be used as an alternative to the seasonal subseries plot to detect seasonality.
  □ The autocorrelation plot can help identify seasonality.

First, remember that the Fourier transform decomposes the time-series X_n into frequency components, where v represents the frequency. The periodogram is a graph/function with v as abscissa (horizontal axis) and as ordinate (vertical axis) the power at that frequency,

  I(v) = (n/2)·(a_v² + b_v²)

where a_v and b_v are the cosine and sine coefficients at frequency v. I don't remember whether the coefficient "1/2" is correct, but the periodogram does show "the basic modulus-squared of the Fourier transform". Remember that in the complex format of the Fourier transform, the sin(..) part is "i*sin(..)".

But it is NOT finished yet. The basic periodogram has the "Leakage Problem", and maturer methods are derived from it: refer to [http://www.statsoft.com/Textbook/Time-Series-Analysis#problem].
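A minimal periodogram sketch follows: the modulus-squared of the DFT at each frequency index, with the peak giving the dominant cycle length. The 1/n scaling here is one of several conventions in use; the choice only matters for absolute power, not for locating peaks, and the function names are illustrative.

```python
import cmath

def periodogram(x):
    """Power at each frequency index k = 1..n//2: |DFT coefficient|^2 / n."""
    n = len(x)
    powers = {}
    for k in range(1, n // 2 + 1):
        F = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        powers[k] = abs(F) ** 2 / n
    return powers

def dominant_period(x):
    """Cycle length (in samples) of the frequency with the largest power."""
    p = periodogram(x)
    k = max(p, key=p.get)  # frequency index with the largest power
    return len(x) / k
```

On a pure sine wave with period 8 this puts essentially all the power at one frequency index, so the dominant period is recovered exactly; real data would show leakage into neighboring frequencies, as the leakage problem above warns.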
• Identify Patterns in Time Series Data: [http://www.statsoft.com/Textbook/Time-Series-Analysis#spectrum]
• What method can be used to detect seasonality in data?: [http://stats.stackexchange.com/questions/16117/what-method-can-be-used-to-detect-seasonality-in-data]
• Spectral density estimation: [http://en.wikipedia.org/wiki/Spectral_estimation]
• Spectral density: [http://en.wikipedia.org/wiki/Power_spectrum]
• Periodogram: [http://en.wikipedia.org/wiki/Periodogram]
My Crafted Spike Detection
What it can do
• Use standard deviation to detect outliers.
• Peak outliers are taken as spikes.
• Neighboring spike points which are not significantly different are merged.
Step 1: De-trend. Remove the trend component from the original time-series.
Step 2: Identify outliers, whose |value - mean| >= a * standard_deviation. The "a" is a changeable parameter, usually 3.
• Sometimes it is useful to omit Step 2 by setting a to 0.
Step 3: Select from the outliers those which are larger/smaller than their neighbors on both sides. Call them S points.
Step 4: Merge two S points A & B, if
• A & B are both above/below the mean, and
• the points between A & B are all outliers on the same side of the mean, but not S points, and
• there is NO point X between A & B for which min(|A.value - X.value|, |B.value - X.value|) is larger than a threshold k (k is a changeable parameter) — i.e. no gap exists between A & B.
A and B are merged into whichever is farther from the mean.
Step 5: Merge all S point pairs satisfying Step 4's condition. The remaining S points are spikes.
• There are yet more methods for searching spike/peak points. For example, find maxima points which are larger than their left and right neighbor points within distance k.
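Steps 2–3 above (outliers by the a·σ rule, then local extrema among them) can be sketched as follows. De-trending (Step 1) and the merge logic (Steps 4–5) are left out, and the function name is my own:

```python
import numpy as np

def spike_candidates(x, a=3.0):
    """Return indices of S points: outliers that beat both neighbours."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    outlier = np.abs(x - mu) >= a * sd      # Step 2; a = 0 effectively disables it
    spikes = []
    for i in range(1, len(x) - 1):          # Step 3: strict local extremum
        higher = x[i] > x[i - 1] and x[i] > x[i + 1]
        lower = x[i] < x[i - 1] and x[i] < x[i + 1]
        if outlier[i] and (higher or lower):
            spikes.append(i)
    return spikes
```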
Refer to [Simple Algorithms for Peak Detection in Time-Series].
• Simple way to algorithmically identify a spike in recorded errors: [http://stats.stackexchange.com/questions/41145/simple-way-to-algorithmically-identify-a-spike-in-recorded-errors]
• Detecting steps in time series: [http://stats.stackexchange.com/questions/20612/detecting-steps-in-time-series]
• Simple Algorithms for Peak Detection in Time-Series: [http://www.tcs-trddc.com/trddc_website/pdf/SRL/Palshikar_SAPDTS_2009.pdf]
• How to find local peaks/valleys in a series of data?: [http://stats.stackexchange.com/questions/22974/how-to-find-local-peaks-valleys-in-a-series-of-data]
• Detecting cycle maxima (peaks) in noisy time series (In R?): [http://stackoverflow.com/questions/16341717/detecting-cycle-maxima-peaks-in-noisy-time-series-in-r]
• Peak detection of measured signal: [http://stackoverflow.com/questions/3260/peak-detection-of-measured-signal]
Dynamic Time Warp
What it can do
• Calculate the similarity of two time series.
• Gets the similarity of the whole time series, not a subsequence.
• Can take time series of different lengths (N, M).
• Tolerates accelerations and decelerations; refer to here
• Very costly: CPU O(NM), memory O(NM). Caused RStudio to get stuck.
□ But faster versions exist. See here.
• Cannot tolerate one time series's Y values being shifted by C, or scaled by S.
• R package: "dtw"
For two points x and y, d(x, y) is a distance between the symbols, e.g. d(x, y) = |x - y|.
The standard dynamic-programming algorithm, as runnable Python:

def dtw_distance(s, t, d=lambda a, b: abs(a - b)):
    n, m = len(s), len(t)
    INF = float("inf")
    DTW = [[INF] * (m + 1) for _ in range(n + 1)]
    DTW[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = d(s[i - 1], t[j - 1])
            DTW[i][j] = cost + min(DTW[i - 1][j],      # insertion
                                   DTW[i][j - 1],      # deletion
                                   DTW[i - 1][j - 1])  # match
    return DTW[n][m]

• Wiki: [http://en.wikipedia.org/wiki/Dynamic_time_warping]
• Time Series Analysis and Mining with R: [http://rdatamining.wordpress.com/2011/08/23/time-series-analysis-and-mining-with-r/]
• R Archive: [http://dtw.r-forge.r-project.org/]
Pearson Correlation
What it can do
• Fast, simple, widely used.
• Tolerates the Y axis being shifted or scaled.
• In R, available as ccf()
• Cannot tolerate time series A and B having different lengths.
□ You have to scale the X axis beforehand (interpolation).
• Be careful with the Y=C case, or Y=X=C. See the chart below.
Time series X=X(t), Y=Y(t). Think of X and Y as similar when Y = k*X + C. Then Y=Y(X) must be linearly regressable if they are similar. Pearson correlation is exactly what is used to determine a linear relation.
Chart of Pearson correlation examples: (figure not included)
• Time series similarity measures: [http://quant.stackexchange.com/questions/848/time-series-similarity-measures]
• Wiki: [http://en.wikipedia.org/wiki/Pearson_correlation_coefficient]
• R has ccf(): [http://stats.stackexchange.com/questions/23993/how-to-correlate-two-time-series-with-possible-time-differences]
Linear Segmentation
What it can do
• Segment a time-series into segments, separated by points of change.
• In the R package 'ifultools'.
• Gives really poor results, with or without noise.
• Doesn't necessarily detect extreme points.
• Piecewise linear segmentation of a time series
□ Cool method, but why such bad results?
• More segmentation methods can be found in the References below.
Use a sliding window of length n, each time moving one point forward.
If the least-squares regression of the window has changed angle by more than angle.tolerance, the current point is marked as a segmentation point.
• LinearSegmentation: [http://cran.r-project.org/web/packages/ifultools/ifultools.pdf]
• Segmenting Time Series: A Survey and Novel Approach: [http://www.ics.uci.edu/~pazzani/Publications/survey.pdf]
• R 'segmented' package: [http://cran.r-project.org/web/packages/segmented/index.html] It has 'segmented()' to do piecewise linear regression, but needs breakpoints as a parameter.
• R 'changepoint' package: [http://cran.r-project.org/web/packages/changepoint/index.html] It has changepoint detection based on mean and variance.
• R 'strucchange' package: [http://cran.r-project.org/web/packages/strucchange/index.html] It has sophisticated breakpoint detection, but seemingly not the kind of breakpoint I want.
• R 'breakpoint' package: [http://cran.r-project.org/web/packages/breakpoint/breakpoint.pdf] It uses cross-entropy, but the result seems not to be what I need.
• Wiki time-series segmentation: [http://en.wikipedia.org/wiki/Time-series_segmentation]
• Wiki change point detection: [http://en.wikipedia.org/wiki/Change_detection]
My Crafted Time-Series Segmentation
What it can do
• Time-series segmentation
• Provides basic pattern fragments for recognition
• FLAW: when the left/right regression walks across a maxima point, the result is wrong
• Uses linear regression on both sides of a point to decide
• Time complexity O(N*C), where N is the time-series length and C is a constant.
• Detects extreme points.
• Better to perform smoothing (FFT or other) before using this method.
Given a point, calculate the least-squares regression within K points on its left side and on its right side, denoted Ll and Lr. If the angles of Ll and Lr differ by more than a threshold G, the current point is a segment boundary point.
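A minimal numpy sketch of the left/right-regression rule just described (the parameter names k and g, and the function name, are my own; Ll and Lr are the left and right least-squares fits):

```python
import numpy as np

def boundary_points(x, k=5, g=0.5):
    """Mark i as a boundary when left/right fit angles differ by > g radians."""
    x = np.asarray(x, dtype=float)
    t = np.arange(k)
    bounds = []
    for i in range(k, len(x) - k):
        left_slope = np.polyfit(t, x[i - k:i], 1)[0]            # Ll
        right_slope = np.polyfit(t, x[i + 1:i + 1 + k], 1)[0]   # Lr
        if abs(np.arctan(left_slope) - np.arctan(right_slope)) > g:
            bounds.append(i)
    return bounds
```

As the notes warn, when a window walks across an extreme point the fit is polluted, so in practice you get a small cluster of boundary candidates around each true corner rather than a single point.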
• Segmenting Time Series: A Survey and Novel Approach: [http://www.ics.uci.edu/~pazzani/Publications/survey.pdf]
• LinearSegmentation: [http://cran.r-project.org/web/packages/ifultools/ifultools.pdf]
My Crafted Find Minima
What it can do
• Find minima points.
□ Can be modified to find maxima points.
□ Then can find spike points.
• As input to time-series segmentation
• Find minima points by comparing the left and right neighbor averages.
Step 1: find minima points by comparing the averages of the left & right neighbor points.
Step 2: merge minima points that are too close.
R Code (the original listing was garbled; reconstructed from Steps 1–2, with t taken as the minimum gap between reported minima):

neighbor_minima = function(x, k, t=0){
  # Step 1: a point is a minimum if it is below the mean of its
  # k left neighbors and the mean of its k right neighbors
  res = c(1)                     # keep the first point as a boundary
  for(i in 1:length(x)){
    l1 = max(1, i-k)
    l2 = max(1, i-1)
    r1 = min(i+1, length(x))
    r2 = min(i+k, length(x))
    lmean = mean(x[l1:l2])
    rmean = mean(x[r1:r2])
    cur = x[i]
    if(cur < lmean && cur < rmean){
      res = append(res, i)
    }
  }
  res = append(res, length(x))   # keep the last point as a boundary
  # Step 2: merge minima closer than t points, keeping the lower one
  res_merge = c()
  for(i in res){
    n = length(res_merge)
    if(n > 0 && i - res_merge[n] <= t){
      if(x[i] < x[res_merge[n]]) res_merge[n] = i
    } else {
      res_merge = append(res_merge, i)
    }
  }
  res_merge
}

• Find local maxima and minima: [http://stackoverflow.com/questions/6836409/finding-local-maxima-and-minima]
• Finding local extrema: [http://stats.stackexchange.com/questions/30750/finding-local-extrema-of-a-density-function-using-splines]
Apriori + Pattern Search Tree
What it can do
• Find patterns in a time-series
□ Pattern means: a repeated segment with high occurrence frequency
□ Can only detect EXACTLY matched segments
• Apriori algorithm
• Tree data structure + pruning
• Only finds EXACTLY matched patterns
• Could be slow/memory-consuming on large series.
The classic Apriori algorithm.
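The exact-match idea can be sketched without the search tree: count segments level by level and, in Apriori style, keep a longer segment only if its length-(L-1) prefix was already frequent. Names and thresholds here are my own, and the counting is done naively; the paper's tree structure avoids materialising pruned candidates:

```python
from collections import Counter

def frequent_segments(seq, min_count=2, max_len=8):
    """Exactly-matching repeated segments occurring >= min_count times."""
    seq = list(seq)
    frequent = {}
    length = 1
    while length <= max_len:
        counts = Counter(tuple(seq[i:i + length])
                         for i in range(len(seq) - length + 1))
        # Apriori pruning: a segment can be frequent only if its prefix is
        level = {p: c for p, c in counts.items()
                 if c >= min_count and (length == 1 or p[:-1] in frequent)}
        if not level:
            break
        frequent.update(level)
        length += 1
    return frequent
```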
See page 5 in Discovering Similar Patterns in Time Series.
• Discovering Similar Patterns in Time Series: [ftp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/kdd/p497-caraca-valente.pdf]
Apriori + Pattern Search Tree + Distance Similarity
What it can do
• Find patterns in a time-series
□ Similar segments can be found
• Modified from "Apriori + Pattern Search Tree"
• Uses a distance measure of similarity instead of exact matching
• Problem: if A & B are to be found similar, the subsequences of A and B from left to right must all be similar, to avoid A/B being pruned.
• Could be slow/memory-consuming on large series.
See page 6 in Discovering Similar Patterns in Time Series.
• Discovering Similar Patterns in Time Series: [ftp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/kdd/p497-caraca-valente.pdf]
My Crafted Pattern Recognition
What it can do
• Find patterns in a time-series, i.e. the repeating segments.
• Find similarity between segments.
First, do segmentation on the time-series.
• You need to choose a segmentation algorithm from above.
Second, cluster the segments using a similarity measure.
• You need to choose a similarity measure algorithm from above.
Now the high-occurrence segments are obtained; they are the patterns.
• Pattern Recognition and Classification for Multivariate Time Series: [http://www.dai-labor.de/fileadmin/Files/Publikationen/Buchdatei/tsa.pdf]. Like mine, it finds similarity on time-series.
□ A good discussion of time-series segmentation
□ Good material
Linear Interpolation
What it can do
• Fill points in a time-series
• I can use it to change an irregular time interval to a regular one
• The R way is cumbersome and fragile. I recommend using Python to implement it.
□ Be careful with two NA points next to each other
Draw a straight line through the NA point's left and right neighbors. The NA point's y value is taken from the line. The R implementation seems cumbersome and fragile; I prefer to use Python for it.
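A small pure-Python sketch of the gap-filling just described, which also handles the "two NA points next to each other" case (None stands in for NA; padding leading/trailing gaps with the nearest known value is my choice, analogous to rule=2 in R's na.approx):

```python
def fill_linear(xs):
    """Fill None gaps by linear interpolation between known neighbours."""
    ys = list(xs)
    known = [i for i, v in enumerate(ys) if v is not None]
    for i, v in enumerate(ys):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:        # leading gap: pad with first known value
            ys[i] = ys[right]
        elif right is None:     # trailing gap: pad with last known value
            ys[i] = ys[left]
        else:                   # point on the line through the two neighbours
            frac = (i - left) / (right - left)
            ys[i] = ys[left] + frac * (ys[right] - ys[left])
    return ys
```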
#mdf$time2 = as.POSIXct(mdf$time, origin="1970-01-01") # get the time string from the timestamp; mdf$time is lost somehow
#x1 = xts(mdf$value, mdf$time2) # xts requires using Date for time, but zoo is enough
z1 = zoo(mdf$value, mdf$time)
z2 = zoo(NA, end(z1) + seq(-length(z1)+1, 0)*30) # the empty regular "slots", 30s apart
z3 = merge(z2, z1) # union grid; column 2 holds the original values (this line was lost and is reconstructed)
#z4 = na.locf(z3) # use the value of the prior point for NA
# use linear interpolation for NA, but this WON'T work if two NAs are next to each other
# this is really fragile, so I'd better implement the interpolation myself in Python
z4 = na.approx(z3[,2], na.rm=FALSE, rule=2)
z5 = merge(z2, z4, all=FALSE) # intersection of z2, z4
# z5 is the result
• Zoo tutorial: [http://cran.r-project.org/web/packages/zoo/vignettes/zoo-quickref.pdf]
• An example of merge: [http://www.climatescience.cam.ac.uk/community/blog/view/667/interpolation-of-time-series-data-in-r]
• Time in milliseconds to zoo: [http://stackoverflow.com/questions/11494188/zoo-objects-and-millisecond-timestamps]
• Some time-series CRAN: [http://cran.r-project.org/web/views/TimeSeries.html]
• Irregular time-series: [http://stackoverflow.com/questions/12623027/how-to-analyse-irregular-time-series-in-r]
• Creating regular from irregular: [http://stackoverflow.com/questions/10423551/creating-regular-15-minute-time-series-from-irregular-time-series]
• Linear interpolation: [http://en.wikipedia.org/wiki/Linear_interpolation]
UCR Suite
What it can do
• Compare time series similarity with DTW (Dynamic Time Warp)
• A DTW algorithm even faster than ED (Euclidean Distance)
Refer to reference 1; optimizations include
• Using the Squared Distance
• Lower Bounding
• Early Abandoning of ED and LB_Keogh
• Early Abandoning of DTW
• Exploiting Multicores
• Early Abandoning Z-Normalization
• Reordering Early Abandoning
• Reversing the Query/Data Role in LB_Keogh
• Cascading Lower Bounds
• Searching and Mining Trillions of Time Series Subsequences under Dynamic Time Warping: [http://www.cs.ucr.edu/~eamonn/SIGKDD_trillion.pdf]
• UCR Suite site:
[http://www.cs.ucr.edu/~eamonn/UCRsuite.html]
Motifs Discovery
What it can do
• Discover motifs (the repeated subsequences in a time-series)
• Online Discovery and Maintenance of Time Series Motifs: [http://www.cs.ucr.edu/~eamonn/online_motifs.pdf]
• A disk-aware algorithm for time series motif discovery: [http://link.springer.com/article/10.1007%2Fs10618-010-0176-8]
□ Gives an offline, on-disk approach
• SAX (Symbolic Aggregate approXimation): [http://homepages.abdn.ac.uk/yaji.sripada/pages/teaching/CS4031/information/SAX.pdf]
□ Transforms a time-series into a string of arbitrary length, using an alphabet. Brief description at [https://code.google.com/p/jmotif/wiki/SAX]
□ Official site: [http://www.cs.ucr.edu/
□ The SAX authors are the ones who first proposed "Motifs Discovery", in [http://cs.gmu.edu/~jessica/Lin_motif.pdf]
□ JMotif - motif discovery in Java with SAX: [https://code.google.com/p/jmotif/]
Tips & Tricks
• For smoothing, we usually iterate a smoothing algorithm multiple times (e.g. 3x) to achieve a better effect.
• "We might subtract the trend pattern from the data values to get a better look at seasonality", from here.
• Moving Average usually gives the middle point greater weight, in order to mitigate the smoothing effect.
• Summary of data smoothing techniques: [http://wweb.uta.edu/faculty/ricard/Classes/KINE-5350/Data%20Smoothing%20and%20Filtering.ppt]
• Selecting a different MA order results in a different trend being obtained. "In particular, a 2×12-MA can be used to estimate the trend-cycle of monthly data and a 7-MA can be used to estimate the trend-cycle of daily data. Other choices for the order of the MA will usually result in trend-cycle estimates being contaminated by the seasonality in the data." (from [https://www.otexts.org/fpp])
• After the day/month seasonality is extracted, you can use STL on the remainder, with a very small seasonal window, to extract the remaining periodicity.
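Since SAX comes up above, here is a minimal sketch of the transform: z-normalise, do PAA averaging into segments, then map each segment mean to a letter. The breakpoints below are the standard Gaussian quartile cuts for a 4-letter alphabet, and the function name is my own:

```python
import numpy as np

def sax(x, n_segments, alphabet="abcd"):
    """Discretise a series into a short string over the given alphabet."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()                    # z-normalise
    paa = [seg.mean() for seg in np.array_split(z, n_segments)]
    cuts = [-0.67, 0.0, 0.67]                       # N(0,1) cuts, alphabet size 4
    return "".join(alphabet[int(np.searchsorted(cuts, m))] for m in paa)
```

An increasing ramp maps to the fully ordered word, which makes the output easy to sanity-check by eye.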
The 'Three-Body Problem' Has Perplexed Astronomers Since Newton Formulated It. A.I. Just Cracked It in Under a Second.
It took just fractions of a second. (Image credit: Shutterstock)
The mind-bending calculations required to predict how three heavenly bodies orbit each other have baffled physicists since the time of Sir Isaac Newton. Now artificial intelligence (A.I.) has shown that it can solve the problem in a fraction of the time required by previous approaches.
Newton was the first to formulate the problem in the 17th century, but finding a simple way to solve it has proved incredibly difficult. The gravitational interactions between three celestial objects like planets, stars and moons result in a chaotic system — one that is complex and highly sensitive to the starting positions of each body.
Current approaches to solving these problems involve using software that can take weeks or even months to complete calculations. So researchers decided to see if a neural network — a type of pattern-recognizing A.I. that loosely mimics how the brain works — could do better.
The algorithm they built provided accurate solutions up to 100 million times faster than the most advanced software program, known as Brutus. That could prove invaluable to astronomers trying to understand things like the behavior of star clusters and the broader evolution of the universe, said Chris Foley, a biostatistician at the University of Cambridge and co-author of a paper posted to the arXiv database, which has yet to be peer-reviewed.
"This neural net, if it does a good job, should be able to provide us with solutions in an unprecedented time frame," he told Live Science. "So we can start to think about making progress with much deeper questions, like how gravitational waves form."
Neural networks must be trained by being fed data before they can make predictions.
So the researchers had to generate 9,900 simplified three-body scenarios using Brutus, the current leader when it comes to solving three-body problems.
They then tested how well the neural net could predict the evolution of 5,000 unseen scenarios, and found its results closely matched those of Brutus. However, the A.I.-based program solved the problems in an average of just a fraction of a second, compared with nearly 2 minutes.
The reason programs like Brutus are so slow is that they solve the problem by brute force, said Foley, carrying out calculations for each tiny step of the celestial bodies' trajectories. The neural net, on the other hand, simply looks at the movements those calculations produce and deduces a pattern that can help predict how future scenarios will play out.
That presents a problem for scaling the system up, though, Foley said. The current algorithm is a proof-of-concept and learned from simplified scenarios, but training on more complex ones or even increasing the number of bodies involved to four or five first requires you to generate the data on Brutus, which can be extremely time-consuming and expensive.
"There's an interplay between our ability to train a fantastically performing neural network and our ability to actually derive data with which to train it," he said. "So there's a bottleneck there."
One way around that problem would be for researchers to create a common repository of data produced using programs like Brutus. But first that would require the creation of standard protocols to ensure the data was all of a consistent standard and format, Foley said.
There are still a few issues to work through with the neural net as well, Foley said.
It can run for only a set time, but it's not possible to know in advance how long a particular scenario will take to complete, so the algorithm can run out of steam before the problem is solved. The researchers don't envisage the neural net working in isolation, though, Foley said. They think the best solution would be for a program like Brutus to do most of the legwork with the neural net, taking on only the parts of the simulation that involve more complex calculations that bog down the software. "You create this hybrid," Foley said. "Every time Brutus gets stuck, you employ the neural network and jig it forward. And then you assess whether or not Brutus has become unstuck." Originally published on Live Science. Edd Gent is a British freelance science writer now living in India. His main interests are the wackier fringes of computer science, engineering, bioscience and science policy. Edd has a Bachelor of Arts degree in Politics and International Relations and is an NCTJ qualified senior reporter. In his spare time he likes to go rock climbing and explore his newly adopted home.
PROC UNIVARIATE: Estimating Percentiles Using Q-Q Plots
There are two ways to estimate percentiles from a Q-Q plot:
• Specify the PCTLAXIS option, which adds a percentile axis opposite the theoretical quantile axis. The scale for the percentile axis ranges between 0 and 100 with tick marks at percentile values such as 1, 5, 10, 25, 50, 75, 90, 95, and 99.
• Specify the PCTLSCALE option, which relabels the horizontal axis tick marks with their percentile equivalents but does not alter their spacing. For example, on a normal Q-Q plot, the tick mark labeled "0" is relabeled as "50" because the 50th percentile corresponds to the zero quantile.
You can also estimate percentiles by using probability plots created with the PROBPLOT statement. See Example 4.32.
Variance Vs. Volition
Because of the symmetry inherent in the Cartesian monoidal structure of the category of sets, there is no difficulty in treating on an equal footing a category and its opposite. Consequently, once one understands what to mean by a (covariant) functor from one category to another, contravariant functors can be explained with the greatest of ease as either covariant functors from the first category to the opposite of the second, or, equally effectively, as covariant functors from the opposite of the first category to the second itself. The only small snag here is that it is the orientation of the maps in the target category that determines the natural direction of the maps from one functor to another in the functor category. But this problem already rears its head in the identification of contravariant functors from B with covariant functors from B^op, and is dealt with simply by making a well-motivated choice. Symmetry comes into play again in defining the product of two (or more) categories, and, hence, in explaining what to mean by functors of two (or more) variables, of fixed, or mixed, variance: one takes them, for example, to be (covariant) functors defined on a product A^op × B.
A much larger and less easily surmountable snag arises once one wants to generalize the observations above to the setting of categories enriched in a closed, or monoidal, or, most generally, multilinear category V whose structure (like that of almost any endofunctor category under the monoidal operation of composition) manifests no inherent symmetry whatsoever. Here, given a V-category A, there may well be no appropriate understanding at all for what the opposite of A should be. Hand in hand with this difficulty comes the further annoyance that, despite the easy availability of a notion of covariant V-functor from one V-category to another, absence of symmetry totally stymies any attempt to elucidate what a contravariant V-functor between two V-categories should be.
By the same token, absence of symmetry makes it impossible to explain what to mean by a V-functor of several variables, of fixed, or mixed, variance. On the other hand, even when the multilinear base category V is not itself a V-category (i.e., is not closed) – even if, say, V is non-symmetrically monoidal, like our example earlier – it miraculously remains possible to explain what to understand by V-valued V-functors – of contravariant variance – defined on an arbitrary V-category A. What is more, even a notion of V-valued V-functor of mixed variance, defined on a pair of V-categories A, B, contravariant in A and covariant in B, is readily available: this comes down to a system F consisting of a rule assigning to each object A and each object B a value-object F(A, B), along with further information providing maps
< B(B, B'), F(A, B), A(A', A) > → F(A', B'),
all satisfying conditions analogous to the usual associativity and unit laws. Such an F would be termed a V-valued V-functor of the two variables A, B, contravariant in A and covariant in B. A prime example of such a functor is the actual V-valued hom-functor of the V-category A.
What is more, despite the likely absence of any A^op on which such a V-valued V-functor of two variables might be defined, there is available a "law of exponents," of sorts, identifying V-valued V-functors of mixed variance, contravariant in A and covariant in B, with actual (covariant) V-functors defined on B – taking values (when A is small enough, and V complete enough) in a sort of presheaf V-category V^(A^op) of contravariant V-valued V-functors on A. Applied to the hom-functor, this identification yields a Yoneda representation A → V^(A^op), which (sure enough) turns out to be full and faithful.
Even more surprisingly, there is a twist on this "law of exponents" permitting identification of such V-valued V-functors of mixed variance with, instead, single covariant V-functors from A to a V-category best described, intuitively, as the V-opposite (V^B)^^op of the category of covariant V-valued V-functors on B:
NOT defined on A^op – there may well not BE any V-category serving as opposite for A; and
NOT taking values in V^B – there may well not be even a reasonable candidate (no matter how small B or how complete V may be) for a "covariant V-valued V-functors" V-category V^B;
BUT defined on A and taking values in (V^B)^^op.
This identification, applied to the hom functor of A, yields the "other" Yoneda representation, A → (V^A)^^op, which, like the former one, is again full and faithful.
An amusing corollary of these considerations and others like them is the following moral regarding the direction of morphisms in Kleisli categories, and the variance of Lawvere-style algebras as (roughly speaking) "representable" functors on such Kleisli categories. One way to concoct the Kleisli category of an adjunction F ⊣ U: X → A (F: A → X left adjoint to U: X → A) is to form the full image of F, that is, the category with the same objects as A and with hom-objects [A, B] = X(F(A), F(B)). Another way, even in the conceivable absence of F, is to use as morphisms from A to B suitable transformations between the functors U^A and U^B (where U^A: X → V is the functor given by U^A(X) = A(A, U(X))). Now while it has traditionally been thought that it should be the transformations from U^A to U^B – mimicking the direction of operations – that ought to serve as the Kleisli-maps from A to B, we see here that, it being (V^X)^^op and not V^X that stands any chance of being a V-category, we really need to use (V^X)^^op(U^A, U^B), whose "elements" are the transformations from U^B to U^A (!), as the Kleisli maps from A to B. Fortunately, this observation is consistent with (rather than opposite to) the one provided by the full image of F, for, by Yoneda and adjointness, [A, B] = X(F(A), F(B)) ≈ (V^X)^^op(Y(F A), Y(F B)) ≈ (V^X)^^op(U^A, U^B).
And the V-category of Lawvere-style algebras "over" the Kleisli V-category K can then be taken as the full sub-V-category of the V-category V^(K^op) of contravariant V-valued V-functors whose compositions with A → K are explicitly representable (which is to say: as the pullback of the appropriate diagram of V-functors).
Historical remarks. Much of what is to be told here has appeared in the author's ancient articles in Springer LNM ## 99 and 195, or was promulgated orally on the occasion of the esteemed Professor Charles Ehresmann's 70th birthday celebration at Chantilly/Fontainebleau in the summer of 1975. But the oral seeds seem never to have taken root, having instead been simply scattered by the winds of time, so it has seemed worthwhile to broadcast them once again, hoping that this time they will fall on more fertile soil.
Technical remarks. It is of course a gross abuse of language to speak of either of these functor categories V^(A^op), (V^A)^^op as V-categories. The actual structure available is, at best, for each pair of V-functors F, G, a "job-description" for the desired n.t.(F, G), in the form of a suitable functor on V – or better, on (M[0](V))^op, the (opposite of the) strictly associative monoidal category (of finite strings of objects of V) that serves (in SLNM 195) as the multilinear structure – with values in any available category of "sufficiently large" sets. Any object of V representing this "job-description" functor will serve as the desired transformations object.
27 May 2005, New Haven, CT (USA).
Repeat element of a vector n times without loop.
Say I have a column vector x=[a;b;c]. I want to repeat each element n times to make a long length(x)*n vector. For example, for n=3, the answer would be:
[a;a;a;b;b;b;c;c;c]
Can anyone think of an elegant way to do this without looping?
1 Comment
You can use repmat; it's not exactly elegant but it will do the job:
x=[a;b;c]; n=3;
newx = [repmat(x(1),n,1);repmat(x(2),n,1);repmat(x(3),n,1)]
Accepted Answer
n=3; x=(1:3)' % example
3 Comments
%you mean r = repmat(x', n, 1)
I guess you are right. repmat(1:3, 1, 2) = [1,2,3,1,2,3], but the OP wants [1,1,2,2,3,3]. Then r = repmat(1:3, 2, 1); r = r(:) avoids the expensive transposition of the matrix. Well, I admit that even reading this message will waste more time than millions of matrix transpositions will cost...
More Answers (6)
I would use repelem(x, n)
3 Comments
This should be chosen as the best 'correct' answer, thanks!
DGM on 2 Aug 2023: This is probably the more accepted answer today (hence the upvotes), but repelem() was not available until after the question was originally answered (R2015a).
4 Comments
Dear Walter Roberson, why did you not use the outer product and instead chose Kronecker (just curious), since the guy's question was about vectors?
The * matrix multiplication operator cannot by itself repeat elements. You would need something like (x.' * repmat(eye(length(x)), 1, n)).' if you wanted to use the * operator to duplicate elements -- forcing you to call upon repmat() to duplicate elements. Using the Kronecker product is a known idiom for duplicating data. It can be used for non-vectors too.
>> kron([1 2;3 4], ones(3,1))
ans =
     1     2
     1     2
     1     2
     3     4
     3     4
     3     4
There are several other ways of doing it which in some cases are more efficient. Have a look at the size of your vector and compare the methods. Below I compare speeds, and it appears that on my computer the third and fourth methods are mostly faster for large arrays.
n=100000; x=1:3; a=zeros(n,numel(x)); b=a; c=a; d=a; %memory allocation tic; a=repmat(x, n, 1); t1=toc; %Repmat method tic; b=kron(x, ones(n,1)); t2=toc; %kron method tic; c=x(ones(1,n),:); t3=toc; %indexing method tic; d=ones(n,1)*x; t4=toc; %multiplication method 2 Comments a=zeros(n,numel(x)); b=a; c=a; d=a; %memory allocation tic; a=repmat(x, n, 1); t1=toc %Repmat method tic; b=kron(x, ones(n,1)); t2=toc %kron method tic; c=x(ones(1,n),:); t3=toc %indexing method tic; d=ones(n,1)*x; t4=toc %multiplication method y = repmat(x,1,3); y = transpose(y); y = y(:); ind = [1;1;1;2;2;2;3;3;3]; x(ind) 1 Comment Ah, but how do you construct the ind vector for general length n repetitions ? See Also Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting!
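For Python readers, the thread's idioms map directly onto NumPy. A quick sketch (the array values are illustrative stand-ins for [a; b; c]):

```python
import numpy as np

x = np.array([10, 20, 30])  # stand-in for the column vector [a; b; c]
n = 3

# Like MATLAB's repelem(x, n): repeat each element n times, in order.
a = np.repeat(x, n)

# The kron(x, ones(n,1)) idiom from the thread.
b = np.kron(x, np.ones(n, dtype=x.dtype))

# The repmat-then-reshape idiom: tile as rows, then read column by column.
c = np.tile(x, (n, 1)).T.ravel()

print(a)  # [10 10 10 20 20 20 30 30 30]
```

All three produce the element-wise repetition the original poster asked for; np.repeat is the most direct.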
The Gift of Being Wrong

All week I've been watching my students' geogebra assignments progress by watching successive versions of them pop up in their epearl portfolios. Fascinating, truly fascinating, to find out that I was wrong about so many things. Wrong about what I thought would be difficult and easy. And I'm trying to see these as gifts.

• I thought the easy part would be understanding what I wanted them to do, because, as I always tell people, I am so good at communicating. WRONG:
□ A lot of kids interpreted "Create a geogebra file about the linear function that displays the graph and equation of any linear function y = ax + k for any values of a and k." as "Draw one particular linear function, using whatever value of a and k that you feel like at the moment." What I wanted was sliders for the slope and the initial value. So maybe, just maybe, I should have said that in the first place. Gift #1: A wake-up call. Get over yourself.

• I thought the hard part would be figuring out how to get the zero and y-intercept to always be in the right place, regardless of which linear function is currently set by the sliders. WRONG:
□ Once the sliders were in and working, many did this just by using the "Draw a point" button and placing a point right on the axis, no algebra needed. Which is not what I wanted.
□ Darn geogebra is too nice - it assumes that when you put a point on the x-axis, you always want it to stay there, even as the function changes. So I had to edit the assignment description to say that the intercepts have to be done WITHOUT using a drawing button - use the input bar only. Gift #2: A reminder that there's more than one way to skin a cat.
□ That feels lame somehow, I mean if there's an easier way to do something, who wouldn't choose that? But too bad, there it is, this is an opportunity for them to gain conviction in algebraic formulas. Deal with it.
• I thought the easy part would be figuring out the formulas for the x and y intercepts of the linear function - in fact, I thought they'd already know them, considering these are gr 11 kids, and strong students, who have already studied the linear function for 2 years. WRONG:
□ They didn't know those formulas, or they didn't remember them. So okay, fine, we spent some time solving equations like a|x - h| + k = 0, so they could use the same method to solve the linear equivalent. Well no one could! It was no problem for them to solve 2x + 3 = 0, but it was another thing entirely when they had to treat the a and k as if they were known numbers. Gift #3: A surprise - I found a huge gaping hole in their algebraic toolboxes! Let the mending begin....
□ I know that many people would say "Why get them to use formulas when it's more important that they understand and be able to figure it out from first principles?" But at this point, I think it's important for them to be able to generalize using algebra, and to use it to save time and cognitive load.
□ Besides, if they have to derive and then type in Z = (-k/a, 0) for the zero, and then immediately see that it works, then they get to own that formula, and believe it. And lo and behold, once that happened, I got a lot of "Oh! Cool! It works!" It seemed like the idea that algebra always tells you the truth was new to them!

• I thought that very few would try the bonus points, and I predicted who those few would be. I don't have the final versions yet, they're due tomorrow, but so far, RIGHT:
□ One student put in almost all of the bonus features PLUS checkboxes
□ One student wrote a text that contains, instead of inert letters, an object that changes with the sliders. I only just figured that one out last weekend.
□ One student couldn't figure something out so she went online and read the geogebra manual! I wept when I read that in her reflections.
□ The rest are doing the basic stuff, which is fine.
It still feels like they're learning about the linear function in a whole new way. I'll share their work and reflections here, once I get their permissions of course. For now, I plan to upload their work to , or embed them right here, but once they have their own blogs going, they'll be doing all that themselves. Hmmmm.....I wonder if there are already any geogebras on geogebratube that came from students instead of teachers?
Question: How Much Does An 8 Oz Water Bottle Weigh - BikeHike

In the US customary measurement system, then, one cup of water, which is 8 fluid ounces (fl. oz) in volume, is actually 8.3214 oz in weight.

How much does water weigh?
US Customary Volume: 1 fluid ounce
Multiplier (exact): = 2 tbsp
Metric Volume*: 29.57 mL
Avoirdupois Weight: 1.040 oz
Metric Weight: 29.49 g

Does 8 ounces of water weigh 8 ounces? A cup of water happens to equal both 8 fluid ounces (in volume) and 8 ounces (in weight), so you might naturally assume that 1 cup equals 8 ounces of weight universally in recipes.

How much is 8 oz of water in a water bottle? The typical size bottle that we find in the large cases of bottled water is 16 fluid ounces. So, for a bottle of this size, a half-filled water bottle would hold 8 oz. Actually 1 cup = 250 mL, which is 8 oz, so 4 cups = 1 liter, which is 32 oz.

OUNCE | CUP
8 oz | 1 cup
2 oz | 1/4 cup
4 oz | 1/2 cup
6 oz | 3/4 cup

How much does a 16 oz water bottle weigh? Answer: 16 ounces (oz) of water is equal to 1 pound in weight.

How much does an average water bottle weigh? New data compiled by Beverage Marketing Corp. (BMC), New York, show that between 2000 and 2014 the average weight of a 16.9-ounce (half-liter) single-serve PET (polyethylene terephthalate) bottle of water declined 52 percent to 9.25 grams.

How much does 1 gallon of water weigh? One US liquid gallon of fresh water weighs roughly 8.34 pounds (lb) or 3.785 kilograms (kg) at room temperature.

How much does 1 fluid ounce weigh? The fluid ounce derives its name originally from being the volume of one ounce avoirdupois of water, but in the US it is defined as 1/128 of a US gallon. Consequently, a fluid ounce of water weighs about 1.041 ounces avoirdupois.

What is an 8 oz glass of water? Health experts commonly recommend eight 8-ounce glasses, which equals about 2 liters, or half a gallon a day. This is called the 8×8 rule and is very easy to remember.
However, some experts believe that you need to sip on water constantly throughout the day, even when you're not thirsty.

Does 8 oz make 1 cup? Liquid measuring cups indicate that 1 cup = 8 ounces. But what they really mean is 1 cup of liquid = 8 fluid ounces. If a recipe calls for an ounce amount of a liquid, you can measure it in a liquid measuring cup.

What is an 8 oz bottle? Overview: Glass & Plastic Container Size Conversion Chart
Container Size | Dram | Milliliter
6 oz. | 48 | ~180
8 oz. | 64 | ~240
12 oz. | 96 | ~360
16 oz. | 128 | ~480

How much does a 20 oz water bottle weigh? Note: one of the 20-ounce PET bottles weighs approximately 23.83 grams and one of the 16-ounce PET bottles weighs 19 grams. Since 453.59 grams equals 1 pound, there are roughly 19 of the 20-ounce PET bottles to the pound.

What does an empty water bottle weigh? A 2-liter PET bottle that weighed 68 grams in 1980 now weighs as little as 42 grams. The average weight of a single-serve 0.5-liter PET water bottle is now 9.9 grams, nearly half of what it weighed in 2000.

How much does 32 oz of water weigh? Daily weight gain: By drinking more water per day, you will have a series of weight gains throughout the day, as a quart (32 ounces) of water weighs two pounds.

How heavy is a glass bottle? An average glass bottle weighs 8 oz.

How much does a 5-gallon water bottle weigh? A US gallon is 8.34 pounds of water, so a 5-gallon jug of water weighs 41.7 pounds, plus the weight of the jug itself.

How much does a plastic bottle cap weigh? By weight: A standard tin-plate bottle cap weighs in at 2.22 grams. 1 pound is equal to 453.592 grams, and 1 ton is 2000 pounds. Therefore a ton is 907,184 grams, or 408,641.441 bottle caps.

Do water and ice weigh the same? No, water and ice do not weigh the same. For example, if we take the same volume of water and ice in the same container, water would weigh more than ice. Therefore, ice floats on water, since its density is less than that of water.

What is water weight?
Fast facts on water weight: Any extra water being held in the body is referred to as "water weight." When water builds up in the body, it can cause bloating and puffiness, especially in the abdomen, legs, and arms. Water levels can make a person's weight fluctuate by as much as 2 to 4 pounds in a single day.

Which is heavier, a gallon of milk or water? A gallon is a measurement of volume, and density is directly proportional to the mass of a fixed volume. Milk is about 87% water and contains other substances that are heavier than water, excluding fat. A gallon of milk is heavier than a gallon of water.

How many pounds does 8 ounces weigh?
Ounces to Pounds conversion table:
Ounces (oz) | Pounds (lb) | Kilograms+Grams (kg+g)
8 oz | 0.5 lb | 0 kg 226.80 g
9 oz | 0.5625 lb | 0 kg 255.15 g
10 oz | 0.625 lb | 0 kg 283.50 g
20 oz | 1.25 lb | 0 kg 566.99 g

How do you convert ounces to fluid ounces? To convert an ounce measurement to a fluid ounce measurement, divide the weight by 1.043176 times the density of the ingredient or material. Thus, the volume in fluid ounces is equal to the weight in ounces divided by 1.043176 times the density of the ingredient or material.

Is fluid oz volume or weight? Fluid ounces refer to volume (like milliliters) whereas regular ounces refer to weight (like grams). The fact that they are both called ounces, and not always differentiated by saying ounce/fluid ounce, is one reason why they are so problematic.

How big is an 8 oz water bottle? The 8 oz. bottle is 5″ tall and is 2.25″ in diameter.

Is a dry cup 8 oz? Well, the answer is there are 8 dry ounces in 1 standard U.S. cup and 16 tablespoons in 1 cup.

How many cups is 8 oz dry?
Tablespoons | Cups | Ounces
10 tablespoons plus 2 teaspoons | 2/3 cup | 5.2 ounces
12 tablespoons | 3/4 cup | 6 ounces
16 tablespoons | 1 cup | 8 ounces
32 tablespoons | 2 cups | 16 ounces

How do I measure 8 oz in cups? How many ounces in a cup: 1 cup = 8 fluid ounces, 16 tablespoons.
¾ cup = 6 fluid ounces, 12 tablespoons. ½ cup = 4 fluid ounces, 8 tablespoons. ¼ cup = 2 fluid ounces, 4 tablespoons. 1 cup = 4.5 dry-weight ounces (dry weight is for measuring flour and other dry ingredients).

How can I measure 8 ounces of water without a measuring cup? Use a tablespoon to measure out the liquid you need. Fill your tablespoon with the liquid, pouring slowly and steadily to avoid spilling, then transfer it to the vessel. Repeat until you have measured the amount you need in tablespoons.
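The weight/volume distinction above is just arithmetic. A small sketch using the figures quoted in this article (1 fl oz of water ≈ 1.041 oz by weight, 16 oz = 1 lb, 1 oz = 28.3495 g); the function names are my own:

```python
# Conversions using the constants quoted in the article above.
OZ_PER_FLOZ_WATER = 1.041   # weight ounces per fluid ounce of water
GRAMS_PER_OZ = 28.3495
OZ_PER_LB = 16

def water_weight_oz(fluid_oz):
    """Weight in avoirdupois ounces of a given volume of water."""
    return fluid_oz * OZ_PER_FLOZ_WATER

def oz_to_lb(oz):
    return oz / OZ_PER_LB

def oz_to_g(oz):
    return oz * GRAMS_PER_OZ

print(round(water_weight_oz(8), 3))   # an 8 fl oz glass weighs ~8.33 oz
print(oz_to_lb(8))                    # 8 oz = 0.5 lb, matching the table
print(round(oz_to_g(8), 2))           # 8 oz = 226.8 g, matching the table
```

Note that `water_weight_oz` applies only to water; for other liquids the 1.041 factor must be scaled by the liquid's density, as the conversion section above describes.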
Video Game Breakout

Brittnie Ward, created on September 22, 2024
© Genially Escape Games

Answer the review questions correctly to defeat the video game! Choose your character: the Skibidi Ship ("You're so skibidi!"), the Sigma Paddle ("You're so sigma!"), the Preppy Shape ("You're so preppy!"), or the Slay Frog ("You're so slay!"). Complete the missions to obtain the password numbers. Between missions, the locked screen reads: "This screen is locked. You need to get the previous game right to continue."

Mission 1 (Ships): Correctly answer the questions to defeat the alien ships! When you have beaten the level, write down the password number.
Level 1/5: If you added 2 more layers to the figure above, what would be the volume of the figure? (72 cubic inches / 112 cubic inches / 168 cubic inches)
Level 2/5: Cai has 7 yards of ribbon. He uses 3/5 of the ribbon to make bows for Christmas presents. How many yards of ribbon did he use? (5 1/5 yards / 4 1/5 yards / 4 2/5 yards)
Level 3/5: Grayson orders a 50-pound bag of flour. He divides it into 12 same-size containers. Which equation is correct? (50 × 1/12 = 50/12 = 4 1/3 / 50 × 1/2 = 12/50 = 4 1/6 / 50 × 1/12 = 50/12 = 4 1/6)
Level 4/5: If Maddie has a solid figure with 16 unit cubes stacked in it with no gaps or overlaps, what is the volume of the solid figure? (16 cubic units / 64 cubic units / 32 cubic units)
Level 5/5: A square has side lengths of 1/2 cm. What is the area of the square? (2 square cm / 1/4 square cm / 1 square cm)
** The password number for this mission is 1 **

Mission 2 (Puzzle): Correctly answer the questions to complete the shape puzzle. When you have beaten the mission, write down the password number.
Level 1/5: Select the expression that is equivalent to 3 × 2/4. ((3 × 2) ÷ (4 × 1))
Level 2/5: Which box has the greatest volume? (They have the same volume / Cookie box / Cereal box)
Level 3/5: Which expression has the same value as (2 + 3) × (3 + 1)? ((2 × 3) + (3 + 1) / 3 + (2 + 1) + 3 / (2 × 5) + (5 × 2))
Level 4/5: On Monday Rayven wrote 8 1/3 pages of her autobiography. On Tuesday, she wrote 3/4 as many pages. How many pages did she write on Tuesday? (6 1/4 pages / 8 1/4 pages / 9 1/4 pages)
Level 5/5: Which equation matches the model above? (3/7 × 1/9 = 3/63 / 3/7 × 1/9 = 4/16 / 3/7 × 1/9 = 3/56)
** The password number for this mission is 2 **

Mission 3 (Bars): Correctly answer the questions to get rid of the bars. After you have beaten this mission, write down the password number.
Level 1/5: What is the volume of the figure? (1080 cubic inches / 790 cubic inches / 840 cubic inches)
Level 2/5: Aria is solving this expression. What is the final step she would do? (3 × 5 / 15 + 7 / 125 ÷ 25)
Level 3/5: Jolene is multiplying 1/5 × 9/2. Which answer is correct? (The product is greater than 9/2 / The product is less than 1/5 / The product is greater than 1/5)
Level 4/5: The product of 4 and 3 is decreased by 6. ((4 × 3) - 6 / 6 - (4 × 3) / (4 ÷ 3) - 6)
Level 5/5: Collins spent 4 hours last week practicing with the dance cats. Hazel practiced 1/3 of that time. How long did Hazel practice? (1 1/3 hours / 12 hours / 1 hour)
** The password number for this mission is 3 **

Mission 4 (Frog): Correctly answer the questions to help the frog! When you have beaten the mission, write down your password number.
Level 1/3: The volume of the figure is 96 cubic inches. What is the width of the figure? (3 inches / 4 inches / 2 inches)
Level 2/3: Kamden has 4 bags of apples with 6 apples in each bag. He then gives 3 apples away. Which equation matches the scenario above? ((4 × 6) × 3 / (4 × 6) - 3 / (4 + 6) - 3)
Level 3/3: Which equation matches the model above? (4/7 × 4 = 16/7 = 2 2/7 / 4/7 × 4 = 8/7 = 1 1/7 / 3/4 × 4 = 12/4 = 3)
** The password number for this mission is 4 **
A Century-Old Question Is Still Revealing Answers in Fundamental Math - Pennsylvania Digital News

October 9, 2024 | 4 min read

Mathematicians have made lots of recent progress on a question called the Mordell conjecture, which was posed a century ago

Justin Lewis/Getty Images

After German mathematician Gerd Faltings proved the Mordell conjecture in 1983, he was awarded the Fields Medal, often described as the “Nobel Prize of Mathematics.” The conjecture describes the set of conditions under which a polynomial equation in two variables (such as x^2 + y^4 = 4) is guaranteed to have only a finite number of solutions that can be written as a fraction. Faltings’s proof answered a question that had been open since the early 1900s. Furthermore, it opened new mathematical doors to other unanswered questions, many of which researchers are still exploring today. In recent years mathematicians have made tantalizing progress in understanding these offshoots and their implications for even fundamental mathematics.

The proof of the Mordell conjecture concerns the following situation: Suppose that a polynomial equation in two variables defines a curved line. The question at the heart of the Mordell conjecture is: What’s the relationship between the genus of the curve and the number of rational solutions that exist for the polynomial equation that defines it? The genus is a property related to the highest exponent in the polynomial equation describing the curve. It is an invariant property, meaning that it remains the same even when certain operations or transformations are applied to the curve.
The answer to the Mordell conjecture’s central question, it turns out, is that if an algebraic curve is of genus two or greater, there will be a finite number of rational solutions to the polynomial equation. (This number excludes solutions that are just multiples of other solutions.) For genus zero or genus one curves, there can be infinitely many rational solutions. “Just over 100 years ago, Mordell conjectured that this genus controlled the finiteness or infiniteness of rational points on one of these curves,” says Holly Krieger, a mathematician at the University of Cambridge. Consider a point (x, y). If both x and y are numbers that can be written as fractions, then (x, y) is a rational point. For instance, (1/3, 3) is a rational point, but (√2, 3) isn’t. Mordell’s idea meant that “if your genus was sufficiently large, your curve is somehow geometrically complicated,” Krieger says. She gave an invited lecture at the 2024 Joint Mathematics Meetings about the history of the Mordell conjecture and some of the work that has followed it. Faltings’s proof ignited new possibilities for exploring questions that expand on the Mordell conjecture. One such exciting question—the Uniform Mordell-Lang conjecture—was posed in 1986, the same year that Faltings was awarded the Fields Medal. The Uniform Mordell-Lang conjecture, which was formalized by Barry Mazur of Harvard University, was “proved in a series of papers culminating in 2021,” Krieger says. The work of four mathematicians—Vesselin Dimitrov of the California Institute of Technology, Ziyang Gao of the University of California, Los Angeles, and Philipp Habegger of the University of Basel in Switzerland, who were collaborators, and Lars Kühne of University College Dublin, who worked individually—led to proving that conjecture.
For the Uniform Mordell-Lang conjecture, mathematicians have been asking: What happens if you broaden the mathematical discussion to include higher-dimensional objects? What, then, can be said about the relationship between the genus of a mathematical object and the number of associated rational points? The answer, it turns out, is that the upper bound—meaning highest possible number—of rational points associated with a curve or higher-dimensional object such as a surface depends only on the genus of that object. For surfaces, the genus corresponds to the number of holes in the surface. There’s an important caveat, however, according to Dimitrov, Gao and Habegger. “The geometric objects (curves, surfaces, threefolds etc.) [must] be contained inside a very special kind of ambient space, a so-called abelian variety,” they wrote in an e-mail to Scientific American. “An abelian variety is itself also ultimately defined by polynomial equations, but it comes equipped with a group structure. Abelian varieties have many surprising properties and it is somewhat of a miracle that they even exist.” The proof of the Uniform Mordell-Lang conjecture “is not only the resolution of a problem that’s been open for 40 years,” Krieger says. “It touches at the heart of the most basic questions in mathematics.” Those questions are focused on finding rational solutions—ones that can be written as a fraction—to polynomial equations. Such questions are often called Diophantine problems. The Mordell conjecture “is kind of an instance of what it means for geometry to determine arithmetic,” Habegger says. The team’s contribution to proving the Uniform Mordell-Lang conjecture showed “that the number of [rational] points is essentially bounded by the geometry,” he says. Therefore, having proved Uniform Mordell-Lang doesn’t give mathematicians an exact number on how many rational solutions there will be for a given genus. But it does tell them the maximum possible number of solutions. 
The 2021 proof certainly isn’t the final chapter on problems that are offshoots from the Mordell conjecture. “The beauty of Mordell’s original conjecture is that it opens up a world of further questions,” Mazur says. According to Habegger, “the main open question is proving Effective Mordell”—an offshoot of the original conjecture. Solving that problem would mean entering another mathematical realm in which it’s possible to identify exactly how many rational solutions exist for a given scenario. There’s a significant gap to bridge between the information given by having proved the Uniform Mordell-Lang conjecture and actually solving the Effective Mordell problem. Knowing the bound on how many rational solutions there are for a given situation “doesn’t really help you” pin down what those solutions are, Habegger says. “Let’s say you know that the number of solutions is at most a million. And if you only find two solutions, you’ll never know if there are more,” he says. If mathematicians can solve Effective Mordell, that will put them tremendously closer to being able to use a computer algorithm to quickly find all rational solutions rather than having to tediously search for them one by one.
Polynomial calculations bezout - Bezout equation for polynomials clean - cleans matrices (round to zero small entries) cmndred - common denominator form coeff - coefficients of matrix polynomial coffg - inverse of polynomial matrix colcompr - column compression of polynomial matrix degree - degree of polynomial matrix denom - denominator derivat - rational matrix derivative determ - determinant of polynomial matrix detr - polynomial determinant diophant - diophantine (Bezout) equation factors - numeric real factorization gcd - gcd calculation hermit - Hermite form horner - polynomial/rational evaluation hrmt - gcd of polynomials htrianr - triangularization of polynomial matrix invr - inversion of (rational) matrix lcm - least common multiple lcmdiag - least common multiple diagonal factorization ldiv - polynomial matrix long division numer - numerator pdiv - polynomial division pol2des - polynomial matrix to descriptor form pol2str - polynomial to string conversion polfact - minimal factors residu - residue roots - roots of polynomials routh_t - Routh's table rowcompr - row compression of polynomial matrix sfact - discrete time spectral factorization simp - rational simplification simp_mode - toggle rational simplification sylm - Sylvester matrix systmat - system matrix
A post by Harry Tata, PhD student on the Compass programme.

Oligonucleotides in Medicine

Oligonucleotide therapies are at the forefront of modern pharmaceutical research and development, with recent years seeing major advances in treatments for a variety of conditions. Oligonucleotide drugs for Duchenne muscular dystrophy (FDA approved) [1], Huntington’s disease (Phase 3 clinical trials) [2], Alzheimer’s disease [3], and amyotrophic lateral sclerosis (early-phase clinical trials) [4] show their potential for tackling debilitating and otherwise hard-to-treat conditions. With continuing development of synthetic oligonucleotides, analytical techniques such as mass spectrometry must be tailored to these molecules and keep pace with the field. Working in conjunction with AstraZeneca, this project aims to advance methods for impurity detection and quantification in synthetic oligonucleotide mass spectra. In this blog post we apply a regularised version of the Richardson-Lucy algorithm, an established technique for image deconvolution, to oligonucleotide mass spectrometry data. This allows us to attribute signals in the data to specific molecular fragments, and therefore to detect impurities in oligonucleotide synthesis.

Oligonucleotide Fragmentation

If we have attempted to synthesise an oligonucleotide $\mathcal O$ with a particular sequence, we can take a sample from this synthesis and analyse it via mass spectrometry. In this process, molecules in the sample are first fragmented — broken apart into ions — and these charged fragments are then passed through an electromagnetic field. The trajectory of each fragment through this field depends on its mass/charge ratio (m/z), so measuring these trajectories (e.g. by measuring time of flight before hitting some detector) allows us to calculate the m/z of fragments in the sample. This gives us a discrete mass spectrum: counts of detected fragments (intensity) across a range of m/z bins [5].
To get an idea of how much of $\mathcal O$ is in a sample, and what impurities might be present, we first need to consider what fragments $\mathcal O$ will produce. Oligonucleotides are short strands of DNA or RNA; polymers with a backbone of sugars (such as ribose in RNA) connected by linkers (e.g. a phosphodiester bond), where each sugar has an attached base which encodes genetic information. On each monomer, there are two sites where fragmentation is likely to occur: at the linker (backbone cleavage) or between the base and sugar (base loss). Specifically, depending on which bond within the linker is broken, there are four modes of backbone cleavage [7,8]. We include in $\mathcal F$ every product of a single fragmentation of $\mathcal O$ — any of the four backbone cleavage modes or base loss anywhere along the nucleotide — as well as the results of every combination of two fragmentations (different cleavage modes at the same linker are mutually exclusive).

Sparse Richardson-Lucy Algorithm

Suppose we have a chemical sample which we have fragmented and analysed by mass spectrometry. This gives us a spectrum across n bins (each bin corresponding to a small m/z range), and we represent this spectrum with the column vector $\mathbf{b}\in\mathbb R^n$, where $b_i$ is the intensity in the $i^{th}$ bin. For a set $\{f_1,\ldots,f_m\}=\mathcal F$ of possible fragments, let $x_j$ be the amount of $f_j$ that is actually present. We would like to estimate the amounts of each fragment based on the spectrum $\mathbf b$.
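The fragment set $\mathcal F$ described above can be enumerated mechanically. A hypothetical sketch (the strand length, mode labels, and event encoding are my own illustrative choices, not taken from the post):

```python
from itertools import combinations

# Enumerate fragmentation events for a strand of L monomers: base loss at any
# monomer, one of four backbone cleavage modes at any of the L-1 linkers, plus
# every pairwise combination -- except two different cleavage modes at the
# same linker, which are mutually exclusive.

L = 4  # illustrative strand length
CLEAVAGE_MODES = ("mode1", "mode2", "mode3", "mode4")

# Single-fragmentation events, encoded as (kind, site, mode).
base_losses = [("base_loss", i, None) for i in range(L)]
cleavages = [("cleavage", i, m) for i in range(L - 1) for m in CLEAVAGE_MODES]
singles = base_losses + cleavages

def compatible(e1, e2):
    """Forbid two different cleavage modes at the same linker."""
    return not (e1[0] == "cleavage" and e2[0] == "cleavage" and e1[1] == e2[1])

doubles = [pair for pair in combinations(singles, 2) if compatible(*pair)]

print(len(singles), len(doubles))  # 16 single events, 102 compatible pairs
```

In the real setting each event (or pair of events) would map to a fragment with a computable mass, giving one column of the library matrix; even this toy count shows how quickly $\mathcal F$ grows with strand length.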
If we had a sample comprising a unit amount of a single fragment $f_j$, so $x_j=1$ and $x_{k\neq j}=0$, and this produced a spectrum $\begin{pmatrix}a_{1j}&\ldots&a_{nj}\end{pmatrix}^T$, we can say the intensity contributed to bin $i$ by $x_j$ is $a_{ij}.$ In mass spectrometry, the intensity in a single bin due to a single fragment is linear in the amount of that fragment, and the intensities in a single bin due to different fragments are additive, so in some general spectrum we have $b_i=\sum_j x_ja_{ij}.$ By constructing a library matrix $\mathbf{A}\in\mathbb R^{n\times m}$ such that $\{\mathbf A\}_{ij}=a_{ij}$ (so the columns of $\mathbf A$ correspond to fragments in $\mathcal F$), then in ideal conditions the vector of fragment amounts $\mathbf x=\begin{pmatrix}x_1&\ldots&x_m\end{pmatrix}^T$ solves $\mathbf{Ax}=\mathbf{b}$. In practice this exact solution is not found — due to experimental noise and potentially because there are contaminant fragments in the sample not included in $\mathcal F$ — and we instead make an estimate $\mathbf {\hat x}$ for which $\mathbf{A\hat x}$ is close to $\mathbf b$. Note that the columns of $\mathbf A$ correspond to fragments in $\mathcal F$: the values in a single column represent intensities in each bin due to a single fragment only. We $\ell_1$-normalise these columns, meaning the total intensity (over all bins) of each fragment in the library matrix is uniform, and so the values in $\mathbf{\hat x}$ can be directly interpreted as relative abundances of each fragment. The observed intensities — as counts of fragments incident on each bin — are realisations of latent Poisson random variables.
Assuming these variables are i.i.d., it can be shown that the estimate of $\mathbf{x}$ which maximises the likelihood of the system is approximated by the iterative formula $\mathbf {\hat{x}}^{(t+1)}=\left(\mathbf A^T \frac{\mathbf b}{\mathbf{A\hat x}^{(t)}}\right)\odot \mathbf{\hat x}^{(t)}.$ Here, quotients and the operator $\odot$ represent (respectively) elementwise division and multiplication of two vectors. This is known as the Richardson-Lucy algorithm [9]. In practice, when we enumerate oligonucleotide fragments to include in $\mathcal F$, most of these fragments will not actually be produced when the oligonucleotide passes through a mass spectrometer; there is a large space of possible fragments and (beyond knowing what the general fragmentation sites are) no well-established theory allowing us to predict, for a new oligonucleotide, which fragments will be abundant or negligible. This means we seek a sparse estimate, where most fragment abundances are zero. The Richardson-Lucy algorithm, as a maximum likelihood estimate for Poisson variables, is analogous to ordinary least squares regression for Gaussian variables. Likewise lasso regression — a regularised least squares regression which favours sparse estimates, interpretable as a maximum a posteriori estimate with Laplace priors — has an analogue in the sparse Richardson-Lucy algorithm: $\mathbf {\hat{x}}^{(t+1)}=\left(\mathbf A^T \frac{\mathbf b}{\mathbf{A\hat x}^{(t)}}\right)\odot \frac{ \mathbf{\hat x}^{(t)}}{\mathbf 1 + \lambda},$ where $\lambda$ is a regularisation parameter [10].
Library Generation
For each oligonucleotide fragment $f\in\mathcal F$, we smooth and bin the m/z values of the most abundant isotopes of $f$, and store these values in the columns of $\mathbf A$.
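The sparse Richardson-Lucy iteration above can be sketched as follows. The toy data are illustrative; `lam` plays the role of $\lambda$, and a small `eps` guards against division by zero.

```python
# Minimal sketch of the sparse Richardson-Lucy multiplicative update:
# x <- (A^T (b / (A x))) * x / (1 + lambda), applied elementwise.
import numpy as np

def sparse_richardson_lucy(A, b, lam=0.1, n_iter=500, eps=1e-12):
    n, m = A.shape
    x = np.full(m, b.sum() / m)        # positive initial estimate
    for _ in range(n_iter):
        ratio = b / (A @ x + eps)      # elementwise division b / (A x)
        x = (A.T @ ratio) * x / (1.0 + lam)
    return x

# Toy example: two fragments with non-overlapping peaks, only the first present.
A = np.array([[1.0, 0.0],
              [0.0, 0.5],
              [0.0, 0.5]])             # columns l1-normalised
b = np.array([2.0, 0.0, 0.0])          # spectrum produced by fragment 0 alone
x_hat = sparse_richardson_lucy(A, b, lam=0.1)
```

The multiplicative form keeps the estimate non-negative, and the $1+\lambda$ denominator shrinks every component each iteration, driving the abundance of absent fragments to zero; with `lam=0` the update reduces to the standard Richardson-Lucy algorithm.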
However, if these are the only fragments in $\mathcal F$ then impurities will not be identified: the sparse Richardson-Lucy algorithm will try to fit oligonucleotide fragments to every peak in the spectrum, even ones that correspond to fragments not from the target oligonucleotide. Therefore we also include ‘dummy’ fragments corresponding to single peaks in the spectrum — the method will fit these to non-oligonucleotide peaks, showing the locations of any impurities. For a mass spectrum from a sample containing a synthetic oligonucleotide, we generated a library of oligonucleotide and dummy fragments as described above, and applied the sparse Richardson-Lucy algorithm. Below, the model fit is plotted alongside the (smoothed, binned) spectrum and the ten most abundant fragments as estimated by the model. These fragments are represented as bars with binned m/z at the peak fragment intensity, and are separated into oligonucleotide fragments and dummy fragments indicating possible impurities. All intensities and abundances are Anscombe transformed ($x \rightarrow \sqrt{x+3/8}$) for clarity. As the oligonucleotide in question is proprietary, its specific composition and fragmentation is not mentioned here, and the bins plotted have been transformed (without changing the shape of the data) so that individual fragment m/z values are not identifiable. We see the data is fit extremely closely, and that the spectrum is quite clean: there is one very pronounced peak roughly in the middle of the m/z range. This peak corresponds to one of the oligonucleotide fragments in the library, although there is also an abundant dummy fragment slightly to the left, inside the main peak. Fragment intensities in the library matrix are smoothed, and it may be the case that the smoothing here is inappropriate for the observed peak, hence other fragments being fit at the peak edge. Investigating these effects is a target for the rest of the project.
We also see several smaller peaks, most of which are modelled with oligonucleotide fragments. One of these peaks, at approximately bin 5352, has a noticeably worse fit if dummy fragments are excluded from the library matrix (see below). Using dummy fragments improves this fit and indicates a possible impurity. Going forward, understanding and quantification of these impurities will be improved by including other common fragments in the library matrix, and by grouping fragments which correspond to the same molecules.
[1] Junetsu Igarashi, Yasuharu Niwa, and Daisuke Sugiyama. “Research and Development of Oligonucleotide Therapeutics in Japan for Rare Diseases”. In: Future Rare Diseases 2.1 (Mar. 2022), FRD19.
[2] Karishma Dhuri et al. “Antisense Oligonucleotides: An Emerging Area in Drug Discovery and Development”. In: Journal of Clinical Medicine 9.6 (June 2020), p. 2004.
[3] Catherine J. Mummery et al. “Tau-Targeting Antisense Oligonucleotide MAPTRx in Mild Alzheimer’s Disease: A Phase 1b, Randomized, Placebo-Controlled Trial”. In: Nature Medicine (Apr. 24, 2023), pp. 1–11.
[4] Benjamin D. Boros et al. “Antisense Oligonucleotides for the Study and Treatment of ALS”. In: Neurotherapeutics: The Journal of the American Society for Experimental NeuroTherapeutics 19.4 (July 2022), pp. 1145–1158.
[5] Ingvar Eidhammer et al. Computational Methods for Mass Spectrometry Proteomics. John Wiley & Sons, Feb. 28, 2008. 299 pp.
[6] Harri Lönnberg. Chemistry of Nucleic Acids. De Gruyter, Aug. 10, 2020.
[7] S. A. McLuckey, G. J. Van Berkel, and G. L. Glish. “Tandem Mass Spectrometry of Small, Multiply Charged Oligonucleotides”. In: Journal of the American Society for Mass Spectrometry 3.1 (Jan. 1992), pp. 60–70.
[8] Scott A. McLuckey and Sohrab Habibi-Goudarzi. “Decompositions of Multiply Charged Oligonucleotide Anions”. In: Journal of the American Chemical Society 115.25 (Dec. 1, 1993), pp. 12085–12095.
[9] Mario Bertero, Patrizia Boccacci, and Valeria Ruggiero. Inverse Imaging with Poisson Data: From Cells to Galaxies. IOP Publishing, Dec. 1, 2018.
[10] Elad Shaked, Sudipto Dolui, and Oleg V. Michailovich. “Regularized Richardson-Lucy Algorithm for Reconstruction of Poissonian Medical Images”. In: 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. Mar. 2011, pp. 1754–1757.
Compass students attending AISTATS 2023 in Valencia
We (Ed Davis, Josh Givens, Alex Modell, and Hannah Sansford) attended the 2023 AISTATS conference in Valencia in order to explore the interesting research being presented as well as present some of our own work. While we talked about our work being published at the conference in this earlier blog post, having now attended the conference, we thought we’d talk about our experience there. We’ll spotlight some of the talks and posters which interested us most and talk about our highlights of Valencia as a whole.
Talks & Posters
Mode-Seeking Divergences: Theory and Applications to GANs
One especially interesting talk and poster at the conference was presented by Cheuk Ting Li on their work in collaboration with Farzan Farnia. This work aims to set up a formal classification for various probability measure divergences (such as f-divergences, Wasserstein distance, etc.) in terms of their mode-seeking or mode-covering properties. By mode-seeking/mode-covering we mean the behaviour of the divergence when used to fit a unimodal distribution to a multimodal target. Specifically, a mode-seeking divergence will encourage the fitted distribution to cover just one of the modes, ignoring the others, while a mode-covering divergence will encourage the distribution to cover all modes, leading to less accurate fitting on an individual mode but better coverage of the full support of the target.
While these notions of mode-seeking and mode-covering divergences had been discussed before, up to this point there had been no formal definition of these properties, and disagreement remained on the appropriate categorisation of some divergences. This work presents such a definition and uses it to categorise many of the popular divergences. Additionally, they show that an additive combination of a mode-seeking f-divergence and the 1-Wasserstein distance retains the mode-seeking property of the f-divergence while being implementable using only samples from our target distribution (rather than knowledge of the distribution itself), making it a desirable divergence for use with GANs.
Talk: https://youtu.be/F7LdHIzZQow
Paper: https://proceedings.mlr.press/v206/ting-li23a.html
Using Sliced Mutual Information to Study Memorization and Generalization in Deep Neural Networks
The benefit of attending large conferences like AISTATS is having the opportunity to hear talks that are not related to your main research topic. One such talk was given by Wongso et al. Although it did not overlap with any of our main research areas, we all found it very interesting. The talk was on the topic of tracking memorisation and generalisation in deep neural networks (DNNs) through the use of sliced mutual information. Mutual information (MI) is commonly used in information theory and represents the reduction of uncertainty about one random variable given knowledge of the other. However, MI is hard to estimate in high dimensions, which makes it a prohibitive metric for use in neural networks. Enter sliced mutual information (SMI). This metric is the average of the MI between one-dimensional projections of the two random variables. The main difference between SMI and MI is that SMI remains tractable to estimate in high dimensions. Next, let’s talk about memorisation.
Memorisation is known to occur in DNNs: a network can fit random or noisy labels in training by memorising them, which leads to poor generalisation. The authors demonstrate this behaviour by fitting a multi-layer perceptron to the MNIST dataset with various amounts of label noise. As the noise increased, the difference between the training and test accuracy became greater. As the label noise increases, the MI between the features and target variable does not change, meaning that MI did not track the loss in generalisation. However, the authors show that SMI did track the generalisation: as the label noise increased, the SMI decreased significantly as the MLP’s generalisation got worse. Their main theorem shows that SMI is lower-bounded by a term which includes the spherical soft-margin separation, a quantity which is used to track memorisation and generalisation! In summary, unlike MI, SMI can track memorisation and generalisation in DNNs. If you’d like to know more, you can find the full paper here: https://proceedings.mlr.press/v206/wongso23a.html.
Invited Speakers and the Test of Time Award
As well as the talks on papers that had been selected for oral presentation, each day began with a (longer) invited talk which, for many of us, was a highlight of the day. The invited speakers were extremely engaging and covered varied and interesting topics; from Arthur Gretton (UCL) presenting ‘Causal Effect Estimation with Context and Confounders’ to Shakir Mohamed (DeepMind) presenting ‘Elevating our Evaluations: Technical and Sociotechnical Standards of Assessment in Machine Learning’. A favourite amongst us was a talk from Tamara Broderick (MIT) titled ‘An Automatic Finite-Sample Robustness Check: Can Dropping a Little Data Change Conclusions?’.
In this talk she addressed a worry that researchers might have when the goal is to analyse a data sample and apply any conclusions to a new population: was a small proportion of the data sample instrumental to the original conclusion? Tamara and collaborators propose a method to assess the sensitivity of statistical conclusions to the removal of a very small fraction of the data set. They find that sensitivity is driven by a signal-to-noise ratio in the inference problem, does not disappear asymptotically, and is not explained by misspecification. In experiments they find that many data analyses are robust, but that the conclusions of several influential economics papers can be changed by removing (much) less than 1% of the data! A link to the talk can be found here: https://youtu.be/QYtIEqlwLHE
This year, AISTATS featured a Test of Time Award to recognise a paper from 10 years ago that has had a prominent impact in the field. It was awarded to Andreas Damianou and Neil Lawrence for the paper ‘Deep Gaussian Processes’, and their talk was a definite highlight of the conference. Many of us had seen Neil speak at a seminar at the University of Bristol last year and, being the engaging speaker he is, we were looking forward to hearing him speak again. Rather than focussing on the technical details of the paper, Neil’s talk concentrated on his (and the machine learning community’s) research philosophy in the years preceding the paper, and how the paper came about – a very interesting insight, and a refreshing break from the many technical talks!
There was so much to like about Valencia even from our short stay there. We’ll try and give you a very brief highlight of our favourite things.
Food & Drink: Obviously Valencia is renowned for being the birthplace of paella, and while the paella was good we sampled many other delights during our stay.
Our collective highlight was the nicest burrata any of us had ever had which, in a stunning display of individualism, all four of us decided to get on our first day at the conference.
Artist rendition of our 4 meals.
About half an hour’s tram ride from the conference centre are the beaches of Valencia. These stretch for miles, as well as having a good 100m in depth, with (surprisingly hot) sand covering the lot. We visited these after the end of the conference on the Thursday and, despite it being the only cloudy day of the week, it was a perfect way to relax at the end of a hectic few days, with the pleasantly temperate water being an added bonus. Valencia has so much interesting architecture scattered around the city centre. One of the most remarkable places was the San Nicolás de Bari and San Pedro Mártir (Church of San Nicolás), which is known as the Sistine Chapel of Valencia (according to the audio-guide for the church at least), with its incredible painted ceiling and live organ playing.
Ceiling of the Church of San Nicolás
Student Perspectives: Intro to Recommendation Systems
A post by Hannah Sansford, PhD student on the Compass programme.
Like many others, I interact with recommendation systems on a daily basis; from which toaster to buy on Amazon, to which hotel to book on booking.com, to which song to add to a playlist on Spotify. They are everywhere. But what is really going on behind the scenes?
Recommendation systems broadly fit into two main categories:
1) Content-based filtering. This approach uses the similarity between items to recommend items similar to what the user already likes. For instance, if Ed watches two hair tutorial videos, the system can recommend more hair tutorials to Ed.
2) Collaborative filtering. This approach uses the similarity between users’ past behaviour to provide recommendations.
So, if Ed has watched similar videos to Ben in the past, and Ben likes a cute cat video, then the system can recommend the cute cat video to Ed (even if Ed hasn’t seen any cute cat videos). Both systems aim to map each item and each user to an embedding vector in a common low-dimensional embedding space $E = \mathbb{R}^d$. That is, the dimension of the embeddings ($d$) is much smaller than the number of items or users. The hope is that the position of these embeddings captures some of the latent (hidden) structure of the items/users, and so similar items end up ‘close together’ in the embedding space. What is meant by being ‘close’ may be specified by some similarity measure.
Collaborative filtering
In this blog post we will focus on the collaborative filtering system. We can break it down further depending on the type of data we have:
1) Explicit feedback data: aims to model relationships using explicit data such as user-item (numerical) ratings.
2) Implicit feedback data: analyses relationships using implicit signals such as clicks, page views, purchases, or music streaming play counts. This approach makes the assumption that, if a user listens to a song, for example, they must like it.
The majority of the data on the web comes from implicit feedback data, hence there is a strong demand for recommendation systems that take this form of data as input. Furthermore, this form of data can be collected at a much larger scale and without the need for users to provide any extra input. The rest of this blog post will assume we are working with implicit feedback data.
Problem Setup
Suppose we have a group of $n$ users $U = (u_1, \ldots, u_n)$ and a group of $m$ items $I = (i_1, \ldots, i_m)$. Then we let $\mathbf{R} \in \mathbb{R}^{n \times m}$ be the ratings matrix where position $R_{ui}$ represents whether user $u$ interacts with item $i$.
Note that, in most cases, the matrix $\mathbf{R}$ is very sparse, since most users only interact with a small subset of the full item set $I$. For any items $i$ that user $u$ does not interact with, we set $R_{ui}$ equal to zero. To be clear, a value of zero does not imply the user does not like the item, but that they have not interacted with it. The final goal of the recommendation system is to find the best recommendations for each user of items they have not yet interacted with.
Matrix Factorisation (MF)
A simple model for finding user embeddings, $\mathbf{X} \in \mathbb{R}^{n \times d}$, and item embeddings, $\mathbf{Y} \in \mathbb{R}^{m \times d}$, is matrix factorisation. The idea is to find low-rank embeddings such that the product $\mathbf{XY}^\top$ is a good approximation to the ratings matrix $\mathbf{R}$ by minimising some loss function on the known ratings. A natural loss function to use would be the squared loss, i.e. $L(\mathbf{X}, \mathbf{Y}) = \sum_{u, i} \left(R_{ui} - \langle X_u, Y_i \rangle \right)^2.$ This corresponds to minimising the Frobenius distance between $\mathbf{R}$ and its approximation $\mathbf{XY}^\top$, and can be solved easily using the singular value decomposition $\mathbf{R} = \mathbf{U} \mathbf{S} \mathbf{V}^\top$. Once we have our embeddings $\mathbf{X}$ and $\mathbf{Y}$, we can look at the row of $\mathbf{XY}^\top$ corresponding to user $u$ and recommend the items corresponding to the highest values (that they haven’t already interacted with).
Logistic MF
Minimising the loss function in the previous section is equivalent to modelling the probability that user $u$ interacts with item $i$ as the inner product $\langle X_u, Y_i \rangle$, i.e. $R_{ui} \sim \text{Bernoulli}(\langle X_u, Y_i \rangle),$ and maximising the likelihood over $\mathbf{X}$ and $\mathbf{Y}$.
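The SVD-based matrix factorisation described above can be sketched as follows; the interaction matrix and embedding dimension are illustrative toy choices.

```python
# Toy sketch of matrix factorisation for implicit feedback via truncated SVD.
# R is a small binary interaction matrix; d is the embedding dimension.
import numpy as np

R = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 0., 1., 1.]])      # 3 users x 4 items

d = 2
U, S, Vt = np.linalg.svd(R, full_matrices=False)
X = U[:, :d] * np.sqrt(S[:d])         # user embeddings, n x d
Y = Vt[:d, :].T * np.sqrt(S[:d])      # item embeddings, m x d

scores = X @ Y.T                      # low-rank approximation to R
# Recommend to user 0 the unseen item with the highest score:
unseen = np.where(R[0] == 0)[0]
best = unseen[np.argmax(scores[0, unseen])]
```

Splitting $\sqrt{\mathbf S}$ between the two factors is one common convention; any split $\mathbf{X}\mathbf{Y}^\top = \mathbf{U}\mathbf{S}\mathbf{V}^\top$ gives the same scores.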
In a research paper from Spotify [3], this relationship is instead modelled according to a logistic function parameterised by the sum of the inner product above and user and item bias terms, $\beta_u$ and $\beta_i$, $R_{ui} \sim \text{Bernoulli} \left( \frac{\exp(\langle X_u, Y_i \rangle + \beta_u + \beta_i)}{1 + \exp(\langle X_u, Y_i \rangle + \beta_u + \beta_i)} \right).$
Relation to my research
A recent influential paper [1] proved an impossibility result for modelling certain properties of networks using a low-dimensional inner product model. In my 2023 AISTATS publication [2] we show that, by using a kernel, such as the logistic one in the previous section, to model probabilities, we can capture these properties with embeddings lying on a low-dimensional manifold embedded in infinite-dimensional space. This has various implications, and could explain part of the success of Spotify’s logistic kernel in producing good recommendations.
[1] Seshadhri, C., Sharma, A., Stolman, A., and Goel, A. (2020). The impossibility of low-rank representations for triangle-rich complex networks. Proceedings of the National Academy of Sciences, 117.
[2] Sansford, H., Modell, A., Whiteley, N., and Rubin-Delanchy, P. (2023). Implications of sparsity and high triangle density for graph representation learning. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5449–5473.
[3] Johnson, C. C. (2014). Logistic matrix factorization for implicit feedback data. Advances in Neural Information Processing Systems, 27(78):1–9.
Compass students attending the Workshop on Functional Inference and Machine Intelligence (FIMI) at ISM Tokyo
A post by Compass CDT students Edward Milsom, Jake Spiteri, Jack Simons, and Sam Stockman.
We (Edward Milsom, Jake Spiteri, Jack Simons, Sam Stockman) attended the 2023 Workshop on Functional Inference and Machine Intelligence (FIMI), taking place on the 14th, 15th and 16th of March at the Institute of Statistical Mathematics in Tokyo, Japan. We attended the workshop to further collaborative ties between the two institutions. The in-person participants included many distinguished academics from around Japan as well as our very own Dr Song Liu. Due to the workshop’s modest size, there was an intimate atmosphere which nurtured many productive research discussions. Whilst staying in Tokyo, we inevitably sampled some Japanese culture, from izakayas to cherry blossoms and sumo wrestling! We thought we’d share some of our thoughts and experiences. We’ll first go through some of our most memorable talks, and then talk about some of our activities outside the workshop.
Sho Sonoda – Ridgelet Transforms for Neural Networks on Manifolds and Hilbert Spaces
We particularly enjoyed the talk given by Sho Sonoda, a Research Scientist from the Deep Learning Theory group at RIKEN AIP, on “Ridgelet Transforms for Neural Networks on Manifolds and Hilbert Spaces”. Sonoda’s research aims to demystify the black-box nature of neural networks, shedding light on how they work and their universal approximation capabilities. His talk provided valuable insights into the integral representations of neural networks, and how they can be represented using ridgelet transforms. Sonoda presented a reconstruction formula from which we see that if a neural network can be represented using ridgelet transforms, then it is a universal approximator. He went on to demonstrate that various types of networks, such as those on finite fields, group convolutional neural networks (GCNNs), and networks on manifolds and Hilbert spaces, can be represented in this manner and are thus universal approximators.
Sonoda’s work improves upon existing universality theorems by providing a more unified and direct approach, as opposed to the previous case-by-case methods that relied on manual adjustments of network parameters or indirect conversions of (G)CNNs into other universal approximators, such as invariant polynomials and fully-connected networks. Sonoda’s work is an important step toward a more transparent and comprehensive understanding of neural networks.
Greg Yang – The unreasonable effectiveness of mathematics in large scale deep learning
Greg Yang is a researcher at Microsoft Research who is working on a framework for understanding neural networks called “tensor programs”. Similar to Neural Tangent Kernels and Neural Network Gaussian Processes, the tensor program framework allows us to consider neural networks in the infinite-width limit, where it becomes possible to make statements about the properties of very wide networks. However, tensor programs aim to unify existing work on infinite-width neural networks by allowing one to take the infinite limit of a much wider range of neural network architectures using a single framework. In his talk, Yang discussed his most recent work in this area, concerning the “maximal update parametrisation”. In short, they show that in this parametrisation, the optimal hyperparameters of very wide neural networks are the same as those for much smaller neural networks. This means that hyperparameter search can be done using small, cheap models, and then applied to very large models like GPT-3, where hyperparameter search would be too expensive. The result is summarised in this figure from their paper “Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer”, which shows how this is not possible in the standard parametrisation. This work was only possible by building upon the tensor program framework, thereby demonstrating the value of having a solid theoretical understanding of neural networks.
Statistical Seismology Seminar Series
In addition to the workshop, Sam attended the 88th Statistical Seismology seminar in the Risk Analysis Research Centre at ISM (https://www.ism.ac.jp/~ogata/Ssg/ssg_statsei_seminarsE.html). The Statistical Seismology Research Group at ISM was created by Emeritus Professor Yosihiko Ogata and is one of the leading global research institutes for statistical seismology. Its most significant output has been the Epidemic-Type Aftershock Sequence (ETAS) model, a point-process-based earthquake forecasting model that has been the dominant model for forecasting since its creation by Ogata in 1988. As part of the seminar series, Sam gave a talk on his most recent work (‘Forecasting the 2016-2017 Central Apennines Earthquake Sequence with a Neural Point Process’, https://arxiv.org/abs/2301.09948) to the research group and other visiting academics. Japan’s interest in earthquake science is due to the fact that they record the most earthquakes in the world. The whole country is in a very active seismic area, and they have the densest seismic network. So even though they might not actually have the most earthquakes in the world (which is most likely Indonesia), they certainly document the most. The evening before flying back to the UK, Sam and Jack felt a magnitude 5.2 earthquake 300km north of Tokyo in the Miyagi prefecture. At that distance all that was felt was a small shudder…
It’s safe to say that the abundance of delicious food was the most memorable aspect of our trip. In fact, we never had a bad meal! Our taste buds were taken on a culinary journey as we tried a variety of Japanese dishes. From hearty, broth-based bowls of ramen and tsukemen, to fun conveyor-belt sushi restaurants, and satisfying tonkatsu (breaded deep-fried pork cutlet) with sticky rice or spicy udon noodles, we were never at a loss for delicious options. We even had the opportunity to cook our own food at an indoor barbecue!
Aside from the food, we thoroughly enjoyed our time in Tokyo – exploring the array of second-hand clothes shops, relaxing in bath-houses, and trying random things from the abundance of vending machines.
Compass students at AISTATS 2023
Congratulations to Compass students Josh Givens, Hannah Sansford and Alex Modell who, along with their supervisors, have had their papers accepted to be published at AISTATS 2023.
‘Implications of sparsity and high triangle density for graph representation learning’
Hannah Sansford, Alexander Modell, Nick Whiteley, Patrick Rubin-Delanchy
Hannah: In this paper we explore the implications of two common characteristics of real-world networks, sparsity and triangle-density, for graph representation learning. An example of where these properties arise in the real world is in social networks, where, although the number of connections each individual has compared to the size of the network is small (sparsity), often a friend of a friend is also a friend (triangle-density). Our result counters a recent influential paper that shows the impossibility of simultaneously recovering these properties with finite-dimensional representations of the nodes, when the probability of connection is modelled by the inner product. We, by contrast, show that it is possible to recover these properties using an infinite-dimensional inner-product model, where representations lie on a low-dimensional manifold. One of the implications of this work is that we can ‘zoom in’ to local neighbourhoods of the network, where a lower-dimensional representation is possible. The paper has been selected for oral presentation at the conference in Valencia (<2% of submissions).
‘Density Ratio Estimation and Neyman Pearson Classification with Missing Data’
Josh Givens, Song Liu, Henry W J Reeve
Josh: In our paper we adapt the popular density ratio estimation procedure KLIEP to make it robust to missing not at random (MNAR) data and demonstrate its efficacy in Neyman-Pearson (NP) classification. Density ratio estimation (DRE) aims to characterise the difference between two classes of data by estimating the ratio between their probability densities. The density ratio is a fundamental quantity in statistics, appearing in many settings such as classification, GANs, and covariate shift, making its estimation a valuable goal. To our knowledge there is no prior research into DRE with MNAR data, a missing data paradigm where the likelihood of an observation being missing depends on its underlying value. We propose the estimator M-KLIEP and provide finite sample bounds on its accuracy, which we show to be minimax optimal for MNAR data. To demonstrate the utility of this estimator we apply it to the field of NP classification. In NP classification we aim to create a classifier which strictly controls the probability of incorrectly classifying points from one class. This is useful in any setting where misclassification for one class is much worse than for the other, such as fault detection on a production line, where you would want to strictly control the probability of classifying a faulty item as non-faulty. In addition to showing the efficacy of our new estimator in this setting, we also provide an adaptation to NP classification which allows it to still control this misclassification probability even when fit using MNAR data.
Student Perspectives: An Introduction to Stochastic Gradient Methods
A post by Ettore Fincato, PhD student on the Compass programme.
This post provides an introduction to Gradient Methods in Stochastic Optimisation. This class of algorithms is the foundation of my current research work with Prof. Christophe Andrieu and Dr.
Mathieu Gerber, and finds applications in a great variety of topics, such as regression estimation, support vector machines, and convolutional neural networks. We can see below a simulation by Emilien Dupont (https://emiliendupont.github.io/) which represents two trajectories of an optimisation process of a time-varying function. This well describes the main idea behind the algorithms we will be looking at, that is, using the (stochastic) gradient of a (random) function to iteratively reach the optimum.
Stochastic Optimisation
Stochastic optimisation was introduced by [1], and its aim is to find a scheme for solving equations of the form $\nabla_w g(w)=0$ given “noisy” measurements of $g$ [2]. In the simplest deterministic framework, one can fully determine the analytical form of $g(w)$, know that it is differentiable and admits a unique minimum – hence the problem $w_*=\underset{w}{\text{argmin}}\quad g(w)$ is well defined and solved by $\nabla_w g(w)=0$. On the other hand, one may not be able to fully determine $g(w)$ because the experiment is corrupted by a random noise. In such cases, it is common to identify this noise with a random variable, say $V$, consider an unbiased estimator $\eta(w,V)$ s.t. $\mathbb{E}_V[\eta(w,V)]=g(w)$, and to rewrite the problem as solving $\mathbb{E}_V[\eta(w,V)]=0$.
Student Perspectives: An Introduction to Deep Kernel Machines
A post by Edward Milsom, PhD student on the Compass programme.
This blog post provides a simple introduction to Deep Kernel Machines [1] (DKMs), a novel supervised learning method that combines the advantages of both deep learning and kernel methods. This work provides the foundation of my current research on convolutional DKMs, which is supervised by Dr Laurence Aitchison.
Why aren’t kernels cool anymore?
Kernel methods were once top-dog in machine learning due to their ability to implicitly map data to complicated feature spaces, where the problem usually becomes simpler, without ever explicitly computing the transformation.
However, in the past decade deep learning has become the new king for complicated tasks like computer vision and natural language processing.
Neural networks are flexible when learning representations
The reason is twofold: First, neural networks have millions of tunable parameters that allow them to learn their feature mappings automatically from the data, which is crucial for domains like images which are too complex for us to specify good, useful features by hand. Second, their layer-wise structure means these mappings can be built up to increasingly more abstract representations, while each layer itself is relatively simple [2]. For example, trying to learn a single function that takes in pixels from pictures of animals and outputs their species is difficult; it is easier to map pixels to corners and edges, then shapes, then body parts, and so on.
Kernel methods are rigid when learning representations
It is therefore notable that classical kernel methods lack these characteristics: most kernels have a very small number of tunable hyperparameters, meaning their mappings cannot flexibly adapt to the task at hand, leaving us stuck with a feature space that, while complex, might be ill-suited to our problem.
Student perspectives: ensemble modelling for probabilistic volcanic ash hazard forecasting
A post by Shannon Williams, PhD student on the Compass programme.
My PhD focuses on the application of statistical methods to volcanic hazard forecasting. This research is jointly supervised by Professor Jeremy Philips (School of Earth Sciences) and Professor Anthony Lee.
Student perspectives: Neural Point Processes for Statistical Seismology
A post by Sam Stockman, PhD student on the Compass programme.
Throughout my PhD I aim to bridge a gap between advances made in the machine learning community and the age-old problem of earthquake forecasting.
In this cross-disciplinary work with Max Werner from the School of Earth Sciences and Dan Lawson from the School of Mathematics, I hope to create more powerful, efficient and robust models for forecasting that can make earthquake-prone areas safer for their inhabitants. For years seismologists have sought to model the structure and dynamics of the earth in order to make predictions about earthquakes. They have mapped out the structure of fault lines and conducted experiments in the lab where they subject rock to great amounts of force in order to simulate plate tectonics on a small scale. Yet when trying to forecast earthquakes on a short time scale (that's hours and days, not tens of years), these models based on knowledge of the underlying physics are regularly outperformed by models that are statistically motivated. In statistical seismology we seek to make predictions by looking at the distributions of the times, locations and magnitudes of earthquakes, and use them to forecast the future.

Student Perspectives: Application of Density Ratio Estimation to Likelihood-Free problems

A post by Jack Simons, PhD student on the Compass programme. I began my PhD with my supervisors, Dr Song Liu and Professor Mark Beaumont, with the intention of combining their respective fields of research, Density Ratio Estimation (DRE) and Simulation Based Inference (SBI):
• DRE is a rapidly growing paradigm in machine learning which (broadly) provides efficient methods of comparing densities without the need to compute each density individually. For a comprehensive yet accessible overview of DRE in machine learning, see [1].
• SBI is a group of methods which seek to solve Bayesian inference problems when the likelihood function is intractable. For a concise overview of the current work, as well as motivation, I recommend [2].
Last year we released a paper, Variational Likelihood-Free Gradient Descent [3], which combined these fields.
This blog post seeks to condense, and make more accessible, the contents of the paper.

Motivation: Likelihood-Free Inference

Let's begin by introducing likelihood-free inference. We wish to do inference on the posterior distribution of parameters $\theta$ for a specific observation $x=x_{\mathrm{obs}}$, i.e. we wish to infer $p(\theta|x_{\mathrm{obs}})$, which can be decomposed via Bayes' rule as $p(\theta|x_{\mathrm{obs}}) = \frac{p(x_{\mathrm{obs}}|\theta)p(\theta)}{\int p(x_{\mathrm{obs}}|\theta)p(\theta) \mathrm{d}\theta}.$ The likelihood-free setting is that, in addition to the usual intractability of the normalising constant in the denominator, the likelihood $p(x|\theta)$ is also intractable. In lieu of this, we require an implicit likelihood which describes the relation between data $x$ and parameters $\theta$ in the form of a forward model/simulator (hence simulation-based inference!).
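Although the paper develops a variational gradient-based method, the basic likelihood-free setting can be illustrated with the simplest simulation-based scheme, rejection ABC. Everything below — the Gaussian simulator, the uniform prior, and the tolerance — is an illustrative assumption, not the method of [3]:

```python
import random

def simulator(theta):
    # Implicit likelihood: we can draw x ~ p(x | theta) but cannot evaluate the density
    return random.gauss(theta, 1.0)

def rejection_abc(x_obs, n_draws=20000, eps=0.1):
    # Draw theta from the prior, simulate x, keep theta whenever x lands near x_obs
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-5.0, 5.0)  # flat prior on [-5, 5]
        if abs(simulator(theta) - x_obs) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
samples = rejection_abc(x_obs=2.0)
print(len(samples), sum(samples) / len(samples))  # posterior mean lands near x_obs
```

The accepted parameters are approximate posterior draws; shrinking `eps` improves the approximation at the cost of fewer acceptances, which is precisely the inefficiency that more sophisticated likelihood-free methods aim to avoid.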
What is Hierarchical Clustering? An Introduction to Hierarchical Clustering

Contributed by: Satish Rajendran LinkedIn Profile: https://www.linkedin.com/in/satish-rajendran85/

What is Hierarchical Clustering

Clustering is one of the popular techniques used to create homogeneous groups of entities or objects: for a given set of data points, it groups them into a number of clusters so that similar data points end up close together. In most analytical projects, after data cleaning and preparation, clustering is often carried out before predictive or other analytical modeling. Clustering falls under the category of unsupervised learning, meaning there is no labeled class or target variable for the dataset; we are only interested in grouping similar records or objects into clusters. Let us try to understand clustering by taking a retail case study. Suppose a leading retail chain wants to segregate customers into 3 categories – low-income group (LIG), medium-income group (MIG), and high-income group (HIG) – based on its sales and customer data, for better marketing strategies. Data is available for all customers, and the objective is to form 3 different groups of customers; we can achieve this with clustering techniques. In the accompanying figure, pink, blue, and yellow circles are the data points grouped into the 3 clusters LIG, MIG, and HIG, each containing a homogeneous group of customers. Now that we have a fair idea about clustering, it's time to understand hierarchical clustering. Hierarchical clustering creates clusters in a hierarchical, tree-like structure (also called a dendrogram): a subset of similar data is created in a tree-like structure in which the root node corresponds to the entire data, and branches are created from the root node to form several clusters.
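To make the tree-building concrete, here is a toy bottom-up sketch in pure Python (a hand-rolled single-linkage merge written for illustration — a real project would use scipy.cluster.hierarchy or sklearn's AgglomerativeClustering). Starting from one cluster per point, it repeatedly merges the two closest clusters; the merge history is exactly what a dendrogram draws:

```python
def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def single_linkage(c1, c2):
    # Distance between clusters = distance between their two closest members
    return min(euclidean(p, q) for p in c1 for q in c2)

def agglomerative(points):
    # Every observation starts as its own cluster
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        # Find and merge the closest pair of clusters
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: single_linkage(clusters[ij[0]], clusters[ij[1]]),
        )
        merged = clusters[i] + clusters[j]
        merges.append(merged)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges  # the last merge is the root: one cluster holding every point

points = [(0, 0), (0, 1), (5, 5), (5, 6)]
history = agglomerative(points)
print(history[-1])  # final cluster contains all four points
```

For N points the loop performs N − 1 merges, mirroring the levels of the dendrogram from the leaves up to the root.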
Hierarchical Clustering is of two types:
1. Divisive
2. Agglomerative

Divisive hierarchical clustering is also termed a top-down approach. In this technique, the entire data is assigned to a single cluster, which is then split repeatedly until there is one cluster per observation. Agglomerative hierarchical clustering is popularly known as a bottom-up approach, wherein each observation starts as its own cluster; pairs of clusters are combined until all clusters are merged into one big cluster that contains all the data. The two algorithms are exact opposites of each other, so we will cover the agglomerative algorithm in detail.

How the Agglomerative Hierarchical Clustering Algorithm Works

For a set of N observations to be clustered:
1. Start by assigning each observation to its own single-point cluster, so that N observations give N clusters, each containing just one observation.
2. Find the closest (most similar) pair of clusters and merge them into one; we now have N-1 clusters. Similarity and dissimilarity can be measured in various ways (explained in a later section).
3. Again find the two closest clusters and merge them; we now have N-2 clusters. The cluster-to-cluster distance is defined by an agglomerative linkage technique (explained in a later section).
4. Repeat steps 2 and 3 until all observations are merged into one single cluster of size N.

Clustering algorithms use various distance or dissimilarity measures to develop different clusters. A lower (closer) distance indicates that observations are similar and will be grouped into a single cluster; the higher the similarity, the more alike the observations. Step 2 can be done with any of several similarity and dissimilarity measures.
Namely:
• Euclidean Distance
• Manhattan Distance
• Minkowski Distance
• Jaccard Similarity Coefficient
• Cosine Similarity
• Gower's Similarity Coefficient

Euclidean Distance

The Euclidean distance is the most widely used distance measure when the variables are continuous (either interval or ratio scale). The Euclidean distance between two points is the length of the straight-line segment connecting them – the most evident way of representing the distance between two points – and it can be derived from the Pythagorean theorem. For two points (x1, y1) and (x2, y2) in 2-dimensional space, d = sqrt((x2 - x1)^2 + (y2 - y1)^2).

Manhattan Distance

Euclidean distance may not be suitable for measuring travel distance between locations. To measure the distance between two retail stores in a city, the Manhattan distance is often more suitable: it is the distance between two points along a strictly horizontal and vertical grid path, i.e. the simple sum of the horizontal and vertical components. In a nutshell, Manhattan distance is the distance you would cover if you could only travel along the coordinate axes.

Minkowski Distance

The Minkowski distance between two points X = (x1, ..., xn) and Y = (y1, ..., yn) is defined as d(X, Y) = (sum_i |xi - yi|^p)^(1/p). When p = 1, the Minkowski distance is equivalent to the Manhattan distance, and when p = 2 it is equivalent to the Euclidean distance.

Jaccard Similarity Coefficient/Jaccard Index

The Jaccard similarity coefficient can be used when your data or variables are qualitative in nature.
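Before turning to the similarity coefficients, the three numeric distances just described can be checked with a single function, since Euclidean and Manhattan are the p = 2 and p = 1 cases of Minkowski (the sample points are made up for illustration):

```python
def minkowski(x, y, p):
    # Minkowski distance: (sum_i |x_i - y_i|^p)^(1/p)
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = (1, 2), (4, 6)

# p = 2 recovers the Euclidean (straight-line) distance: sqrt(3^2 + 4^2) = 5
print(minkowski(x, y, 2))
# p = 1 recovers the Manhattan (grid) distance: |3| + |4| = 7
print(minkowski(x, y, 1))
```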
In particular, it is used when the variables are represented in binary form such as (0, 1) or (Yes, No). The Jaccard index is J(X, Y) = |X n Y| / |X u Y|, where |X n Y| is the number of elements belonging to both X and Y, and |X u Y| is the number of elements belonging to either X or Y. We will try to understand this with an example; note that the data must be transformed into binary form before applying the Jaccard index. Consider two stores, Store 1 and Store 2, where each item sold is an element, and suppose bread, jam, coke and cake are sold by both stores: a 1 is then assigned to those items for both stores. The Jaccard index ranges from 0 to 1, and the higher the index, the higher the similarity.

Cosine Similarity

Let A and B be two vectors for comparison. Using the cosine measure as a similarity function, we have cos(A, B) = (A . B) / (||A|| ||B||). Cosine similarity values range between -1 and 1; the lower the cosine similarity, the lower the similarity between the two observations. As an example, consider two customers rating shirt brands on a 5-point scale:

Brand        Allen Solly  Arrow  Peter England  US Polo  Van Heusen  Zodiac
Customer 1        4          5          3           5          2         1
Customer 2        1          2          4           3          3         5

Gower's Similarity Coefficient

If the data contains both qualitative and quantitative variables, then none of the above distance and similarity measures apply, as each is valid only for qualitative or only for quantitative variables. Gower's similarity coefficient can be used when data contains both qualitative and quantitative variables.

Agglomerative Clustering Linkage Algorithm (Cluster Distance Measure)

This technique is used for combining two clusters. Note that it measures the distance between clusters, not between individual observations.

How to Find the Optimal Number of Clusters

One of the challenging tasks in agglomerative clustering is to find the optimal number of clusters. The Silhouette Score is one of the popular approaches for deciding on the optimal number of clusters.
The Silhouette Score is a way to measure how close each point in a cluster is to the points in its neighboring clusters. Let a(i) be the mean distance between observation i and the other points in the cluster to which i is assigned, and let b(i) be the minimum mean distance between observation i and the points in any other cluster. The silhouette score of observation i is then s(i) = (b(i) - a(i)) / max(a(i), b(i)). The Silhouette Score ranges from -1 to +1; a higher value indicates that observations are well clustered, and a score near 1 indicates that observation i is well matched to its cluster assignment. This brings us to the end of the blog.
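The definitions of a(i) and b(i) translate directly into code. This is a minimal pure-Python sketch (in practice sklearn.metrics.silhouette_score does this for you; the four points and two clusters below are made up for illustration):

```python
def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def silhouette(i, labels, points):
    own = labels[i]
    # a(i): mean distance from point i to the other points of its own cluster
    same = [euclidean(points[i], p) for j, p in enumerate(points)
            if labels[j] == own and j != i]
    a = sum(same) / len(same)
    # b(i): smallest mean distance from point i to the points of any other cluster
    b = min(
        sum(euclidean(points[i], p) for j, p in enumerate(points) if labels[j] == c)
        / labels.count(c)
        for c in set(labels) if c != own
    )
    return (b - a) / max(a, b)

points = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = [0, 0, 1, 1]
scores = [silhouette(i, labels, points) for i in range(len(points))]
print(scores)  # all close to 1: tight, well-separated clusters
```

Averaging these per-point scores over the whole dataset, for each candidate number of clusters, gives the curve used to pick the optimal cluster count.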
1. Revisit the all-weather portfolio you crafted. Create the maximum Sharpe portfolio's daily return dataframe and then merge it with Fama French's five return factors.

# Required Libraries
import pandas as pd
import pandas_datareader as pdr
import datetime
import yfinance as yf
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Define the list of stock tickers
tickers = ['AAPL', 'V', 'XOM', 'KO', 'AEP']

# Fetch historical data for the past 5 years
end_date = datetime.datetime.now()
start_date = end_date - datetime.timedelta(days=5*365)
data = yf.download(tickers, start=start_date, end=end_date)['Adj Close']

# Calculate daily returns for each stock
stock_returns = data.pct_change().dropna()

# Calculate annualized mean return and standard deviation of each stock
mean_daily_returns = stock_returns.mean()
mean_annual_returns = mean_daily_returns * 252
std_deviation = stock_returns.std() * np.sqrt(252)

# Portfolio weights for the maximum Sharpe portfolio (calculated in a previous datalab)
max_sharpe_weights = np.array([0.92166, 0.0, 0.03532, 0.0, 0.04302])

# Calculate daily portfolio returns
daily_max_sharpe_returns = stock_returns.dot(max_sharpe_weights)

# Create DataFrame
df_portfolios = pd.DataFrame({'Max_Sharpe': daily_max_sharpe_returns})

# Calculate cumulative returns
cumulative_returns = (1 + df_portfolios).cumprod()

# Put the portfolio and factor returns side-by-side in a DataFrame
ff_data = pd.read_csv('F-F_Research_Data_5_Factors_2x3_daily.CSV', skiprows=3)
ff_data.set_index('Unnamed: 0', inplace=True)
ff_data.index = pd.to_datetime(ff_data.index.astype(str), format='%Y%m%d')
merged_data = df_portfolios.merge(ff_data, left_index=True, right_index=True)
print(merged_data)

2. Examine visually the correlation between portfolio and factor returns.
factors = ['Mkt-RF', 'SMB', 'HML', 'RMW', 'CMA']
for factor in factors:
    plt.figure(figsize=(10, 6))
    plt.scatter(merged_data['Max_Sharpe'], merged_data[factor])
    plt.title(f'Correlation between Max_Sharpe and {factor}')
    plt.xlabel('Max_Sharpe Returns')
    plt.ylabel(f'{factor} Returns')
    plt.grid(True)
    plt.show()

import seaborn as sns

# Compute the correlation matrix, dropping any non-numeric identifier columns
# (drop() with inplace=True returns None, so assign the result instead)
k = merged_data.drop(['Ticker', 'Year'], axis=1, errors='ignore')
corr = k.corr()

# Draw the heatmap
plt.figure(figsize=(10, 8))
sns.heatmap(corr, annot=True, cmap="coolwarm", center=0)
plt.title('Correlation Heatmap')
plt.show()

3. Regress the portfolio return on each factor and assess the portfolio's sensitivity to each factor. For the curious, optional challenge: how do you test whether the intercept (i.e., alpha) is significantly different from the risk-free rate for a single-factor regression?

factors = ['Mkt-RF', 'SMB', 'HML', 'RMW', 'CMA']
sensitivities = {}
for factor in factors:
    # Set up the independent variable with a constant term
    X = sm.add_constant(merged_data[factor])
    # Run the regression
    model = sm.OLS(merged_data['Max_Sharpe'], X)
    results = model.fit()
    # Store the coefficient (sensitivity) of the factor
    sensitivities[factor] = results.params[factor]
    print(f"\nRegression results for factor {factor}:")
    print(results.summary())

print("\nSensitivities of Portfolio to Each Factor:")
for factor, sensitivity in sensitivities.items():
    print(f"{factor}: {sensitivity:.4f}")

4. Regress the portfolio return on all factors and assess the portfolio's sensitivity to factors. For the curious, optional challenge: how do you test whether the intercept (i.e., alpha) is significantly different from the risk-free rate for a multi-factor regression?
import statsmodels.api as sm

X = sm.add_constant(merged_data[['Mkt-RF', 'SMB', 'HML', 'RMW', 'CMA']])
model = sm.OLS(merged_data['Max_Sharpe'], X).fit()
b0, b1, b2, b3, b4, b5 = model.params
print('Intercept (Alpha): %f' % b0)
print('Sensitivities of active returns to factors:\nMkt-RF: %f\nSMB: %f\nHML: %f\nRMW: %f\nCMA: %f' % (b1, b2, b3, b4, b5))

X_multi = merged_data[['Mkt-RF', 'SMB', 'HML', 'RMW', 'CMA']]
X_multi = sm.add_constant(X_multi)  # Adds a constant term to the predictors
model_multi = sm.OLS(merged_data['Max_Sharpe'], X_multi)
results_multi = model_multi.fit()
print(results_multi.summary())

5. Optional Bonus. Construct a multi-factor pricing model for assets based on Arbitrage Pricing Theory. The Arbitrage Pricing Theory (APT) is a theory of asset pricing that holds that an asset's returns can be forecasted with the linear relationship between the asset's expected returns and the macroeconomic (e.g., GDP, changes in inflation, yield curve changes, changes in interest rates, market sentiment, exchange rates) or firm-specific statistical factors that affect the asset's risk. Hint: You can draw these variables straight into your Jupyter notebook via the Refinitiv API. The APT is a substitute for the Capital Asset Pricing Model (CAPM) in that both assert a linear relation between assets' expected returns and their covariance with other random variables. (In the CAPM, the covariance is with the market portfolio's return.) The covariance is interpreted as a measure of risk that investors cannot avoid by diversification. The slope coefficient in the linear relation between the expected returns and the covariance is interpreted as a risk premium – "Arbitrage Pricing Theory" (Huberman and Wang 2005).
Subexponential-time Algorithms for Maximum Independent Set in P_t-free and Broom-free Graphs

In algorithmic graph theory, a classic open question is to determine the complexity of the Maximum Independent Set problem on P_t-free graphs, that is, on graphs not containing any induced path on t vertices. So far, polynomial-time algorithms are known only for t ≤ 5 [Lokshtanov et al., SODA 2014, 570--581], and an algorithm for t = 6 was announced recently [Grzesik et al., arXiv 1707.05491, 2017]. Here we study the existence of subexponential-time algorithms for the problem: we show that for any t ≥ 1, there is an algorithm for Maximum Independent Set on P_t-free graphs whose running time is subexponential in the number of vertices. Even for the weighted version MWIS, the problem is solvable in 2^{O(√(t n log n))} time on P_t-free graphs. For approximation of MIS in broom-free graphs, a similar time bound is proved. Scattered Set is the generalization of Maximum Independent Set where the vertices of the solution are required to be at distance at least d from each other. We give a complete characterization of those graphs H for which d-Scattered Set on H-free graphs can be solved in time subexponential in the size of the input (that is, in the number of vertices plus the number of edges): If every component of H is a path, then d-Scattered Set on H-free graphs with n vertices and m edges can be solved in time 2^{O(|V(H)| √((n+m) log(n+m)))}, even if d is part of the input. Otherwise, assuming the Exponential-Time Hypothesis (ETH), there is no 2^{o(n+m)}-time algorithm for d-Scattered Set for any fixed d ≥ 3 on H-free graphs with n vertices and m edges.
What Are Parentheses In Math? - Answered

What Are Parentheses In Math? In mathematics, parentheses are used to group parts of an expression together, indicating that the grouped operation should be carried out first.

How do you use parentheses examples? Parentheses group parts of an expression so they are evaluated first; for example, (2 + 3) × 4 = 20, whereas 2 + 3 × 4 = 14.

What is an example of parentheses in math? In 3 × (4 + 5), the parentheses tell you to add 4 and 5 before multiplying, giving 27.

What is this Σ? Σ is the Greek capital letter sigma; in mathematics it denotes summation, so Σᵢ xᵢ means the sum of the values xᵢ.

What are the first digits of pi? π begins 3.14159265358979…; it is irrational, so its decimal expansion never terminates or repeats, and listing a trillion digits here is impractical.

How do you explain parentheses to a child? Tell the child that parentheses are like a fence around part of a calculation: whatever is inside the fence has to be worked out first. In "I ate (two) apples," the parentheses set off extra information; in 2 × (1 + 3), they set off the sum to do first.

Do brackets mean negative? In accounting, yes: a number written in parentheses or brackets, such as (500), usually denotes a negative value, −500. In ordinary mathematics, brackets are just grouping symbols.

How do you know where to put parentheses in an equation? Place parentheses around the operations that must be performed first, or wherever the default order of operations would not give the grouping you intend.

What does parentheses around a number mean? In arithmetic it can simply mark grouping or implied multiplication, e.g. 2(3) = 6; in accounting it marks a negative number.

What do parentheses look like? They are a pair of curved marks, ( and ), that enclose the grouped items.

What do parentheses () and square brackets [] differ in? In a regular expression, parentheses create a capturing group, while square brackets define a character class that matches any single character listed inside.

Who invented zero? Zero as a number was developed in ancient India; Brahmagupta gave explicit rules for arithmetic with zero around 628 AD.

Is pi a real number? Yes, π is a real number – an irrational one, so it cannot be written as a ratio of integers.

How are parentheses used in math? They group items together so an expression can be read unambiguously, and they fix the order in which operations are carried out.

How do you solve maths with brackets and parentheses? Work from the innermost grouping outwards: evaluate everything inside the innermost parentheses first, then the surrounding brackets, then the rest of the expression.

How do you do parentheses in a math problem? Evaluate the contents of each pair of parentheses before applying the operations outside them.

Do parentheses or brackets include the number? In interval notation, a square bracket includes the endpoint and a parenthesis excludes it: [1, 5) contains 1 but not 5.

Why are negative numbers in parentheses? In accounting and finance, negative numbers are written in parentheses to make them stand out clearly; in algebra, parentheses around a negative number, as in 3 × (−2), keep the sign attached to the number.

What are () these called? They are called parentheses, or round brackets.

How do you simplify using parentheses? Group related terms or sub-expressions in parentheses, evaluate each group, and then combine the results; grouping makes it clear which parts of the expression belong together.

What is the difference between () and []? In mathematics both are grouping symbols, with square brackets conventionally placed around parentheses for nested grouping; in Python, [] creates a list while () creates a tuple (and is also used for function calls).

How do you use parentheses to solve order of operations? Parentheses override the default order of operations: anything inside them is evaluated first. They can group items in a list, group operations together as in (A + B) or (A − B), or group nested sub-expressions.

Is a number in parentheses negative? In financial statements, usually yes; in ordinary mathematics, parentheses alone do not change a number's sign.

What is the difference between parentheses and brackets in math? Both group things together; by convention, parentheses are the innermost grouping and brackets are used around them, as in [ (a + b) × c ] − d.

What does the () mean in math? Parentheses mark the part of an expression to evaluate first under the order of operations (the convention governing addition, subtraction, multiplication, and division).
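The grouping behaviour described above carries over directly to programming languages; a quick Python check shows parentheses overriding the default order of operations:

```python
# Multiplication binds tighter than addition, so 3 * 4 happens first
a = 2 + 3 * 4
# Parentheses force the addition to be evaluated first
b = (2 + 3) * 4
print(a, b)  # 14 20
```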
Simpson's Approximation Calculator - Nethercraft.net Simpson’s Approximation Calculator Simpson’s Approximation Calculator Looking for a quick and easy way to approximate definite integrals without going through the tedious process of manual calculation? Simpson’s Approximation Calculator is here to save the day! This handy tool uses Simpson’s rule to provide a fast and accurate estimation of definite integrals. Whether you’re a student tackling calculus problems or a professional in need of quick solutions, this calculator is a must-have in your toolkit. What is Simpson’s Rule? Simpson’s Rule is a method for approximating the value of a definite integral by using quadratic polynomials. It provides a more accurate approximation compared to other methods such as the trapezoidal rule. The key idea behind Simpson’s Rule is to divide the interval of integration into multiple subintervals and approximate the function as a quadratic polynomial within each subinterval. By summing up the areas under these polynomials, we can estimate the value of the integral. How to Use Simpson’s Approximation Calculator Using Simpson’s Approximation Calculator is easy and straightforward. Simply enter the function you want to integrate, specify the interval of integration, and input the number of subintervals you want to divide the interval into. The calculator will then provide you with an approximation of the definite integral using Simpson’s rule. You can use this result as an estimate or a starting point for further analysis. Benefits of Using Simpson’s Approximation Calculator There are several benefits to using Simpson’s Approximation Calculator for your integration needs. Firstly, it saves you time and effort by automating the process of calculating definite integrals. Instead of manually solving complex calculus problems, you can get quick and accurate results with just a few clicks. 
Additionally, Simpson’s rule provides a more precise approximation compared to other methods, making it a reliable tool for obtaining close estimates of definite integrals. Applications of Simpson’s Rule Simpson’s Rule is commonly used in various fields such as physics, engineering, and economics for numerical integration. It is especially useful when dealing with functions that are difficult to integrate analytically. By using Simpson’s Rule, researchers and professionals can quickly estimate the area under a curve or solve integral equations without the need for manual calculations. This method is an essential tool in modern computational mathematics. Limitations of Simpson’s Rule While Simpson’s Rule is a powerful tool for approximating definite integrals, it does have its limitations. One drawback is that it requires the function to be smooth and well-behaved within the interval of integration. Functions with discontinuities or sharp corners may not yield accurate results when using Simpson’s rule. Additionally, the accuracy of the approximation depends on the number of subintervals used – a higher number of subintervals generally leads to a more precise estimation but also increases computational complexity. Overall, Simpson’s Approximation Calculator is a valuable tool for anyone in need of quick and accurate estimates of definite integrals. Whether you’re a student studying calculus or a professional working on complex mathematical problems, this calculator can streamline your work and provide reliable results. By harnessing the power of Simpson’s Rule, you can solve integration problems with ease and efficiency. Try out Simpson’s Approximation Calculator today and see for yourself the benefits it can bring to your mathematical endeavors.
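Simpson's rule itself is only a few lines of code. The sketch below implements the composite rule the article describes, using sin on [0, π] as an illustrative test integrand whose exact integral is 2:

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule over n subintervals (n must be even)
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    # Endpoints get weight 1; interior points alternate weights 4, 2, 4, ...
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

approx = simpson(math.sin, 0.0, math.pi, 10)
print(approx)  # already within about 1e-4 of the exact value, 2
```

Doubling `n` shrinks the error by roughly a factor of 16 for smooth integrands, reflecting the rule's O(h⁴) accuracy; this is also why rough or discontinuous functions, as noted above, can defeat it.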
A Beginners Guide to Discrete Mathematics 2nd Edition - Medical Book Seller Pakistan

If you have decided to plunge into the discipline of discrete mathematics, you have chosen a fascinating as well as immensely practical area of study. In "A Beginners Guide to Discrete Mathematics 2nd Edition," you will find basic information about why the subject matters, the areas where it is used, and the most significant ideas that students of the field will come across. Discrete mathematics is calling you; get set to embark on an exciting journey into it!

What is Discrete Mathematics?

Discrete mathematics is the branch of mathematics that deals with mathematical structures that are discrete rather than continuous. Whereas calculus addresses continuous change, discrete math deals with distinct, separate values. You can think of it as the mathematics of countable objects rather than of a smooth, unbroken continuum.

The Importance of Discrete Mathematics

You may ask, why does one need discrete math at all? It is core to computer science and many branches of mathematics: it underpins the design of algorithms and data structures, cryptography, and the solution of many real-world problems. Without it, much of the digital infrastructure of contemporary society would not exist. Just think about how you would solve a puzzle without knowing what its pieces are!

Practical Applications

Think discrete mathematics is only for mathematics and computer science students? It is used in network security (think secure online shopping), in optimization problems (such as finding the shortest path for delivery trucks), and in the social sciences for modelling. It is present in every nook and cranny of the world, gradually reducing our woes and increasing our productivity.
Key Concepts in Discrete Mathematics Sets and Subsets Let's start with the basics: sets. A set is a well-defined collection of distinct objects, in which the members have no particular order and the set as a whole can be treated as a single object. For instance, the set of vowels in the English alphabet is {a, e, i, o, u}. A subset is a set whose elements are all contained in another set. Definitions and Examples In discrete mathematics, sets are defined precisely. For example, the collection of even numbers less than ten is written as E = {2, 4, 6, 8}. Such simple examples prepare us to grasp the bigger and more complex concepts later in the lesson. Functions and Relations Understanding Functions A function takes an input and produces an output – much like a machine. For every input, a function gives exactly one output. Think of a vending machine: you press a button (the input) and you get a snack (the output). Mathematically, if f(x) = x + 2 and the input is 3, the output is 5. Exploring Relations Relations differ from functions in that a relation describes how elements of one set can be associated with elements of another set, and these associations can be more general than those allowed in a function. For instance, a relation might describe which students in a single class are friends with each other. Logic and Propositional Calculus Basic Logic Principles Logic is the foundation of reasoning and of evaluating the truth of statements.
It enables us to understand and evaluate arguments and statements made by other people. For example, the following is a proposition: if it rains, then the ground will be wet. Logic lets us reason about and draw deductions from such statements. Propositional Calculus Explained Propositional calculus takes logic a step further by studying propositions and how they can be combined. In a way, it supplies the building blocks and rules for constructing larger compound statements. For example, the propositions "It is raining" and "I have an umbrella" can be joined with AND/OR operators. The Art of Counting In combinatorics, we count without actually listing every item to be counted. It is like determining how many distinct pizzas can be created from a certain number of toppings: you do not enumerate every possibility; instead, mathematical methods give you the total directly. Permutations and Combinations Permutations deal with arrangements, where order matters, while combinations deal with selections, where order is disregarded. Arranging books on a bookshelf is a permutation problem; selecting 3 books out of a total of 10 is a combination problem. Graph Theory Basics of Graphs Graphs are representations of associations between items. Consider a social network where every individual is represented by a node and each friendship between individuals is an edge. Graph theory enables us to study these relations and look for patterns. Applications in Real Life Many people dismiss graph theory as useful only for social networks, but that sells it short. It is applied in logistics (finding the shortest route for a delivery vehicle), in biology (networks of neurons), and even in solving puzzles. In the hands of a mathematician, it is a handy device.
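These counting ideas are easy to verify with a few lines of code (a Python sketch for illustration; the book itself does not use code):

```python
import math

# Permutations: ordered arrangements of 3 books chosen from 10.
arrangements = math.perm(10, 3)   # 10 * 9 * 8 = 720

# Combinations: unordered selections of 3 books from 10.
selections = math.comb(10, 3)     # 10! / (3! * 7!) = 120

print(arrangements)  # 720
print(selections)    # 120
```

Notice that the combination count is the permutation count divided by 3! = 6, because each unordered selection of 3 books corresponds to 6 different orderings.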
Study Tips for Beginners Recommended Resources Choosing the right tools at the start goes a long way toward getting the results you want. Use textbooks, online classes, and plenty of worked examples and problems. Though it is a large book, the classic starting textbook is "Discrete Mathematics and Its Applications" by Kenneth H. Rosen. Effective Study Habits Consistency is key. Set aside regular study time, vary your practice, break tough tasks into smaller pieces, and do not be shy about asking for help. Joining a study group also provides support and different points of view. Practice Problems and Solutions The value of practice can never be overstressed; in mathematics, regular practice is essential for the best results. Work through problems methodically, check your results, and learn from your mistakes. Educational websites such as Khan Academy offer practice problems that make learning all the more enjoyable. The jump into discrete mathematics might seem daunting at first, but the approach outlined here is both useful and engaging. "A Beginner's Guide to Discrete Mathematics 2nd Edition" introduces readers to sets and functions as well as combinatorics and graph theory, among other foundational concepts. As in any other field, the more time you spend cracking discrete math problems, the better you will get at them. Q: What are the prerequisites for studying discrete mathematics? A: A basic understanding of algebra and a knack for logical thinking are great starting points. Q: How is discrete math used in computer science? A: It's fundamental in areas like algorithms, data structures, and cryptography, forming the backbone of computer science principles. Q: Can I study discrete mathematics online? A: Absolutely! Many online resources and courses are available, including Khan Academy, Coursera, and edX.
Q: What's the difference between discrete and continuous mathematics? A: Discrete math deals with distinct values, while continuous math (like calculus) handles smoothly varying quantities. Q: How can I improve my understanding of discrete math? A: Regular practice, seeking help when needed, and using a variety of resources like textbooks and online tutorials can greatly enhance your understanding.
5.7.1 – Finding the Domain and Range of a Quadratic Function Any number can be the input value of a quadratic function. Therefore, the domain of any quadratic function is all real numbers. Because parabolas have a maximum or a minimum point, the range is restricted. Since the vertex of a parabola will be either a maximum or a minimum, the range will consist of all y-values greater than or equal to the y-coordinate at the turning point or less than or equal to the y-coordinate at the turning point, depending on whether the parabola opens up or down. Domain and Range of a Quadratic Function The domain of any quadratic function is all real numbers unless the context of the function presents some restrictions. The range of a quadratic function written in general form [latex]\,f\left(x\right)=a{x}^{2}+bx+c\,[/latex] with a positive [latex]\,a\,[/latex] value is [latex]\,f\left(x\right)\ge f\left(-\frac{b}{2a}\right),\,[/latex] or [latex]\,\left[f\left(-\frac{b}{2a}\right),\infty \right);\,[/latex] the range of a quadratic function written in general form with a negative [latex]\,a\,[/latex] value is [latex]\,f\left(x\right)\le f\left(-\frac{b}{2a}\right),\,[/latex] or [latex]\,\left(-\infty ,f\left(-\frac{b}{2a}\right)\right].[/latex] The range of a quadratic function written in standard form [latex]\,f\left(x\right)=a{\left(x-h\right)}^{2}+k\,[/latex] with a positive [latex]\,a\,[/latex] value is [latex]\,f\left(x\right)\ge k;\,[/latex] the range of a quadratic function written in standard form with a negative [latex]\,a\,[/latex] value is [latex]\,f\left(x\right)\le k.[/latex] How To Given a quadratic function, find the domain and range. 1. Identify the domain of any quadratic function as all real numbers. 2. Determine whether [latex]\,a\,[/latex] is positive or negative. If [latex]\,a\,[/latex] is positive, the parabola has a minimum. If [latex]\,a\,[/latex] is negative, the parabola has a maximum. 3.
Determine the maximum or minimum value of the parabola, [latex]\,k.[/latex] 4. If the parabola has a minimum, the range is given by [latex]\,f\left(x\right)\ge k,\,[/latex] or [latex]\,\left[k,\infty \right).\,[/latex] If the parabola has a maximum, the range is given by [latex]\,f\left(x\right)\le k,\,[/latex] or [latex]\,\left(-\infty ,k\right].[/latex] Example 1 – Finding the Domain and Range of a Quadratic Function Find the domain and range of [latex]\,f\left(x\right)=-5{x}^{2}+9x-1.[/latex] As with any quadratic function, the domain is all real numbers. Because [latex]\,a\,[/latex] is negative, the parabola opens downward and has a maximum value. We need to determine the maximum value. We can begin by finding the [latex]\,x\text{-}[/latex] value of the vertex. [latex]$$\begin{array}{ccc}\hfill h& =& -\frac{b}{2a}\hfill \\ \hfill & =& -\frac{9}{2\left(-5\right)}\hfill \\ \hfill & =& \frac{9}{10}\hfill \end{array}$$[/latex] The maximum value is given by [latex]$$\,f\left(h\right).$$[/latex] [latex]$$\begin{array}{ccc}\hfill f\left(\frac{9}{10}\right)& =& -5{\left(\frac{9}{10}\right)}^{2}+9\left(\frac{9}{10}\right)-1\hfill \\ & =& \frac{61}{20}\hfill \end{array}$$[/latex] The range is [latex]\,f\left(x\right)\le \frac{61}{20},\,[/latex] or [latex]\,\left(-\infty ,\frac{61}{20}\right].[/latex] Try It Find the domain and range of [latex]\,f\left(x\right)=2{\left(x-\frac{4}{7}\right)}^{2}+\frac{8}{11}.[/latex] Show answer The domain is all real numbers. The range is [latex]\,f\left(x\right)\ge \frac{8}{11},\,[/latex] or [latex]\,\left[\frac{8}{11},\infty \right).[/latex] 5.7.2 – Determining the Maximum and Minimum Values of Quadratic Functions The output of the quadratic function at the vertex is the maximum or minimum value of the function, depending on the orientation of the parabola. 
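As a quick numerical check (a Python sketch added for illustration; it is not part of the original lesson), the vertex computation from Example 1 can be reproduced directly:

```python
# Quadratic from Example 1: f(x) = -5x^2 + 9x - 1.
# a < 0, so the parabola opens downward and the vertex gives the maximum.
a, b, c = -5, 9, -1

h = -b / (2 * a)            # x-coordinate of the vertex: 9/10
k = a * h**2 + b * h + c    # maximum value f(h): 61/20 = 3.05

print(h, round(k, 4))  # 0.9 3.05
```

The computed maximum matches the range found in Example 1, namely (-inf, 61/20].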
We can see the maximum and minimum values in (Figure). There are many real-world scenarios that involve finding the maximum or minimum value of a quadratic function, such as applications involving area and revenue. Example 2 – Finding the Maximum Value of a Quadratic Function A backyard farmer wants to enclose a rectangular space for a new garden within her fenced backyard. She has purchased 80 feet of wire fencing to enclose three sides, and she will use a section of the backyard fence as the fourth side. a. Find a formula for the area enclosed by the fence if the sides of fencing perpendicular to the existing fence have length [latex]\,L.[/latex] b. What dimensions should she make her garden to maximize the enclosed area? Let’s use a diagram such as (Figure) to record the given information. It is also helpful to introduce a temporary variable, [latex]\,W,\,[/latex] to represent the width of the garden and the length of the fence section parallel to the backyard fence. a. We know we have only 80 feet of fence available, and [latex]\,L+W+L=80,\,[/latex] or more simply, [latex]\,2L+W=80.\,[/latex] This allows us to represent the width, [latex]\,W,\,[/latex] in terms of [latex]\,L:[/latex] [latex]\,W=80-2L.\,[/latex] Now we are ready to write an equation for the area the fence encloses. We know the area of a rectangle is length multiplied by width, so [latex]$$\begin{array}{ccc}\hfill A& =& LW=L\left(80-2L\right)\hfill \\ \hfill A\left(L\right)& =& 80L-2{L}^{2}\hfill \end{array}$$[/latex] This formula represents the area of the fence in terms of the variable length [latex]\,L.\,[/latex] The function, written in general form, is [latex]\,A\left(L\right)=-2{L}^{2}+80L.\,[/latex] b. The quadratic has a negative leading coefficient, so the graph will open downward, and the vertex will be the maximum value for the area. In finding the vertex, we must be careful because the equation is not written in standard polynomial form with decreasing powers. This is why we rewrote the function in general form above.
Since [latex]\,a\,[/latex] is the coefficient of the squared term, [latex]\,a=-2,b=80,\,[/latex] and [latex]\,c=0.[/latex] To find the vertex: [latex]$$\begin{array}{ccccccc}\hfill h& =& -\frac{b}{2a}\hfill & & \hfill \phantom{\rule{1em}{0ex}}k& =& A\left(20\right)\hfill \\ & =& -\frac{80}{2\left(-2\right)}\hfill & \phantom{\rule{1em}{0ex}} \text{and}& & =& 80\left(20\right)-2{\left(20\right)}^{2}\hfill \\ & =& 20\hfill & & & =& 800\hfill \end{array}$$[/latex] The maximum value of the function is an area of 800 square feet, which occurs when [latex]\,L=20\,[/latex] feet. When the shorter sides are 20 feet, there is 40 feet of fencing left for the longer side. To maximize the area, she should enclose the garden so the two shorter sides have length 20 feet and the longer side parallel to the existing fence has length 40 feet. This problem also could be solved by graphing the quadratic function. We can see where the maximum area occurs on a graph of the quadratic function in (Figure). How To Given an application involving revenue, use a quadratic equation to find the maximum. 1. Write a quadratic equation for a revenue function. 2. Find the vertex of the quadratic equation. 3. Determine the y-value of the vertex. Example 3 – Finding Maximum Revenue The unit price of an item affects its supply and demand. That is, if the unit price goes up, the demand for the item will usually decrease. For example, a local newspaper currently has 84,000 subscribers at a quarterly charge of $30. Market research has suggested that if the owners raise the price to $32, they would lose 5,000 subscribers. Assuming that subscriptions are linearly related to the price, what price should the newspaper charge for a quarterly subscription to maximize their revenue? Revenue is the amount of money a company brings in. In this case, the revenue can be found by multiplying the price per subscription times the number of subscribers, or quantity. 
We can introduce variables, [latex]\,p\,[/latex] for price per subscription and [latex]\,Q\,[/latex] for quantity, giving us the equation [latex]\,\text{Revenue}=pQ.[/latex] Because the number of subscribers changes with the price, we need to find a relationship between the variables. We know that currently [latex]\,p=30\,[/latex] and [latex]\,Q=84,000.\,[/latex] We also know that if the price rises to $32, the newspaper would lose 5,000 subscribers, giving a second pair of values, [latex]\,p=32\,[/latex] and [latex]\,Q=79,000.\,[/latex] From this we can find a linear equation relating the two quantities. The slope will be [latex]$$\begin{array}{ccc}\hfill m& =& \frac{79,000-84,000}{32-30}\hfill \\ & =& \frac{-5,000}{2}\hfill \\ & =& -2,500\hfill \end{array}$$[/latex] This tells us the paper will lose 2,500 subscribers for each dollar they raise the price. We can then solve for the y-intercept. [latex]$$\begin{array}{cccc}\hfill Q& =& -2500p+b\hfill & \phantom{\rule{2em}{0ex}}\text{Substitute in the point }Q=84,000\text{ and }p=30\hfill \\ \hfill 84,000& =& -2500\left(30\right)+b\hfill & \phantom{\rule{2em}{0ex}}\text{Solve for }b\hfill \\ \hfill b& =& 159,000\hfill & \end{array}$$[/latex] This gives us the linear equation [latex]\,Q=-2,500p+159,000\,[/latex] relating cost and subscribers. We now return to our revenue equation. [latex]$$\begin{array}{ccc}\hfill \mathrm{Revenue}& =& pQ\hfill \\ \hfill \mathrm{Revenue}& =& p\left(-2,500p+159,000\right)\hfill \\ \hfill \mathrm{Revenue}& =& -2,500{p}^{2}+159,000p\hfill \end{array}$$[/latex] We now have a quadratic function for revenue as a function of the subscription charge. To find the price that will maximize revenue for the newspaper, we can find the vertex. [latex]$$\begin{array}{ccc}\hfill h& =& -\frac{159,000}{2\left(-2,500\right)}\hfill \\ & =& 31.8\hfill \end{array}$$[/latex] The model tells us that the maximum revenue will occur if the newspaper charges $31.80 for a subscription.
To find what the maximum revenue is, we evaluate the revenue function. [latex]$$\begin{array}{ccc}\hfill \text{maximum revenue}& =& -2,500{\left(31.8\right)}^{2}+159,000\left(31.8\right)\hfill \\ & =& 2,528,100\hfill \end{array}$$[/latex] This could also be solved by graphing the quadratic as in (Figure). We can see the maximum revenue on a graph of the quadratic function. Access these online resources for additional instruction and practice with quadratic equations. Key Equations general form of a quadratic function [latex]f\left(x\right)=a{x}^{2}+bx+c[/latex] standard form of a quadratic function [latex]f\left(x\right)=a{\left(x-h\right)}^{2}+k[/latex] Key Concepts • The domain of a quadratic function is all real numbers. The range varies with the function. See Example 1. • A quadratic function’s minimum or maximum value is given by the [latex]\,y\text{-}[/latex] value of the vertex. • The minimum or maximum value of a quadratic function can be used to determine the range of the function and to solve many kinds of real-world problems, including problems involving area and revenue. See Example 2 and Example 3. 
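The revenue model in Example 3 can also be verified numerically (again an illustrative Python sketch, not part of the original text):

```python
# Revenue model from Example 3: Q = -2500p + 159000, so
# Revenue(p) = -2500 p^2 + 159000 p.
a, b = -2500, 159_000

p_best = -b / (2 * a)                       # price at the vertex
max_revenue = a * p_best**2 + b * p_best    # revenue at that price

print(p_best, round(max_revenue))  # 31.8 2528100
```

The result matches the worked example: charging $31.80 per subscription yields a maximum revenue of $2,528,100.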
axis of symmetry a vertical line drawn through the vertex of a parabola that opens up or down, around which the parabola is symmetric; it is defined by [latex]\,x=-\frac{b}{2a}.[/latex]
general form of a quadratic function the function that describes a parabola, written in the form [latex]\,f\left(x\right)=a{x}^{2}+bx+c[/latex], where [latex]\,a,b,\,[/latex] and [latex]\,c\,[/latex] are real numbers and [latex]\,a \ne 0.[/latex]
roots in a given function, the values of [latex]\,x\,[/latex] at which [latex]\,y=0[/latex], also called zeros
standard form of a quadratic function the function that describes a parabola, written in the form [latex]\,f\left(x\right)=a{\left(x-h\right)}^{2}+k[/latex], where [latex]\,\left(h,\text{ }k\right)\,[/latex] is the vertex
vertex the point at which a parabola changes direction, corresponding to the minimum or maximum value of the quadratic function
vertex form of a quadratic function another name for the standard form of a quadratic function
zeros in a given function, the values of [latex]\,x\,[/latex] at which [latex]\,y=0[/latex], also called roots
Mapping and Enforcement of Minimally Restrictive Manufacturability Constraints in Mechanical Design Traditional design-for-manufacturability (DFM) strategies focus on efficiency and design simplification and tend to be too restrictive for optimization-based design methods; recent advances in manufacturing technologies have opened up many new and exciting design options, but it is necessary to have a wide design space in order to take advantage of these benefits. A simple but effective approach for restricting the design space to designs that are guaranteed to be manufacturable is needed. However, this should leave intact as much of the design space as possible. Work has been done in this area for some specific domains, but a general method for accomplishing this has not yet been refined. This article presents an exploration of this problem and a developed framework for mapping practical manufacturing knowledge into mathematical manufacturability constraints in mechanical design problem formulations. The steps for completing this mapping and the enforcing of the constraints are discussed and demonstrated. Three case studies (a milled heat exchanger fin, a 3-D printed topologically optimized beam, and a pulley requiring a hybrid additive–subtractive process for production) were completed to demonstrate the concepts; these included problem formulation, generation and enforcement of the manufacturability constraints, and fabrication of the resulting designs with and without explicit manufacturability constraints. 
Volume Section: Technical Brief
Keywords: mechanical design, problem formulation, constraint mapping, design for manufacturing, manufacturing processes, advanced manufacturing, analysis and design of components, devices, and systems, applied mechanics, computational foundations for engineering optimization, conceptual design, design automation, design engineering, design methods, manufacturing automation
1 Introduction
Recent years have seen much advancement in the sophistication of mechanical design methods, both for the design of individual parts and integrated assemblies/systems. Many of these new techniques for design genesis focus on design automation, in which large areas of a given design space can be explored in a quick and efficient way and a large number of candidate designs can be compared quickly; some good examples are the development of generative design [1,2], topology optimization [3,4], candidate architecture analysis for mechanical systems [5,6], and machine learning for analyzing and selecting potential designs [7,8]. Such methods are steadily increasing in their level of maturity, but problems remain which restrict their usefulness in final product design. A particular concern that has not yet been fully addressed is the manufacturability of the final designs. While these and other advanced design methods can produce very sophisticated and nearly ideal parts in terms of performance and other metrics, the designs are often extremely complex and not easily manufacturable using conventional fabrication methods, including with additive manufacturing techniques [9–12]. This is the case for both macro-level user products and design problems at smaller scales (e.g., structured material design, micro-scale design features, and similar areas of interest).
In this article, the term “mechanical design” refers to the design of mechanical devices, assemblies, and systems (including electrical devices with mechanical components such as motors and switches and the design of structured materials). Realization of the final design is one of the most important considerations of a product lifecycle but it is often overlooked or deprioritized by designers, especially at the earlier stages and in requirements definition [13,14]. When the manufacturing process can be selected after the completion of the design, this can speed up the design process and reduce the number of design requirements. However, this presents the risk of mismatch between the final product and any available fabrication processes [9,13,15,16]. When this mismatch is encountered, the final design may need to be sent back for additional iterations (i.e., “repeated design”). This generally increases the cost and schedule risk significantly and may, in extreme cases, require revising the requirements from scratch after lessons learned from the manufacturing domain [17,18]. This risk may be low if the product is simple or is a member of an established product family that was shown to work well in the past. However, when the design is relatively complex (as is often the case in generated or geometrically optimized designs), the risk can be high that the mismatch will occur and that the product is completely nonmanufacturable with any available process [9,10,16]. Traditionally, this problem was addressed using design-for-manufacturability (DFM) principles (see Section 1 in the Supplemental Materials on the ASME Digital Collection); these principles were developed as a set of guidelines or rules to simplify the manufacturing requirements to the point where several processes could be feasible and the risk of mismatch is low [16,19–21]. 
According to Bralla [16] and Boothroyd [18], traditional DFM prioritizes simplicity in design and material selection and integration of simple expert intuition into decision-making processes. When applied to assemblies and systems, as many existing or commercially available components as possible should be used. In all cases, the tolerances should be as loose as possible. In the past couple of decades, partially aided by emerging advanced design and production methods and a renewed focus on user-centered design, the traditional mass-production focus has been shifting to a mass-customization environment [22–24]. In such an environment with small-batch, high-value part production, it is vital for designers to have access to as much of the product design space as possible in order to produce useful designs for complex problems such as those encountered in the medical and aerospace industries [25–31]. Therefore, it is necessary for a DFM technique to be developed and used which guarantees (or at least better ensures) manufacturability while restricting the design space as little as possible; this will allow more rigorous problem formulations and will prevent missing potentially feasible regions of the design space. In this context, the design space is the set of feasible solutions for a given design problem which satisfy the constraints and objectives of the problem. The different points or regions of the design space may represent the outcomes of different design decisions or tradeoffs. This new paradigm in DFM will only be possible if the general design restrictions imposed by traditional DFM are replaced with well-defined, manufacturing-process-driven design constraints which can be customized for each design problem or even each design feature. 
This would involve explicitly imposing the manufacturability constraints in the problem formulation or requirements definition, instead of simply checking manufacturability post-design or simplifying the design to the point of low-risk manufacturing. In this article, this concept is referred to as minimally restrictive design-for-manufacturability or MR-DFM. Significant work has been invested into developing concepts related to optimizing, automating, and improving DFM for some specific problems and domains (such as in Refs. [9,10,32–51]), but any kind of general process- and solver-independent method for capturing, defining, mapping, and enforcing useful manufacturing constraints does not yet exist in the literature. The “minimally restrictive” nature of the constraints generated refers only to their restrictiveness on the domains of the design variables, not to minimization (optimality) in the mathematical sense, and does not establish a “tolerance” on the design variables (the concept of tolerance allocation has been extensively explored [52–55] and is not the focus of this work). The novelty and value of the work presented in this paper and its Supplemental Materials on the ASME Digital Collection lie in the dynamic nature of MR-DFM; in contrast to the more static or linear nature of traditional DFM, MR-DFM enables better capture and control of design information during the design phase of product or mechanical system development. This allows better integration of the manufacturing (and often material-based) information into design decisions and, therefore, allows the constraints to be as liberal as possible, providing the largest possible design space. While most implementations of MR-DFM will require a significant amount of problem-specific information, many general principles are defined and discussed in this work.
Beginning with the baseline given here, designers, manufacturing engineers, DFM practitioners, and other design stakeholders can more easily apply the principles to the problem at hand in a useful and dynamic way. In addition, MR-DFM has a better potential to be automated based on historical or collected data than most traditional DFM approaches, which often rely mostly on expert intuition or simple general guidelines. This article is organized into several major sections, beginning with conceptualization, development, and definitions of manufacturability constraints in Sec. 2. The final part of Sec. 2 is dedicated to demonstrating the relationship between the proposed method and classic DFM, as well as the novelty of this approach in the engineering literature and practice. From here, Sec. 3 examines the important properties of the constraints, particularly related to restrictiveness and dominance. A general framework for applying MR-DFM to realistic design problems is given in Sec. 4, followed by three detailed case studies in Sec. 5. After the case studies, some remarks on the practical implementation and automation of the proposed process are given in Sec. 6. The final part of the paper (Sec. 7) presents some conclusions and future work directions. This article is accompanied by a significant amount of Supplemental Material on the ASME Digital Collection. 2 Manufacturability Constraints: Definition, Generation, and Enforcement A general framework for applying rigorous (i.e., repeatable, as complex or simple as needed, and with low uncertainty) manufacturability constraints in design practice requires development in three domains, namely (1) process and material behavior modeling, (2) mapping and problem formulation, and (3) practical implementation, including verification and validation strategies. Figure 1 shows some of the major technical knowledge areas within each domain.
The first, the process and material modeling domain, is mostly concerned with mechanics and materials science and includes rigorous process and material modeling and definition. The second domain is concerned with collecting and mapping design knowledge and constructing useful and rigorous problem formulations. The third is the practical implementation, consisting mainly of design methods (to solve the problems formulated in Domain 2), automation, and verification, validation, certification, and standards development for the design problem or problem family under consideration. Much previous work has been completed in the first and third domains, but very little in the second domain. Hence, there is a clear and specific need for an MR-DFM concept which is general and can serve as the connection between process/material modeling and practical design application. To address this need, this article focuses on the second domain to develop a framework for practical MR-DFM with a focus on mechanical design problems. As with classic DFM, this framework is agnostic to the design solution, manufacturing process or processes, and materials selected. This method is based on a simple, direct mapping of the “practical” knowledge from the selected manufacturing processes (and by extension the materials) into a set of mathematical manufacturability constraints which can be imposed in the design problem formulation or requirements, or on only selected features. Whether these constraints actually restrict the design space (i.e., the set of all possible design solutions or options) more than the other design constraints in the problem may be established during the mapping process or after a preliminary solution is found (depending mainly on the complexity and form of the constraints) [56–58]. In an ideal situation, all of the nonconstant (“boundary”) constraints are linear or at least monotonic [56,59–61] and there is only one for each design variable over the entire space.
Realistic problems are unlikely to be so simple, but effort should be made to impose the smallest possible number of these constraints and to make them as minimally restrictive as possible while still ensuring a significant increase in manufacturability under the conditions established by the stakeholders/designers. A fundamental tradeoff exists between manufacturing constraint fidelity (how accurately constraints quantify the boundary between manufacturable and nonmanufacturable designs) and complexity (number of constraints, constraint nonlinearity, computational evaluation expense [62–64]). This framework aids engineers in determining tradeoff decisions that are appropriate for a given design effort. 2.1 Mapping the Manufacturability Constraints. Three related levels of analysis (separately from what is shown in Fig. 1) can be defined to map the practical knowledge from process mechanics into enforceable manufacturability constraints. These are the manufacturing considerations (basic mechanical knowledge about the processes and materials), manufacturing constraints (constraints on the process), and manufacturability constraints (design constraints imposed by the choice of the manufacturing process). These are further described and developed in the following sections and shown in relation to each other in Fig. 2. Some mathematical discussion of design problem formulation and descriptions of major manufacturing process families are covered in the Supplemental Materials on the ASME Digital Collection in Sections 2 and 3. 2.1.1 Manufacturing Considerations. Three things can be gained at this level: (1) process advantages (which expand the design space), (2) process limitations (which restrict the design complexity), and (3) best practices or guidelines for proper use of the process. This is the broadest level of analysis and the applicability may be to an entire industry or family of manufacturing processes.
Technical ownership at this level belongs to the technicians and process engineers who have the most practical knowledge about the manufacturing processes. The use of DFM generally implies that a specific process has been selected early in the design lifecycle; if the process is not yet specified, the manufacturing considerations level is the most appropriate place to compare processes to aid in the selection. An example of manufacturing considerations (for a machining process) is the requirement that all features in the design be (1) reachable by the cutting tools and (2) able to dissipate the friction heat and stress from the cutting without damaging the product. 2.1.2 Manufacturing Constraints. Mapped from the manufacturing considerations, these are natural constraints on the use of the process in question and are typically not changeable within a particular process. In most cases, incompatible manufacturing constraints necessitate the selection of a different manufacturing process to fabricate the design in question. The level of analysis is moderate in scope, being restricted to a single manufacturing process or several very similar processes within the same family. It should be noted that manufacturing constraints, by their nature, are more likely to be equality constraints and may take the form of discrete or combinatorial functions (such as a list of available machining tools for milling). The constraints generated here can be redundant, duplicate, or inactive, so it is important to perform some refinement at this level in order to facilitate the formal constraint definition once mapped to the design itself. Following the example manufacturing considerations from the previous step, the equivalent manufacturing constraints would be (1) the quantification of the cutting tool range and (2) the limitation of machining to features strong and thick enough to withstand the associated heat and stress. 2.1.3 Manufacturability Constraints.
These constraints are mapped from the manufacturing constraints and are constraints on the design, not on the process. There are different methods of enforcing these, depending on the nature of the problem, but for typical mechanical design problems they can usually be described mathematically and imposed via mathematical constraints. Carrying on the example from the previous two steps, the manufacturability constraints on a design to be made using a machining process would be (1) the maximum complexity allowed considering the type of tool used and (2) the minimum feature thickness required for the machining loads. Note that the design space could be expanded through the use of higher-fidelity constraints, such as feature thickness constraints that depend on neighboring geometry instead of a uniform limit, but this trades off against the effort required to create and use the constraints in the design problem. 2.1.4 High-Level Mapping Scheme. Figure 2 shows the conceptual mapping process, with each of the levels shown relative to the others. 1. First (Fig. 2(a)), the process advantages, disadvantages, and best practices are analyzed and then mapped to manufacturing constraints (Fig. 2(b)). The needed domain-specific knowledge here is a fundamental understanding of the manufacturing process or processes that may be used. 2. These constraints are then subject to a refinement process (Fig. 2(b)), where they are identified, specified carefully, ranked in terms of importance, and combined when possible to reduce the number of them that need to be mapped to the design domain (Fig. 2(c)). Knowledge about the mechanics of the manufacturing processes is needed for this step, but most of the technical data will carry over from the first step. 3. Finally, the manufacturing constraints are mapped onto the design domain, where the focus shifts from the process mechanics to the details of the design (Fig. 2(c)).
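For illustration, the three levels of the mapping just described (considerations, manufacturing constraints, manufacturability constraints) can be sketched as simple data structures. This is a minimal sketch, assuming the machining example above; all names, numeric values, and the 1.2 mm bound are hypothetical, not taken from the article.

```python
# Level 1: manufacturing considerations (qualitative process knowledge)
considerations = [
    "features must be reachable by the cutting tools",
    "features must withstand machining heat and force without damage",
]

# Level 2: manufacturing constraints (quantified limits on the process;
# values are illustrative placeholders)
manufacturing_constraints = {
    "max_tool_reach_mm": 50.0,     # quantified cutting-tool range
    "max_cutting_force_N": 200.0,  # force a feature must survive
}

# Level 3: manufacturability constraints (limits on the design itself),
# mapped from the process limits; g(x) <= 0 convention for feasibility.
def min_thickness_constraint(thickness_mm, min_thickness_mm=1.2):
    """Feasible (<= 0) when the feature is thick enough for machining loads."""
    return min_thickness_mm - thickness_mm

assert min_thickness_constraint(2.0) <= 0  # thick feature: manufacturable
assert min_thickness_constraint(0.5) > 0   # thin feature: not manufacturable
```

The key point is the direction of the mapping: qualitative knowledge is quantified at the process level before it is ever expressed as a constraint on the design variables.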
Once these constraints and the other problem constraints imposed by the stakeholders (material constraints, cost constraints, technology limitations, performance and reliability, etc.) are applied and enforced, the remaining design space is available for the designers to explore. This mapping method ensures that all of the important manufacturing process information is used in the design process while restricting the design space as little as possible. 2.2 Constraint Representation and Uncertainty. One of the major remaining concerns with the derived manufacturability constraints is the quality of their representation and their level of uncertainty. Classic DFM tends to provide a very flexible design representation, proxy comparison metrics, and a highly simplified system representation; therefore, classic DFM constraints tend to be very general and simple. On the other hand, MR-DFM gives a much more structured design representation, a higher-fidelity system representation, and more realistic metrics. Figure 3 shows the design formulation space for classic DFM and MR-DFM [56,65]. This is heavily dependent on the inputs from Domain 1 (Fig. 1), and uncertainties in the location and effect of the constraints may well come from uncertainties in process and problem modeling. This is especially true in cases where expert intuition is used to determine the constraints. To illustrate the concept, the constraints can be visualized as mathematical constraints within a level set, as shown in Fig. 4. In these examples, the more static and linear traditional DFM constraints would typically be simple bounds, while the more dynamic MR-DFM approach allows the constraints to be more completely defined and possibly more liberal for the same design problem (and hence yields a larger design space) [56–58,66–68].
When defining classic DFM constraints on design problems, it is common to give very general guidelines such as “simplify as much as possible.” This simply limits general design complexity, often resulting in a feasible region similar to the one shown in Fig. 4(a) [66,67]. In the case where MR-DFM is used, more complex but tight constraints can be used, as shown in Fig. 4(b). These cases are far more simplified than most design problems and only represent level sets for two variables; however, they illustrate the benefit of using MR-DFM. Most product and system design problems have many local optima, so these represent what could happen around only one of the available solutions. More extensive discussion of this topic and a realistic example of how the constraints compare is given in Section 4 of the Supplemental Materials on the ASME Digital Collection. Note that the MR-DFM constraints in Fig. 4(b) are “fuzzy” and less well-defined than those for classic DFM. This represents the uncertainties that come from simplifications in process and material modeling and possible errors when using expert intuition to derive the constraints. However, it should be noted that often the MR-DFM constraints still provide a significant benefit even if there is some uncertainty about their exact location and path. One of the expected outcomes from this kind of problem formulation is that a large set of constraints will be derived, most of which will likely not be active. However, at the formulation phase of the problem it is difficult to determine activity [56], especially in a way that is solution method agnostic. For problems which are convex, have a small number of variables, or have a clear and obvious solution within the original feasible domain, this is not difficult using monotonicity analysis. For very simple problems with 2–3 variables, visual analysis (such as plots or level sets) may be effective. 
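The contrast between Figs. 4(a) and 4(b) can be illustrated numerically with a two-variable toy problem: a uniform classic-DFM bound versus a higher-fidelity bound that relaxes where a neighboring variable permits. The functional forms and all numeric values below are purely illustrative assumptions.

```python
# Grid over two design variables x1, x2 in [0, 1]
N = 201
grid = [(i / (N - 1), j / (N - 1)) for i in range(N) for j in range(N)]

# Classic DFM: conservative uniform bound, x1 >= 0.4
# (e.g., "keep all walls at least this thick, everywhere")
classic = sum(1 for x1, x2 in grid if x1 >= 0.4)

# Hypothetical MR-DFM bound: the thickness limit relaxes where the
# neighboring dimension x2 is small, x1 >= 0.1 + 0.3 * x2
mrdfm = sum(1 for x1, x2 in grid if x1 >= 0.1 + 0.3 * x2)

# The higher-fidelity constraint leaves a strictly larger feasible region
assert mrdfm > classic
```

Since the MR-DFM bound never exceeds the uniform 0.4 limit in this sketch, every classically feasible point remains feasible while additional designs become available, mirroring the larger feasible region of Fig. 4(b).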
However, these kinds of design problems most likely have large design domains and a number of local optima that need to be examined. In the case of multi-objective problems, this could become even more complicated and difficult to address. 2.3 Relationship to Classic Design-for-Manufacturability. This proposed method of identifying and using manufacturing knowledge in design is distinct from (and potentially complementary to) classic DFM in several major ways. Specifically, • MR-DFM uses the basic DFM principles and modifies them using specific manufacturing process knowledge and problem formulation techniques from classical and modern optimal design methods. • In contrast to classic DFM (which generally relies on generic design rules which the designer or other stakeholders then apply to a problem), MR-DFM generates a clear, clean, screened/sorted set of constraints which can be directly integrated into a design problem; this point will be elaborated in later sections of this paper. • Both classical DFM and MR-DFM focus on constraint generation. However, classical DFM can also be used to drive objective function and solution method selection. • Using the classical definitions [9,16,17,19], DFM forces a general design simplification (typically as a heuristic requirement or tool for decision making after the initial design is completed). This tends to be unnecessarily restrictive for many modern design methods, especially those which can be formulated as a mathematical program, where the imposition of the constraints is a simple task once they are known. As shown in the previous section and in Figs. 3 and 4, MR-DFM works to create the smallest possible restriction on the design space for each individual design problem by identifying realistic constraints. • Due to the restrictiveness and focus on simplicity, classical DFM does not work well in conjunction with topology optimization and similar design automation strategies. 
Successful constraining of these designs to the manufacturable domain requires process-specific and carefully formulated constraints [9,34,35,38–42]. • Due to the potential time and resource cost of applying MR-DFM fully, it is better suited for complex and computationally expensive design problems (e.g., many problems in the aerospace, automotive, and medical device industries) where higher-value designs and products can better justify the extra cost. On the other hand, classical DFM is best suited to simple problems and well-established products (e.g., consumer goods and part/product families with relatively simple designs). • Classical DFM relies primarily upon expert intuition and decision-making processes derived from it [13,16,18]. MR-DFM may be based on expert intuition or on explicit or implicit mathematics, depending on the form and needs of the problem. This makes MR-DFM solution-method independent (since it focuses on solid formulations) and able to handle many different types (and mixes) of input data. It also removes much of the dependency on the experience of the decision-maker; since MR-DFM is far more data- and model-driven than classical DFM, it is far easier for regular engineers without decades of practical experience to apply effectively. • As shown in Fig. 3, the problem formulation decision space (i.e., a framework for conceptualizing design problem formulation decision options) for MR-DFM is very different from that of classical DFM, since it is a distinct method with different goals. Specifically, classic DFM relies on a very simple predictive model (i.e., as much as possible, everything is reduced to primitive shapes and made as simple as possible), proxy comparison methods (necessary since DFM is usually very generic unless applied carefully to a specific problem), and a very flexible application (also because the principles are typically generic).
On the other hand, MR-DFM relies on a higher-fidelity predictive model (i.e., the manufacturing knowledge), a more structured design representation, and more realistic comparison methods (since the constraints are directly mapped). • Once a set of data about a specific process is collected and mapped, this mapping can generally be reused for other designs with little or no modification. This helps open the door to easier automation of the process. See the Supplemental Materials on the ASME Digital Collection for additional information about these concepts and distinctions. This method is novel and relevant to the modern design world due to these major contributions. In addition, it is easy for practicing engineers and students to apply, as it is far more specific and knowledge-driven than classic DFM. A final major contribution of this method is that applying it does not require the many years of developed expert intuition that most DFM methods do. It does require knowledge about a specific process (or family of processes) and its effects on materials during processing; this is typically a much smaller domain than the DFM requirement of intuitively understanding the complexities between design and manufacturing on a general level. Therefore, the designer does not need as much practical experience or developed judgement to successfully apply MR-DFM. In addition, an important future work direction in this area will be the use of digital twins to drive the constraints, removing most or all required experience on the part of the user and helping to better automate the process. 3 Constraint Restrictiveness and Dominance Let G be the set of all possible manufacturability constraints (active and inactive) on the set of all possible design variables X (including those that may be defined as constants).
Assuming that the potentially useful set of constraints $\bar{g} \subseteq G$ on the set of selected design variables $\bar{x} \subseteq X$ is defined and ordered after mapping (Fig. 2) and that the nonmanufacturability constraints are known, screening can begin. The exact screening process will depend on the nature of the problem, but the general goal is to evaluate each of the manufacturability constraints for each design variable and determine whether the constraint $g_i \in G$ restricts the design space in any way for that variable $x_i \in X$. If so, it should be classified as a "restrictive" constraint and kept in the initial set. After a set of potentially useful manufacturability constraints is defined, they need to be screened (in sequence, as done in previous steps) for duplication, redundancy, and dominance. After all the screening steps, the manufacturability constraints can be classified into five categories: 1. Restrictive: The manufacturability constraint restricts the design space to feasible designs for a specific process [34]. These constraints are potentially active, but their status will need to be established during the problem solution [56,69–71]. Inactive constraints can be removed from the model once activity can be tested mathematically. However, this does not affect the initial formulation, and restrictive constraints are useful in estimating the feasible design space without unnecessarily restricting it. An example of a useful and restrictive constraint (which is very likely, but not guaranteed, to be mathematically active) is the minimum feature thickness on a manufactured part to be designed for minimum mass. 2. Not useful/inactive: The constraint is either nonrestrictive in the defined design space or obviously inactive for the problem.
An example of a nonuseful constraint would be a maximum feature size constraint when the objective function seeks to minimize mass or size; in this case, an upper bound on the size is obviously not an active constraint and can be safely removed. Depending on the needs of the designer, this determination may be made based on expert intuition or may be easily automated. In cases of doubt, the designer may decline to reject the constraint and retain it in the restrictive category. 3. Duplicate: The constraint is mathematically or effectively identical to one that was already imposed and is therefore not needed at all. This is relatively common for manufacturability problems, as lower or upper bound values for the design may be identical for several constraint sources (e.g., heat dissipation and the minimum thickness-to-height ratio to withstand the force of machining may produce identical lower bounds on the wall thickness of a machined part). 4. Internally dominated: The constraint was restrictive when added to the model but was later found to be less restrictive than another manufacturability constraint (it is assumed here that the list of constraints is examined in sequence) and is therefore a dominated constraint and no longer necessary. 5. Externally dominated: Identical to internally dominated, but dominated by a nonmanufacturability constraint. This process can be easily automated in many cases, with the possible exception of determining some of the rejected constraints for the "not useful" category, since these may need to be determined by expert opinion. However, if the resources are available to retain questionable constraints until their activity can be established (to avoid mistakenly rejecting active constraints and opening up the design to manufacturing process mismatch), the process can still be automated, even if not as efficiently as with a smaller number of constraints.
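The sequential classification described above can be sketched for the simplest case, where every manufacturability constraint is a lower bound on a single design variable. This is an illustrative sketch only: the "not useful/inactive" category is omitted (it generally requires knowledge of the objective, often via expert judgment), and all numeric bounds are hypothetical.

```python
def screen(mfg_bounds, external_bounds):
    """Classify lower-bound constraints (var, b), examined in sequence,
    into restrictive / duplicate / internally dominated / externally
    dominated categories."""
    kept = {}     # strongest manufacturability bound seen so far, per var
    labels = []
    for var, b in mfg_bounds:
        if b <= external_bounds.get(var, float("-inf")):
            labels.append((var, b, "externally dominated"))
        elif var in kept and b == kept[var]:
            labels.append((var, b, "duplicate"))
        elif var in kept and b < kept[var]:
            labels.append((var, b, "internally dominated"))
        else:
            # Note: an earlier "restrictive" label may be superseded by a
            # stronger later bound; a second pass could reclassify it.
            kept[var] = b
            labels.append((var, b, "restrictive"))
    return kept, labels

# Hypothetical thickness bounds (mm) with a nonmanufacturability
# (performance) lower bound of 0.85 mm on the same variable
kept, labels = screen(
    [("t", 1.17), ("t", 1.27), ("t", 1.27), ("t", 1.17), ("t", 0.5)],
    external_bounds={"t": 0.85},
)
assert kept == {"t": 1.27}
```

Only the strongest bound per variable survives; the rest fall into the duplicate or dominated categories exactly as enumerated above.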
When possible, defining constraints as bounds or low-order functions will prevent this problem. In the case where a hybrid manufacturing process is selected, or where it is necessary to consider more than one process, it should be noted that the constraints may differ in different stages of the process. Depending on the problem, this could make the problem significantly more difficult or could be leveraged to improve the design (e.g., if a sequential hybrid process [11] is used, the order of the processes may affect the constraints significantly). Additional discussion, remarks, and considerations about the form and use of these constraints can be found in Section 5 of the Supplemental Materials on the ASME Digital Collection. 4 General MR-DFM Framework 4.1 High-Level Framework. Combining the discussion from the previous two sections, a general framework for generating, mapping, and screening the set of MR-DFM constraints can be formulated for use within a general mechanical design process; this framework is shown in Fig. 5. The inputs consist of stakeholder preferences, the selection of a manufacturing process to use, and any needed nonmanufacturability constraints imposed or potentially imposed on the system. The inputs and outputs of the activity blocks are shown in Table 1. The first step (Block 1) is to collect the manufacturing considerations (including both advantages and limitations, as well as any relevant best practices), which can then be directly translated to manufacturing constraints. The best method for converting these will often be problem-specific, but some general principles can be developed, as will be shown in the following sections. It is assumed that the stakeholders specify a manufacturing process or an acceptable set of processes in the design requirements, but if this is not the case, several processes can be compared at this step to see which are the least restrictive within the desired domain.
It is necessary, however, to select a process or a small set of processes before going any further.
Table 1
Activity | Input | Output
1 | Given from stakeholders | Raw set of manufacturing considerations
2 | Raw set of manufacturing considerations | Ranked, ordered, and specified manufacturing constraints
3 | Full set of manufacturing constraints | Raw set of manufacturability constraints
4 | Raw set of manufacturability constraints | Set of restrictive or possibly restrictive manufacturability constraints
5 | Set of restrictive or possibly restrictive manufacturability constraints | Screened set of restrictive or possibly restrictive constraints with duplicate, redundant, and dominated constraints removed
4.2 Preliminary Constraint Identification and Screening. Once a preliminary set of manufacturing constraints is defined, it should be subjected to an identify, specify, rank, and combine (I–S–R–C) process (Block 2). There are a variety of methods for accomplishing this from optimization, decision analysis, and systems engineering, depending on the specifics of the problem. However, a good general (widely applicable) method is discussed extensively in the NASA Systems Engineering Handbook [15] for collecting sets of system requirements and other constraints.
In this approach, four steps are taken to ensure that the list of requirements or constraints is both as complete as possible and feasible to implement. These steps are: 1. Identify all of the relevant requirements and constraints that should be considered. This includes both the manufacturing-related constraints and the ones from other sources. For example, a set of dimensional constraints on a part feature may come from both performance requirements and the minimal thickness needed to successfully machine the part. 2. As much as possible, specify all the requirements in the same terms and language to make them easier to compare against each other. For example, all of the identified dimensional constraints should be specified in the same units at the same temperature and use conditions to make them directly comparable. 3. Rank the constraints. In a large system or complex product, it is impossible to account for every single possible constraint or requirement [15,56,72,73] (hence the common use of factors of safety and design assumptions). Therefore, the set of desired or needed constraints or requirements should be ranked in terms of influence and impact. It may only be possible to account for some of them in the final design (this is, of course, problem-specific), and a ranked list helps identify the urgency of each one to the stakeholders. If only part of the set can realistically be accommodated, a ranked list makes the decision of which to keep and which to reject more straightforward. The ranking may be done manually by the stakeholders or using common decision analysis techniques (such as rank scoring, the analytic hierarchy process (AHP), or other appropriate methods). 4. Combine as many of the constraints together as possible. It is very common that some of the requirements for a system or product will be redundant and can be combined to reduce the size of the constraint set.
For example, if the minimum thickness of a part feature is specified as 3 mm by heat dissipation requirements during machining as well as by performance or interface requirements, only one of these constraints needs to be kept in the final set. The output of this process is a set of manufacturing (and other) constraints which are well-defined, clearly specified, ranked in order of importance, and combined into the smallest practical number of constraints. This is then mapped onto a set of manufacturability constraints (Block 3). 4.3 Manufacturing Constraints and Manufacturability Constraints Mapping. Revisiting earlier definitions, manufacturing constraints are limitations on the capability of the manufacturing process itself, while manufacturability constraints are on the design that will be manufactured. The conversion between these takes place within Block 3 in Fig. 5. The exact steps needed to complete this conversion will be problem-specific in most cases and will require either some measure of expert intuition or historical design data. However, some general principles can be identified for most mechanical design problems. • Most general design constraints on final parts can be divided into three categories: (1) dimension (e.g., length or height), (2) form (e.g., straightness or roundness), and (3) surface finish [74]. • Therefore, most manufacturability constraints will be related to a dimension, form, or required surface finish. • Generally, the manufacturing constraints will be the process constraints that drive the dimension, form, or surface finish of the part. For example, the heat dissipation needed, vibration and compliance, and the applied force during processing (whether from the direct force of a tool or from shrinkage and residual stresses) are all process aspects that affect the final part in terms of dimension, form, and surface finish.
This applies regardless of the manufacturing process used, as these general principles apply to machining, casting, additive manufacturing processes, and most others. Therefore, a general approach for converting manufacturing constraints into manufacturability constraints is to: 1. Consider the identified manufacturing constraints and determine the influence each would have on the dimension, form, or surface finish of a manufactured part. 2. Recognize that each effect on dimension, form, or surface finish from each manufacturing constraint will have an equivalent manufacturability constraint driven by it. 3. Take the potential set of manufacturability constraints to be this set of equivalent constraints, condensed for clarity and to remove obviously redundant or useless constraints. 4.4 Detailed Constraint Screening. After the potential set of manufacturability constraints is defined (set C1), the constraints are individually screened to determine whether they are restrictive or obviously not restrictive. Constraints that are uncertain at this point should be retained in the set. The collection of restrictive or potentially restrictive constraints then makes up set C2 ⊆ C1 (Block 5). This set is then screened as a whole for duplicate and dominated constraints, which are rejected from the set. Note that this includes comparison with known nonmanufacturability constraints as well. 4.5 Final Constraint Set. The set C_final should consist of the smallest possible number of manufacturability constraints, which can then be effectively imposed on the design problem, restricting the design space only just enough to ensure manufacturability (i.e., "minimally restrictive") and leaving as much of it as possible for the designer to explore. Mathematical activity is not clearly established in this set, as the problem has only been formulated and not yet solved.
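The three-step conversion of Sec. 4.3 can be sketched as a small enumeration: each process constraint is tagged with the part attributes it influences, each (attribute, source) effect yields an equivalent manufacturability constraint, and the set is condensed. The effect assignments below are illustrative assumptions, not a table from the article.

```python
# Step 1: determine which of dimension, form, or surface finish each
# manufacturing constraint influences (assignments are illustrative)
effects = {
    "heat dissipation": ["dimension"],
    "vibration and compliance": ["form", "surface finish"],
    "applied processing force": ["dimension", "form"],
}

# Step 2: each (attribute, source) effect has an equivalent
# manufacturability constraint on the design driven by it
raw = {(attr, src) for src, attrs in effects.items() for attr in attrs}

# Step 3: condense; building a set already collapses exact duplicates.
# This is the potential set C1 that then enters detailed screening.
C1 = sorted(raw)
assert len(C1) == 5
assert ("dimension", "heat dissipation") in C1
```

In a real problem each entry of C1 would then be specified quantitatively (e.g., a minimum thickness or a flatness tolerance) before the C1 → C2 → C_final screening of Secs. 4.4 and 4.5.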
After building the set of useful and possibly useful constraints, they can be fed into a solution method (classic optimization techniques, topology optimization, procedure/rule-based design, etc.), and the initial solution should give the needed information about the activity of the constraints. At this point, the formulation can be finalized and a set of feasible designs generated for the stakeholders to examine. 4.6 Data Types and Automation. The data that are mapped from the manufacturing knowledge to the design problem could take a variety of forms, and the mapping may be explicit, implicit, or manual depending on the needs and formulation of the problem. This may depend on the processes involved and on the form of the design problem. In the most common case, it is anticipated that the design problem will involve geometric constraints related to manufacturability; in this case, the mapping could be based on primitive shapes (squares, circles, triangles, etc.), nodes in a mesh, points or lines on a shell model, a toolpath (i.e., g-code), or similar. The form of this mapping will be one of the decisions made by the stakeholders when using MR-DFM very early in the design lifecycle. 5 Case Studies Due to space limitations, this section provides brief summaries of three extensive case studies meant to explore and demonstrate the concepts developed in this article. Section 6 in the Supplemental Materials on the ASME Digital Collection provides the full details for the case studies. 5.1 Case 1: Milled Aluminum Heat Exchanger Fin. This case study explores a design problem using a single well-defined manufacturing process as the basis for the manufacturability constraints. A heat exchanger fin in a natural-convection environment must be machined from 6061 aluminum under a given set of performance and temperature parameters. The design objective was to minimize total fin volume.
A figure describing the problem, the list of assumptions and modeling parameters, and the full description are provided in Section 6.6.1 of the Supplemental Materials on the ASME Digital Collection. A summary of the mapping process from manufacturing considerations to useful manufacturability constraints is shown in Table 2.
Table 2
Mfg considerations | Resulting Mfg constraints
Heat of machining | Heat of machining Q = 40 W
Machining force | Cutting force F = 200 N
Compliance/vibration | (also accounted for in cutting force constraint)
Manufacturability constraints: t ≥ 1.17 mm for cutting force; t ≥ 1.27 mm for heat during machining
Nonmanufacturability constraints: t ≥ 0.85 mm for thermal performance; t ≥ negligible (O(μm)) for buckling
Figures 6(a) and 6(b) show the final designs (with and without manufacturability constraints, respectively), while Figs. 6(c) and 6(d) present the final manufactured designs. It can be clearly observed that the thin (0.85 mm) fin has numerous manufacturing defects (Fig. 6(e)), while the one with the manufacturability constraint was successfully fabricated. In this unconstrained (thin) fin, three major defects were observed: (1) (Note (A)) the fin thickness was inconsistent, with the bottom of the fin being the nominal thickness and the top being 20% thinner, (2) (Note (B)) the top corner was chipped by the end mill during a cutting pass due to its flexibility under cutting force, and (3) (Note (C)) the top of the fin displayed a jagged, almost "scalloped" surface finish.
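The dominance structure of the Table 2 bounds can be checked directly: for a set of lower bounds on the same variable, only the largest governs. The sketch below uses the bound values and labels as listed in Table 2; the buckling bound is represented by a nominal micrometer-scale placeholder since the table only calls it "negligible."

```python
# Lower bounds on fin thickness t (mm) from Table 2
bounds_mm = {
    "heat during machining": 1.27,          # manufacturability
    "cutting force": 1.17,                  # manufacturability
    "thermal performance": 0.85,            # nonmanufacturability
    "buckling": 1e-3,                       # "negligible", O(micrometers)
}

# The governing constraint is the largest lower bound; all others are
# dominated (internally or externally, per the Sec. 3 categories)
governing = max(bounds_mm, key=bounds_mm.get)
assert governing == "heat during machining"
assert bounds_mm[governing] == 1.27
```

This matches the reported outcome: the 1.27 mm fin machined cleanly, while the 0.85 mm fin (feasible only against the dominated performance bound) exhibited the defects noted above.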
It was observed that all of these defects were caused by the fin deflecting under the load from the end mill, with the top thinning implying that the deflection was at least 10% of the fin thickness, 100 times the allowable deflection. None of the fin surface defects, thinning, or extreme heating were observed in the machining of the 1.27 mm fin under identical manufacturing conditions, showing that the imposed constraints were restrictive and effective in ensuring manufacturability. Note that the cost of manufacturability and accuracy was a 33% increase in the mass of the fin. However, this study showed that this was the least restrictive constraint under which the fin could be effectively fabricated using the specified conditions and process assumptions. It should be noted that this particular case study could have been completed using a variety of design methods, including classic DFM and MR-DFM (as shown). If classic DFM had been used, it is likely that the original thickness based on performance would have been calculated and a factor of safety applied. Using a typical factor of safety of 2.0 for a problem like this, the fin would have been about 1.7 mm thick; in this case, it would both perform correctly and be manufacturable, but the design would be inferior (i.e., heavier) to the one produced using MR-DFM. Since MR-DFM provided more detailed constraints directly derived from the manufacturing process, a thinner fin that was both manufacturable and functional could be produced. 5.2 Case 2: FDM/SLA TO Cantilever Beam. This case study used a design problem that must consider the constraints from two manufacturing processes in the same family, unlike Case Study 1, which used only a single process. In this problem, a simple symmetric cantilever beam was to be designed via topology optimization (TO) for minimum mass and minimum compliance (i.e., maximum stiffness).
The final design was required to be symmetric along the length and thickness directions. A schematic of the problem and the full set of parameters, assumptions, and methods are provided in Section 6.2.1 of the Supplemental Materials on the ASME Digital Collection. Similar to Case Study 1, the mapping for the problem is shown in Table 3.

Table 3
Mfg considerations: shell/infill print pattern (FDM = 0.4 mm/line; SLA = 0.2 mm/line)
Resulting Mfg constraints: two shells + infill for FDM; one shell + infill for SLA
Manufacturability constraints: minimum length scale for FDM is 2 mm; minimum length scale for SLA is 1 mm; mass fraction ≈ 50%
Nonmanufacturability constraints: factor of safety ≥ 1 for TO; beam design symmetry

The Pareto method was used to generate the topology under the given conditions and constraints, using a voxel count of 500,000. Printing orientation was not expected to have a large impact on the final product, so the parts were printed wall-out (from the base up) to replicate cantilever beams; the orientation of the layers can be seen on inspection of Fig. 7. Figure 7(c) shows an attempt to fabricate the 1 mm length scale solution using FDM, resulting in numerous major manufacturing defects and missing features (highlighted yellow regions), since the features were too small for the process to create accurately with the 0.4 mm bead size. In contrast, the FDM fabrication was successful for the 2 mm length scale solution (Fig. 7(d)). Both geometries were successfully produced by the SLA process, as shown in Figs. 7(e) and 7(f).
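The screening implied by Table 3 can be sketched in a few lines. This is an illustrative sketch with hypothetical helper names, not code from the paper; the only inputs taken from the case study are the minimum length scales (2 mm for FDM with a 0.4 mm bead, 1 mm for SLA with a 0.2 mm line).

```python
# Minimum printable length scale per process, from Table 3 of the case study.
MIN_LENGTH_SCALE_MM = {"FDM": 2.0, "SLA": 1.0}

def feasible_processes(smallest_feature_mm):
    """Return the processes whose minimum length scale the feature satisfies."""
    return [p for p, limit in MIN_LENGTH_SCALE_MM.items()
            if smallest_feature_mm >= limit]

# The 1 mm length-scale topology fails on FDM but succeeds on SLA,
# matching Figs. 7(c) and 7(e); the 2 mm topology succeeds on both.
print(feasible_processes(1.0))  # ['SLA']
print(feasible_processes(2.0))  # ['FDM', 'SLA']
```

This is exactly why the 1 mm solution had to be restricted to SLA: the constraint check fails for FDM before any fabrication is attempted.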
The Pareto software did not allow the use of a zero-thickness minimum feature size constraint, so it was not possible to generate a topology that was not manufacturable using SLA with this method at the small part size involved. However, the imposed manufacturability constraints had a major impact on the part geometry and were clearly restrictive, at least for the FDM fabrication. It is clear from the results of this case study that the constraints were necessary for the designs to be manufacturable. Traditional DFM methods would have provided some guidelines for setting up the general design problem (a cantilever beam under load), but would have been very difficult to integrate directly with the topology optimization problem. Since the formulation of the problem requires the manufacturability constraints to be known before a solution is attempted, it was necessary to use MR-DFM to find them. The TO problem could have been set up using some general guidelines (e.g., a general minimum feature thickness), but these would not have been easily customized for each of the processes or have accounted for the specific constraints of each case.

5.3 Case 3: Hybrid AM/SM PLA Pulley. The final case study presented is an updated solution approach to the generator pulley problem presented by Patterson and Allison [11]. In this problem, a belt-drive pulley for a generator was designed and required to be manufactured from PLA plastic using a combined additive-subtractive hybrid manufacturing process. Therefore, this is also a two-process problem, but with the processes in different families. A detailed problem diagram, along with the assumptions, modeling parameters, and sample manufacturing details, is given in Section 6.3.1 of the Supplemental Materials on the ASME Digital Collection. The constraint mapping for Case Study 3 is shown in Table 4.
Table 4
Mfg considerations: hybrid process (AM + SM in sequence; FDM + lathe); anisotropic material; work holding
Resulting Mfg constraints: two shells + infill for FDM; print orientation; shell thickness; part base/roof thickness
Manufacturability constraints: minimum length scale for FDM is 2 mm; minimum shell count = 5; minimum roof layer count = 5; minimum base layer count = 5; maximum size ≤ lathe chuck jaw diameter
Nonmanufacturability constraints: factor of safety ≥ 1 for TO; pulley design symmetry

Based on the requirements, a series of TO solutions were found (Pareto TO, three million voxels), with the selected solution being the minimum-mass design found before the solver stopped producing feasible designs (Fig. 8(a)); the final design (Fig. 8(b)) had a mass fraction of 0.66 and a compliance of about 5.00 mm/kN, well within the maximum allowed. Figure 8(c) shows the printed pulley before lathe operations, while Fig. 8(d) is the successfully finished pulley. To show that the manufacturability constraints are restrictive, a second pulley was produced using the same TO solution but with only two shells; the activity of the minimum feature size constraint was established in Case Study 2, so only the shell constraint was tested here. Figure 8(e) illustrates the results for the two-shell pulley, which was clearly a failure. Several areas of surface tearing, layer delamination, and plastic melting were observed, as highlighted in the figure.
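Across all three case studies, when several considerations each impose a lower bound on the same design variable, the minimally restrictive constraint that still satisfies all of them is simply the largest of the bounds. A minimal sketch of that screening step, with hypothetical names, using the fin-thickness bounds from Case Study 1's Table 2 as the example data:

```python
# Lower bounds on fin thickness (mm) from Table 2 / Case Study 1.
bounds_mm = {
    "thermal performance": 0.85,
    "cutting force": 1.17,
    "heat during machining": 1.27,
    "buckling": 0.0,  # negligible, on the order of micrometers
}

def binding_constraint(bounds):
    """Return (name, value) of the governing (binding) lower bound."""
    name = max(bounds, key=bounds.get)
    return name, bounds[name]

name, t_min = binding_constraint(bounds_mm)
print(name, t_min)  # heat during machining 1.27
# For comparison, a classic-DFM factor of safety of 2.0 applied to the
# 0.85 mm performance value gives 1.7 mm: manufacturable, but heavier.
print(2.0 * bounds_mm["thermal performance"])  # 1.7
```

All other bounds in the list are inactive once the governing one is imposed, which is what lets them be screened out without shrinking the design space further.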
As can reasonably be expected when using a hybrid process, the effects of both the additive and subtractive processes influenced the final outcome of the design. In this case, the design problem (re-designing the pulley web under two dissimilar processes) likely would have been too complex to accomplish with traditional DFM principles without a large amount of expert intuition and even guessing. It is clear that it was necessary to find the true manufacturability constraints at the beginning of the problem formulation for the design to be successful. As seen in Fig. 8(a), a number of feasible designs could have been produced, all of them manufacturable. If a traditional DFM process had been followed, most likely only a single design would have been produced, and the web re-design would have been based on a primitive (triangle, square, or similar) or a minimal surface (which can be easily calculated in general cases) instead of a topologically optimal, manufacturable, 3-D design that is the best within the goals and constraints of the problem.

6 Remarks on Inputs and Practical Implementation

6.1 Process and Material Modeling. One of the most important mantras of MR-DFM, reinforced throughout this article, is that the quality of the constraints depends heavily on the quality of the inputs. In design, it is very common to use simplified representations of process and material models in order to reduce computational cost. While this may be useful in many cases, it can be the source of significant error in the formulation of MR-DFM constraints and should be avoided when possible. It is best to generate the constraints from direct experimental results whenever possible, via a fitted model or random variable generation. In both cases, the estimated uncertainty and potential errors can be calculated and documented. With these, the amount of "fuzziness" (Fig.
4(b)) can be estimated to ensure that the calculated constraints are actually useful for the design at hand. An important area of future research is to determine how much uncertainty is allowable in these constraints before the method fails to provide a design superior to one generated using classic DFM.

6.2 Constraint Automation and Design Tradeoffs. An obvious consideration with the MR-DFM method is that it can rapidly become unmanageable for a single designer or decision-maker as the problem size increases. The case studies presented are reasonably complex for academic studies, but when dozens of features must be considered, or when system constraints (e.g., interfaces, tolerances, and reliability) must be included, the method can become impossible to implement manually. Therefore, the mapping and enforcement should be automated as far as possible. This should be relatively easy for most manufacturability problems, as many DFM and MR-DFM constraints can reasonably be defined as bounds and simple polynomial constraints, with occasional binary and discrete functions. However, as the method and computational capacity improve, more accurate models of the interactions between geometry and considerations such as temperature, stress, and deformation during manufacturing can be used. These higher-fidelity constraints can expand the design space even further (in many cases) but are only worth pursuing where the increase in design quality justifies the investment of time and resources. Some practical considerations for automating constraint generation are as follows:

• For explorations and for mapping new processes and process–material combinations, the mapping will likely involve a significant amount of manual or mathematical work. However, the mapping should be stable and consistent for each process, so the mappings can be cataloged.

• These catalogs of mappings and constraints can be used as the starting place for automating constraints.
• Once a set of mappings is completed (e.g., for the ten most common manufacturing processes), design automation studies can begin. Similarities and differences can be identified, and in the ideal case some mappings can be combined or simplified.

• It is very important at the beginning of the design process to identify or specify the data type (mathematical functions, points in a mesh, etc.) to be used in the mapping. Consistent application of this will aid in automating the constraints.

• A future research direction may be to determine whether a new data type is needed to map these constraints, though this may not be necessary for general mechanical design problems. In formulating most mechanical design problems, most of the relevant constraints will be geometric and will take the form of distances, thicknesses, norms between points, and similar.

• A very helpful technology that can support MR-DFM is the digital twin of a manufacturing process. When properly developed and verified, digital twins may be able to replace actual manufacturing processes and make the identification of manufacturing considerations much easier.

• From the other perspective, MR-DFM and other constraint/formulation methods may help drive the development and refinement of these digital twins. It would be especially useful if digital twin models could be developed and formulated so that they provide the manufacturing considerations and constraints as default outputs.

Clearly, there will be design tradeoffs when using this method, as there are with classic DFM and other methods. The most important considerations for most problems are as follows:

• Cost, both in terms of labor during initial mappings or explorations and in terms of computational expense once automated.

• Different classes of manufacturing processes will have drastically different costs associated with using them in design.
For example, simple geometric constraints based on a die-casting tool will be far less expensive (both in design cost and in production cost, assuming enough parts are to be made to justify the tooling) than designing a free-form lattice structure with location-dependent densities.

• As previously discussed, MR-DFM is far more focused and based on technical knowledge rather than expert intuition. For simple problems, and for those with few or no active manufacturability constraints, the flexibility offered by expert-intuition-driven DFM could provide better design outcomes.

• Continuing from the last point, not every mechanical design problem will have active manufacturability constraints, and discovering this only after applying MR-DFM could waste time and resources. Design problems which are convex in nature are less likely to require many (or any) manufacturability constraints, since only a single global solution is possible.

6.3 Expert Intuition. For any DFM or MR-DFM method, some level of expert intuition will be necessary. For classic DFM, several excellent guides are available, but they still require interpretation and application to the problem at hand. With MR-DFM, the ability to capture process knowledge (manufacturing considerations) and experience is one of the strengths of the presented method; this allows more realistic constraints and a wider design space. However, there will always be some uncertainty in the collected information. To minimize this, the design team should be careful to collect only the most reliable information and make an effort to communicate well with the expert, who will often be a technician rather than a researcher or engineer. Respect for the experience of the expert (who will not necessarily have advanced degrees), the inclusion of the expert in design decisions, and a real effort at partnership with the design team will be vital for collecting the best-quality design knowledge.
When practical, it would also be very useful for at least some members of the design team to have practical knowledge of and experience with the selected manufacturing process or processes. Whether this involves hiring designers with hands-on experience (of which there are very few in most industries), providing additional coursework, or some kind of hands-on training will depend on the type of problem being solved and the desired outcomes.

6.4 Design Method Selection. The presented framework is design-method agnostic and can provide useful constraints for a wide variety of problems. However, the design method may affect the formulation of the problem, and this should be expected for any design method outside of classic optimization methods. Therefore, care should be taken that the final model (objective function and constraints) is as mathematically rigorous and clear as possible, to avoid mistakes or misunderstandings during any needed reformulation. All variables should be distinct and clearly and consistently defined, in a way that is easily understandable by someone outside the original design team.

6.5 Verification, Validation, and Certification. As with all design methods (all three domains in Fig. 1), the final step and ultimate success metric is the completion of the verification, validation, and certification (VV&C) process. While this is largely independent of the specific constraint formulation, it may be necessary to have the constraints very clearly specified for VV&C. Therefore, it is vital that everything be well documented during the entire process and that, when possible, the expected uncertainty and error in the placement of the constraints be calculated.

7 Conclusions and Future Work

In this work, a conceptual framework and approach was developed for generating and imposing minimally restrictive manufacturability constraints (MR-DFM) for mechanical design problems.
The technique is based on the mapping of practical manufacturing knowledge into enforceable manufacturability constraints; these can be screened and eliminated as needed to ensure that the imposed constraints restrict the design space only enough to guarantee (or at least greatly improve) manufacturability while leaving as much of it as possible intact for design exploration. This new approach to DFM is useful and provides value beyond the state of the art due to its dynamic nature. Unlike more static and linear traditional DFM, it allows the more effective capture and use of practical manufacturing information, which can be problem- or process-specific. This enables better incorporation of manufacturing information into product and system design and helps provide a wide design space. MR-DFM is also much less dependent on expert judgement and, as discussed in the previous section, can be automated (using models, digital twins, historical data, and similar) for many important applications. The method was shown to be straightforward and useful in three case studies, none of which would have been manufacturable under the specified manufacturing conditions without the additional constraints. Note that the case studies made some simplifying assumptions, clearly stated within each problem, which may affect the validity of the case studies if the conditions were changed. The reader should keep these assumptions in mind when following the case studies and applying the approach to other (similar) problems. The risk of using simplifying assumptions when generating manufacturability constraints is false positives (the generated constraints are not sufficient to guarantee manufacturability) and false negatives (the constraints exclude more of the design space than actually required); care should be taken to avoid both when applying the method.
The case studies presented covered three different types of mechanical design problems under manufacturability: (1) design under a single process, (2) design under two different but related processes, and (3) design under a hybrid of two dissimilar processes. In terms of design problems:

• The design focus of Case Study 1 was to find the least restrictive constraint on the fin thickness such that it could both meet its design requirements and be manufacturable. This involved modeling and testing several aspects of the design (heat transfer, beam bending, and buckling) using a common design variable; the constraint found was the least restrictive one applicable to all of the sub-problems containing that design variable.

• Case Study 2 explored the formulation of an algorithmic design problem (topology optimization) under two possible manufacturing processes, requiring constraints which were valid for the entire problem. This also captured the potential scenario in which uncertainty exists in the selection of the process, so the design must allow some flexibility.

• Finally, Case Study 3 explored the complexity of a hybrid manufacturing problem, where a larger number of manufacturability constraints needed to be considered even for just two design variables. This problem also clearly demonstrated the way that constraints can be quickly tested and eliminated when not useful.

This method is applicable to any mechanical design problem in which the physical design constraints (part architecture and performance) can be defined and understood in terms of its manufacturing processes in some way.
Future work will focus on refinement of the method and its extension to other design domains (such as the design of tailored materials), on automation of the process for larger problems, on tolerance allocation alongside the constraints, on method sensitivity, on the use of simulations and experiments to establish the limits of problem complexity, and on examining the impact of the various assumptions on the proposed framework.

Acknowledgments

The authors thank William Bernstein (National Institute of Standards and Technology), Sherri Messimer (University of Alabama in Huntsville), Katherine Matlack (University of Illinois), Dan Herber (Colorado State University), Yong Hoon Lee (University of Illinois), Danny Lohan (Toyota Research Institute of North America), Krishnan Suresh (University of Wisconsin-Madison), and Niao He (ETH Zurich) for their discussion and comments on the presented work at various stages of development. All opinions and conclusions offered in this article are solely those of the authors.

Funding Data

• No external funding was used to complete this project or fund its publication.

Conflict of Interest

This article does not include research in which human participants were involved. Informed consent not applicable.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

References

[1] "A Practical Generative Design Method," Comput.-Aided Des.
[2] "Managing Variable-Dimension Structural Optimization Problems Using Generative Algorithms," Struct. Multidiscipl. Optim.
[3] "A 199-Line Matlab Code for Pareto-Optimal Tracing in Topology Optimization," Struct. Multidiscipl. Optim.
[4] "Topology Optimization, Additive Layer Manufacturing, and Experimental Testing of an Air-Cooled Heat Sink," ASME J. Mech. Des.
[5] "Enumeration of Architectures With Perfect Matchings," ASME J. Mech. Des.
[6] "A Problem Class With Combined Architecture, Plant, and Control Design Applied to Vehicle Suspensions," ASME J. Mech. Des.
[7] "New Roles for Machine Learning in Design," Artif. Intell. Eng.
[8] "Machine Learning Algorithms for Recommending Design Methods," ASME J. Mech. Des.
[9] "Topology Optimization With Manufacturing Constraints: A Unified Projection-Based Approach," Adv. Eng. Softw.
[10] "Incorporating Manufacturing Constraints in Topology Optimization Methods: A Survey," 37th Computers and Information in Engineering Conference, Cleveland, OH, Aug. 6–9, Vol. 1, ASME.
[11] Patterson, A. E., and Allison, J. T., "Manufacturability Constraint Formulation for Design Under Hybrid Additive-Subtractive Manufacturing," ASME IDETC: 23rd Design for Manufacturing and the Life Cycle Conference, Quebec City, QC, Canada, Aug. 26–29, Vol. 4, ASME.
[12] "Toward Rapid Manufacturability Analysis Tools for Engineering Design Education," Procedia Manuf.
[13] Systems Engineering and Analysis, 4th ed., Prentice Hall, Hoboken, NJ.
[14] "Tools and Techniques for Product Design," CIRP Ann.
[15] NASA Systems Engineering Handbook: NASA/SP-2016-6105 Rev2 (Full Color Version), 12th Media Services.
[16] Design for Manufacturability Handbook, 2nd ed., McGraw-Hill Education, New York, NY.
[17] Engineering Design: A Systematic Approach, 3rd ed., Heidelberg, Germany.
[18] "Product Design for Manufacture and Assembly," Comput.-Aided Des.
[19] "New Directions in Design for Manufacturing," Eighth Design for Manufacturing Conference, Salt Lake City, UT, Sept. 28–Oct. 2, Vol. 3d, ASME.
[20] "Application of Concurrent Engineering in Manufacturing Industry," Int. J. Comput. Integr. Manuf.
[21] "The Development of a Database System to Optimise Manufacturing Processes During Design," J. Mater. Process. Technol.
[22] "From Design for Manufacturing (DFM) to Manufacturing for Design (MFD) Via Hybrid Manufacturing and Smart Factory: A Review and Perspective of Paradigm Shift," Int. J. Precision Eng. Manuf.-Green Technol.
[23] "Customizability Analysis in Design for Mass Customization," Comput.-Aided Des.
[24] "Design for Mass Personalization," CIRP Ann.
[25] "A Framework for Decision-Based Engineering Design," ASME J. Mech. Des.
[26] "Methods for Evaluating and Covering the Design Space During Early Design Development," Integr. VLSI J.
[27] "Design Space Optimization Using a Numerical Design Continuation Method," Int. J. Numer. Methods Eng.
[28] "Using Modeling Knowledge to Guide Design Space Search," Artif. Intell.
[29] "Enhancement of Design for Manufacturing and Assembly Guidelines for Effective Application in Aerospace Part and Process Design," SAE Technical Papers.
[30] "Design for Manufacturing and Assembly Methodology Applied to Aircrafts Design and Manufacturing," IFAC Proc. Vol.
[31] "Conceptual Design for Assembly in Aerospace Industry: A Method to Assess Manufacturing and Assembly Aspects of Product Architectures," Proceedings of the Design Society: International Conference on Engineering Design, Delft, The Netherlands, Sept. 5–8, Vol. 1.
[32] "Application and Modification of Design for Manufacture and Assembly Principles for the Developing World," IEEE Global Humanitarian Technology Conference (GHTC 2014), San Jose, CA, Oct. 10–13.
[33] "Evaluation of Design Feedback Modality in Design for Manufacturability," ASME J. Mech. Des.
[34] "Design for Manufacturability of SISE Parallel Plate Forced Convection Heat Sinks," IEEE Trans. Compon. Pack. Technol.
[35] "Casting and Milling Restrictions in Topology Optimization Via Projection-Based Algorithms," 38th Design Automation Conference, Parts A and B, Chicago, IL, Aug. 12–15, Vol. 3, ASME.
[36] "Application of Topology Optimization and Design for Additive Manufacturing Guidelines on an Automotive Component," 42nd Design Automation Conference, Charlotte, NC, Aug. 21–24, Vol. 2A, ASME.
[37] "An Approach for Interlinking Design and Process Planning," J. Mater. Process. Technol.
[38] "Design for Additive Manufacturing – Element Transitions and Aggregated Structures," CIRP J. Manuf. Sci. Technol.
[39] "On Design for Additive Manufacturing: Evaluating Geometrical Limitations," Rapid Prototyp. J.
[40] "Casting Constraints in Structural Optimization Via a Level-Set Method," 10th World Conference on Structural and Multidisciplinary Optimization, Orlando, FL, May 19–24.
[41] "An Explicit Parameterization for Casting Constraints in Gradient Driven Topology Optimization," Struct. Multidiscipl. Optim.
[42] "A Review of Optimization of Cast Parts Using Topology Optimization," Struct. Multidiscipl. Optim.
[43] "Methods for Automated Manufacturability Analysis of Injection-Molded and Die-Cast Parts," Res. Eng. Des.
[44] "Design for Manufacturing and Assembly: A Method for Rules Classification," International Joint Conference on Mechanics, Design Engineering, and Advanced Manufacturing, Aix-en-Provence, France, June 2–4, Springer International Publishing.
[45] "CAD-Integrated Design for Manufacturing and Assembly in Mechanical Design," Int. J. Comput. Integr. Manuf.
[46] "Product Design and Manufacturing Process Based Ontology for Manufacturing Knowledge Reuse," J. Intell. Manuf.
[47] "Design for Manufacturing and Assembly/Disassembly: Joint Design of Products and Production Systems," Int. J. Prod. Res.
[48] "An Ontology-Based Product Design Framework for Manufacturability Verification and Knowledge Reuse," Int. J. Adv. Manuf. Technol.
[49] "Process of Creating an Integrated Design and Manufacturing Environment As Part of the Structure of Industry 4.0."
[50] "Toward Blockchain and Fog Computing Collaborative Design and Manufacturing Platform: Support Customer View," Rob. Comput.-Integr. Manuf.
[51] "A Rule-Based System to Promote Design for Manufacturing and Assembly in the Development of Welded Structure: Method and Tool Proposition," Appl. Sci.
[52] "Optimal Tolerance Allocation of Automotive Pneumatic Control Valves Based on Product and Process Simulations," ASME IDETC: 32nd Design Automation Conference, Parts A and B.
[53] "Optimal Tolerance Allocation With Loss Functions," ASME J. Manuf. Sci. Eng.
[54] "Optimal Tolerance Allocation for a Sliding Vane Compressor," ASME J. Mech. Des.
[55] "Optimal Tolerance Design of Assembly for Minimum Quality Loss and Manufacturing Cost Using Metaheuristic Algorithms," Int. J. Adv. Manuf. Technol.
[56] Principles of Optimal Design: Modeling and Computation, Cambridge University Press, Cambridge, UK.
[57] "Level Set Methods for Optimization Problems Involving Geometry and Constraints," J. Comput. Phys.
[58] "A Level Set Approach for Topology Optimization With Local Stress Constraints," Int. J. Numer. Methods Eng.
[59] "Wireless Max–Min Utility Fairness With General Monotonic Constraints by Perron–Frobenius Theory," IEEE Trans. Inf. Theory.
[60] "Estimating Cognitive Diagnosis Models in Small Samples: Bayes Modal Estimation and Monotonic Constraints," Appl. Psychol. Meas.
[61] "A Unified Framework for Wireless Max–Min Utility Optimization With General Monotonic Constraints," IEEE INFOCOM 2014 – IEEE Conference on Computer Communications, Toronto, ON, Canada, Apr. 27–May 2.
[62] "A New Efficient Convergence Criterion for Reducing Computational Expense in Topology Optimization: Reducible Design Variable Method," Int. J. Numer. Methods Eng.
[63] "Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design," AIAA J.
[64] "Computational Design of Gradient Paths in Additively Manufactured Functionally Graded Materials," ASME J. Mech. Des.
[65] "Developing and Comparing Alternative Design Optimization Formulations for a Vibration Absorber Example," 22nd Design for Manufacturing and the Life Cycle Conference; 11th International Conference on Micro- and Nanosystems, Cleveland, OH, Aug. 6–9.
[66] "Multiple Level-Set Methods for Optimal Design of Nonlinear Magnetostatic System," IEEE Trans. Magn.
[67] "A Review of Level-Set Methods and Some Recent Applications," J. Comput. Phys.
[68] "Analytical Level Set Fabrication Constraints for Inverse Design," Sci. Rep.
[69] "Logic, Optimization, and Constraint Programming," INFORMS J. Comput.
[70] "Active Constraint Strategies in Optimization," in Geophysical Data Inversion Methods and Applications: Theory and Practice of Applied Geophysics, Vieweg+Teubner Verlag, Wiesbaden, Germany.
[71] "On the Accurate Identification of Active Constraints," SIAM J. Optim.
[72] "Ranking of Customer Requirements in a Competitive Environment," Comput. Ind. Eng.
[73] "Selecting CRM Packages Based on Architectural, Functional, and Cost Requirements: Empirical Validation of a Hierarchical Ranking Model," Requirements Eng.
[74] Manufacturing Engineering and Technology, London, UK.
[75] "Integrating GD&T Into Dimensional Variation Models for Multistage Machining Processes," Int. J. Prod. Res.
[76] "Interpreting the Semantics of GD&T Specifications of a Product for Tolerance Analysis," Comput.-Aided Des.
February 2017

The nice thing about BUGS/JAGS/Stan/etc is that they can operate on arbitrarily complex bayesian networks. You can take my running 'coin toss' example and add extra layers. Imagine that we believe that the mint who made the coin produces coins whose bias ranges between theta=0.7 and theta=0.9 uniformly. Now we can take data about coin tosses, and use it to infer not only knowledge about the bias of one coin, but also about the coins made by the mint. But this kind of generality comes at a cost. Let's look at a simpler model: we have ten datapoints, drawn from a normal distribution with mean mu and standard deviation sigma, and we start with uniform priors over mu and sigma. For particular values of mu and sigma, the posterior density is proportional to the likelihood, which is a product of gaussians. However, with a bit of algebra we can avoid naively computing N exponentials, instead doing a single exponential involving a summation. So, as we add more data points, the runtime cost of evaluating the posterior (or at least something proportional to it) will still rise, but only at the cost of a few subtractions/squares/divides rather than more exponentials. In contrast, when I use JAGS to evaluate 20 datapoints, it makes twice as many log() calls as it does for 10 datapoints, so it seems not to be leveraging any algebraic simplifications. Next step: write a proof of concept MCMC sampler which runs faster than JAGS for the non-hierarchical cases which are most useful to me.

JAGS: normal and not-normal performance

Previously, we've walked through a coin-tossing example in JAGS and looked at the runtime performance. In this episode, we'll look at the cost of different distributions. Previously, we've used a uniform prior distribution. Let's baseline on a 100,000-step chain with 100 datapoints. For a uniform prior, JAGS takes a mere 0.2 secs.
But change to a normal prior, such as dnorm(0.5,0.1), and JAGS takes 3.3sec – with __ieee754_log_avx called from DBern::logDensity taking up 80% of the CPU time, according to perf:

70.90%  jags-terminal  libm-2.19.so      [.] __ieee754_log_avx
 9.27%  jags-terminal  libjags.so.4.0.2  [.] _ZNK4jags20ScalarStochasticNode10logDensityEjNS_7PDFTypeE
 5.09%  jags-terminal  bugs.so           [.] _ZNK4jags4bugs5DBern10logDensityEdNS_7PDFTypeERKSt6vectorIPKdSaIS5_EES5_S5_

If we go from bernoulli data with a normal prior, to normal data with normal priors on mean/sd, it gets more expensive again – 4.8 seconds instead of 3.3 – as the conditional posterior gets more complex. But still it's all about the logarithms. Logarithms aren't straightforward to calculate. Computers usually try to do a fast table lookup, falling back to a series-expansion approach if that doesn't work. On linux, the implementation comes as part of the GNU libc, with the name suggesting that it uses AVX instructions if your CPU is modern enough (there's not a "logarithm" machine code instruction, but you can use AVX/SSE/etc to help with your logarithm implementation). Notably, JAGS is only using a single core throughout all of this. If we wanted to compute multiple chains (eg. to check convergence) then the simple approach of running each chain in a separate JAGS process works fine – which is what the jags.parfit R package does. But could you leverage the SIMD nature of SSE/AVX instructions to run several chains in parallel? To be honest, the last time I was at this low level, SSE was just replacing MMX! But since AVX seems to have registers which hold eight 32-bit floats, perhaps there's the potential to do 8 chains in blazing data-parallel fashion? (Or alternatively, how many people in the world care both about bayesian statistics and assembly-level optimization?!)
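The "several chains in data-parallel fashion" idea can be sketched without dropping to assembly: batch the per-chain draws into one vectorised operation and let the library dispatch to SIMD-friendly code. Here is my own sketch using numpy (not something JAGS does), based on the conjugate beta draw from the running coin-toss example:

```python
import numpy as np

rng = np.random.default_rng(0)
heads, tails = 5, 5              # the running coin-toss data
n_chains, n_steps = 8, 1000

# Conjugate Gibbs update for theta, batched across all chains at once:
# one vectorised beta draw per step instead of a scalar draw per chain.
thetas = np.empty((n_steps, n_chains))
for t in range(n_steps):
    thetas[t] = rng.beta(1 + heads, 1 + tails, size=n_chains)

chain_means = thetas.mean(axis=0)   # each chain's mean, all near 0.5
```

Because this model is conjugate the "chains" are really independent draws, so this only illustrates the batching pattern, not a genuine dependence structure.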
Just for reference, on my laptop a simple "double x=0; for (int i=0; i<1e8; i++) x = log(x);" loop takes 6.5 seconds, with 75% being in __ieee754_log_avx – meaning each log() is taking 48ns.

To complete the cycle, let's go back to JAGS with a simple uniform prior and bernoulli likelihood, only do ten updates with one datapoint, and see how many times 'log' is called. For this, we can use 'ltrace' to trace calls to shared objects like log():

$ ltrace -xlog $JAGS/jags/libexec/jags-terminal example.jags 2>&1 | grep -c log@libm.so

Rather surprisingly, the answer is not stable! I've seen anything from 20 to 32 calls to log() even though the model/data isn't changing (but the random seed presumably is). Does that line up with the 3.4 seconds to do 10 million steps @ 10 data points, if log() takes 48ns? If we assume 2 calls to log() per datapoint per step, then 10e6 * 10 * 2 * 48e-9 = 9.6 secs. So, about 2.8x out, but fairly close.

Next step is to read through the JAGS code to understand the Gibbs sampler in detail. I've already read through the two parsers and some of the Graph stuff, but want to complete my understanding of the performance characteristics.

Performance impact of JAGS

In the previous two posts (here and here) I walked through an example of using JAGS directly to analyse a coin-toss experiment. I'm interested to learn how the runtime of JAGS is affected by model choice and dataset size, and where the time is spent during evaluation. JAGS is open-source, and written in C++, so it's quite easy to poke around the innards. First, let's do some high-level black-box tests. We'll take the coin-flip example from the previous post, and see how the wall-clock time on my rather old laptop increases as a) we increase the chain length, and b) we increase the dataset size. My expectation is that both will be linear, since JAGS only uses a single core. For ten coin flips, and 10/20/30 million steps, it takes 3.4/6.9/10.4 seconds without monitors, and 4.5/8.8/12.9 seconds with a monitor.
Plugging that into R shows a nice linear relationship, and we can get R to build a linear model for us to stop us having to think too hard:

> t <- read.table(stdin(), header=T)
0:     steps time
1:  10000000  3.4
2:  20000000  6.9
3:  30000000 10.4
> lm(time ~ steps, t)
(Intercept)        steps
   -1.0e-01      3.5e-07

In other words, each step takes about 0.35 microseconds. Similarly, if we stick to 10 million steps and no monitors, but increase the dataset size across 10/20/50/100, it takes 3.4/4.2/5.8/8.6 seconds, which R also shows is linear, albeit with a 3 second intercept:

> t <- read.table(stdin(), header=T)
0: datapoints time
1:         10  3.5
2:         20  4.2
3:         50  5.8
4:        100  8.6
> lm(time ~ datapoints, t)
(Intercept)  datapoints
    3.00408     0.05602

So this means that it takes 3 seconds to do a 10 million step walk, and although adding more datapoints makes each step more expensive, it's only a little bit more expensive – 10 datapoints being about 0.5 seconds more than 1 datapoint. However, if we desired to go to "big data" with, say, 10 million data points, we'd be talking about half a million seconds – ie. roughly six and a half days. So let's hope we don't need 10 million steps on a 10 million point dataset! Next thing on the list is to understand where all that time is going. For this we can use the lovely perf tools which were added to linux in 2.6:

$ perf record jags example.jags
Welcome to JAGS 4.2.0 on Sun Feb 26 16:00:26 2017
Initializing model
Updating 10000000
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 0.578 MB perf.data (14737 samples) ]
$ perf report
Overhead  Command        Shared Object       Symbol
 33.32%   jags-terminal  libm-2.19.so        [.] __ieee754_log_avx
 19.87%   jags-terminal  basemod.so          [.] _ZN4jags4base15WichmannHillRNG7uniformEv
 11.90%   jags-terminal  libjrmath.so.0.0.0  [.] jags_rbeta
 10.08%   jags-terminal  libm-2.19.so        [.] __ieee754_exp_avx

So this shows that the vast majority of time is being spent calculating logs and exponentials. Wichmann Hill is a pseudo-random uniform number generator.
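As a cross-check on the lm() fits earlier in this post, the same slope and intercept can be recovered with a few lines of ordinary least squares (my own sketch, using the step-count timings above):

```python
# Timings from the chain-length experiment above
steps = [10_000_000, 20_000_000, 30_000_000]
times = [3.4, 6.9, 10.4]

n = len(steps)
mean_x = sum(steps) / n
mean_y = sum(times) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(steps, times))
         / sum((x - mean_x) ** 2 for x in steps))
intercept = mean_y - slope * mean_x

print(slope, intercept)   # ~3.5e-07 and ~-0.1, matching lm()'s coefficients
```

Same numbers as R, so the 0.35 microseconds per step figure is just this slope read back.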
But why would you need exp/log to sample a bernoulli distribution with a uniform prior? Let's use a debugger to see why it's calling the log function ..

$ jags -d gdb
(gdb) b __ieee754_log_avx
Breakpoint 1 (__ieee754_log_avx) pending.
(gdb) r example.jags
Starting program: /home/adb/tmp/jags/libexec/jags-terminal example.jags
Welcome to JAGS 4.2.0 on Sun Feb 26 16:14:24 2017
Initializing model
Updating 10000000

Breakpoint 1, __ieee754_log_avx (x=16.608779218128113) at ../sysdeps/ieee754/dbl-64/e_log.c:57
57	../sysdeps/ieee754/dbl-64/e_log.c: No such file or directory.
(gdb) bt
#0  __ieee754_log_avx (x=16.608779218128113) at ../sysdeps/ieee754/dbl-64/e_log.c:57
#1  0x00007ffff66abce9 in jags_rbeta (aa=1, bb=0.5, rng=0x63b700) at rbeta.c:102
#2  0x00007ffff690e42e in jags::bugs::ConjugateBeta::update (this=0x63c200, chain=0, rng=0x63b700) at ConjugateBeta.cc:157
#3  0x00007ffff7b8b464 in jags::ImmutableSampler::update (this=0x63c170, rngs=std::vector of length 1, capacity 1 = {...}) at ImmutableSampler.cc:28

Our uniform prior is equivalent to a beta(1,1) prior, and since the beta and bernoulli distributions are conjugate, our posterior will be a beta distribution. For Gibbs sampling, each "jump" is a draw from a single-parameter conditional distribution – and since we only have one parameter theta, each "jump" sees us draw from a beta distribution. Of course, we could've used this fact to calculate the posterior distribution algebraically and avoid all of this monkeying about with MCMC. But the purpose was to explore the performance of the JAGS implementation rather than solve a coin-toss problem per se. In the next article, I'll look at the performance cost of switching to other distributions, such as normal and lognormal.

JAGS, and a bayesian coin toss

In the previous post, I talked about Bayesian stats and MCMC methods in general. In this post, I'll work through an example where we try to infer how fair a coin-toss is, based on the results of ten coin flips.
Most people use JAGS via an R interface, but I'm going to use JAGS directly to avoid obfuscation. (Note: a coin-toss is a physical event determined by physics, so the "randomness" arises only through uncertainty of how hard it's tossed, how fast it spins, where it lands etc, and therefore is open to all sorts of evil.) Firstly, we have to tell JAGS about our problem – eg. how many coin tosses we'll do, that we believe each coin toss is effectively a draw from a Bernoulli distribution with unknown proportion theta, and what our prior beliefs about theta are. To do this, we create "example.model" containing:

model {
  for (i in 1:N) {
    x[i] ~ dbern(theta)
  }
  theta ~ dunif(0,1)
}

This says that we'll have N coin-flips, and each coin flip is assumed to be drawn from the same Bernoulli distribution with unknown proportion theta. We also express our prior belief that all values of theta from zero to one are equally likely. We can now launch "jags" in interactive mode:

$ jags
Welcome to JAGS 4.2.0 on Sun Feb 26 14:31:57 2017
JAGS is free software and comes with ABSOLUTELY NO WARRANTY
Loading module: basemod: ok
Loading module: bugs: ok

.. and tell it to load our example.model file ..

. model in example.model

If the file doesn't exist, or the model is syntactically invalid, you'll get an error – silence means everything has gone fine. Next, we need the data about the coin flips, which corresponds to the x[1] .. x[N] in our model. We create a file called "example.data" containing:

N <- 10
x <- c(0,1,0,1,1,1,0,1,0,0)

The format for this file matches what R's dump() function spits out. Here we're saying that we have flipped ten coins (N is 10) and the results were tails/heads/tails/heads/heads etc. I've chosen the data so we have the same number of heads and tails, suggesting a fair coin. We tell JAGS to load this file as data: .
data in example.data
Reading data file example.data

Again, it'll complain about syntax errors (in an old-school bison parser kinda way) or if you have duplicate bindings. But it won't complain yet if you set N to 11 but only provide 10 data points. Next, we tell JAGS to compile everything. This combines your model and your data into an internal graph structure, ready for evaluating. It's also where JAGS will notice if you've got too few data points or any unbound names in your model.

. compile
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 10
   Unobserved stochastic nodes: 1
   Total graph size: 14

The graph consists of ten "observed" nodes (one per coin flip) and one unobserved stochastic node (the unknown value of theta). The other nodes presumably include the bernoulli distribution and the uniform prior distribution. At this stage, we can tell JAGS where it should start its random walk by providing an initial value for theta. To do this, we create a file "example.inits" containing:

theta <- 0.5

.. and tell JAGS about it ..

. parameters in example.inits
Reading parameter file example.inits

Finally, we tell JAGS to initialize everything so we're ready for our MCMC walk:

. initialize
Initializing model

Now we're ready to start walking. We need to be a bit careful at first, because we have to choose a starting point for our random walk (we chose theta=0.5) and if that's not a good choice (ie. it corresponds to a low posterior probability) then it will take a while for the random walk to dig itself out of the metaphorical hole we dropped it in. So, we do a few thousand steps of our random walk, give it a fancy name like "burn-in period" and cross our fingers that our burn-in period was long enough: .
update 4000
Updating 4000
-------------------------------------------------| 4000
************************************************** 100%

(JAGS gives some enterprise-level progress bars when in interactive mode, but not in batch mode.) JAGS has happily done 4000 steps in our random walk, but it hasn't been keeping track of anything. We want to know what values of theta it is jumping between, since that sequence (aka "chain") of values is what we want as output. To tell JAGS to start tracking where it's been, we create a monitor for our 'theta' variable, before proceeding for another 4000 steps, and then writing the results out to a file:

. monitor theta
. update 4000
-------------------------------------------------| 4000
************************************************** 100%
. coda *

The last command causes two files to be written out – CODAindex.txt and CODAchain1.txt. CODA is a hilariously simple file format, coming originally from the "Convergence Diagnostic and Output Analysis" package in R/S-plus. Each line contains a step number (eg. 4000) and the value of theta at that step (eg. 0.65). Here's an interesting thing – why would we need a "Convergence Diagnostic" tool? When we did our "burn-in" phase we crossed our fingers and hoped we'd run it for long enough. Similarly, when we did the random walk we also used 4000 steps. Is 4000 enough? Too many? We can answer these questions by looking at the results of the random walk – both to get the answer to our original question, and also to gain confidence that our monte-carlo approximation has thrown enough darts to be accurate. At this point, we'll take our coda files and load them into R to visualize the results.

$ R
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
> require(coda)
Loading required package: coda
> c <- read.coda(index.file="CODAindex.txt", output.file="CODAchain1.txt")
Abstracting theta ...
5000 valid values
> summary(c)

Iterations = 4001:9000
Thinning interval = 1
Number of chains = 1
Sample size per chain = 5000

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

      Mean       SD  Naive SE Time-series SE
  0.501658 0.139819  0.001977       0.001977

2. Quantiles for each variable:

   2.5%    25%    50%    75%  97.5%
 0.2436 0.4000 0.5022 0.6017 0.7675

This is telling us that, given ten coin flips and our prior uniform belief and our bernoulli assumption, the most probable value for theta (the proportion of coin-flips yielding heads) is close to 0.5. Half of the probability mass lies between theta=0.4 and theta=0.6, and 95% of the probability mass lies between theta=0.25 and theta=0.75. So it's highly unlikely that the coin flip is extremely biased – ie. theta<0.25 or theta>0.75. Pleasantly, "highly unlikely" means "probability is less than 5%". That's a real common-or-garden probability. Not any kind of frequentist null-hypothesis p-value. We can make lots of other statements too – for example, reading from the quantiles above, the probability that the bias is greater than 0.6 is only about 25%. If we had a second coin (or coin flipper) we could make statements like "the probability that coin2 has a higher bias than coin1 is xx%". Let's briefly revisit the question of convergence. There's a few ways to determine how well your random walk represents (or "has converged to") the true posterior distribution. One way, by Gelman and Rubin, is to run several random walks and look at the variance between them. The coda package in R comes with a function gelman.diag() for this purpose. However, in our simple example we only did one chain so we can't run it on our coda files. (Incidentally, Gelman writes a great blog about stats.) In the next post, I'll look at the performance characteristics of JAGS – how it scales with the number of data points, and what tools you can use to track this.
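Because the uniform prior is beta(1,1) and conjugate to the bernoulli likelihood (as the profiling post above showed), the exact posterior for these five-heads-five-tails data is Beta(6,6), so the JAGS sample summary can be checked against the closed form (my own sketch, not from the post):

```python
import math

heads, tails = 5, 5              # the example.data coin flips
a, b = 1 + heads, 1 + tails      # posterior is Beta(6,6)

# Closed-form mean and standard deviation of a Beta(a,b) distribution
mean = a / (a + b)
sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

print(mean, sd)   # 0.5 and ~0.1387, vs JAGS's sampled 0.5017 and 0.1398
```

The sampled mean sits within one Naive SE of the exact value, which is reassuring for a 5000-draw chain.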
No BUGS, instead JAGS

JAGS is a useful statistics tool, helping you decide how to generalise experimental results. For example, if you roll a die ten times and it comes up "six" half of the time, is that strong evidence that the die is loaded? Or if you toss a coin ten times and it comes up heads nine times, what is the probability that the coin is a "normal" coin? JAGS is based on the bayesian approach to statistics, which uses Bayes rule to go from your experimental results to a (probabilistic) statement of how loaded the die is. This is a different approach from the frequentist approach to statistics which most textbooks cover – with p-values and null hypothesis tests. The upside of the bayesian approach is that it answers the kind of questions you want to ask (like, "what is the probability that drug A is better than drug B at treating asthma") as opposed to the convoluted questions which frequentist statistics answer ("assuming that there's no difference between drug A and drug B, what's the probability that you'd get a measured difference at least as large as the one you saw?"). The downside of Bayesian stats is that you have to provide a "prior" probability distribution, which expresses your beliefs of how likely each outcome is prior to seeing any experiment results. That can seem a bit ugly, since it introduces a subjective element to the calculation. Some people find that unacceptable, and indeed that's why statistics forked in the early 1900s from its bayesian origins to spawn the frequentist school of probability, with its p-values, driven by the popular works of Ronald Fisher. But on the other hand, no experiment is run in a vacuum. We do not start each experiment in complete ignorance, nor is our selection of which experiments to run, or which hypotheses to check, determined objectively.
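For the coin that came up heads nine times out of ten, a conjugate-beta calculation makes the prior's influence concrete (my own illustration – the post doesn't do this calculation, and the Beta(50,50) "sceptical" prior is an arbitrary choice):

```python
heads, tails = 9, 1

# Posterior mean a/(a+b) after updating a Beta(a0,b0) prior with the flips:
# a flat Beta(1,1) prior vs a sceptical prior concentrated near fairness.
means = {}
for a0, b0 in [(1, 1), (50, 50)]:
    a, b = a0 + heads, b0 + tails
    means[(a0, b0)] = a / (a + b)

print(means)
```

Under the flat prior the posterior mean is about 0.83 (the coin looks loaded); under the strong fair-coin prior it barely moves from 0.5 (about 0.54) – which is exactly why it's worth rerunning an analysis with a range of priors.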
The prior allows us to express information from previous knowledge, and we can rerun our analysis with a range of priors to see how sensitive our results are to the choice of prior. Although Bayes rule is quite simple, only the simpler textbook examples can be calculated exactly using algebra. This does include a few useful cases, like the coin-flipping example used earlier (so long as your prior comes from a particular family of probability distributions). But for more real-world examples, we end up using numerical techniques – in particular, "Markov Chain Monte Carlo" methods. "Monte Carlo" methods are anything where you do simple random calculations which, when repeated enough times, converge to the right answer. A nice example is throwing darts towards a circular darts board mounted on a square piece of wood – if the darts land with uniform probability across the square, you can count what fraction land inside the circle and from that get an approximation of Pi. As you throw more and more darts, the approximation gets closer and closer to the right answer. "Markov Chain" is the name given to any approach where the next step in your calculation only depends on the previous step, but not any further back in history. In Snakes and Ladders, your next position depends only on your current position and the roll of the die – it's irrelevant where you've been before that. When using MCMC methods for Bayesian statistics, we provide our prior (a probability distribution) and some data and a choice of model with some unknown parameters, and the task is to produce probability distributions for these unknown parameters. So our model for a coin toss might be a Bernoulli distribution with unknown proportion theta, our prior might be a uniform distribution from 0 to 1 (saying that we think all values of theta are equally likely) and the data would be a series of 0's or 1's (corresponding to heads and tails).
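The darts-on-a-board Monte Carlo example above takes only a few lines to try (a quick sketch of the idea, unrelated to JAGS itself):

```python
import random

random.seed(42)
n = 100_000

# Throw darts uniformly at the unit square; count those landing inside
# the quarter-circle of radius 1. That fraction approximates pi/4.
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_approx = 4 * inside / n

print(pi_approx)   # close to 3.14159; the error shrinks like 1/sqrt(n)
```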
We run our MCMC algorithm of choice, and out will pop a probability distribution over possible values of theta (which we call the 'posterior' distribution). If our data was equally split between 0's and 1's, then the posterior distribution would say that theta=0.5 was pretty likely, theta=0.4 or theta=0.6 fairly likely, and theta=0.1 or theta=0.9 much less likely. There's several MCMC methods which can be used here. Metropolis-Hastings, created in 1953 during Teller's hydrogen bomb project, works by jumping randomly around the parameter space – always happy to jump towards higher probability regions, but willing to jump to lower probability regions only some of the time. This "skipping around" yields a sequence (or "chain") of values for theta drawn from the posterior probability distribution. So we don't ever directly get told what the posterior distribution is, exactly, but we can draw arbitrarily many values from it in order to answer our real-world question to a sufficient degree of accuracy. JAGS uses a slightly smarter technique called Gibbs sampling which can be faster because, unlike Metropolis-Hastings, it never skips/rejects any of the jumps. Hence the name JAGS – Just Another Gibbs Sampler. You can only use this if it's easy to calculate the conditional posterior distribution, which is often the case. But it also frees you from the Metropolis-Hastings need to have (and tune) a "proposal" distribution to choose potential jumps. In the next post, I'll cover the pragmatics of running JAGS on a simple example, then look at the performance characteristics.
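To make the Metropolis-Hastings description concrete, here is a toy sampler for the coin-toss posterior (uniform prior, five heads in ten flips). This is my own sketch of the general algorithm – JAGS itself would use Gibbs sampling for this model:

```python
import random

random.seed(1)
heads, tails = 5, 5

def unnorm_posterior(theta):
    # likelihood x flat prior, up to a normalising constant
    if not 0.0 < theta < 1.0:
        return 0.0
    return theta ** heads * (1.0 - theta) ** tails

theta = 0.5                      # starting point for the walk
samples = []
for step in range(20_000):
    proposal = theta + random.uniform(-0.1, 0.1)   # symmetric proposal
    # Always jump towards higher-probability regions; jump towards
    # lower-probability regions only some of the time.
    if random.random() < unnorm_posterior(proposal) / unnorm_posterior(theta):
        theta = proposal
    if step >= 2_000:            # throw away the burn-in period
        samples.append(theta)

print(sum(samples) / len(samples))   # posterior mean, close to 0.5
```

Note the tuning knob the post mentions: the width of the uniform proposal (0.1 here) has to be chosen by hand, which is exactly the chore Gibbs sampling avoids.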
Why is A4 paper the size that it is? - Double Helix

Take a sheet of A4 paper and measure its sides. A4 is 210 millimetres wide and 297 millimetres long. It's probably the most common size of paper and it's used in most countries. However, A4 side lengths aren't simple numbers like 200 or 300 millimetres. So why don't we use something easier to measure? If you take a sheet of paper and cut it halfway down the longer side, you end up with two new pieces of paper. These pieces of paper each have half the area of the original sheet, but they are the same proportions as the original sheet! There's only one type of rectangle that has this ability. Because these half sheets have the same proportions as A4, they also have a name – A5. If you cut an A5 sheet in half, you get two pieces of A6 paper, with the same proportions as A5 and A4. All these paper sizes are part of a set called the A series. This pattern also works if you want to go bigger instead of smaller. If you take two sheets of A4 paper and stick the long sides together, you'll end up with a sheet of paper that has the same proportions as A4, but is twice as big. This size is called A3. You can use the same process to make A3 sheets into A2, and even A2 sheets into A1 paper. So why is A4 paper called A4? A4 is half an A3, or one quarter of A2, but more importantly, it's one sixteenth of A0. A0 has an area of one square metre (but it isn't a square), and every other paper size in the A series is based on A0. We use A4 for writing on because it is a lot more convenient than trying to write on a square metre sheet of paper!

9 responses

I love it, that is so interesting. I'm going to go put 16 A4 sheets together and measure the area!

Great piece of info… keep it up

I did it, and it's not 1 meter squared. Big disappointment when this site is intended for students.
Hi Winston, A0 is not a square, but it is one square metre in size, or at least close to. A0 is 841 mm wide and 1189 mm tall. That comes to 0.999949 square metres, which is as close as you can get without changing the shape of the paper or going to fractions of a millimetre. It is also almost exactly 16 times larger than an A4 sheet – four times wider and four times taller. You can learn a bit more about the A series of paper sizes on Wikipedia: Hope this helps!

You calculated incorrectly

Hi Luke, I think my numbers are right? All the sizes are rounded to the nearest millimetre in the standard, which can make very small errors appear. If there's something clearly wrong, let me know exactly what the error is and I'll address it.

Nah he said it was not one meter (sic) square but it is one square metre. As David pointed out, the dims are 841 x 1189, same proportion as an A4 sheet, ie the long side is 1.4142135 times the length of the short side. That number, 1.4142135, is the square root of 2, and that ratio is the magic reason why the A paper series "works" in maintaining the proportionality over the different sizes.

Great article explaining why A4 paper is the size that it is! It's fascinating to learn about the logic behind the A series paper sizes and how each size is a proportion of the A0 size. This system makes it incredibly versatile and adaptable for various uses. For those interested in more details about the A4 size and other paper sizes, you might find this resource helpful: https://
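The square-root-of-two observation in the comments pins the whole system down: fix the long:short ratio at √2 so that halving preserves proportions, give A0 an area of one square metre, and successively halve the long side (rounding to whole millimetres). A quick sketch of that calculation (my own, not from the article):

```python
import math

# A0: sides in ratio sqrt(2) with an area of one square metre
short0 = 2 ** -0.25          # ~0.841 m
long0 = 2 ** 0.25            # ~1.189 m
assert abs(short0 * long0 - 1.0) < 1e-12
assert abs(long0 / short0 - math.sqrt(2)) < 1e-12

# Round A0 to whole millimetres, then halve the long side four times
w, h = round(short0 * 1000), round(long0 * 1000)   # A0 = 841 x 1189
for _ in range(4):
    w, h = h // 2, w      # cut across the longer side

print(w, h)   # 210 297 -- the familiar A4
```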
How to relate two points in subplots with matplotlib? To relate two points in a subplot using matplotlib, you can use the plot function to plot a line between the two points in a specific subplot. Here is an example code snippet that demonstrates how to do this:

import matplotlib.pyplot as plt

# Create a figure with two side-by-side subplots
fig, axs = plt.subplots(1, 2)

# Define the coordinates of the two points
point1 = (1, 1)
point2 = (3, 3)

# Plot the points in the first subplot
axs[0].scatter(point1[0], point1[1], color='red', label='Point 1')
axs[0].scatter(point2[0], point2[1], color='blue', label='Point 2')

# Plot a line connecting the two points in the second subplot
axs[1].plot([point1[0], point2[0]], [point1[1], point2[1]], color='green', label='Line connecting points')

# Set labels and titles for the subplots
axs[0].set_title('Points in subplot 1')
axs[0].legend()
axs[1].set_title('Line connecting points in subplot 2')
axs[1].legend()

plt.show()

This code creates a figure with two subplots. It defines two points (point1 and point2) and plots them in the first subplot using scatter. It then plots a line connecting these two points in the second subplot using the plot function. Finally, it sets titles and legends for both subplots and displays the figure using plt.show().
Henry's Physical Education

Mauchly's Test of Sphericity with Repeated Measures ANOVA in SPSS

Hello, this is Dr. Gandhi. Welcome to my video on testing sphericity using SPSS. The assumption of sphericity is used for repeated measures ANOVA, and to test the assumption we test the null hypothesis that the variances of the differences between all groups are equal. So, taking a look at these fictitious data I have loaded in the Data View in SPSS, you can see that I have an independent variable, program, with two levels, experimental and treatment as usual, and three observations, three dependent variables: a pretest, a test that occurs six weeks later, and a post-test that occurs twelve weeks later. These all represent the same instrument. So we have these three dependent variables, and looking at this first case, participant 1001, these three scores would all be generated by that one participant. So these are within subjects: all three of these dependent variables were created by the same participant. So when we talk about the variances of the differences between all possible groups being equal, the groups we're talking about are these three dependent variables, not the independent variable, not the levels of the independent variable, but rather these three dependent variables. I'm going to conduct two repeated measures ANOVAs, one that has just the three dependent variables, the three scores, and then one that adds the between-subjects factor, and I'll show you how we test for sphericity using Mauchly's test.

First, I'm going to go to Analyze, then General Linear Model, and then Repeated Measures, and you can see what the first dialog looks like. By default it has a within-subject factor name and then the number of levels. This is only the within-subject factor; this is not the independent variable. In this case, let's assume that these tests, the pretest and the tests that occur six and twelve weeks after, are measuring depression. So I'm going to change factor 1, which is what is there by default, to depression, and the number of levels will be 3 because we have three dependent variables. I go down here, enter 3, and then click Add. So: depression, and then 3. Then I'm going to click Define at the bottom left, and I get the Repeated Measures dialog. You can see it's already set up for three within-subjects variables, but they're blank: 1, 2, and 3. For the first one I'm going to move over the pretest, for the next the test that occurs six weeks after, and for the last, 3, the test that occurs twelve weeks after. For this first example I'm not going to use a between-subjects factor.

Now, to generate Mauchly's test of sphericity I don't need to make any changes under the buttons to the right; I just need to click OK and conduct the repeated measures ANOVA. Moving down to Mauchly's test of sphericity, you can see that we have a p-value for the statistic of 0.026. Mauchly's test of sphericity uses an alpha of 0.05, so this is a statistically significant result, which means we have violated the assumption of sphericity. In order to assume that we have sphericity, we'd have to have a value of greater than 0.05 here. So what can we do when we violate the assumption of sphericity? Oftentimes when using parametric statistics we note that some of them are robust to some violations of their assumptions. However, repeated measures ANOVA is sensitive to violations of sphericity, so we need to act when we have a statistically significant result; we can't just assume that the statistic is robust to non-sphericity, because it's not. Fortunately, SPSS includes a few corrections that we can use in the event that we do violate sphericity: one is the Greenhouse-Geisser and the other is the Huynh-Feldt. Here we have the values of epsilon for these statistics, not the p-values; those are down in the Tests of Within-Subjects Effects. We need to first look at epsilon for Greenhouse-Geisser and for Huynh-Feldt, and you can see in this case one is 0.851 and the other is 0.886. The number that we want to keep in mind when we're looking at these two values is 0.75. If we have epsilon values that are less than 0.75, we interpret the Greenhouse-Geisser correction; if the value is greater than 0.75, we interpret the Huynh-Feldt correction. In this case we can see that both of these values are greater than 0.75, so we would interpret the Huynh-Feldt. Moving down to the Tests of Within-Subjects Effects, the first row is "sphericity assumed"; we can't use that value because we violated the assumption of sphericity. Then we have Greenhouse-Geisser; again, we're not going to use that one because we have an epsilon value of greater than 0.75. Then we have Huynh-Feldt; this is the one we would interpret, and you can see it is statistically significant. Another option when your data have violated the assumption of sphericity is to conduct a MANOVA, a multivariate analysis of variance, as opposed to repeated measures ANOVA, because MANOVA does not have the assumption of sphericity.

Now I'm going to go back and conduct another repeated measures ANOVA, except this time I'm going to add program as a between-subjects factor; that's the only change I'm going to make. I click OK, and you can see for Mauchly's test of sphericity I now have a p-value of 0.142. It's a non-significant result, so I can assume that I've met the assumption of sphericity. In this case, if we were interested in depression x program, we would use the "sphericity assumed" row, and we have a p-value there of 0.002. A note about Mauchly's test of sphericity: this test has a tendency to miss violations of sphericity when working with small samples, and it has a tendency to detect violations of sphericity that aren't actually there in large samples. Also, Mauchly's test of sphericity is only interpretable if we have at least three dependent variables. I hope you found this video on Mauchly's test of sphericity to be useful. As always, if you have any questions or concerns, feel free to contact me and I'll be happy to assist you.
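The decision rule walked through above can be summarized in a few lines. This is my own illustrative sketch, not SPSS output; the 0.05 alpha and the 0.75 epsilon cutoff follow the video:

```python
def choose_correction(mauchly_p, gg_epsilon, alpha=0.05):
    """Pick which repeated-measures ANOVA row to interpret.

    Follows the rule in the video: a significant Mauchly's test means
    sphericity is violated; then use Greenhouse-Geisser when epsilon is
    below 0.75 and Huynh-Feldt when it is above.
    """
    if mauchly_p > alpha:
        return "sphericity assumed"   # assumption met, no correction needed
    if gg_epsilon < 0.75:
        return "Greenhouse-Geisser"   # stronger correction for low epsilon
    return "Huynh-Feldt"
```

With the video's numbers, choose_correction(0.026, 0.851) picks the Huynh-Feldt row, and choose_correction(0.142, 0.851) keeps the sphericity-assumed row.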
7,105 research outputs found

The general relativistic equations of stellar structure and evolution are reformulated in a notation which makes easy contact with Newtonian theory. A general relativistic version of the mixing-length formalism for convection is presented. It is argued that in work on spherical systems, general relativity theorists have identified the wrong quantity as total mass-energy inside radius r.

Rapidly rotating, slightly non-axisymmetric neutron stars emit nearly periodic gravitational waves (GWs), quite possibly at levels detectable by ground-based GW interferometers. We refer to these sources as "GW pulsars". For any given sky position and frequency evolution, the F-statistic is the optimal (frequentist) statistic for the detection of GW pulsars. However, in "all-sky" searches for previously unknown GW pulsars, it would be computationally intractable to calculate the (fully coherent) F-statistic at every point of a (suitably fine) grid covering the parameter space: the number of gridpoints is many orders of magnitude too large for that. Here we introduce a "phase-relaxed" F-statistic, which we denote F_pr, for incoherently combining the results of fully coherent searches over short time intervals. We estimate (very roughly) that for realistic searches, our F_pr is ~10-15% more sensitive than the "semi-coherent" F-statistic that is currently used. Moreover, as a byproduct of computing F_pr, one obtains a rough determination of the time-evolving phase offset between one's template and the true signal embedded in the detector noise. Almost all the ingredients that go into calculating F_pr are already implemented in LAL, so we expect that relatively little additional effort would be required to develop a search code that uses F_pr.
Comment: 8 pages, 4 figures, submitted to PR

A general physical mechanism of the formation of line-driven winds at the vicinity of strong gravitational field sources is investigated in the frame of General Relativity.
We argue that gravitational redshifting should be taken into account to model such outflows. The generalization of the Sobolev approximation in the frame of General Relativity is presented. We consider all processes in the metric of a nonrotating (Schwarzschild) black hole. The radiation force that is due to absorption of the radiation flux in lines is derived. It is demonstrated that if gravitational redshifting is taken into account, the radiation force becomes a function of the local velocity gradient (as in the standard line-driven wind theory) and the gradient of $g_{00}$. We derive a general relativistic equation of motion describing such flow. A solution of the equation of motion is obtained and confronted with that obtained from the Castor, Abbott & Klein (CAK) theory. It is shown that the proposed mechanism could have an important contribution to the formation of line-driven outflows from compact objects.
Comment: 20 pages, submitted to Ap

Two different versions of the Green's function for the scalar wave equation in weakly curved spacetime (one due to DeWitt and DeWitt, the other to Thorne and Kovacs) are compared and contrasted, and their mathematical equivalence is demonstrated. The DeWitt-DeWitt Green's function is used to construct several alternative versions of the Thorne-Kovacs post-linear formalism for gravitational-wave generation. Finally it is shown that, in calculations of gravitational bremsstrahlung radiation, some of our versions of the post-linear formalism allow one to treat the interacting bodies as point masses, while others do not.

The general relativistic equations of stellar structure and evolution are reformulated in a notation which makes easy contact with Newtonian theory. Also, a general relativistic version of the mixing-length formalism for convection is presented. Finally, it is argued that in previous work on spherical systems, general relativity theorists have identified the wrong quantity as "total mass-energy inside radius r".
Supermassive black holes which exist in the nuclei of many quasars and galaxies are examined, along with the collapse which forms these holes and subsequent collisions between them which produce strong, broad-band bursts of gravitational waves. Such bursts might arrive at earth as often as 50 times per year, or as rarely as once each 300 years. The detection of such bursts with dual-frequency Doppler tracking of interplanetary spacecraft is considered.

We describe the possibility of using LISA's gravitational-wave observations to study, with high precision, the response of a massive central body to the tidal gravitational pull of an orbiting, compact, small-mass object. Motivated by this application, we use first-order perturbation theory to study tidal coupling for an idealized case: a massive Schwarzschild black hole, tidally perturbed by a much less massive moon in a distant, circular orbit. We investigate the details of how the tidal deformation of the hole gives rise to an induced quadrupole moment in the hole's external gravitational field at large radii. In the limit that the moon is static, we find, in Schwarzschild coordinates and Regge-Wheeler gauge, the surprising result that there is no induced quadrupole moment. We show that this conclusion is gauge dependent and that the static, induced quadrupole moment for a black hole is inherently ambiguous. For the orbiting moon and the central Schwarzschild hole, we find (in agreement with a recent result of Poisson) a time-varying induced quadrupole moment that is proportional to the time derivative of the moon's tidal field. As a partial analog of a result derived long ago by Hartle for a spinning hole and a stationary distant companion, we show that the orbiting moon's tidal field induces a tidal bulge on the hole's horizon, and that the rate of change of the horizon shape leads the perturbing tidal field at the horizon by a small angle.
Comment: 14 pages, 0 figures, submitted to Phys. Rev.
We study an observational method to analyze non-Gaussianity of a gravitational wave (GW) background made by superposition of weak burst signals. The proposed method is based on fourth-order correlations of data from four detectors, and might be useful to discriminate the origin of a GW background. With a formulation newly developed to discuss geometrical aspects of the correlations, it is found that the method provides us with linear combinations of two interesting parameters, I_2 and V_2, defined by the Stokes parameters of individual GW burst signals. We also evaluate sensitivities of specific detector networks to these parameters.
Comment: 18 pages, to appear in PR

We give general sufficient conditions for the existence of trapped surfaces due to concentration of matter in spherically symmetric initial data sets satisfying the dominant energy condition. These results are novel in that they apply and are meaningful for arbitrary spacelike slices, that is, they do not require any auxiliary assumptions such as maximality, time-symmetry, or special extrinsic foliations, and most importantly they can easily be generalized to the nonspherical case once an existence theory for a modified version of the Jang equation is developed. Moreover, our methods also yield positivity and monotonicity properties of the Misner-Sharp energy.
How do you calculate parquet flooring? At the most basic level, you simply need to multiply the width by the length of a room. This will give you the area. The most important thing to remember is that you will then need to allow an amount for wastage: between 7-10% for a board or strip floor and between 10-15% for parquet woodblocks.

How much is Provenza flooring per square foot? Provenza compared to other vinyl plank brands:
Brand / Line | Overall Thickness | Price per Square Foot
Provenza MaxCore | 8 mm | $4.99 per sq. ft
Mannington Adura Max | 8 mm | $5.50 - $6.00 per sq. ft
Armstrong Luxe Plank | 7.8 mm | $4.39 - $5.39 per sq. ft
Smartcore Ultra XL | 7.5 mm | $3.89 per sq. ft

How many square feet are in a bundle of flooring? Wide x Random Length Solid Hardwood Flooring (19.5 sq. ft. / bundle)

How much does polyurethane flooring cost? Polyurethane finish costs around $40 to $100 per gallon depending on the thickness of the finish and the quality of the product. One gallon of polyurethane is usually enough to apply two coats over a 300-square-foot space. Polyurethanes come in both water-based and oil-based (alkyd-based) forms.

How do I calculate how much flooring I need? Multiply the length of the room by the width of the room to determine the square footage, which of course is expressed in square feet. For example: If the room is 12.583 ft. in length and 9.5 ft. in width, you would multiply the two figures (12.583 x 9.5) to determine the square footage, which equals 119.54 sq ft.

How do you calculate square metres? Multiply the length and width together. Once both measurements are converted into metres, multiply them together to get the measurement of the area in square metres.

Who owns Provenza flooring? Ron Sadri. It was Provenza Floors, in fact, that began commercially applying an innovative acrylic color impregnation process that expanded the options in stains for hardwood flooring.
“Infusion was developed more than 10 years ago, but is still our signature product,” said Ron Sadri, principal/owner of Provenza Floors.

Where are Provenza floors made? Featuring a wide plank European Oak (made in Holland) with a light to heavy wire brush and multi-stain process that is hand finished in the USA by Provenza master wood crafters.

How many boxes of flooring do I need? Order the flooring: one box of flooring may cover 30 square feet. Divide this number into the total square footage of the area you plan to cover. For example, if you have a total of 550 square feet including waste, dividing by 30 gives just over 18 boxes, so you would order 19.

How do I calculate how much hardwood flooring I need? To determine how many square feet you need, measure the room(s) length by its width. Example: if you have a room that is 10′ long by 10′ wide, you would multiply those numbers together. The total amount of square footage = 100 sq. ft.

How does a flooring calculator work? Before you start using the calculator, you should measure the width and length of the room you want to lay a new floor in. Based on these values, the calculator first calculates the area of the room in square footage. It estimates this using the formula: area = length x width. Once you know the room area (in square footage) you’re good to go purchase the material!
At Lowe’s, we have an endless selection of tile in different sizes, materials, textures and colors. How do you calculate linoleum flooring? You’ll calculate linoleum flooring just like you would for sheet vinyl. Follow these steps: Calculate the area of the room by multiplying the width times the length. Add 10%-20% for overage and waste from cutting. Round up to the nearest foot.
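The arithmetic repeated throughout these answers (area = length x width, add a waste allowance, divide by per-box coverage and round up) can be sketched as a small helper. This is a rough illustration of my own; the function name is hypothetical, and the 10% waste and 30 sq. ft. per box defaults come from the examples above:

```python
import math

def flooring_estimate(length_ft, width_ft, waste_pct=0.10, box_coverage_sqft=30.0):
    """Estimate flooring for a rectangular room, following the steps above:
    area = length x width, add a waste allowance (7-10% for board/strip,
    10-15% for parquet blocks), then divide by one box's coverage and
    round up to whole boxes."""
    area = length_ft * width_ft
    area_with_waste = area * (1 + waste_pct)
    boxes = math.ceil(area_with_waste / box_coverage_sqft)
    return area, area_with_waste, boxes
```

For the article's 12.583 ft x 9.5 ft room this gives an area of about 119.54 sq ft and, with 10% waste and 30 sq. ft. boxes, an order of 5 boxes.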
FIND and COUNT Unique Values, Unique Text Values or Numbers in Excel

Table of Contents

Whenever we prepare a report in Excel, it has two constituents: the text portion and the numerical portion. But just storing text and numbers doesn't make for great reports. Many times we need to automate the process in the reports to minimize the effort and improve the accuracy. Excel provides many functions that work on text and give us useful output as well. But a few problems remain for which we need to apply some tricks with the available tools. IN THIS ARTICLE, WE'LL LEARN TO COUNT THE UNIQUE TEXT VALUES IN EXCEL.

HOW IS TEXT HANDLED IN EXCEL?

TEXT is simply a group of characters and strings of characters that convey information about the different data and numbers in Excel. Every character is associated with a code [ANSI]. Text is made up of individual characters, the smallest unit found in Excel. We can perform operations on strings [text] or on individual characters. Characters are not limited to A to Z or a to z; many symbols are also included, as we will see in the latter part of the article. If we need to make anything inactive, such as a date that should not respond to calculation, we store it as text. Similarly, if we want to avoid any calculations for a number, it needs to be stored as text.

Literally, unique values are the ones that exist only once in the complete data, but in general (and in the default output of the UNIQUE function) unique values can also be taken as all the different values present in the data. The correct word for all the available different types of values is DISTINCT VALUES. For example, suppose we have data such that if we want to find out the unique values as per Excel, it'll give us the output as 1,2,3,4,5,6,7,8,9,0 i.e.
10 will be the count of the result of unique values as per Excel, but if we use the standard English definition (exists exactly once), only 1, 2, 7, 8 are the four values which exist only once, and the count will be 4. If you want to find out the values which exist only once or twice and so on, you can visit here. But if you want to know all the distinct values which exist in the range, you can continue reading this article. This can be a requirement sometimes when we need to count the unique text values in a given data or column. It can be done in a number of ways. Let us discuss a few.

1. USING REMOVE DUPLICATES

This is an option already available in Excel directly and is very easy to use. THIS OPTION IS NOT AUTOMATIC AND SHOULD BE USED ONLY IF WE DON'T HAVE MUCH DATA TO BE ANALYZED.
• Select the complete data or column where we want to find the number of unique values. [If we need the original column intact, copy the column and paste it somewhere else. We can perform operations on this column.]
• Go to DATA>REMOVE DUPLICATES.
• Select the column from the dialog box. We have only one column, so choose the column.
• After selecting the column, press OK.
• It'll leave only unique values in the column.
• Now we can put the formula =COUNTA(FIRST CELL OF COLUMN:LAST CELL OF COLUMN). [HELP: COUNTA FUNCTION]
• It'll give us the number of unique text values present in the column.

2. USING UNIQUE FUNCTION

In this method, we will make use of the UNIQUE FUNCTION, which will help us to find out the unique values or texts in the given column. After getting all the unique values we can count them easily. We'll take an example for trying our procedure. Suppose we have the following data. We have to process the information twice to get the desired results. First of all we need to find the unique values.
• Select the cell where we want to find the unique values.
• Put the following formula.
• =UNIQUE(array, [by_col], [exactly_once])
• For our example, the formula will be
• =UNIQUE(G6:G17, FALSE, FALSE) [The first argument is the array containing the data. The second argument (FALSE) tells Excel to compare values across rows rather than columns, and the third (FALSE) returns every distinct value, including those that occur more than once.]
• The resulting array will contain only the unique values.
• In the next column, we can count these by the use of the COUNTA FUNCTION as =COUNTA(H6:H17). We can see in the picture that we had only 4 values but we took a wide range. The reason is that we don't know the number of unique values if the data is too large.
• Take a look at the picture below for reference.
So, the cases we just discussed deal with data where we have text only, with or without blanks. [Because if there are any blanks, our intermediate step will remove those and keep only the single values.] But what if we have a mix, i.e., some numbers are also there in the same column? Let us take an example of mixed data and try to count all the unique values present in it.
1. Enter the formula at the top cell where you want the result as =SUM(IFERROR(1/COUNTIF(H27:H41,H27:H41),0)), where H27:H41 is the range on which we apply this check. We can go for a complete column also. (In older versions of Excel this must be entered as an array formula with Ctrl+Shift+Enter.)
2. Press Enter.
3. The result will appear.
The above procedure will find out the total number of distinct values, whether text or numbers. But what if we want to count only the distinct TEXT values, ignoring any numbers or blanks?
Let us take an example of mixed data and try to count only the unique TEXT values present in it.
1. Enter the formula at the top cell where you want the result as =SUM(ISTEXT(H4:H18)*IFERROR(1/COUNTIF(H4:H18,H4:H18),0)), where H4:H18 is the range on which we apply this check. We can go for a complete column also.
2. Press Enter.
3. The result will appear.
The above procedure will find out the total number of different TEXT VALUES available, ignoring all the numbers or blanks.
Just like the previous case, where we counted the unique text values only, ignoring the blanks and numbers, let us now try to count the unique NUMBER values only, ignoring the text and blanks. Again, take an example of mixed data.
1. Enter the formula at the top cell where you want the result as =SUM(ISNUMBER(H27:H41)*IFERROR(1/COUNTIF(H27:H41,H27:H41),0)), where H27:H41 is the range on which we apply this check. We can go for a complete column also.
2. Press Enter.
3. The result will appear.
The above procedure will find out the total number of different NUMBER TYPE VALUES available, ignoring all the text or blanks.
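The =SUM(1/COUNTIF(range,range)) trick works because a value occurring k times contributes k * (1/k) = 1 to the sum, so every distinct value adds exactly 1. A rough Python sketch of the same idea (function names are my own, for illustration; Python's equality rules differ from Excel's coercion, so this only demonstrates the 1/k principle):

```python
def count_distinct(values):
    """All distinct values: mirrors =SUM(IFERROR(1/COUNTIF(rng,rng),0)).
    Each occurrence of a value appearing k times contributes 1/k."""
    return round(sum(1 / values.count(v) for v in values))

def count_distinct_text(values):
    """Distinct text only: mirrors the extra ISTEXT(...) multiplier."""
    return round(sum(1 / values.count(v) for v in values
                     if isinstance(v, str)))

def count_distinct_numbers(values):
    """Distinct numbers only: mirrors the extra ISNUMBER(...) multiplier."""
    return round(sum(1 / values.count(v) for v in values
                     if isinstance(v, (int, float)) and not isinstance(v, bool)))
```

For the mixed list ["a", "b", "a", 1, 2, 2], count_distinct gives 4, count_distinct_text gives 2, and count_distinct_numbers gives 2.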
What happened to Rachel tonight? (3,427 posts) Fri May 28, 2021, 09:39 PM May 2021 What happened to Rachel tonight? Anyone know? 20 replies
Fri May 28, 2021, 09:40 PM May 2021 A long weekend for fishing.
2. The whole lineup changed tonite Fri May 28, 2021, 09:41 PM May 2021 for some reason. Chris Hayes was on for 2 hours, then an hour with Ari, and then I'm guessing a repeat of Chris Hayes.
6. Ari's show was a repeat. Fri May 28, 2021, 09:51 PM May 2021 I had seen it earlier. I was wondering what's up.
3. I think she said last night that she would be off for the long holiday weekend. n/t Fri May 28, 2021, 09:46 PM May 2021
5. I hope that's all it is. Fri May 28, 2021, 09:48 PM May 2021
Fri May 28, 2021, 09:47 PM May 2021
7. I get 7 paid weeks off a year. I assume she gets a similar number. Fri May 28, 2021, 09:56 PM May 2021 Good to see her using it. And I've heard she likes fishing. As a fellow angler I hope she is catching a bunch.
Fri May 28, 2021, 10:21 PM May 2021 I just retired from a pediatric practice where I worked for just shy of 25 years, and maxed out at 4 weeks. And a max of 2 sick days. Lucky you!
10. Well, 5 weeks vacation. After 20 years as a salaried employee. Fri May 28, 2021, 10:36 PM May 2021 And 10 holidays, which I mainly work due to the nature of our business, and take when I want. Plus 10 sick days which I seldom use. So not 7 weeks vacation, but 7 weeks paid off. I've got 35. And know why? My company has significant European branches and decided to equalize the entire company! So I can thank the French, Germans and others for my time off. The company decided that we are more productive when we get downtime. You'd be surprised at the number of people who don't take them, to the point that executives have started getting pressure to make their employees take the time. Certainly not me! Americans are crazy.
8. Looks like a five day weekend, and she deserves it. n/t Fri May 28, 2021, 09:57 PM May 2021 11.
She was off last night, so I expected she would be off tonight as well Fri May 28, 2021, 10:44 PM May 2021 Maybe gone fishing for the weekend.
12. She's on a boat, on a lake Fri May 28, 2021, 10:45 PM May 2021 taking a well earned break. Can't begrudge her that, she is a treasure!
Fri May 28, 2021, 11:01 PM May 2021 Without question. The stars get the day off. Cuomo & Lemon were both absent on CNN. Fri May 28, 2021, 11:32 PM May 2021
15. She's frequently off on Friday nights. nt Fri May 28, 2021, 11:35 PM May 2021
16. Reid, Maddow, O'Donnell, Williams all were off Fri May 28, 2021, 11:38 PM May 2021 most likely for the Memorial Day weekend. Melber and Hayes both did 2-hour shows that repeated starting at 11 PM.
17. Friday of a long holiday weekend, and many regular TV hosts are off Sat May 29, 2021, 12:14 AM May 2021
18. Her staff is also getting a well-deserved five-day holiday weekend. n/t Sat May 29, 2021, 06:41 AM May 2021
19. She was off from Thursday Sat May 29, 2021, 06:46 AM May 2021 so my guess is that she's taking a holiday weekend break. What I found odd is that Chris said he was having an extended program and did not mention that she was off.
Using Sigma Notation to Represent a Right Riemann Sum

Question Video: Using Sigma Notation to Represent a Right Riemann Sum — Mathematics • Higher Education

Represent the area under the curve of the function f(x) = x² + 4 in the interval [−2, 2] in sigma notation using a right Riemann sum with n subintervals.

Video Transcript

Represent the area under the curve of the function f of x equals x squared plus four in the closed interval negative two to two in sigma notation using a right Riemann sum with n subintervals.

Remember, when we're writing a right Riemann sum, we take values of i from one to n. And when we're writing a left Riemann sum, we take values of i from zero to n minus one. So to estimate the area under the curve of some function f(x) in the closed interval a to b using n subintervals, we find the sum of Δx times f(xᵢ) for i = 1 to n in a right Riemann sum, and the sum of Δx times f(xᵢ) for i = 0 to n − 1 in a left Riemann sum. Here Δx = (b − a)/n, and xᵢ = a + i·Δx.

Now of course, in this question, we're looking to find a right Riemann sum. So we will be taking values of i from one to n. And we can calculate Δx quite easily. We'll let a be equal to negative two and b be equal to two. This means Δx is equal to (2 − (−2))/n. And of course, two minus negative two is four. So Δx is simply 4/n.

Once we've calculated Δx, we can work out x subscript i. It's a, which we said was negative two, plus i lots of Δx. Here that's i times 4/n, or 4i/n. So we know Δx and we know xᵢ. So next, we need to work out what f(xᵢ) is. In this question, f(x) = x² + 4. So we're going to substitute our expression for xᵢ into this function. And when we do, we find that f(xᵢ) = (−2 + 4i/n)² + 4.

We then distribute the parentheses and obtain 4 − 16i/n + 16i²/n², and adding the extra four and combining like terms, we see that f(xᵢ) = 8 − 16i/n + 16i²/n². And actually, we've done enough now. We can substitute all of this into the summation formula. It's the sum from i = 1 to n of Δx — that's 4/n — times f(xᵢ), which we found to be 8 − 16i/n + 16i²/n².

We're going to add the terms inside the parentheses. We do that by multiplying the first term by n²/n² and the second term by n/n, creating a common denominator of n². So the part inside our parentheses becomes 8n²/n² − 16in/n² + 16i²/n². And that's great, because we can combine those numerators over the common denominator.

Let's clear some space for the final steps. Now we take out a factor of 8/n². The bit outside our parentheses becomes 32/n³, and we multiply it by n² − 2in + 2i². Now this might look a bit complicated, and you might wish to redistribute these parentheses and check that we have the same terms. And then this next step is a little special. n³ is independent of i, as of course is 32. This means we can take the factor 32/n³ outside of the summation. And we have found the right Riemann sum. It's

(32/n³) · Σᵢ₌₁ⁿ (n² + 2i² − 2in).
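As a quick numerical check (not part of the video), the closed form can be compared in Python against both the direct right Riemann sum and the exact area, ∫₋₂² (x² + 4) dx = 64/3:

```python
# Numerical check of the right Riemann sum for f(x) = x^2 + 4 on [-2, 2]:
# the direct sum and the closed form (32/n^3) * sum_{i=1}^{n} (n^2 + 2i^2 - 2in)
# should agree exactly, and both approach the true area 64/3 as n grows.

def right_riemann(n):
    a, b = -2.0, 2.0
    dx = (b - a) / n
    # right endpoints: x_i = a + i*dx for i = 1..n
    return sum(dx * ((a + i * dx) ** 2 + 4) for i in range(1, n + 1))

def closed_form(n):
    return (32 / n**3) * sum(n**2 + 2 * i**2 - 2 * i * n for i in range(1, n + 1))

exact = 64 / 3  # integral of x^2 + 4 from -2 to 2

print(right_riemann(1000), closed_form(1000), exact)
```

For n = 2 both expressions give 24, and as n increases they converge to 64/3 ≈ 21.33.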
Understanding Binary Trees and Their Properties

In computer science and data analysis, binary trees play a crucial role: they provide an efficient and organized way to store, retrieve, and manipulate data. With at most two child nodes per parent node, binary trees offer versatility in various applications, from representing hierarchical relationships to implementing efficient searching algorithms. In this article, we will explore the fundamentals of binary trees and delve into their essential properties.

What is a Binary Tree?

A binary tree is a tree-like data structure composed of nodes, where each node has at most two child nodes: a left child and a right child. These child nodes can be either empty or filled with data. The structure is called a tree because it resembles an inverted tree, with the root node representing the tree's base and subsequent nodes branching out as the tree grows. Binary trees can be classified into different types, such as complete binary trees, perfect binary trees, and balanced binary trees, based on specific constraints on their structure and ordering.

Properties of Binary Trees

1. Root Node

Every binary tree has a root node that serves as the initial point of access to the tree's elements. The root node is the topmost node in the hierarchy and does not have a parent node.

2. Child Nodes

Every node in a binary tree can have at most two child nodes: a left child and a right child. These child nodes may contain data or be empty, depending on the elements stored in the tree.

3. Leaf Nodes

Leaf nodes, also known as external nodes, are the bottommost nodes in a binary tree that do not have any child nodes. They serve as endpoints in the tree structure.

4. Internal Nodes

Internal nodes, also called non-leaf nodes, are the nodes in a binary tree that have at least one child node. They reside between the root node and the leaf nodes and represent the intermediate layers of the tree.

5. Height of a Tree

The height of a binary tree is the length of the longest path from the root node to any leaf node. It indicates the total number of layers or levels in the tree structure. A binary tree with only one node (the root node) has a height of 0.

6. Depth of a Node

The depth of a node in a binary tree is the length of the path from the root node to that specific node. It represents the node's level within the tree hierarchy.

7. Parent-Child Relationship

In a binary tree, every node (except the root node) has a parent node, which is the node directly above it. Similarly, every node can have at most two children, namely the left child and the right child.

8. Traversal Techniques

Binary trees offer various traversal techniques to visit and process each node in the tree:

• In-order traversal: Visit the left subtree, then the current node, and finally the right subtree.
• Pre-order traversal: Visit the current node, then the left subtree, and finally the right subtree.
• Post-order traversal: Visit the left subtree, then the right subtree, and finally the current node.
• Level-order traversal: Visit the nodes level by level, starting from the root node and moving to each subsequent level.

9. Binary Search Trees

A binary search tree (BST) is a type of binary tree that adds an ordering property to the structure. In a BST, the left child of a node contains a value smaller than the node's value, while the right child contains a value greater than the node's value. This ordering property enables efficient searching, insertion, and deletion operations in logarithmic time complexity (for balanced trees).

Understanding binary trees and their properties is essential for efficiently working with data structures and algorithms involving hierarchical relationships. The properties and characteristics discussed in this article provide a solid foundation for further exploration, including advanced operations, balancing techniques, and applications of binary trees in problem-solving.
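As a concrete illustration (a minimal Python sketch, not tied to any particular library), here is a BST with recursive in-order traversal; note that in-order traversal of a BST visits the stored values in sorted order:

```python
# Minimal binary search tree with recursive in-order traversal (a sketch).
# In-order traversal of a BST yields the stored values in sorted order.

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None   # left child: values smaller than self.value
        self.right = None  # right child: values greater than self.value

def insert(root, value):
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(root):
    # visit left subtree, then the current node, then the right subtree
    if root is None:
        return []
    return in_order(root.left) + [root.value] + in_order(root.right)

root = None
for v in [8, 3, 10, 1, 6]:
    root = insert(root, v)

print(in_order(root))  # -> [1, 3, 6, 8, 10]
```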
So, embrace the versatility of binary trees and leverage their power to enhance your data analysis and algorithmic skills!
Advancing Turbulence Insight: The Power of the Non-Linear SBSL-EARSM Model

Key Takeaways

• The Explicit Algebraic Reynolds Stress Model (EARSM) simplifies complex physical processes from Reynolds Stress Models into a more manageable two-equation framework.
• Non-linear eddy viscosity models (EVMs), with their quadratic terms defined as a function of strain and vorticity, are able to overcome the limitations of linear models. This allows them to accurately predict the high Reynolds stress anisotropy that results from complex flow physics.
• Non-linear EVMs have received significant interest because they offer the potential to return better predictions than linear EVMs but with only a moderate increase in required computing power.

Key Techniques for Implementing the S-BSL-EARSM Turbulence Model

EARSM builds on the standard two-equation turbulence models. Derived from the Reynolds Stress Transport Equation, EARSM establishes a non-linear relationship between the Reynolds stresses, the mean strain rate, and the vorticity tensor. Consequently, EARSM is much less computationally demanding than the Reynolds stress model (RSM) while capturing important turbulence features such as anisotropy in normal stresses and secondary flows, which linear eddy viscosity turbulence models cannot. The EARSM model implemented in Fidelity Open is the simplified baseline explicit algebraic Reynolds stress model (S-BSL-EARSM) proposed by Menter et al. (2009), which is based on the BSL k-ω model of Menter (1994) and allows the inclusion of anisotropic effects into the turbulence model.
In the S-BSL-EARSM model, the Reynolds-stress tensor is expressed using an effective eddy-viscosity formulation that includes a corrective extra-anisotropy tensor. [The definitions of the effective turbulence eddy viscosity, the extra-anisotropy tensor, the turbulence time scale, the production term, the inner and outer model constants, and the auxiliary blending function were given as equations in the original post and are not reproduced here.]

Wall Functions for S-BSL-EARSM

[The wall-function treatment, including the value imposed near the wall and the blending used in the intermediate region, was likewise given as equations in the original post.]

References

Menter F., 1994, "Two-equation eddy viscosity turbulence models for engineering applications", AIAA Journal, vol. 32:1299-1310.

Menter F., Garbaruk A.V., Egorov Y., 2009, "Explicit algebraic Reynolds stress models for anisotropic wall-bounded flows", EUCASS – 3rd European Conference for Aero-Space Sciences, Versailles.

Related: Assumptions and Insights of the k-epsilon Low Re Yang-Shih Turbulence Model; Theoretical Foundations of the k-ω Menter Shear Stress Transport Turbulence Model
Approximating general metric distances between a pattern and a text

Let T = t[0] ... t[n-1] be a text and P = p[0] ... p[m-1] a pattern taken from some finite alphabet set Σ, and let d be a metric on Σ. We consider the problem of calculating the sum of distances between the symbols of P and the symbols of substrings of T of length m for all possible offsets. We present an ε-approximation algorithm for this problem which runs in time O((1/ε²) · n · polylog(n, |Σ|)). This algorithm is based on a low-distortion embedding of metric spaces into normed spaces (especially, into ℓ∞), which is done as a preprocessing stage. The algorithm is also based on a technique of sampling.

Published in: Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, Association for Computing Machinery, pages 419–427, ISBN 9780898716474. Conference held in San Francisco, CA, United States, 20–22 January 2008.
ECE 515 - Control System Theory & Design

Homework 7 - Due: 03/07

Problem 1

Recall that for an LTI system, \(\dot x = Ax + Bu\), the controllability Gramian has the form \[ W (0, t) = \int \limits _{0} ^{t} e^{-As}BB^Te^{-A^T s} ds \]
a. Prove that the matrix \(\overline{W}(0,t)\) defined below is nonsingular for some \(t>0\) if and only if \(W(0, t)\) is. \[ \overline{W}\left(0, t\right) := \int \limits _{0} ^{t} e^{As} B B^T e^{A^Ts} ds \]
b. The result of (a) implies that the pair \((A, B)\) is controllable if and only if the pair \((-A, B)\) is controllable. Is this true for LTV systems? Prove or give a counterexample.

Problem 2

For the scalar system \[\dot x=-x+u\] consider the problem of steering its state from \(x=0\) at time 0 to \(x=1\) at some given time \(t\).
a. Since the system is controllable, we know that this transfer is possible for every value of \(t\). Verify this by giving an explicit formula for a control that solves the problem.
b. Is the control you obtained in part (a) unique? If yes, prove it; if not, find another control that achieves the transfer (in the same time \(t\)).
c. Now suppose that the control values must satisfy the constraint \(|u|\le 1\) at all times. Is the above problem still solvable for every \(t\)? For at least some \(t\)? Prove or disprove.
d. Answer the same questions as in part (c) but for the system \(\dot x=x+u\) (again with \(|u|\le 1\)).

Problem 3

Let us revisit our consensus problem from the last homework. Assume that agent 1 is the leader and knows a desired location \(p \in \mathbb{R}\) to which all agents should converge. Agents 2 and 3 do not know \(p\), and all agents see each other. Based on this information, write down modified consensus equations for which you can prove that all three agents asymptotically converge to \(p\) from arbitrary initial positions.
Problem 4 Consider the system \(\dot x=Ax+Bu\) with \[ A= \begin{bmatrix} -1&0&3\\0&1&1\\0&0&2 \end{bmatrix},\qquad B= \begin{bmatrix} 1\\1\\1 \end{bmatrix} \] Compute its Kalman controllability decomposition. Identify controllable and uncontrollable modes. Problem 5 Do BMP Problem 5.5.6 Problem 6 Recall that the controllable subspace of an LTI system \(\dot x = A x + Bu\) is the range of its controllability matrix \(\mathcal{C}\left(A, B \right)\). Consider a pair of matrices \((A, B)\) with \(A \in \mathbb{R}^{n\times n}, B \in \mathbb{R}^{n\times m}\) and let a matrix \(K \in \mathbb{R}^{m \times n}\) be given. Prove that the controllability subspaces of \((A, B)\) and \((A+BK, B)\) are equal and thus \((A+BK, B)\) is controllable if and only if \((A,B)\) is.
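As a numerical companion to Problem 4 (not part of the assigned pencil-and-paper solution), the rank of the controllability matrix \(\mathcal{C} = [B \; AB \; A^2B]\) can be checked in a few lines of NumPy. Here \(B\) happens to be an eigenvector of \(A\) (since \(AB = 2B\)), so the controllable subspace is one-dimensional:

```python
import numpy as np

# Quick numerical check for Problem 4: rank of C = [B, AB, A^2 B].
A = np.array([[-1.0, 0.0, 3.0],
              [ 0.0, 1.0, 1.0],
              [ 0.0, 0.0, 2.0]])
B = np.array([[1.0], [1.0], [1.0]])

C = np.hstack([B, A @ B, A @ A @ B])
rank = np.linalg.matrix_rank(C)
print(C)
print("rank =", rank)  # B is an eigenvector of A (AB = 2B), so the
                       # controllable subspace is span{B} and the rank is 1
```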
The "Debt Avalanche" Always Wins - The Best Interest Would you rather listen to this? I discuss this article in Episode 42 of The Best Interest Podcast, below. Friend-of-the-blog Tyler knows I like a good math problem, so he asked me: Question for you about paying off debt. Many people know about the “debt snowball” and the “debt avalanche.” But I wonder if focusing on the debt that is costing you the most based on the rate and balance would have an advantage over just focusing on paying off the smaller balance with a higher rate? First, let’s define some terms: • The “debt snowball” is an idea that you should focus on your smallest debt principal first. • The “debt avalanche” suggests focusing on your largest interest rates first. • And Tyler’s idea—I call it the “debt blizzard”—focuses on whichever debt has the largest monthly payment first. The table above aligns with these definitions (see the three columns on the right). But, which method is mathematically best? Surprise! Did you see the article title!? It’s the debt avalanche. Always. No matter what. Speaking of…did you see this crazy avalanche video?! If you want to understand why, follow this logic: • All three methods eliminate the total loan principal. We need to find whichever method minimizes the interest paid. Do you agree? • So let’s say I give you $1.00. Just one. You decide to pay off some debt. Two things will happen: □ You’ll decrease your remaining principal by $1.00 □ You’ll decrease your future interest payments by…well, we’d have to do some math. • But if you want to be most effective with that dollar, how would you do it? No matter which debt you target, you’re always decreasing your principal owed by $1.00. That’s not a differentiator. Your only “knob to turn” lies in the interest payments. • What’s the smart move? You should target whichever debt lowers your future interest payments the most. Agreed? • Well…that’s easy. In our example above, one debt is charging interest at 24%. 
That’s the debt we should target with this $1.00. • Now repeat, dollar after dollar… • As long as the 24% loan still exists, that one makes sense to target. Then the 8% loan, then 6%, then 4%. • As we said at the beginning: All three methods eliminate the total loan principal. Our method minimizes the interest payments. • We’ve just created the debt avalanche. That’s the optimal payoff plan. The chart above shows the exact payout schedule for our four loans using the three different methods (I assumed $1000 per month payments). The solid lines track our debt over time. The dotted lines track how much interest we’ve paid. The avalanche has both the shortest repayment period and the least interest paid. The snowball’s and blizzard’s efficacies are dependent on the loans themselves. Sometimes they’ll work well, other times poorly. Because the snowball (which cares about principal) and the blizzard (which cares about monthly payments) are focused on the wrong metrics. Yes – the snowball does have a non-financial benefit of “small wins.” By focusing on the smallest debt first, a person can build motivational momentum to continue their positive financial journey. This is phenomenal and could be a justifiable reason to use the debt snowball. The blizzard has psychological benefits too. But the avalanche would still be mathematically optimal. Nothing ground-breaking here. But this should be helpful if you or people in your life are unsure how to approach paying off their debt. Use the debt avalanche. Focus on the highest interest rates first. Thank you for reading! If you enjoyed this article, join 8500+ subscribers who read my 2-minute weekly email, where I send you links to the smartest financial content I find online every week. You can read past newsletters before signing up. Want to learn more about The Best Interest’s back story? Read here. Looking for a great personal finance book, podcast, or other recommendation? Check out my favorites. 
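The payoff logic described above can also be checked with a small simulation. This is a sketch with made-up loans (not the exact four-loan example from the post): each month a fixed budget first covers accrued interest and then pays down principal, targeting loans in whichever order the strategy prescribes.

```python
# Sketch comparing total interest under the debt avalanche (highest rate
# first) and the debt snowball (smallest balance first). The three loans
# below are hypothetical, not the four-loan example from the post.

def payoff(loans, key, budget=500.0):
    loans = [list(loan) for loan in loans]  # [balance, annual_rate], mutable copy
    total_interest = 0.0
    while any(bal > 0 for bal, _ in loans):
        # accrue one month of interest on every open loan
        for loan in loans:
            if loan[0] > 0:
                interest = loan[0] * loan[1] / 12
                loan[0] += interest
                total_interest += interest
        # spend the whole monthly budget, targeting loans in strategy order
        cash = budget
        for loan in sorted((l for l in loans if l[0] > 0), key=key):
            pay = min(cash, loan[0])
            loan[0] -= pay
            cash -= pay
            if cash <= 0:
                break
    return total_interest

loans = [(1000, 0.05), (3000, 0.20), (6000, 0.10)]
avalanche = payoff(loans, key=lambda l: -l[1])  # highest interest rate first
snowball = payoff(loans, key=lambda l: l[0])    # smallest balance first
print(f"avalanche: ${avalanche:.2f} interest, snowball: ${snowball:.2f} interest")
```

With these loans the smallest balance carries the lowest rate, so the snowball pays strictly more interest than the avalanche — matching the argument above.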
2 thoughts on "The 'Debt Avalanche' Always Wins"

1. I'm surprised nobody talks about a hybrid approach to blend the psychological benefits and the optimal financial benefits. Let's say you have 3 credit cards at various amounts and all are at a high interest rate but relatively close together (19%-21%). Then a car loan at 1.9%, and a number of student loans at rates somewhere in between.

Starting with the credit cards, then the student loans, then the car loan makes sense due to interest rates and will save a bunch of money. Starting with the lowest balance on the credit cards (and then the next bucket when all credit cards are done), though, can get you an easy win and not change the interest payments significantly, as they're in the same ballpark.

There's also a benefit to reducing risk: completely paying off loans faster reduces your minimum payment requirements and puts you in a less risky position in an emergency/job-loss situation, while reducing the risk of someone quitting because they feel they're not seeing progress fast enough. If we think of it like risk-adjusting a portfolio, we might have to consider whether the extra risk is worth the small savings over the hybrid approach. Although certainly the savings would be worth it over the debt snowball.

I mostly agree with you, just providing an alternative perspective. My method is harder to explain and counts on the individual to be able to make the right judgment, as there's no black-and-white answer.

1. Thanks for the thoughts, Joe – I appreciate it! Yeah, the "risk reduction" aspect of eliminating minimum payments is definitely worth considering and is probably worth me thinking about some more… If you're paying $1000/month, what value should you place on reducing your monthly payment from $500/mo to $400/mo? It's got to be related to the question, "What are the odds that you won't be able to make $1000/mo payments in the future?
And if that’s the case, how low will you go?” Great food for thought.
Portfolio Allocation and Pair Trading Strategy using Python

Learn to perform a comparative analysis of the Portfolio Allocation strategy with the Pair Trading strategy, using the Sharpe, Sortino and Calmar ratios. The complete data files and Python code used in this project are available in a downloadable format at the end of the article.

This article is the final project submitted by the author as a part of his coursework in the Executive Programme in Algorithmic Trading (EPAT) at QuantInsti. Do check our Projects page and have a look at what our students are building.

About the Author

Ravindra Singh Rawat has a Bachelor's degree in Electronics and Telecommunication Engineering from NMIMS University. Previously, he has worked with Talerang (a company incubated at Harvard) and QuantInsti. He has a keen interest in analyzing financial data and he aspires to build a career in the financial markets in some capacity.

Project Abstract

The aim of the project is to compare the Portfolio Allocation strategy with the Pair Trading strategy. The scope of the project is the Indian equities market. Our aim was to perform a comparative analysis using the Sharpe, Sortino and Calmar ratios.

• Under the above assumptions, it was found that allocation works better for the following sectors: the Pharmaceuticals and Financial Services baskets.
• Pairs trading seems to be a better alternative for the following sectors: the Technology, Automobile, and Private Banks baskets.

It is advisable to use the Sharpe ratio metric along with the Sortino and Calmar ratio metrics, as the latter two ratios take into account the downside risk and the drawdown associated with a particular strategy.

A brief history of the Ratios

The Sharpe ratio was developed by William F. Sharpe in 1966, after whom it is named.
It measures the performance of an investment (e.g., a security or portfolio) compared to a risk-free asset, after adjusting for its risk.

One of the key criticisms of the Sharpe ratio was that it is poor at estimating tail risks; a normal distribution is assumed, hence it cannot differentiate between positive and negative deviations. This gave rise to the post-modern portfolio theory and, with it, the Sortino ratio. The post-modern portfolio theory (PMPT) is a portfolio optimization methodology that uses the downside risk of returns instead of the mean variance of investment returns used by the modern portfolio theory (MPT). The PMPT stands in contrast to the MPT; both detail how risky assets should be valued while stressing the benefits of diversification, with the difference between the theories being how they define risk and its impact on returns. Brian M. Rom and Kathleen Ferguson, two software designers, created the PMPT in 1991 when they believed there to be flaws in software design using the MPT.

The year 1991 also gave rise to the Calmar ratio. It was created by Terry W. Young and first published in the trade journal titled 'Futures'. Young owned California Managed Accounts, a firm in Santa Ynez, California, which managed client funds and published the newsletter CMA Reports. The name of his ratio, "Calmar", is an acronym of his company's name and his newsletter: CALifornia Managed Accounts Reports.

Young defined it thus: "The Calmar ratio is the average annual rate of return for the last 36 months divided by the maximum drawdown for the last 36 months. It is calculated on a monthly basis." Young believed the Calmar ratio was superior because it changes gradually and serves to smooth out the overachievement and underachievement periods of a CTA's (Commodity Trading Advisor's) performance more readily than the Sharpe ratio.
The first time that I came across the phrase 'Monte Carlo simulations' was when I was reading the book 'Fooled by Randomness' by Nassim Nicholas Taleb. I found the idea of running multiple simulations to be very intriguing. While I was learning about the Pair Trading strategy during my EPAT sessions, I thought to myself: why not create a project where I compare the Portfolio Allocation (Monte Carlo simulation based) strategy to a Pair Trading strategy? Right then and there I had my project idea.

Project Description

My idea was simple: I wanted to compare a Portfolio Allocation strategy with a Pair Trading (mean reversion) strategy. In fact, I used a triplet, i.e. 3 stocks instead of a pair, for both strategies. The basis on which I would compare the effectiveness of the said strategies would be the following ratios:

Sharpe Ratio

It is the ratio for comparing reward (return on investment) to risk (standard deviation). This allows us to adjust the returns on an investment by the amount of risk that was taken in order to achieve it. This is given by the following formula:

Sharpe ratio = (R − Rf) / σ

• R - annual expected return of the asset in question.
• Rf - annual risk-free rate. Think of this as a deposit in the bank earning x% per annum.
• σ - annualized standard deviation of returns.

Sortino Ratio

The Sortino ratio is very similar to the Sharpe ratio, the only difference being that where the Sharpe ratio uses all the observations for calculating the standard deviation, the Sortino ratio only considers the negative variance. The rationale for this is that we aren't too worried about positive deviations; however, the negative deviations are of great concern, since they represent a loss of our money. This is given by the following formula:

Sortino ratio = (R − Rf) / σd

• R - annual expected return of the asset in question.
• Rf - annual risk-free rate.
• σd - annualized downside standard deviation of returns.

Calmar Ratio

This is similar to the other ratios, with the key difference being that the Calmar ratio uses the maximum drawdown in the denominator as opposed to the standard deviation:

Calmar ratio = (R − Rf) / Maximum Drawdown

• R - annual expected return of the asset in question.
• Rf - annual risk-free rate.

For the purpose of my project, I have considered Rf to be 0 for all ratios.

I collected the data from the Yahoo Finance website. The data pertains to stocks in the Indian stock market, dated from 29th May 2017 to 30th June 2020. The sectors that I have compared are Technology, Finance (Private Banks), Finance (Financial Services), Automobile, and Pharma. From each sector, I selected 3 stocks at random.

Portfolio Allocation Strategy methodology

At first, I computed the Sharpe ratio. The procedure was as follows:

• I imported the relevant stock data from Yahoo Finance.
• Then I calculated the return of the Adjusted Close prices using the pct_change() method.
• After that, I calculated the mean of returns and the covariance matrix. The covariance matrix tells us about the relationship between the movements of 2 stocks.
• In order to run a simulation, I first had to create a matrix. Since I planned to run 10,000 simulations, I initialized the number of rows to 10,000. In the said matrix, I wanted to display the mean returns, standard deviation, Sharpe ratio, and the 3 stocks. Hence the matrix would be of the order 10000 x 6.
• The 3 'stocks' columns would display their respective weightage in the portfolio. The idea was to randomize the weightage of the stocks in the portfolio (the weights are assigned inside a 'for loop').
• Initially, all the values in the matrix are zero because no values have been fed to it yet.
• The main computation takes place inside the 'for loop'.
• Portfolio return is calculated as follows:

Portfolio return = Mean returns × weights × 252

where 252 is the number of trading days in a year.
• Portfolio standard deviation is calculated as follows:

Portfolio standard deviation = √(weightsᵀ · Covariance matrix · weights)

i.e., the square root of the dot product of the weights with the dot product of the covariance matrix and the weights.
• Then we calculate the Sharpe ratio by dividing the portfolio return by the portfolio standard deviation:

Sharpe ratio = Portfolio return / Portfolio standard deviation

• We then populate the 'zero' matrix with the required values.
• Our aim is to find the following portfolios: one with the maximum Sharpe ratio and the other with the least standard deviation.
• For that we use the 'iloc' functionality of pandas to locate the corresponding rows.
• The 'iloc' functionality is used in conjunction with the 'idxmax' and 'idxmin' functionalities to locate the portfolios with the maximum Sharpe ratio and the least standard deviation, respectively.
• At the end, we plot our results using the matplotlib library.

Procedure for computing the Sortino ratio

• The procedure is similar to calculating the Sharpe ratio. The difference lies in the calculations inside the 'for loop'.
• First I calculated the daily portfolio return.
• Then I calculated the mean of the portfolio return (using the mean() method) and multiplied it by 252 in order to annualize the return.
• In order to calculate the downside standard deviation, I only considered those portfolio returns which were negative (using the np.where method as a filter). The standard deviation was calculated using the std() method.
• After that I calculated the Sortino ratio:

Sortino ratio = Annualized portfolio return / Downside standard deviation

• After using the 'iloc' functionality as I had done whilst calculating the Sharpe ratio, I plotted the results using the matplotlib library.
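A compact sketch of this simulation loop is shown below, with synthetic daily returns standing in for the Yahoo Finance data, and 1,000 simulations instead of 10,000 (the random data and variable names are illustrative, not the project's original code):

```python
import numpy as np

# Monte Carlo allocation sketch: random weights for 3 assets, storing the
# annualized return, volatility and Sharpe ratio of each simulated portfolio.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=(750, 3))  # synthetic data

mean_returns = daily_returns.mean(axis=0)
cov_matrix = np.cov(daily_returns, rowvar=False)

n_sims = 1000
results = np.zeros((n_sims, 6))  # columns: return, std, Sharpe, w1, w2, w3
for i in range(n_sims):
    w = rng.random(3)
    w /= w.sum()                              # random weights summing to 1
    port_return = mean_returns @ w * 252      # annualized portfolio return
    port_std = np.sqrt(w @ cov_matrix @ w)    # sqrt(w' Cov w)
    results[i] = [port_return, port_std, port_return / port_std, *w]

best = results[results[:, 2].argmax()]  # row with the maximum Sharpe ratio
print("max Sharpe:", round(best[2], 3), "weights:", best[3:])
```

In the original project the `results` matrix is a pandas DataFrame, so the max-Sharpe and min-volatility rows are pulled out with `idxmax`/`idxmin` and `iloc` rather than `argmax`.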
Procedure for calculating the Calmar ratio

• The procedure is similar to calculating the Sharpe ratio. The difference lies in the calculations inside the 'for loop'.
• First I calculated the daily portfolio return.
• After that, I created a function called maximum drawdown. This function is used to calculate the drawdown as follows:

Drawdown = 1 − (Portfolio return / Running max)

• The portfolio return can be thought of as equity and the running max as peak equity. The running max is always greater than or equal to the portfolio return.
• After that, I computed the maximum drawdown using the max() function.
• Once this was done, I proceeded to calculate the Calmar ratio as follows:

Calmar ratio = Portfolio return / Maximum drawdown

• After using the 'iloc' functionality as I had done whilst calculating the Sharpe ratio, I plotted the results using the matplotlib library.

Procedure for the Pair trading strategy

At first, I imported the data from Yahoo Finance.

• Unlike the Portfolio Allocation strategy, I am only interested in the price series. Hence I only imported the Adjusted Close series and did not calculate the return. After that, I plotted the series using the plot() method.
• Now we get into the nitty-gritty of the pair trading strategy. I started by calculating the hedge ratio.
• The hedge ratio tells us the number of shares we have to buy/sell for the strategy to remain mean reverting. For example, suppose I buy a share of X, and the hedge ratio for Y is 1.29 and for Z is -2.29. This means that when I buy a share of X, I need to sell 1.29 shares of Y and buy 2.29 shares of Z for the strategy to remain mean reverting.

Then I calculated the spread of the strategy. The spread is the difference between the long position and the short position. Then I plotted the spread.

• For a pair trading strategy, the spread must be stationary. To check this, we perform a statistical test called the ADF test.
• To satisfy the ADF test, the test statistic at ADF index 0, i.e. ADF[0], must be less than the critical values at ADF index 4, i.e. ADF[4]. After that, I created a function called 'stat_arb', and the values fed to it were the Adjusted Close price, the lookback period and the standard deviation.
• The 'stat_arb' function was created so that I could generate signals for the pair trading strategy. The signals are generated using Bollinger bands. The Bollinger band consists of 3 lines: the moving average, the lower band and the upper band.
• Inside the function, I calculated the moving average and the moving standard deviation. The lookback was used as the size of the rolling window. Then I computed the upper band and the lower band:

Upper Band = Moving Average + Standard Deviation × Moving Standard Deviation
Lower Band = Moving Average − Standard Deviation × Moving Standard Deviation

Then I created the entry and exit positions for the long and short positions.

• A long entry is when the spread is lower than the lower band.
• A long exit is when the spread is greater than or equal to the moving average.
• A short entry is when the spread is greater than the upper band.
• A short exit is when the spread is less than or equal to the moving average.
• An exit is denoted by 0. A long entry is denoted by 1 and a short entry is denoted by -1.
• The net position is the summation of the long and short positions.

Then I calculated the spread difference so that I could calculate the pnl. The spread difference is today's spread minus the previous day's spread. The pnl is the spread difference multiplied by the net positions. The net position is shifted by 1, i.e. to the previous day, in order to avoid look-ahead bias. After that, I calculated the cumulative pnl. The strategy returns are needed to calculate the Sharpe, Sortino and Calmar ratios. To calculate the strategy returns I first need to calculate the percentage change of spread.
Percentage change of spread = (Today's spread − Previous day's spread) / (parameter 0 × Adjusted Close of Stock B, shifted by 1 day + parameter 1 × Adjusted Close of Stock C, shifted by 1 day + Adjusted Close of Stock A, shifted by 1 day)

Strategy returns = Net position, shifted by 1 day × Percentage change of spread

The net position is shifted by a day to avoid look-ahead bias. Then I calculated the cumulative product of the strategy returns (the cumulative returns) and plotted it on a graph. In another cell, I plotted the cumulative returns together with the positions. Then I proceeded with calculating the drawdown (required to calculate the Calmar ratio):

Drawdown = 1 − (Cumulative returns / Running Max)

Cumulative returns can be thought of as the equity and the running max as the peak equity. The running max is always greater than or equal to the cumulative returns. Then I plotted the drawdown. After that, I proceeded with calculating the Sharpe, Sortino and Calmar ratios:

Sharpe ratio = (Mean of strategy returns / Standard deviation of strategy returns) × √252
Sortino ratio = (Mean of strategy returns / Standard deviation of negative strategy returns) × √252

- To calculate the Calmar ratio, I needed the average annual return and the maximum drawdown.
- To calculate the average annual return, I require the last value of the cumulative returns and the number of years.
- To calculate the years, I count the number of trading days and divide by 252, the number of trading days in a year.

Average annual return = (Final value of cumulative returns)^(1/years) − 1
Calmar ratio = Average annual return / Maximum drawdown

Alternatively, I also generated a pyfolio tear sheet for the same ratios. Pyfolio serves as a good way to check that the ratios generated were correct. Here I collected the results sector-wise.
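The three ratios can be sketched in a few lines of plain Python. This is a sketch under the annualisation convention above (252 trading days); the downside-deviation definition in `sortino` is one common convention, and all names are illustrative:

```python
import math

TRADING_DAYS = 252

def sharpe(returns, periods=TRADING_DAYS):
    mu = sum(returns) / len(returns)
    sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / len(returns))
    return mu / sd * math.sqrt(periods)

def sortino(returns, periods=TRADING_DAYS):
    mu = sum(returns) / len(returns)
    # downside deviation: only negative returns contribute
    dd = math.sqrt(sum(r ** 2 for r in returns if r < 0) / len(returns))
    return mu / dd * math.sqrt(periods)

def calmar(final_cumulative_return, n_days, max_drawdown, periods=TRADING_DAYS):
    years = n_days / periods
    annual_return = final_cumulative_return ** (1 / years) - 1
    return annual_return / max_drawdown

# 2 years of data, 21% total growth, worst drawdown 10% -> Calmar of 1.0
print(round(calmar(1.21, 504, 0.10), 2))  # 1.0
```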
Figure 1a: Ratios computed for the Technology basket

| Technology | Sharpe | Sortino | Calmar |
| --- | --- | --- | --- |
| Pair Trading Strategy | 1.21 | 1.53 | 1.22 |
| Portfolio allocation Strategy | 0.899 | 1.217 | 0.862 |

Figure 1b: Ratios computed for the Automotive basket

| Automobile | Sharpe | Sortino | Calmar |
| --- | --- | --- | --- |
| Pair Trading Strategy | 0.21 | 0.25 | 0.1 |
| Portfolio allocation Strategy | -0.0039 | -0.00538 | -0.0007 |

Figure 1c: Ratios computed for the Finance (Private Banks) basket

| Finance (Private Banks) | Sharpe | Sortino | Calmar |
| --- | --- | --- | --- |
| Pair Trading Strategy | 1.72 | 2.74 | 2.13 |
| Portfolio allocation Strategy | 0.559 | 0.661 | 0.4243 |

Figure 1d: Ratios computed for the Finance (Financial Services) basket

| Finance (Financial Services) | Sharpe | Sortino | Calmar |
| --- | --- | --- | --- |
| Pair Trading Strategy | -0.3 | -0.38 | -0.21 |
| Portfolio allocation Strategy | 1.131 | 1.64 | 1.006 |

Figure 1e: Ratios computed for the Pharmaceuticals basket

| Pharmaceuticals | Sharpe | Sortino | Calmar |
| --- | --- | --- | --- |
| Pair Trading Strategy | -0.38 | -0.42 | -0.22 |
| Portfolio allocation Strategy | 1.558 | 2.502 | 1.69 |

Figures and Graphs

Figure A1: Scatter plot of the Sharpe ratio for the Technology basket. Note that for all scatter plots, the red star represents the highest risk-adjusted return and the blue star represents the return with the least volatility.
Figure A2: Scatter plot of the Sharpe ratio for the Automobile basket.
Figure A3: Scatter plot of the Sharpe ratio for the Private Banks basket.
Figure A4: Scatter plot of the Sharpe ratio for the Financial Services basket.
Figure A5: Scatter plot of the Sharpe ratio for the Pharmaceuticals basket.
Figure B1: Scatter plot of the Sortino ratio for the Technology basket.
Figure B2: Scatter plot of the Sortino ratio for the Automobile basket.
Figure B3: Scatter plot of the Sortino ratio for the Private Banks basket.
Figure B4: Scatter plot of the Sortino ratio for the Financial Services basket.
Figure B5: Scatter plot of the Sortino ratio for the Pharmaceutical basket.
Figure C1: Scatter plot of the Calmar ratio for the Technology basket.
Figure C2: Scatter plot of the Calmar ratio for the Automobile basket.
Figure C3: Scatter plot of the Calmar ratio for the Private Banks basket.
Figure C4: Scatter plot of the Calmar ratio for the Financial Services basket.
Figure C5: Scatter plot of the Calmar ratio for the Pharmaceutical basket.
Figure D1: Cumulative returns for the Pair trading strategy in the Technology basket.
Figure D2: Cumulative returns and the positions plotted alongside each other on the y-axis, for the Pair trading strategy in the Technology basket.
Figure D3: Drawdown of the Pair trading strategy in the Technology basket.
Figure E1: Cumulative returns for the Pair trading strategy in the Automobile basket.
Figure E2: Cumulative returns and the positions plotted alongside each other on the y-axis, for the Pair trading strategy in the Automobile basket.
Figure E3: Drawdown of the Pair trading strategy in the Automobile basket.
Figure F1: Cumulative returns for the Pair trading strategy in the Private Banks basket.
Figure F2: Cumulative returns and the positions plotted alongside each other on the y-axis, for the Pair trading strategy in the Private Banks basket.
Figure F3: Drawdown of the Pair trading strategy in the Private Banks basket.
Figure G1: Cumulative returns for the Pair trading strategy in the Financial Services basket.
Figure G2: Cumulative returns and the positions plotted alongside each other on the y-axis, for the Pair trading strategy in the Financial Services basket.
Figure G3: Drawdown of the Pair trading strategy in the Financial Services basket.
Figure H1: Cumulative returns for the Pair trading strategy in the Pharmaceutical basket.
Figure H2: Cumulative returns and the positions plotted alongside each other on the y-axis, for the Pair trading strategy in the Pharmaceutical basket.
Figure H3: Drawdown of the Pair trading strategy in the Pharmaceutical basket.

Conclusion and future implications

The big assumption made: we can only invest in specific stocks from the selected sectors. Further, we assumed that the margin required to put on a pair trade is the denominator of the percentage-change-of-spread equation.

- In reality, the broker can charge much more margin for overnight positions in futures.
- Our aim was to perform a comparative analysis under these assumptions and limitations.

Under the above assumptions, it was found that:

- Portfolio allocation works better for the Pharmaceuticals and Financial Services baskets.
- Pair trading seems to be a better alternative for the Technology, Automobile and Private Banks baskets.

The Portfolio allocation strategy generates alpha from correct security selection and the weights assigned to the stocks. The pair trading strategy generates its alpha from the pair selection and the mean-reversion process. By applying mean-reversion trading, traders can capitalize on price fluctuations as stocks revert to their average, improving overall portfolio performance.

In the Pair trading strategy, I went ahead with implementing the strategy in spite of it not fulfilling the ADF test criteria.

What is the advantage of using either strategy? The advantage of a pair trading strategy is that it is market neutral.

What are the limitations? Short selling for overnight positions is not allowed in the Indian cash equities market; shorting is allowed in the futures segment.
In such a situation, the portfolio allocation strategy takes the cake, as much less margin is required. Shorting is difficult with the pair trading strategy because much more margin is required. In a bear market, the portfolio allocation strategy has a higher chance of performing worse.

Files in the download:

- pair_mean_reversion_v3
- v2_Calmar
- v4_sharpe_tech
- v4_sortino
ACP Seminar (Astronomy - Cosmology - Particle Physics)
Speaker: Baojiu Li (Durham)
Title: Cosmology with new gravitational degrees of freedom
Date: Wed, Mar 05, 2014, 13:30 - 14:30
Place: Seminar Room A
Abstract: In this talk I will present results from studies on the cosmology in the presence of new degrees of freedom in the gravitational sector. The new degrees of freedom can be classified according to their spins, and this talk will be focused mainly on the spin-0 and spin-2 cases. In particular, I will talk about the screened modified gravity theories, including f(R) gravity, chameleon theory, the DGP model and Galileon gravity. In such theories, the new degrees of freedom are hidden locally but can have detectable imprints on cosmological observables. I will show how these degrees of freedom behave in cosmology, and explain how to best constrain them.
Complex number calculator: phase, complex number angle

Rectangular form (standard form): z = 45
Angle notation (phasor, module and argument): z = 45 ∠ 0°
Polar form: z = 45 × (cos 0° + i sin 0°)
Exponential form: z = 45 × e^(i·0)
Polar coordinates: r = |z| = 45 … magnitude (modulus, absolute value); θ = arg z = 0 rad = 0° … angle (argument or phase)
Cartesian coordinates: z = 45; real part x = Re z = 45; imaginary part y = Im z = 0

Calculation steps:
1. Complex number: 1+i
2. Argument (angle) of the complex number: arg(1+i) = 45°

This calculator does basic arithmetic on complex numbers and evaluates expressions in the set of complex numbers. As an imaginary unit, use i or j (the latter in electrical engineering), which satisfies the basic equation i² = −1 or j² = −1. The calculator also converts a complex number into angle notation (phasor notation), exponential form, or polar coordinates (magnitude and angle). Complex numbers in angle notation (polar coordinates r, θ) may be written as r∠θ, where r is the magnitude/amplitude/radius and θ is the angle (phase) in degrees. Example of multiplication of two numbers in the angle/polar/phasor notation: 10L45 * 3L90. For use in education (for example, calculations of alternating currents at high school), you need a quick and precise complex number calculator.

Basic operations with complex numbers
Working with complex numbers is easy because you can treat the imaginary unit i as a variable and use the definition i² = −1 to simplify complex expressions. Many operations are the same as operations with two-dimensional vectors.
Addition is very simple: add up the real parts (without i) and add up the imaginary parts (with i). This is equal to using the rule:

(a + bi) + (c + di) = (a + c) + (b + d)i

(1+i) + (6-5i) = 7-4i
12 + (6-5i) = 18-5i
(10-5i) + (-5+5i) = 5

Subtraction is again very simple: subtract the real parts and subtract the imaginary parts (with i). This is equal to using the rule:

(a + bi) − (c + di) = (a − c) + (b − d)i

(1+i) - (3-5i) = -2+6i
-1/2 - (6-5i) = -6.5+5i
(10-5i) - (-5+5i) = 15-10i

To multiply two complex numbers, use the distributive law, expand the binomials, and apply i² = −1. This is equal to using the rule:

(a + bi)(c + di) = (ac − bd) + (ad + bc)i

(1+i)(3+5i) = 1·3 + 1·5i + i·3 + i·5i = 3+5i+3i−5 = -2+8i
-1/2 × (6-5i) = -3+2.5i
(10-5i) × (-5+5i) = -25+75i

The division of two complex numbers can be accomplished by multiplying the numerator and denominator by the complex conjugate of the denominator. This removes the imaginary unit from the denominator: if the denominator is c + di, multiply numerator and denominator by the conjugate c − di, since (c + di)(c − di) = c² + d² is real.

(10-5i) / (1+i) = 2.5-7.5i
-3 / (2-i) = -1.2-0.6i
6i / (4+3i) = 0.72+0.96i

Absolute value or modulus
The absolute value or modulus is the distance of the image of a complex number from the origin in the plane. The calculator uses the Pythagorean theorem to find this distance: |a+bi| = √(a² + b²).

|3+4i| = 5
|1-i| = 1.4142136
|6i| = 6
abs(2+5i) = 5.3851648

Square root
The square root of a complex number a+bi is a number z such that z² = a+bi. Here the simplicity ends: by the fundamental theorem of algebra, every nonzero number has two different square roots. If you want to find all possible values, the easiest way is to use De Moivre's formula. Note that the square root is not a well-defined single-valued function on the complex numbers.
We calculate all complex roots from any number, even in expressions:

sqrt(9i) = 2.1213203+2.1213203i
sqrt(10-6i) = 3.2910412-0.9115656i
pow(-32,1/5)/5 = -0.4
pow(1+2i,1/3)*sqrt(4) = 2.439233+0.9434225i
pow(-5i,1/8)*pow(8,1/3) = 2.3986959-0.4771303i

Square, power, complex exponentiation
The calculator can raise any complex number to an integer (positive or negative), real, or even complex power. In other words, it calculates 'complex number to a complex power' or 'complex number raised to a power'. Famous examples:

i² = -1
i^61 = i
(6-2i)^6 = -22528-59904i
(6-i)^4.5 = 2486.1377428-2284.5557378i
(6-5i)^(-3+32i) = 2929449.0399425-9022199.5826224i
i^i = 0.2078795764
pow(1+i,3) = -2+2i

Supported functions:
- sqrt — square root of a value or expression
- sin — sine of a value or expression (autodetects radians/degrees)
- cos — cosine of a value or expression (autodetects radians/degrees)
- tan — tangent of a value or expression (autodetects radians/degrees)
- exp — e (the Euler constant) raised to the power of a value or expression
- pow — raise one complex number to another integer/real/complex power
- ln — natural logarithm of a value or expression
- log — base-10 logarithm of a value or expression
- abs or |1+i| — absolute value of a value or expression
- phase — phase (angle) of a complex number
- cis — less-known notation: cis(x) = cos(x) + i sin(x); example: cis(pi/2) + 3 = 3+i
- conj — conjugate of a complex number; example: conj(4i+5) = 5-4i
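The operations above map directly onto Python's built-in complex type and the cmath module, which is a handy way to check the calculator's results:

```python
import cmath

z1, z2 = 1 + 1j, 3 + 5j
print(z1 + z2)                  # (4+6j)
print(z1 * z2)                  # (1+i)(3+5i) = -2+8i -> (-2+8j)
print((10 - 5j) / (1 + 1j))     # via the conjugate trick: (2.5-7.5j)
print(abs(3 + 4j))              # modulus: 5.0
print(cmath.phase(1 + 1j))      # argument in radians: pi/4 ~ 0.785398...
r, theta = cmath.polar(1 + 1j)  # polar form (magnitude, angle)
print(cmath.rect(r, theta))     # back to rectangular, ~ (1+1j)
```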
MAT 141 - Finite Math
Des Moines Area Community College
2021-2022 Course Catalog [ARCHIVED CATALOG]

Credits: 4
Lecture Hours: 4
Lab Hours: 0
Practicum Hours: 0
Work Experience: 0
Course Type: Core

A general education course in practical mathematics for those students not majoring in mathematics or science. This course will include such topics as set operations and applications, methods of counting, probability, systems of linear equations, matrices, geometric linear programming and an introduction to Markov chains.

Prerequisite: Minimum ALEKS score of 30% or MAT 063 with a C- or better.

1. Solve linear equations and inequalities in one variable
   1. Determine if the sentence is linear
   2. Isolate the variable
   3. Change order when operating with a negative factor
2. Describe the functions and functional notation
   1. Define a relation
   2. Define a function
   3. Determine the dependency relationship between the variables
   4. Use f(x) notation
3. Graph linear equations and inequalities in two variables
   1. Describe the Cartesian coordinate system
   2. Determine the coordinates of sufficient points needed to draw the line of the equation
   3. Locate and indicate the proper half-plane for an inequality
4. Write linear models for verbal problems
   1. Identify the quantities pertinent to the problem
   2. Identify extraneous information
   3. Label clearly the necessary constant and variable quantities
   4. Write a mathematical sentence that relates the necessary quantities
   5. Identify, when necessary, missing information
5. Perform basic matrix operations
   1. Define a matrix and related terms
   2. State the conditions under which various operations may be performed
   3. Add, subtract, and multiply matrices when possible
   4. Invert a 2 x 2 or a 3 x 3 matrix, when possible
6. Solve systems of linear equations by a variety of methods
   1. State the possible solutions and the conditions of their appearance for a linear system
   2. Graph the set of equations on one set of axes
   3. Use the 'multiply and add' method to determine the solution
   4. Apply row operations to an augmented matrix to determine the solution (Gauss-Jordan method)
   5. Solve the system by applying matrix algebra
7. Identify the feasible region and vertices for a set of linear constraints
   1. Graph each of the constraints on the same set of axes
   2. Indicate the intersection of all the half-planes as a polygon
   3. Find the coordinates of the vertices of the polygon
8. Solve linear programming problems
   1. Model the limited resource problem in terms of an objective function and a set of constraints
   2. Graph the constraints
   3. Apply the Corner Point Theorem
   4. Confirm the result for reasonableness
9. Perform basic set operations, using correct notation
   1. Define a set and its related terms
   2. Determine the intersection and union of given sets
   3. Illustrate the intersection and union of sets with Venn diagrams
   4. Use set notation to describe a Venn diagram
10. Solve counting problems using the multiplication principles
    1. State the Fundamental Counting Principle
    2. Determine if a problem is a permutation or a combination
    3. State the relationship between combinations, Pascal's triangle, and the binomial coefficients
    4. Use correctly combination and permutation notations
    5. Calculate factorials
11. Write the sample space and specific events of an experiment
    1. Define sample space and event
    2. Distinguish between continuous and discrete outcomes
    3. Describe a trial of an event
    4. Write a clear description of an event of interest
12. Evaluate the probabilities of basic problems such as dice, cards, coins, and balls
    1. Define the probability of an event
    2. Apply the addition rule for combined probabilities
    3. Apply the multiplication rule for combined probabilities
    4. Determine if events are mutually exclusive
13.
Calculate conditional probabilities by various methods
    1. Calculate conditional probability by formula
    2. Calculate conditional probability by probability trees
    3. Determine if events are independent
    4. Calculate probabilities by Bayes' formula
14. State characteristic properties of probability distributions
    1. Create a probability distribution from a frequency distribution table
    2. Create a probability distribution graph
    3. Relate the area under a probability distribution graph to the probability of an event
    4. State the random variable of the probability distribution
    5. Calculate the mean, median, mode, and standard deviation of the random variable
15. Calculate the probabilities of events by means of known probability distributions
    1. Apply Chebychev's Theorem
    2. Find the probabilities of events based on normally distributed random variables
    3. Estimate the probabilities of binomial events by means of a normal distribution
Bayesian Foundations
This note reviews some of the probabilistic foundations of the Bayesian paradigm. We focus on somewhat existential questions about prior distributions through the lens of de Finetti's representation theorem. The goal is to show that a Bayesian analysis—or more generally, the subjectivist view of probability—can be both motivated and justified by the simple belief of exchangeability.

Bayesian Decision Theory
This note reviews some of the key results in Bayesian decision theory. The motivation is to understand how and why Bayes estimators are "good" estimators. We outline conditions of Bayesian optimality and frequentist optimality, and then present a key result (the complete class theorem) connecting these two criteria which shows that all "good" estimators in the frequentist sense must be Bayes with respect to some prior.

Bayesian Asymptotics
This note reviews some of the key results in Bayesian asymptotics. We consider the following questions: Where do posteriors concentrate mass as the sample size gets large? Are posteriors consistent in the frequentist sense? What shape does the limiting posterior have? We start with a general result on the consistency of posterior distributions (Doob's theorem), and then present results on the asymptotic normality for parametric models (Bernstein-von Mises theorem).

Bayesian Computation and MCMC
This note reviews some of the key results underlying MCMC theory, discusses the theoretical underpinnings of popular MCMC algorithms (the Metropolis-Hastings algorithm and Gibbs sampler), and presents a few applications in the context of economic choice models.

Bayesian Linear Regression
This note derives the posterior distribution of a Bayesian linear regression model with conjugate priors and may be used as a companion to chapter 2.8 in Rossi et al. (2005). We first define the model and derive the posterior. We conclude with a discussion of efficient posterior sampling based on the Cholesky decomposition.
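The Cholesky-based sampling mentioned in the linear-regression note reduces to a standard recipe: to draw from N(m, V), factor V = L·Lᵀ and set x = m + L·z with z ~ N(0, I). A minimal plain-Python sketch (a real implementation would use numpy/scipy; all names here are illustrative):

```python
import math
import random

def cholesky(V):
    """Lower-triangular L with L L^T = V, for symmetric positive-definite V."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(V[i][i] - s)
            else:
                L[i][j] = (V[i][j] - s) / L[j][j]
    return L

def mvn_draw(m, V, rng=random):
    """One draw from N(m, V) via x = m + L z, with z ~ N(0, I)."""
    L = cholesky(V)
    z = [rng.gauss(0, 1) for _ in m]
    return [m[i] + sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(m))]

# e.g. one draw from a correlated bivariate normal
print(mvn_draw([0.0, 0.0], [[4.0, 2.0], [2.0, 3.0]], random.Random(0)))
```

In the conjugate regression setting, m would be the posterior mean of the coefficients and V the posterior covariance; the Cholesky factor is computed once and reused across draws, which is where the efficiency comes from.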
BLP

This note reviews the canonical random coefficients logit or "BLP" model à la Berry et al. (1995). We outline details of the model, the contraction mapping, and both classical and Bayesian approaches to estimation.

Multiple Discrete/Continuous Demand

This note outlines a method for simulating demand from multiple discrete/continuous demand models. In this class of models, demand equations are often complicated expressions without a closed form, which complicates the process of simulating demand. We focus on a simulation approach based on analytical expressions of the Kuhn-Tucker conditions.
Obtaining Multiple Solutions in Z3

Playing around with Diophantine solvers, I wanted to obtain the solutions of the following equation:

$$ 5a + 4b - 3c = 0 $$

Let's encode that using Z3:

```python
from z3 import *

# Encode the equation
a, b, c = Ints("a b c")
s = Solver()
s.add(5 * a + 4 * b - 3 * c == 0)

# Find a solution
if s.check() == sat:
    print(s.model())
```

This code snippet returns [a = 0, b = 0, c = 0].

Now there are multiple solutions to this Diophantine equation, so how do we get the others? It turns out, after searching around StackOverflow (see references), the only way is to add the previous solutions as constraints:

```python
# This encodes the last solution m as a blocking constraint
m = s.model()
block = []
for var in m:
    block.append(var() != m[var])
s.add(Or(block))
```

Formulaically, this corresponds to:

$$ a \ne 0 \vee b \ne 0 \vee c \ne 0 $$

If you look at the references, it's hard to encode these constraints generally. This is because Z3 is a powerful SMT solver working with many different theories. Though if we restrict ourselves to Diophantine equations, we can write a function that acts as a generator for all of the solutions:

```python
import z3

def get_solutions(s: z3.Solver):
    result = s.check()
    # While we still get solutions
    while result == z3.sat:
        m = s.model()
        yield m
        # Block the current solution before searching again
        block = []
        for var in m:
            block.append(var() != m[var])
        s.add(z3.Or(block))
        # Look for a new solution
        result = s.check()
```

Now for our example, this allows us to do the following:

```python
from z3 import *

a, b, c = Ints("a b c")
s = Solver()
s.add(5 * a + 4 * b - 3 * c == 0)

solutions = get_solutions(s)
upper_bound = 10
for solution, _ in zip(solutions, range(upper_bound)):
    print(solution)
```

The solutions of a linear Diophantine equation can be easily parameterized, so I don't recommend using Z3 in this way. Though I think this exercise is informative for other theories you might be trying to satisfy.
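As noted at the end of the post, linear Diophantine solutions can be parameterized directly. For this particular equation, one complete parameterization (worked out by hand here; it is not given in the post) is (a, b, c) = (s, s + 3t, 3s + 4t), since any integer solution must satisfy b ≡ a (mod 3):

```python
def parameterized_solutions(bound):
    """Enumerate integer solutions of 5a + 4b - 3c = 0 via the (hand-derived)
    parameterization (a, b, c) = (s, s + 3t, 3s + 4t) for |s|, |t| <= bound."""
    for s in range(-bound, bound + 1):
        for t in range(-bound, bound + 1):
            yield (s, s + 3 * t, 3 * s + 4 * t)

triples = list(parameterized_solutions(5))
```

Every generated triple satisfies the equation with no solver search at all, which is why the post recommends parameterization for the linear case.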
A weighted co-occurrence matrix (wecoma) representation

Jakub Nowosad

This vignette explains what a weighted co-occurrence matrix (wecoma) representation is and how to calculate it using the comat package. If you do not know what a co-occurrence matrix is, it may be worth reading the first package vignette first. The examples below assume the comat package is attached, and the raster_x and raster_w datasets are loaded.

The raster_x object is a matrix with three rows and columns with values of 1, 2, and 3. We can imagine that the value of 1 (blueish color) represents population A, the value of 2 (dark green) is population B, and the value of 3 (light green) represents population C.

The raster_w object is also a matrix of the same dimensions. It has values between 2 and 9. This object is different from the first one, as it does not represent categories. Its role is to provide some weights to the previous raster. We can think of it as a number of occurrences in each cell.

The weighted co-occurrence matrix (wecoma) representation is a modification of the co-occurrence matrix (coma). In the co-occurrence matrix, each adjacency contributes to the output with the constant value 1. In the weighted co-occurrence matrix, on the other hand, each adjacency contributes to the output based on the values from the weight matrix. The contributed value is calculated as the average of the weights in the two adjacent cells.

We can use the get_wecoma() function to calculate this weighted co-occurrence matrix (wecoma) representation. In this representation, we do not count the neighbors but sum the contributed values from the weight matrix. The smallest value (5) represents the relation between adjacent cells of the first and the second category. This is due to the relatively small values of the neighboring cells of these classes, but also because there is only one case of adjacent cells of these classes.
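The averaging rule just described is easy to sketch outside of R. Below is a minimal Python analogue of get_wecoma() using rook adjacency, with each adjacency counted once per direction so the matrix comes out symmetric. The 2x2 example matrices are made up for illustration; they are not the vignette's raster_x and raster_w:

```python
from collections import defaultdict

def wecoma(x, w):
    """Weighted co-occurrence matrix for a category grid x and a same-shaped
    weight grid w.  Each pair of rook-adjacent cells contributes the mean of
    its two weights, once per direction.  A sketch, not a port of comat."""
    rows, cols = len(x), len(x[0])
    out = defaultdict(float)
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0)):  # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    contrib = (w[i][j] + w[ni][nj]) / 2
                    out[(x[i][j], x[ni][nj])] += contrib
                    out[(x[ni][nj], x[i][j])] += contrib
    return dict(out)

m = wecoma([[1, 1], [2, 2]], [[2, 4], [6, 8]])
```

Here the single 1-1 adjacency has weights 2 and 4, so the class pair (1, 1) accumulates (2 + 4) / 2 = 3 in each direction, i.e. 6 in total.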
The central left cell (blueish, category 1) has a value of 6, and the bottom left cell (dark green, category 2) has a value of 4. The output value, 5, is an average of the two adjacent weights. On the other hand, a light green region has the largest values in the weight matrix. Therefore, the output of the get_wecoma() function has the largest value (49) for the relation between the adjacent cells of the third category.

This function allows for some parametrization using additional arguments - fun and na_action. The fun argument selects the function to calculate values from adjacent cells to contribute to the output matrix. It has three possible options: "mean" - calculate average values from adjacent cells of the weight matrix, "geometric_mean" - calculate geometric mean values from adjacent cells of the weight matrix, or "focal" - assign a value from the focal cell. The na_action argument decides how to behave in the presence of missing values in the weight matrix. The default, "replace", replaces missing values with 0, "omit" does not use cells with missing values, and "keep" keeps missing values.

get_wecoma(raster_x, raster_w, fun = "focal", na_action = "omit")
#>    1  2  3
#> 1 12  6 10
#> 2  4 12 16
#> 3 17 13 49

Similarly to the co-occurrence matrix (coma), it is possible to convert wecoma to its 1D representation. This new form is called a weighted co-occurrence vector (wecove), and can be created using the get_wecove() function, which accepts an output of get_wecoma():

my_wecoma = get_wecoma(raster_x, raster_w)
get_wecove(my_wecoma, normalization = "pdf")
#>            [,1]       [,2]      [,3]       [,4]       [,5]      [,6]      [,7]
#> [1,] 0.08633094 0.03597122 0.0971223 0.03597122 0.08633094 0.1043165 0.0971223
#>           [,8]     [,9]
#> [1,] 0.1043165 0.352518

You can see the weighted co-occurrence matrix (wecoma) concept, there described as an exposure matrix, in action in the vignettes of the raceland package (Nowosad, Dmowska, and Stepinski, 2020):

1.
Jakub Nowosad, Anna Dmowska and Tomasz Stepinski (2020). raceland: Pattern-Based Zoneless Method for Analysis and Visualization of Racial Topography. R package version 1.0.5. https://
What Is The Undefined Term Used To Define An Angle?

Are you trying to better understand angles and how they are measured? If so, you may have come across the term "undefined term" in reference to angles and may be wondering what it means. Put simply, an undefined term is a term used to describe an angle without giving a specific value for it. In other words, an undefined term gives you a general idea of the angle, but not a specific measurement.

For example, if you were to say that two lines form an "acute angle", the term "acute" is an undefined term. It gives you an idea of the angle, but does not give you an exact measurement. It could be an angle of 30 degrees, or it could be an angle of 89 degrees; the term "acute" does not specify.

Types of Undefined Terms

There are several common undefined terms used to describe angles. These include acute, obtuse, right, straight, reflex, and full. An acute angle is one that measures less than 90 degrees. An obtuse angle is one that measures greater than 90 degrees but less than 180 degrees. A right angle is one that measures exactly 90 degrees. A straight angle is one that measures exactly 180 degrees. A reflex angle is one that measures greater than 180 degrees but less than 360 degrees. A full angle is one that measures exactly 360 degrees.

Angle Measurement

In order to measure an angle specifically, a unit of measurement must be included. This is usually done in degrees, such as a 30-degree angle or a 70-degree angle. It is also possible to measure angles in radians, which is the standard unit of measure used in trigonometry. One radian is equal to approximately 57.3 degrees.
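The categories and unit conversion above can be sketched in a few lines. The function names below are illustrative, not from any particular library:

```python
import math

def classify_angle(degrees):
    """Map an angle in degrees (0 < degrees <= 360) to the categories
    listed above: acute, right, obtuse, straight, reflex, full."""
    if degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    if degrees < 360:
        return "reflex"
    return "full"

def to_radians(degrees):
    # One radian is 180/pi degrees, roughly 57.3
    return degrees * math.pi / 180
```

Note that classify_angle(30) and classify_angle(89) both return "acute", matching the point that "acute" alone does not pin down a measurement.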
Using Undefined Terms in Geometry

In geometry, undefined terms are often used to describe angles and other shapes. For example, a triangle may be described as an "isosceles triangle" without specifying the exact angles or lengths of the sides. This gives the reader a general idea of the shape without giving exact measurements. Similarly, a quadrilateral may be described as a "rectangle" without specifying the exact side lengths.

Using Undefined Terms in Everyday Life

Undefined terms are often used in everyday life as well. For example, when someone says that two people have a "close" relationship, they are not giving a specific measurement of the closeness; they are simply giving a general idea of the relationship. Similarly, when someone describes a person as "tall", they are not giving an exact height; they are giving a general idea of the person's height.

An undefined term is a term used to describe an angle without giving a specific value for it. Common undefined terms used to describe angles include acute, obtuse, right, straight, reflex, and full. In order to measure an angle specifically, a unit of measurement must be included, such as degrees or radians. Undefined terms are often used to describe shapes in geometry, as well as everyday objects and relationships.
Don't Miss the Festival's Who Wants to Be a Mathematician!

America's most fast-paced and fun math contest will be the finale for day one of the 2012 USASEF. Eight area high school students will compete for $4000 in cash and prizes in Who Wants to Be a Mathematician on Saturday, April 28 at 5:00 on the Carver Stage. Students from DC, MD, VA, DE, and WV are eligible to compete.

Teachers who are interested in having their students participate in the qualifying process should write paoffice@ams.org, with the subject line WWTBAM USASEF. Include your name, school, courses taught this semester, and phone number in the body of the message.

See descriptions of past performances of this contest, which is sponsored by the American Mathematical Society, by clicking here. Take a peek at a contest that took place last February in Washington.
The Geography of Job Loss

While on the topic of job loss and unemployment, here's an animated map from Tip Strategies that shows job gains and losses over time. Red means loss and green means gain, and as you can see above, there isn't much green (read that zero) on the map. The larger the circle is, the greater the net loss or gain compared to the numbers of the year before in the respective metropolitan statistical area. Here's what the map looked like in 2004:

[via The Big Picture | Thanks, Barry]

5 Comments

• There's a fundamental problem in how the affected population is represented in these circles. If you take the legend as an example, the diameter of the 10,000-person circle is maybe 1/8th (conservatively) the diameter of the 100,000-person circle. If that is the case, the actual area of the 100,000-person circle is 64 times as large as the 10,000-person circle, far overstating the difference in population sizes. To accurately show a difference of a factor of 10, the diameter of the larger circle should be 3.162 (the square root of 10) times that of the smaller one.

• Shoot, you're right, I was too hasty in my post. They're scaling diameter to show gain and loss when they should be scaling area. Tsk.

• These style graphs often annoy me because they don't take into account the local population size. Of course LA, Chicago, and NYC lost more jobs than Detroit; they are several times larger (none, however, are approaching the 25% unemployment of Detroit). This figure doesn't really give anyone an idea of where people are losing jobs, so much as where most people live.

• The first comment hits my complaint. Tufte calls it the fallacy of the dollar bill: using area to depict a one-dimensional value. Still a haunting graphic when viewed in diameters. I live in Michigan so I am acutely aware of our leadership role in unemployment.
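The fix described in the first comment is easy to state in code: for circle area to be proportional to a value, the radius (and hence diameter) must scale with the square root of the value. A small illustrative sketch, not from the original map's source:

```python
import math

def radius_for(value, scale=1.0):
    """Radius that makes circle *area* proportional to value."""
    return scale * math.sqrt(value)

# A 10x larger value should get a diameter only sqrt(10) ~ 3.162x larger:
ratio = radius_for(100_000) / radius_for(10_000)
```

Scaling the diameter linearly with the value instead would make the area grow with the value squared, which is exactly the overstatement the comment describes.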
Disordered Monomer-Dimer Model on Cylinder Graphs

We consider the disordered monomer-dimer model on cylinder graphs G[n], i.e., graphs given by the Cartesian product of the line graph on n vertices and a deterministic finite graph. The edges carry i.i.d. random weights, and the vertices also have i.i.d. random weights, not necessarily from the same distribution. Given the random weights, we define a Gibbs measure on the space of monomer-dimer configurations on G[n]. We show that the associated free energy converges to a limit and, with suitable scaling and centering, satisfies a Gaussian central limit theorem. We also show that the number of monomers in a typical configuration satisfies a law of large numbers and a Gaussian central limit theorem with appropriate centering and scaling. Finally, for an appropriate height function associated with a matching, we show convergence to a limiting function and prove the Brownian motion limit around the limiting height function in the sense of finite-dimensional distributions.

• Central limit theorems
• Disordered systems
• Monomer-dimer models
• Random dimer activities

ASJC Scopus subject areas
• Statistical and Nonlinear Physics
• Mathematical Physics
Further Improving the Use of the ECRI WLI (Part-II)

This article was co-authored with Georg Vrba and first appeared on the popular Advisor Perspectives web site on 17 January 2012.

In our last article on using the ECRI WLI, we described how best to use the growth figure of the Economic Cycle Research Institute's Weekly Leading Index (WLI) to predict recessions, but we also highlighted an impediment to our research – an inability of outsiders to replicate the index (and thus know its components) and its "growth figure" which ECRI publishes weekly. Last week, however, the formula to calculate the WLI growth figure (which we will refer to simply as "WLIg") was found. Armed with that data, we have made further progress to improving the recession-dating performance of the WLI.

Doug Short's last commentary on this same topic prompted an exchange of e-mails among him, Franz Lischka – he's the person who cracked the formula for the WLIg – Georg Vrba, and Dwaine van Vuuren on how this – fairly arcane and counterintuitive – formula worked and why. Franz's formula has four components, namely a first moving average MA1, a second moving average MA2, a power coefficient n and a constant m. We do not understand why ECRI has kept this formula a secret for so long.

"MA1" = 4 week moving average of the WLI
"MA2" = 53 week moving average of MA1
"n" = 2
"m" = 1

WLIg = [(MA1/MA2)^n – m] * 100

This produces a virtually identical replicate of the WLIg, with a correlation of 1.0 and an average deviation of 0.0026 from the published WLIg number.

As a result of these discussions, we decided it would be useful to perform an optimization on Franz's formula to see if we could obtain better recession-dating performance from a new WLIg derived from the WLI, using the same performance measurement methods we described in our previous article. The results were surprising – and quite pleasing.
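The published formula is straightforward to transcribe into code. The sketch below uses pure-Python trailing moving averages; the function names are ours, not ECRI's:

```python
def moving_average(xs, window):
    """Trailing moving average, defined once `window` points are available."""
    return [sum(xs[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(xs))]

def wli_growth(wli, ma1_window=4, ma2_window=53, n=2, m=1):
    """WLIg = [(MA1/MA2)^n - m] * 100, per the formula in the article."""
    ma1 = moving_average(wli, ma1_window)
    ma2 = moving_average(ma1, ma2_window)
    # ma2[k] averages ma1[k : k + ma2_window]; the matching "current" MA1
    # value for that window is ma1[k + ma2_window - 1].
    offset = ma2_window - 1
    return [((ma1[offset + k] / ma2[k]) ** n - m) * 100
            for k in range(len(ma2))]

flat = wli_growth([100.0] * 120)  # a constant index should show zero growth
```

A flat series yields exactly zero growth everywhere, and a steadily rising series yields positive values, which is the sanity check one would expect from a growth metric.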
Those who read last week's article may recall that even our best recession-predicting method with WLIg yielded four false positives. This time around, we found a WLI growth metric (we decided to call it "WLIg+", which uses MA1=16, MA2=50, n=2.2258 and m=0.9838) that raised the area-under-the-curve (AUC) metric from 0.904 to 0.923 and the National Bureau of Economic Research (NBER) capture rate from 86.1% to 93.3%. That last change is deceiving – it is actually a massive improvement, given that there are only 360 weeks of NBER recessions in the last 2,290 weeks of the sample period. The WLIg+ correctly categorized an additional 26 weeks as recession. The resulting "improved" WLIg+ is shown below, together with the original WLIg:

The WLIg+ makes recession calls when it drops below zero, and it calls the end of recessions when it rises above zero. This is another improvement, since one need not remember any ostensibly arbitrary thresholds for triggers (like the -2.638 for the original WLIg). We ignored the last recession signal to the right of the chart when counting false positives, as we cannot yet judge any system until the NBER determines definitively whether we are currently at the beginning of a recession (this takes up to 8-12 months!).

You will notice that this is a much smoother and "lazier" interpretation of WLI growth. In our prior article, we showed how taking a three-week moving average of the 52-week percent change of the WLIg produced a recession forecasting/dating system with only one false positive.
We will call this WLIg+1, as shown below:

While we could not replicate a suitable "one-false-positive" version of the WLIg+, we did manage to build one with only two false positives (call it "WLIg+2"):

Professor Geoffrey Moore, in his later work "Leading Indicators for the 1990's," laid out in detail his 1980's research into long and short leading indicators, and he also suggested a high-frequency Weekly Leading Index, which, while slightly less reliable, could be updated in a much more timely and frequent fashion. For excellent coverage of a project to replicate the WLI and discover its components (which remain proprietary to this day), see an examination of the model for ECRI's black box.

Many observers, rather unfairly, compare the WLI to monthly LEIs, such as the Conference Board's. The ECRI WLI, which is a follow-on from Prof. Moore's work, will no doubt use a number of high-frequency components that many of the standard monthly LEIs will not, and its motivating spirit – to be a high-frequency, more timely index that may be less accurate – means we should not condemn the WLI too harshly for false positives. The strong point is its generous lead time going into recessions.

The WLI never was intended to be the sole arbiter of recession dating, and ECRI itself uses many longer leading indicators in conjunction with the WLI. For this reason, we suggest the use of the WLI in a three-step process: First observe the WLIg falling below zero as a warning of possible risk of recession in the future. Then monitor WLIg+ for a second opinion. If both WLIg and WLIg+ are in recession territory, you could then consult WLIg+2 for a third confirmation. If you have three confirmations, your last step is to consult WLIg+1.
The four WLI growth indices are shown below as at data published on 13 January 2012, to give an idea how this works:

As you can see from the chart above, you sacrifice a few weeks waiting for further confirmation, but you reduce your odds of actioning a false alarm. You can also see that all four WLI growth variants are camped in recession territory (below the zero line). While this is a fairly serious warning, one should never rely on one indicator for a proper action plan around recession avoidance. More appropriate would be a composite approach, such as our Composite SuperIndex methodology. In this model we use nine indicators, and only the WLI is flagging recession currently.

The WLI is a great tool, and – with the WLIg+ growth variants we described above – it is even more useful for assessing recession risk. But, much like the method it improves upon, it remains subject to false positives.

At the time of writing we were not aware that the actual formula to calculate WLIg was described in a 1999 article published by Anirvan Banerji, the Chief Research Officer at ECRI: The three Ps: simple tools for monitoring economic cycles – pronounced, pervasive and persistent economic indicators. Here is the exact formula we derived from this article (slightly different to the one we used in this article):

"MA1" = 4 week moving average of the WLI
"MA2" = moving average of MA1 over the preceding 52 weeks
"n" = 52/26.5
"m" = 100

WLIg = [m*(MA1/MA2)^n] – m

The above provides a deviation of 0 versus the published WLIg, where our original formula had an average deviation of 0.0026. The differences are negligible between the two formulas, but the more recent one is a 100% mathematical match. Due to the close match of the two formulas, everything we have discussed in this article regarding the use of the WLIg+ growth variants for recession detection/forecasting still holds.
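The two growth formulas can be compared numerically. Both are zero at MA1/MA2 = 1 and stay close for ratios near 1, which is why the practical differences are negligible (the sample ratios below are illustrative, not taken from WLI data):

```python
diffs = []
for r in [0.98, 0.99, 1.0, 1.01, 1.02]:  # illustrative MA1/MA2 ratios
    v1 = (r ** 2 - 1) * 100               # the approximate formula (n=2, m=1)
    v2 = 100 * r ** (52 / 26.5) - 100     # the exact published formula
    diffs.append(abs(v1 - v2))
```

The exponents 2 and 52/26.5 ≈ 1.962 are close enough that the two curves track each other tightly in the range where the ratio of the moving averages actually lives.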
[Seminar 2021.05.06] Arithmetic properties of weakly holomorphic modular functions of arbitrary level

Date: 6 May (Thu) 14:30 ~ 15:30
Place: Zoom (ID: 854 1988 1532)
Speaker: Soon-Yi Kang (Kangwon National University)
Title: Arithmetic properties of weakly holomorphic modular functions of arbitrary level

The canonical basis of the space of modular functions on the modular group of genus zero forms a Hecke system. From this fact, many important properties of modular functions were derived. Recently, we proved that the Niebur-Poincaré basis of the space of harmonic Maass functions also forms a Hecke system. In this talk, we show its applications, including divisibility of Fourier coefficients of modular functions of arbitrary level, higher genus replicability, and values of modular functions on divisors of modular forms. This is joint work with Daeyeol Jeon and Chang Heon Kim.

cf. Spring semester number theory seminar webpage: https://sites.google.com/view/snunt/seminars
further improved phatk_dia kernel for Phoenix + SDK 2.6 - 2012-01-13

The steps:
1. OR the low 16 bits of H against the high 16 bits
2. Take the resulting 16-bit number and OR the low 8 bits against the high 8 bits
3. Take the resulting 8-bit number and OR the low 4 bits against the high 4 bits
4. Take the resulting 4-bit number and OR the low 2 bits against the high 2 bits
5. Take the resulting 2-bit number and NOR the first bit against the second bit
6. Do a bitwise AND of the resulting 1-bit number against the nonce
7. Take the result from #6 and XOR the low 16 bits against the high 16 bits
8. Take the resulting 16-bit number from #7 and OR the low 8 bits against the high 8 bits
9. Store the result by doing OUTPUT[OUTPUT_SIZE] = OUTPUT[result of #8] = nonce

Steps 1-5 create a single bit indicating whether H == 0. When you bitwise AND this against the nonce in step 6, you will get 0 for any invalid nonce, and for valid nonces you will just get the nonce again (1 AND X = X). Steps 7-8 produce an 8-bit index that is 0 for all invalid nonces and hopefully unique for each valid nonce, assuming there are a small number of valid nonces. In the worst case (more than 1 hash found in a single execution) at least one will be returned; if 3 or fewer nonces are found per execution, all of them should be returned in most cases.

Sorry to jump in in the middle of the conversation, but if I understand what you are trying to do... Can't you just replace all of the steps with:

```c
Valid = 1 - min(H, 1u);
Nonce = W[3];
OUTPUT[((Nonce & OUTPUT_MASK) + 1) * Valid] = Nonce;
```

if you are trying to remove all control flow? Any invalid nonce will be written into OUTPUT[0] and the valid nonces will be randomly distributed through the rest of the array. I really don't know how the architecture handles having 4 billion threads writing to the same address, but... you may want to try it out...
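The two validity tests discussed above (the step-by-step bit reduction and the branch-free min trick) can be cross-checked with a small model. This is illustrative Python, not OpenCL, and it assumes the reduction uses OR at every narrowing step so that the final NOR yields 1 exactly when H == 0:

```python
def valid_by_reduction(h):
    """OR-reduce a 32-bit word down to 2 bits, then NOR: 1 iff h == 0."""
    h &= 0xFFFFFFFF
    r = (h & 0xFFFF) | (h >> 16)     # 32 bits -> 16
    r = (r & 0xFF) | (r >> 8)        # 16 -> 8
    r = (r & 0xF) | (r >> 4)         # 8 -> 4
    r = (r & 0x3) | (r >> 2)         # 4 -> 2
    return 1 - ((r & 1) | (r >> 1))  # NOR of the final two bits

def valid_by_min(h):
    """The reply's branch-free test: 1 - min(h, 1)."""
    return 1 - min(h, 1)

# Both agree: 1 for h == 0 (a winning hash), 0 otherwise.
checks = [0, 1, 2, 0x8000, 0x10000, 0xFFFFFFFF]
```

On hardware with a cheap min instruction, the one-liner replaces five shift-and-OR stages, which is the point the reply is making.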
Also, it is easy enough to make it work with VECTORS:

```c
Valid = 1 - min(min(H.x, H.y), 1u);
// If .y is valid, add 1 to the nonce.
Nonce = W[3].x + min(H.y, 1u);
OUTPUT[((Nonce & OUTPUT_MASK) + 1) * Valid] = Nonce;
```

(or you could just double the code for .x and .y)

```c
Valid = 1 - min(min(H.x, H.y), 1u);
Nonce = W[3].x;
OUTPUT[((Nonce & OUTPUT_MASK) + 1) * Valid] = Nonce;
```

and have the __init__ file check both Nonce and Nonce+1.

Another way of doing it would be (the compiler should replace the if statement with a set conditional):

```c
Nonce = W[3];
Position = W[3] & OUTPUT_MASK;
if (H != 0)
    Position = OUTPUT_MASK + 1; // Invalid nonces go to the last position of the array, valid ones are distributed at the front
OUTPUT[Position] = Nonce;
```

Slightly faster would be to have Position = the local thread # (since you save an &) and make sure that the size of the output array is WORKSIZE + 1:

```c
Nonce = W[3];
Position = get_local_id(0);
if (H != 0)
    Position = WORKSIZE + 1;
OUTPUT[Position] = Nonce;
```

EDIT: Ooh, just thought of something else. If it doesn't like writing everything to the same address, make the buffer size = 2*WORKSIZE...

```c
Nonce = W[3];
Position = get_local_id(0);
if (H != 0)
    Position += WORKSIZE;
OUTPUT[Position] = Nonce;
```

Then all of the threads in a workgroup will write to a different address. The valid nonces will be in the first half, and the invalid will be in the second.

Now I have no idea if any of these things would be faster, but I think all of them would work... Sorry to put so much code down... but this kind of coding isn't really an exact science...
Period of Trig Functions

Let's start with the general equation:

y = A sin(B(x - C)) + D

In this post, we'll talk about the B term. This term helps us find the period, the distance required to complete one cycle. It multiplies the angle. For example, y = sin(90°), or in radians y = sin(π/2), is 1. But if you make B = 2 instead of 1 (not shown but assumed in the previous equation), it now looks like y = sin(2(90°)), which simplifies to y = sin(180°). Or in radians, y = sin(2(π/2)), which simplifies to y = sin(π). These now equal 0.

To find the period, note that period = 2π/B. So in the second example, 2π/2 = π. In the first example, 2π/1 = 2π. In other words, the second example completes a full cycle in only π radians (180°), whereas it takes 2π in the first example.
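The period formula can be checked numerically: y = sin(Bx) should repeat every 2π/B. A quick sketch in Python (the function name is ours):

```python
import math

def period(B):
    """Period of y = A*sin(B*(x - C)) + D, i.e. 2*pi/B."""
    return 2 * math.pi / B

# With B = 2 the period is pi: shifting x by pi leaves sin(2x) unchanged.
B = 2
repeats = all(abs(math.sin(B * x) - math.sin(B * (x + period(B)))) < 1e-9
              for x in [0.0, 0.7, 1.3, 2.9])
```

The same check with B = 1 only works for shifts of 2π, which matches the two examples in the post.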
The Story of Mathematics
Philosophy of Mathematics
Teaching of Mathematics
Curriculum and Syllabus
School Mathematics
Maths topics by class
Textbooks
Question Bank

Mathematics is a language - many mathematicians have described the process of mathematics as art and poetry. Bertrand Russell once wrote, "Mathematics possesses not only truth but supreme beauty, a beauty cold and austere, like that of sculpture, sublimely pure and capable of a stern perfection, such as only the greatest art can show." Other mathematicians and scientists have often written about the unreasonable effectiveness of mathematics in explaining the world around us. Notwithstanding the poetry and the beauty, a functional knowledge of mathematics and computing is an essential skill for transacting in society. However, to appreciate the poetry of this language or to transact, one must learn the grammar and acquire the vocabulary. And school mathematics is largely about acquiring the skills to communicate and developing a love for the language. This portal is for students and teachers to engage with this language and build those skills.

Fun corner

Your phone number will reveal your actual age. It will take about 15 seconds; read and do it at the same time so that you will not lose the fun.
1. Take a look at the last digit of your cell phone number
2. Multiply this figure by 2
3. Then add 5
4. And then multiply by 50
5. And then add the number 1765
6. The last step: from this number, subtract your birth year.

Now you see a three-digit number. The first digit is the last digit of your phone number; the next two digits are your actual age!

You can use the attached spreadsheet to solve this and find out why it works!!

Famous Mathematicians

Here is a video on Srinivasa Ramanujan and his magic square shared by Sunitha Madam.
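The Fun corner trick can be unpacked algebraically: ((2d + 5) × 50 + 1765) − birth year = 100d + (2015 − birth year), so the hundreds digit is the phone digit and the last two digits are the age you turn in 2015, the year the constant 1765 targets. A quick check in Python:

```python
def phone_age_trick(last_digit, birth_year):
    """Follow the six steps of the trick and return the final number."""
    n = last_digit * 2   # step 2
    n = n + 5            # step 3
    n = n * 50           # step 4
    n = n + 1765         # step 5
    n = n - birth_year   # step 6
    return n             # equals 100*last_digit + (2015 - birth_year)

result = phone_age_trick(7, 1990)
```

For a phone digit of 7 and birth year 1990, the result is 725: leading digit 7, trailing 25 = 2015 − 1990. In any other year, the constant in step 5 would need updating.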
Articles by Teachers

A National Conference on "Developing Mathematics Teachers for Quality Learning for All" was organized on Dec 20-22 at the Regional Institute of Education, Ajmer. Praveen Sir from Kolar district participated and made a poster presentation on the use of origami for mathematics teaching. He has shared: "I got a great opportunity to represent my state and school to the entire country through the National Conference on Teachers of Mathematics Quality Learning for All. Out of 120 papers received, only 70 were selected for presentation, and I was fortunate enough to be one of them. Very few people there knew origami; that made my poster very special. I presented the poster on the topic 'Teaching of geometry using origami in the classroom'. I was able to present 10 posters on the topic and also demonstrated several 2-D and 3-D objects. Mine was not like the other lectures. Even the convener appreciated the origami in action." See pictures below.

Maths Lab

Interesting news

If mathematics is a language, then how does it seem to describe all the observed things in the universe, from helical DNA structures to black holes? Mathematicians and scientists have talked about the unreasonable effectiveness of mathematics. So is mathematics inherent in the universe, or is it part of the way the human brain is wired? This question is being studied, and some recent research can be accessed here.

Events and Happenings

Book Shelf

A School Geometry by Hall and Stevens. A must-read book for all maths teachers. It costs only Rs 80 and is available through Flipkart and Infibeam.

Classroom resources

Find here mind maps, activities, video and multimedia resources for science lessons in Class 9. The resources have been developed to help build conceptual understanding and have been arranged according to the chapters in the textbook.

Mathematics foundational materials: please click here. For typing formulae on KOER, please click here.
For question papers and CCE activities please click here. For solved problems please click here. For additional resources and teaching activities click below:

From the forum

Some interesting exchanges from the STF mailing forum. To join the forum, visit the group here.

Square root of a number
The square root of a whole number is always less than that number (e.g. the square root of 25 is 5, of 36 is 6, etc.), but for decimals it is the reverse (e.g. the square root of 0.8 is about 0.89). What is the reason? Shared by Suchetha SS, GHS Thyamagondulu.

CaRMetal is a GeoGebra-like free software
A description of another graphic tool like GeoGebra. Shared by Tharanath Achar, GPUC Belthangady.

The Pth term of an AP is Q, and the Qth term of the AP is P. Find the PQth term. Shared by Mallikarjun Sudi, GHS Yelheri and Sneha Titus, APU. [Read More]

Pi Day
Which day is Pi Day? [Find out]

Trigonometry using GeoGebra
Basics of trigonometry using GeoGebra, a lesson shared by Radha Narve, GHS Begur. Click [here] to read more.

1. Check out this online geometry box that you can use to show constructions to students. This tool can also be downloaded. The tool requires Flash Player. Click here to view.
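The AP puzzle shared above has a tidy closed form. A short derivation, using the standard notation $a$ for the first term, $d$ for the common difference, and $t_n$ for the $n$th term:

```latex
\begin{align*}
t_P &= a + (P-1)d = Q,\\
t_Q &= a + (Q-1)d = P.
\end{align*}
% Subtracting gives (P-Q)d = Q-P, so d = -1 (assuming P \neq Q),
% and then a = Q + (P-1) = P + Q - 1. Hence
\begin{align*}
t_n &= (P+Q-1) + (n-1)(-1) = P + Q - n,\\
t_{PQ} &= P + Q - PQ.
\end{align*}
```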
{"url":"https://teacher-network.in/OER/index.php/Mathematics","timestamp":"2024-11-09T03:09:17Z","content_type":"text/html","content_length":"53883","record_id":"<urn:uuid:f7e9a5e6-b723-4b18-951d-613e94e6be26>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00297.warc.gz"}
How progress is calculated

Progress of a single goal/objective

You can either work with key results or use a simple percentage metric for your goal. If key results are defined, the overall progress of all key results defines the goal's overall progress. Please note that an initiative's progress only influences a key result's or an objective's progress if "Enable dynamic progress calculation" is set up on the key result level.

You can use weights to define the importance of specific key results. If no weights are defined, all key results are treated equally. If you assign weights to all key results, the overall progress calculation considers the respective weight of each key result.

Key results calculation

The percentage progress of a key result is calculated with the following formula (reconstructed here from the example below):

progress (%) = (current value − start value) / (target value − start value) × 100

Therefore, if the current value is the same as the target value, the key result's progress is 100%. For example, with a start value of 203 and a target value of 280: (280 − 203) / (280 − 203) × 100 = 100%.

Dynamic progress calculation on goal level

During goal creation, you can enable the option of dynamic goal calculation. That means any update of a child goal (lower-level goal) updates the progress of the parent goal (higher-level goal).

Example: You have created a parent goal with two linked child goals. You change one child goal's progress to 50%. The parent goal's progress will be updated to 25%.

Important: Dynamic progress calculation can only be enabled if you have no key results defined.

Dynamic progress calculation on key result level

During key result creation, you can enable the option of dynamic progress calculation. This means that the key result's progress will automatically be calculated based on the progress of the initiatives below it.

Overall progress

Our system takes into account all active goals (not drafts or archived goals) for the overall progress calculation of a person/a team.
When calculating overall progress, the system averages the progress of individual goals (see above) to calculate overall progress. When you are looking at a filtered view of all goals (i.e., your personal goals, or your team goals) the progress shown will reflect the average progress for those filtered goals (i.e., it will not take into account the progress of any other goals).
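The weighting and averaging rules described above can be sketched as follows. This is a minimal illustration, not Leapsome's actual code; the function names and the (progress, weight) representation are our assumptions:

```python
def key_result_progress(start, target, current):
    """(current - start) / (target - start) * 100, capped to [0, 100]."""
    pct = (current - start) / (target - start) * 100
    return max(0.0, min(100.0, pct))

def goal_progress(key_results):
    """Weighted average of key-result progress.

    Each entry is a (progress_pct, weight) pair; pass equal weights
    (e.g. 1 each) when no explicit weights are defined.
    """
    total_weight = sum(w for _, w in key_results)
    return sum(p * w for p, w in key_results) / total_weight

# The article's key-result example: start 203, target 280, current 280 -> 100%.
# The parent-goal example: two equally weighted children at 50% and 0% -> 25%.
```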
{"url":"https://leapsome.zendesk.com/hc/en-us/articles/4408895932305-How-progress-is-calculated","timestamp":"2024-11-14T02:27:31Z","content_type":"text/html","content_length":"25278","record_id":"<urn:uuid:a954e747-8b25-4fa0-81f6-cdca6d7716eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00230.warc.gz"}
A cylinder is a geometrical shape with two parallel straight sides and two circular parts, one at the top and the other at the bottom.

Right Circular Cylindrical Solid:
This is a solid generated by rotating a rectangle about one of its sides. Consider the rectangle PQRS, rotated about its side PQ through one complete rotation back to its initial position. This rotation creates a right circular cylindrical solid. Every right circular cylindrical solid has two plane ends. Each plane end is circular in shape, and the two plane ends are parallel. Each plane end is known as a base of the right circular cylindrical solid. The straight line joining the centres of the two plane ends is called the axis. The radius of the base circle is called the radius of the right circular cylindrical solid. The curved surface joining the two plane ends is the lateral surface, and its area is the curved (lateral) surface area. In the picture above, the letter r represents the radius and h represents the height.

Hollow Cylindrical Solid:
A solid bounded by two co-axial cylinders of the same height (but different radii) is called a hollow cylindrical solid. Here "R" denotes the external radius and "r" the internal radius of the cylinder. In the picture of a hollow cylindrical shape above, the green line represents the internal radius, the red line the external radius, and the yellow line the height of the hollow cylindrical solid.

Role of radius and height in finding areas and volumes:
When we want to find areas and volumes, the two values r (radius) and h (height) play a vital role. Once we know the radius and height, we can use the formulas to find the areas and volumes.

Question 1: A solid right circular cylinder has a radius of 14 cm and a height of 8 cm. Find its curved surface area and total surface area.
Solution:
Radius (r) = 14 cm, Height (h) = 8 cm

Curved surface area = 2πrh
= 2 ⋅ (22/7) ⋅ 14 ⋅ 8
= 2 ⋅ 22 ⋅ 2 ⋅ 8
= 704 sq. cm

Total surface area of cylinder = 2πr(h + r)
= 2 ⋅ (22/7) ⋅ 14 ⋅ (8 + 14)
= 2 ⋅ (22/7) ⋅ 14 ⋅ 22
= 2 ⋅ 22 ⋅ 2 ⋅ 22
= 1936 sq. cm

Curved surface area = 704 sq. cm; total surface area = 1936 sq. cm.
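The two formulas in the worked example translate directly into code. A small sketch, using the same 22/7 approximation for π as the solution above (function names are ours):

```python
PI = 22 / 7  # the approximation used in the worked example

def curved_surface_area(r, h):
    """Lateral (curved) surface area of a right circular cylinder: 2*pi*r*h."""
    return 2 * PI * r * h

def total_surface_area(r, h):
    """Total surface area: curved surface plus the two circular ends, 2*pi*r*(h + r)."""
    return 2 * PI * r * (h + r)

# Question 1 (r = 14 cm, h = 8 cm) reproduces the worked answers of
# 704 sq. cm and 1936 sq. cm (up to floating-point rounding).
```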
{"url":"https://www.onlinemath4all.com/Cylinder.html","timestamp":"2024-11-04T14:14:05Z","content_type":"text/html","content_length":"29936","record_id":"<urn:uuid:e1d4f6aa-27e7-4478-a0bb-f7c9561e066c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00710.warc.gz"}
Future Value (FV) - Definition, Concept & Calculation

The future value (FV) is the value that a certain amount of money that we currently hold, or that we decide to invest in a certain project, will have in the future.

What is Future Value?

Future value is the future amount of an investment made today, which will grow over a period of time. The information provided by this method is useful because it allows you to calculate how much a company's future wealth will be maximized, making it an important method for capital investment decisions. The future value (FV) allows us to calculate how the value of the money that we currently have (today) will change, considering the different investment alternatives available to us. In order to calculate the future value, we need to know the value of our money at the current moment and the interest rate that will be applied in the coming periods.

The concept of future value is related to that of present value. The latter reflects the value that a flow of money we will receive in the future would have today. Future value is used to evaluate the best alternative as to what to do with our money today, and also to see how the value of money changes over time.

Concept of Future Value

The concept of future value seeks to reflect the fact that, if we decide to delay our current consumption, it will be for a prize, something worthwhile. In this way, we expect the future value to be greater than the present value of an amount of money that we currently have, since a certain interest rate or return is applied to it. Thus, for example, if today I decide to deposit money in a bank savings account, this amount will grow at the interest rate that the bank offers me.

Relationship Between Present Value and Future Value

These are two sides of the same coin. They both reflect the value of the same money at different points in time.
It is always better to have the money today instead of waiting, unless we are paid interest for it. In the future value formula we can solve for the present value and vice versa.

Formula for Calculating the Future Value

The formula for calculating the future value depends on whether the interest applied is simple or compound.

• Simple interest formula

Simple interest occurs when the interest rate is applied only to the principal or initial amount, not to the interest that is earned over time. The formula is as follows:

FV = PV × (1 + r × n)

FV = Future Value
PV = Present Value (the amount we invest today to earn interest)
r = simple interest rate
n = number of periods

Example: Suppose you invest $1,000 in a savings account that offers a simple interest rate of 10%. What is the future value in the next two years?

FV = 1,000 × (1 + 10% × 2) = $1,200 (the interest earned is $200)

• Compound interest formula

In this case, the interest rate is applied to the initial amount and also to the interest that is earned each period. The formula is as follows:

FV = PV × (1 + r)^n

Example of How to Calculate Future Value

Example: Suppose the bank now offers you a compound interest rate of 10% on savings. What is the future value in the next two years?

FV = 1,000 × (1 + 10%)^2 = $1,210

This implies that the interest earned is $210. The first year the interest is 10% of 1,000 ($100), and the second year it is 10% of 1,100 ($110).
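Both formulas can be checked with a few lines of code. A minimal sketch mirroring the article's examples (function names are illustrative):

```python
def future_value_simple(pv, r, n):
    """Simple interest: interest accrues on the principal only, FV = PV*(1 + r*n)."""
    return pv * (1 + r * n)

def future_value_compound(pv, r, n):
    """Compound interest: interest accrues on principal plus prior interest, FV = PV*(1 + r)**n."""
    return pv * (1 + r) ** n

# Article examples: $1,000 at 10% for 2 years gives
# 1,200 under simple interest and 1,210 under compound interest
# (up to floating-point rounding).
```

The gap between the two results (the extra $10) is exactly the second year's interest on the first year's interest.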
{"url":"https://studyhq.net/future-value/","timestamp":"2024-11-09T15:46:56Z","content_type":"text/html","content_length":"297987","record_id":"<urn:uuid:319903a2-f2f0-4037-a09e-97d14723413b>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00306.warc.gz"}
Encyclopedia of Mathematics is a comprehensive one-volume encyclopedia designed for high school through early college students. More than 1,000 entries, numerous essays, and more than 150 photographs and illustrations cover the principal areas and issues that characterize this area of science. In order to provide a well-rounded, completely accessible reference, the author worked closely with teachers of all levels in developing subject material and a sound understanding of mathematical concepts, and on teaching analytical thinking. This valuable resource unites disparate ideas and provides the meaning, history, context, and relevance behind each one. The easy-to-use format makes finding straightforward and natural answers to questions within mathematics—such as algebra, trigonometry, geometry, probability, combinatorics, numbers, logic, calculus, and statistics—simple. Encyclopedia of Mathematics also gives historical context to mathematical concepts, with entries discussing ancient Arabic, Babylonian, Chinese, Egyptian, Greek, Hindu, and Mayan mathematics, as well as entries providing biographical descriptions of important people in the development of mathematics. Essay entries include: • Arabic mathematics • Babylonian mathematics • Egyptian mathematics • Mayan mathematics • Number systems • History of calculus • History of functions • History of geometry • History of trigonometry • Indian mathematics. Biographical entries include: • Georg Cantor • Descartes • Euclid • Fibonacci • Carl Friedrich Gauss • David Hilbert • Pythagoras • Brook Taylor • John Von Neumann • Zeno of Elea.
{"url":"https://www.infobasepublishing.com/Bookdetail.aspx?ISBN=1438110081&eBooks=1","timestamp":"2024-11-08T06:19:04Z","content_type":"application/xhtml+xml","content_length":"67492","record_id":"<urn:uuid:c11c418b-deea-4152-9b2f-f16e95d6d12d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00559.warc.gz"}
2021: Department of Mathematics - Northwestern University Bryna Kra, Sarah Rebecca Roland Professor of Mathematics at Northwestern University, has been elected President of the American Mathematical Society. After a year of being President Elect, her two-year term will start on 1 February 2023. Prof. Kra has been a Fellow of the American Mathematical Society since 2012. A professional society since 1888, AMS supports the mathematical sciences by providing access to research, professional networking, conferences and events, advocacy, and a connection to a community passionate about mathematics and its relationship to other disciplines and everyday life. Nov 19, 2021 Nov 12, 2021 Northwestern University's Mathematics Department hosted the 2021 Midwest Dynamical Systems Conference Nov. 12-14. The Midwest Dynamical Systems Conference is one of the most influential, diverse, and longest running conference series in dynamical systems. This conference series has met uninterruptedly since the early 1970s, and it has received continuous support from the National Science Foundation since 1988. The organizing committee for the 2021 Midwest Dynamical Systems Conference was Aaron Brown (Northwestern), Laura DeMarco (Harvard), Ilya Khayutin (Northwestern), Roland Roeder (IUPUI), and Daniel Thompson (Ohio State). NU Mathematics Faculty Member, Ben Antieau Named a 2022 Fellow of the American Mathematical Society Nov 1, 2021 NU Mathematics faculty member, Ben Antieau, has been named a 2022 Fellow of the American Mathematical Society. The Fellows of the American Mathematical Society program recognizes members who have made outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics. Professor Antieau was named a 2022 AMS Fellow for his contributions to K-theory, algebraic geometry, and homotopy theory. 
Xiumin Du and Aaron Brown Invited to Speak at the 2022 International Congress of Mathematicians Sept 13, 2021 NU Math faculty members, Xiumin Du and Aaron Brown, received prestigious invitations to speak at the 2022 International Congress of Mathematicians. Xiumin will speak in the Analysis Section and Aaron Brown will speak in the Dynamics Section. The International Congress of Mathematicians (ICM) is the largest and most significant conference on pure and applied mathematics as well as one of the world’s oldest scientific congresses. The first ICM took place in Zurich, Switzerland, in 1897. ICMs are run every four years by the International Mathematical Union, in partnership with host country organizers. ICM 2022 will meet July 6-14 in St. Petersburg, Russia. Aaron Brown Wins 2022 New Horizons Mathematics Prize Sept 9, 2021 NU Mathematics faculty member, Aaron Brown, has won the 2022 New Horizons in Mathematics Prize for contributions to the proof of Zimmer’s conjecture. Northwestern University math major, Federico Burdisso, Wins Bronze Medal for Italy at the Tokyo Olympics July 28, 2021 Federico Burdisso, a Northwestern University Mathematics Major, won the Bronze Medal for the 200 Meter Butterfly at the Tokyo Olympics. Click here for more details. NU FACULTY MEMBER, EMMY MURPHY, WINS 2021 MCA AWARD July 10, 2021 The Mathematical Council of the Americas (MCofA) announced the following prizes: the MCA Prize, the Americas Prize and the Solomon Lefschetz Medal, awarded on the occasion of the 3rd Mathematical Congress of the Americas during a virtual ceremony on July 9th. NU Faculty member, Emmy Murphy, was one of five recipients of the MCA Prize. The Mathematical Council of the Americas (MCofA) is a network for professional mathematical societies and research institutes based in the Americas, dedicated to promoting the development of mathematics, in all its aspects, throughout the continent. 
As a continental collaborative effort, special attention is given to cooperating with the Mathematical Union for Latin America and the Caribbean (UMALCA). The goal of the Mathematical Congress of the Americas (MCA) is to internationally highlight the excellence of mathematical achievements in the Americas and foster collaborations among researchers, students, institutions and mathematical societies in the Americas.

May 12, 2021

The 2021 Undergraduate Awards and Lecture were held via Zoom on Wednesday, May 12th. Over 50 NU Math faculty, graduate students, award winners, Math office staff, and guests were in attendance. The featured speaker was Prof. Tara Holm of Cornell University, whose talk was titled "The Geometry of Origami: How the Ancient Japanese Art Triumphed Over Euclid". The abstract of her talk is listed below:

• "In ancient Greece, Euclid described a system of geometry in The Elements. There are deep connections between this geometry and questions in algebra, as explained by the 19th century French mathematician and political activist Evariste Galois. These connections will allow us to settle the classical questions of Euclidean geometry. Next, we will explore the ancient Japanese art of origami, and discover how paper folding can be turned into a framework for studying geometry. This paper-folding alternative can do everything Euclid could do, and more! Indeed, this seemingly abstract mathematical theory can have surprising and useful applications."

2021 UG AWARD WINNERS

• Robert R. Welland Prize for Outstanding Achievement in Mathematics:
• Senior Career Award in Mathematics:
• Junior Career Award in Mathematics: Benjamin Major, Yunru (Rose) Zheng
• Award for Excellence in Mathematics by a First-Year Student: Varun Banati, Nick Dorai, Rob Dubinski, Emilya Ershstein, Samuel Fiete, Levi Hoogendoorn, Yuhan (Alex) Jin, Varsha Krishna, Kierthan Lathrop, Eric Ma, Jacob Platnick, Valeriia Rohoza, Austin Segal, Fiona Wang, Yao Xiao, Bobby Yalam, Ada Zhong
• Award for Outstanding Contributions to Undergraduate Mathematical Life:
• Award for High Achievement on the William Lowell Putnam Examination:
• Award for Excellence as an Undergraduate Teaching Assistant:
• Undergraduate Teaching Assistant Service Award: Nicholas Karris, Neil Vakharia

April 17, 2021

Northwestern faculty members Aaron Greicius and Sean McAfee have been awarded a $10,000 Open Educational Resources (OER) Faculty Grant for their proposal to develop open-source resources for the Math 220 sequence. The aim of the project is to develop an extensive open-source learning resource for calculus courses in the Mathematics Department, effectively free of cost to students. This single resource would include a large variety of components, including a complete textbook with embedded videos from an in-house video library, embedded computational cells, a large bank of exercises for written homework, and integration of an open-source online homework system. This is an ambitious project that will take several years to complete. By the end of the first year, Greicius and McAfee hope to produce skeletal versions of the text corresponding to topics covered in Math 220-1 and Math 220-2. Additionally, they plan on working with other faculty to reach consensus on choices of definitions and statements of theory, as well as to recruit future PROFS.
BRYNA KRA AND JARED WUNSCH NAMED SIMONS FELLOWS FOR 2021

Feb. 5, 2021

Northwestern University and the Simons Foundation congratulate the outstanding mathematicians and theoretical physicists who have been awarded Simons Fellowships in 2021. Two members of the Northwestern University Mathematics faculty, Prof. Bryna Kra and Prof. Jared Wunsch, are among the 2021 Simons Fellowship award recipients. Prof. Kra's project is "Topological and Ergodic Properties of Symbolic Systems." Prof. Wunsch's project is "Propagation of Singularities, Diffraction, and Decay of Waves."

The Simons Fellows program extends academic leaves from one term to a full year, enabling recipients to focus solely on research for the long periods often necessary for significant advances. The Simons Foundation seeks to create strong collaborations and foster the cross-pollination of ideas between investigators, as these interactions often lead to unexpected breakthroughs. The Simons Foundation Mathematics and Physical Sciences (MPS) division supports research in mathematics, theoretical physics and theoretical computer science by providing funding for individuals, institutions and science infrastructure.

NU MATH PROF. DU AND PROF. KHAYUTIN AWARDED 2021 SLOAN RESEARCH FELLOWSHIPS

Feb. 22, 2021

Northwestern University Mathematics faculty members Prof. Xiumin Du and Prof. Ilya Khayutin have been awarded 2021 Sloan Research Fellowships. Open to scholars in eight scientific and technical fields—chemistry, computational and evolutionary molecular biology, computer science, Earth system science, economics, mathematics, neuroscience, and physics—the Sloan Research Fellowships are awarded in close coordination with the scientific community. Candidates must be nominated by their fellow scientists and winners are selected by independent panels of senior scholars on the basis of a candidate's research accomplishments, creativity, and potential to become a leader in his or her field.
More than 1000 researchers are nominated each year for 128 fellowship slots. Winners receive a two-year, $75,000 fellowship which can be spent to advance the fellow’s research.
{"url":"https://www.math.northwestern.edu/about/news/2021.html","timestamp":"2024-11-05T16:59:29Z","content_type":"text/html","content_length":"50345","record_id":"<urn:uuid:e3d648bf-5f44-4ed5-9796-c907c714b7da>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00293.warc.gz"}
Convexity and limits in quantum information theory

As quantum science is transformed into a viable technology, it is of increasing importance to identify and develop mathematical techniques that allow a rigorous understanding of quantum information transmission. The identification of the fundamental capabilities and limitations of quantum computing poses hard mathematical problems, crossing a number of areas and requiring a variety of techniques from linear algebra, functional analysis, probability theory, operator theory, and others. A feasible passage from the classical to the quantum computer would rest on fruitful analogies inspired by classical information theory as a sample of physically realisable theoretical patterns, giving rise to a far-reaching cross-fertilisation between this field and quantum information theory. We illustrate these efforts by several examples:

1. Channel capacities are the optimal rates of broadcasting quantum information that lead to a physically reliable transmission. A genuinely quantum effect was discovered in 2009: two quantum channels that each have zero capacity for transmitting quantum information perfectly may, when plugged in parallel, be used to transmit a non-trivial amount of both classical and quantum information with perfect reliability. This phenomenon, called superactivation, is linked to a series of problems about computing the capacity of channels applied in parallel in terms of the capacities of the individual channels, and many of its aspects remain unexplained.

2. Entanglement is one of the most intriguing features of quantum mechanics. Arguably, one of its most spectacular manifestations arises through pseudo-telepathy games: cooperative games played by two players against a third verifying party, which cannot be won via classical strategies, but can be won using strategies relying on quantum entanglement. The usefulness of such games is highest when the game has a very large size.
It therefore becomes necessary to develop a mechanism which allows the analysis of large games, both in isolation and in conjunction with other quantum games. The formulation of a mathematical framework of limit games will allow the probabilistic analysis of subtle questions about winning probabilities in the important parallel repetition problem.

3. Convex sets and functions play an indispensable role in analysis, optimization, and many other areas of mathematics and data science. We are exploring a notion of convexity that applies to quantum density operators on a Hilbert space. This arena is largely unexplored, but we expect that once the theory is properly developed, "convex density operators" should play a similarly indispensable role in the study of quantum mechanical systems.

Participating Faculty: Ghandehari, Madiman, Todorov
{"url":"https://qse.udel.edu/research/convexity-and-limits-in-quantum-information-theory/","timestamp":"2024-11-09T13:49:04Z","content_type":"text/html","content_length":"74012","record_id":"<urn:uuid:c5fb2d2f-97d6-4a63-82cc-fa7c474c16bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00790.warc.gz"}
Leftovers: Metric System

Metric Prefixes

The metric system is a decimalised system of measurement. It is known as the International System of Units or Système international (SI), and is recognised the world over. The metric prefixes listed below precede a basic unit of measurement to indicate a multiple (or sub-multiple) of that unit.

| Value | Prefix Name | Prefix Symbol |
| 10^30 | quetta- | Q |
| 10^27 | ronna- | R |
| 10^24 | yotta- | Y |
| 10^21 | zetta- | Z |
| 10^18 | exa- | E |
| 10^15 | peta- | P |
| 10^12 | tera- | T |
| 10^9 | giga- | G |
| 10^6 | mega- | M |
| 10^3 | kilo- | k |
| 10^2 | hecto- | h |
| 10 | deca- | da |
| 10^−1 | deci- | d |
| 10^−2 | centi- | c |
| 10^−3 | milli- | m |
| 10^−6 | micro- | μ |
| 10^−9 | nano- | n |
| 10^−12 | pico- | p |
| 10^−15 | femto- | f |
| 10^−18 | atto- | a |
| 10^−21 | zepto- | z |
| 10^−24 | yocto- | y |
| 10^−27 | ronto- | r |
| 10^−30 | quecto- | q |

Fundamental Units

These are the seven basic SI units from which all other physical quantities are defined.

| Quantity Name | Unit Name | Unit Symbol |
| time | second | s |
| length | metre | m |
| mass | kilogram | kg |
| electric current | ampere | A |
| thermodynamic temperature | kelvin | K |
| luminous intensity | candela | cd |
| amount of substance | mole | mol |

The Celsius temperature scale is equivalent to the Kelvin temperature scale, with 0 °C = 273.15 K.

Derived Units

These units are derived from various combinations (products, quotients and powers) of the fundamental units.
| Quantity | Name | Symbol | SI Base Units | Other SI Units |
| acceleration | | | m∙s^−2 | |
| acceleration, angular | | | rad∙s^−2 | |
| angle, plane | radian | rad | [dimensionless] | |
| angle, solid | steradian | sr | [dimensionless] | |
| area | | | m^2 | |
| capacitance | farad | F | kg^−1∙m^−2∙s^4∙A^2 | C∙V^−1 |
| catalytic activity | katal | kat | mol∙s^−1 | |
| concentration | | | mol∙m^−3 | |
| density | | ρ | kg∙m^−3 | |
| electric charge | coulomb | C | s∙A | |
| electric field strength | | | kg∙m∙s^−3∙A^−1 | N∙C^−1, V∙m^−1 |
| electrical conductance | siemens | S | kg^−1∙m^−2∙s^3∙A^2 | A∙V^−1, Ω^−1 |
| electrical potential | volt | V | kg∙m^2∙s^−3∙A^−1 | W∙A^−1, J∙C^−1 |
| energy | joule | J | kg∙m^2∙s^−2 | N∙m |
| entropy | | | kg∙m^2∙s^−2∙K^−1 | J∙K^−1 |
| force | newton | N | kg∙m∙s^−2 | |
| frequency | hertz | Hz | s^−1 | |
| heat | joule | J | kg∙m^2∙s^−2 | N∙m |
| heat capacity | | | kg∙m^2∙s^−2∙K^−1 | J∙K^−1 |
| illuminance | lux | lx | cd∙sr∙m^−2 | |
| impedance | ohm | Ω | kg∙m^2∙s^−3∙A^−2 | V∙A^−1 |
| inductance | henry | H | kg∙m^2∙s^−2∙A^−2 | V∙s∙A^−1 |
| irradiance | | | kg∙s^−3 | W∙m^−2 |
| luminance | | | cd∙m^−2 | |
| luminous flux | lumen | lm | cd∙sr | |
| magnetic flux | weber | Wb | kg∙m^2∙s^−2∙A^−1 | V∙s |
| magnetic flux density | tesla | T | kg∙s^−2∙A^−1 | V∙s∙m^−2 |
| moment of inertia | | | kg∙m^2 | |
| momentum | | | kg∙m∙s^−1 | N∙s |
| momentum, angular | | | kg∙m^2∙s^−1 | N∙m∙s |
| permeability | | | kg∙m∙s^−2∙A^−2 | H∙m^−1 |
| permittivity | | | kg^−1∙m^−3∙s^4∙A^2 | F∙m^−1 |
| power | watt | W | kg∙m^2∙s^−3 | J∙s^−1 |
| pressure | pascal | Pa | kg∙m^−1∙s^−2 | N∙m^−2 |
| radiant intensity | | | kg∙m^2∙s^−3∙sr^−1 | W∙sr^−1 |
| radiation dose | gray | Gy | m^2∙s^−2 | J∙kg^−1 |
| radiation dose equivalent | sievert | Sv | m^2∙s^−2 | J∙kg^−1 |
| radioactivity | becquerel | Bq | s^−1 | |
| resistance | ohm | Ω | kg∙m^2∙s^−3∙A^−2 | V∙A^−1 |
| specific heat capacity | | | m^2∙s^−2∙K^−1 | |
| specific volume | | | kg^−1∙m^3 | |
| stress | pascal | Pa | kg∙m^−1∙s^−2 | N∙m^−2 |
| surface density | | | kg∙m^−2 | |
| thermal conductivity | | | kg∙m∙s^−3∙K^−1 | W∙m^−1∙K^−1 |
| torque | | | kg∙m^2∙s^−2 | N∙m |
| velocity | | | m∙s^−1 | |
| velocity, angular | | | rad∙s^−1 | |
| viscosity, dynamic | | | kg∙m^−1∙s^−1 | N∙s∙m^−2, Pa∙s |
| viscosity, kinematic | | | m^2∙s^−1 | |
| volume | | | m^3 | |
| wave number | | | m^−1 | |
| work | joule | J | kg∙m^2∙s^−2 | N∙m |
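The "SI Base Units" column can be treated as a vector of exponents, which makes identities such as 1 F = 1 C/V mechanically checkable. A small sketch (the dictionaries map base-unit symbols to exponents, with values taken from the derived-units table):

```python
# Dimensions as {base-unit symbol: exponent}; zero exponents omitted.
FARAD   = {'kg': -1, 'm': -2, 's': 4, 'A': 2}   # capacitance
COULOMB = {'s': 1, 'A': 1}                      # electric charge
VOLT    = {'kg': 1, 'm': 2, 's': -3, 'A': -1}   # electrical potential

def divide(a, b):
    """Dimensional quotient a/b: subtract exponents, drop zero entries."""
    units = set(a) | set(b)
    out = {u: a.get(u, 0) - b.get(u, 0) for u in units}
    return {u: e for u, e in out.items() if e != 0}

# Checks the table's "Other SI Units" entry for the farad: F = C·V^−1.
```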
| Quantity | Name | Symbol | Equivalent Value |
| area | hectare | ha | 1 ha = 10^4 m^2 |
| energy | electronvolt | eV | 1 eV = 1.602176634×10^−19 J |
| length | astronomical unit | au | 1 au = 149,597,870,700 m |
| length | light year | ly | 1 ly = 9,460,730,472,580,800 m |
| length | parsec | pc | 1 pc = (648,000/π) au |
| mass | tonne or metric ton | t | 1 t = 1000 kg |
| plane/phase angle | degree | ° | 1° = (π/180) rad |
| plane/phase angle | minute | ′ | 1′ = (1/60)° = (π/10,800) rad |
| plane/phase angle | second | ″ | 1″ = (1/60)′ = (π/648,000) rad |
| pressure | bar | bar | 1 bar = 100,000 Pa |
| time | day | d | 1 d = 24 hr = 1440 min = 86,400 s |
| time | hour | hr | 1 hr = 60 min = 3,600 s |
| time | minute | min | 1 min = 60 s |
| volume | litre | l, L | 1 L = 10^3 cm^3 = 10^−3 m^3 |
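The prefix table maps straightforwardly onto code. A minimal converter between prefixed units (symbols and exponents are taken from the metric-prefix table above; the function name is illustrative):

```python
# Prefix symbol -> power of ten, from the metric-prefix table.
PREFIX_EXP = {
    'Q': 30, 'R': 27, 'Y': 24, 'Z': 21, 'E': 18, 'P': 15, 'T': 12,
    'G': 9, 'M': 6, 'k': 3, 'h': 2, 'da': 1, '': 0, 'd': -1, 'c': -2,
    'm': -3, 'μ': -6, 'n': -9, 'p': -12, 'f': -15, 'a': -18, 'z': -21,
    'y': -24, 'r': -27, 'q': -30,
}

def convert(value, from_prefix, to_prefix=''):
    """Rescale a value between prefixed units of the same base unit, e.g. km -> m."""
    return value * 10.0 ** (PREFIX_EXP[from_prefix] - PREFIX_EXP[to_prefix])

# convert(5, 'k')        -> 5000.0  (5 km expressed in m)
# convert(250, 'c', 'm') -> 2500.0  (250 cm expressed in mm)
```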
{"url":"https://www.lynneslair.com/metric/","timestamp":"2024-11-04T01:28:58Z","content_type":"text/html","content_length":"60954","record_id":"<urn:uuid:2466b0fd-afa7-4031-a826-9cfe52c9b7ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00427.warc.gz"}
PAPR Reduction Techniques for Coherent Optical OFDM Transmission

Bernhard Goebel, Graduate Student Member, IEEE, Stephan Hellerbrand, Graduate Student Member, IEEE, Norman Haufe, Norbert Hanik, Member, IEEE
Institute for Communications Engineering, Technische Universitat Munchen, D-80290 Munich, Germany
E-mail: Bernhard.Goebel@tum.de

ABSTRACT
In coherent optical OFDM systems, the large peak-to-average power ratio (PAPR) gives rise to signal impairments through the nonlinearity of the modulator and fiber. We review the most prominent PAPR reduction techniques that have been proposed for mitigating these impairments with regard to their reduction capability, computational complexity and redundancy. Simulation results are presented for Clipping, Selected Mapping, Active Constellation Extension and Trellis Shaping.

Keywords: modulation, OFDM, coherent detection, nonlinear fiber effects, PAPR, coding.

1. INTRODUCTION

Orthogonal frequency division multiplexing (OFDM) is considered one of the most promising transmission schemes for future 100 Gigabit Ethernet (100 GbE) networks. In combination with coherent detection, it offers virtually unlimited electronic compensation of chromatic dispersion and PMD [1] as well as record spectral efficiencies [2]-[3]. One major drawback of OFDM signals is their large peak-to-average power ratio (PAPR), which gives rise to distortions caused by nonlinear devices such as the A/D converter, external modulator and transmission fiber [4]. Upon transmission along the fiber, the Kerr effect creates distortions through four-wave mixing (FWM) between OFDM subcarriers; the strength of these FWM products depends on the signal's PAPR [5]. Various PAPR reduction techniques have been proposed in a wireless communications context [6] and for optical OFDM systems [5], [7]-[10].
In Section 2, we review the most important PAPR reduction methods for coherent optical OFDM systems with respect to their performance, complexity and introduced redundancy. Section 3 presents numerical simulation results, and Section 4 concludes the paper.

2. PAPR REDUCTION TECHNIQUES FOR OPTICAL OFDM SYSTEMS
In OFDM, a high-data-rate bit stream is demultiplexed into N lower-rate streams which modulate N equally spaced subcarriers. The data symbols [X0, X1, …, XN−1], which may be taken e.g. from a QPSK or 16-QAM signal constellation, form a complex OFDM symbol (or data block) of length NT as

$x(t) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} X_n \, e^{j 2\pi n \Delta f t}, \quad 0 \le t \le NT, \qquad (1)$

where $\Delta f = 1/NT$ is the subcarrier spacing [6]. For a sufficiently large N, the real and imaginary parts of x(t) follow a Gaussian distribution and the signal power has a central chi-square distribution with two degrees of freedom [6], so that very large power peaks occur with nonzero probability. When the PAPR is calculated from samples of the continuous signal (1), sampling at a rate of at least four times the Nyquist rate is recommended to fully capture peaks located in between samples [4], [6].
PAPR reduction methods can be broadly classified into two categories. In one group of methods, the signal is manipulated in a way such that peaks are removed; clipping, active constellation extension (ACE) and precoding are examples of this approach. In contrast, selected mapping (SLM) and trellis shaping (TS) are schemes which add redundancy to the signal, thereby creating a degree of freedom to reshape the signal or to replace OFDM symbols with a particularly large PAPR. In general, PAPR reduction methods are difficult to compare. For a rough comparison, it is common to use the (complementary) cumulative distribution function (CCDF) of the PAPR depicted in Fig. 1 (right). The aim of PAPR reduction schemes is to shift the CCDF curve as far to the left as possible.
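As an illustration of the symbol definition (1) and the oversampling recommendation, the sketch below computes the PAPR of one OFDM symbol from a 4x zero-padded IFFT. This is a minimal sketch with illustrative parameters (the random QPSK data and the spectrum layout are assumptions, not taken from the paper):

```python
import numpy as np

def papr_db(symbols, oversample=4):
    """PAPR in dB of one OFDM symbol, approximating the continuous
    signal x(t) of Eq. (1) by zero-padding the spectrum so that the
    IFFT samples the waveform at 4x the Nyquist rate."""
    n = len(symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = symbols[:n // 2]          # positive frequencies
    padded[-(n - n // 2):] = symbols[n // 2:]   # negative frequencies
    p = np.abs(np.fft.ifft(padded)) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 256
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# A single active subcarrier has a constant envelope, i.e. 0 dB PAPR,
# while a random 256-subcarrier symbol exhibits a much larger peak.
single_tone = np.zeros(N, dtype=complex)
single_tone[0] = 1.0
print(round(papr_db(single_tone), 6), round(papr_db(qpsk), 2))
```

The single-tone case is a useful sanity check: a lone complex exponential has peak power equal to its mean power, so any reported PAPR above 0 dB for it would indicate a bug in the oversampling.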
However, the complexity, redundancy and the actual benefit of a method cannot be judged from its CCDF alone.

2.1 Clipping, Active Constellation Extension and Precoding
Clipping all amplitudes that exceed a certain threshold is the simplest PAPR reduction technique. Clipping leads to distortions within as well as out of the signal bandwidth [4]. Filtering the out-of-band clipping noise results in peak re-growth, so that an iterative clipping-and-filtering approach may be necessary [6]. Mach-Zehnder modulators inherently decrease the PAPR through their nonlinear modulation characteristic ("soft clipping").
Active constellation extension (ACE) reduces the PAPR by moving some of the outer data symbols Xn away from the decision boundaries. This is depicted in Fig. 1 (left) for a QPSK constellation. Each constellation point is allowed to be moved within its respective grey-shaded area. The black dots show the resulting constellation of 256 OFDM symbols with 256 subcarriers each. ACE requires no side information at the receiver and the BER is expected to improve initially. However, ACE increases the average signal power; scaling it back to the initial power leads to an SNR decrease. As seen from Fig. 1 (right), the PAPR reduction capability of ACE decreases for higher constellation orders since only the outermost points can be moved. Determining which data symbols to move in order to reduce the PAPR is a convex optimization problem which is solved by iteratively clipping the amplitude in the time domain and re-setting the wrongly moved symbols in the frequency domain. Hence, ACE has a complexity of two FFTs per iteration [9].

978-1-4244-4826-5/09/$25.00 ©2009 IEEE, ICTON 2009, Mo.B2.4

Figure 1. ACE for QPSK (left) and comparison of CCDFs (right). Results shown are for N = 256 subcarriers and QPSK (black) and 16-QAM (green).
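The iterative clipping-and-filtering idea can be sketched in a few lines. This is only a sketch: the clip ratio, iteration count and band mask below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def papr(x):
    """Linear (not dB) peak-to-average power ratio of a time signal."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def clip_and_filter(x, n_used, clip_ratio=1.5, n_iter=3):
    """Iterative clipping-and-filtering sketch: clip the envelope at
    clip_ratio times the rms amplitude, then remove the out-of-band
    clipping noise; repeat, since filtering regrows the peaks."""
    n = len(x)
    a_max = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mask = np.zeros(n, dtype=bool)               # occupied subcarriers only
    mask[:n_used // 2] = True
    mask[-(n_used - n_used // 2):] = True
    for _ in range(n_iter):
        mag = np.abs(x)
        scale = np.where(mag > a_max, a_max / np.maximum(mag, 1e-12), 1.0)
        x = x * scale                            # clip envelope, keep phase
        X = np.where(mask, np.fft.fft(x), 0)     # filter out-of-band noise
        x = np.fft.ifft(X)
    return x

rng = np.random.default_rng(2)
N, oversample = 256, 4
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
spec = np.zeros(N * oversample, dtype=complex)
spec[:N // 2], spec[-(N - N // 2):] = sym[:N // 2], sym[N // 2:]
x0 = np.fft.ifft(spec)                           # 4x oversampled OFDM symbol
x1 = clip_and_filter(x0, n_used=N)
```

After the final filtering pass the spectrum is confined to the occupied subcarriers again, while the peak power stays well below that of the unclipped symbol even though filtering partially regrows it.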
The PAPR CCDFs of the original signal, SLM and TS are independent of the modulation format, whereas ACE and precoding degrade with increasing modulation alphabet size.
The use of precoding for optical OFDM systems has been proposed in [7], [8]. These schemes reduce the PAPR by decreasing the side lobes of the data symbols' autocorrelation function, either through multiplication with an appropriate sequence or by a discrete cosine transform (DCT). Both precoding methods have comparable performance, but the DCT is much less complex, especially when the number of subcarriers is large. Precoding is a useful PAPR reduction technique at small constellation sizes. For QPSK modulation, it achieves the lowest PAPR while exhibiting the lowest complexity of all methods (cf. Fig. 1). However, when higher-order modulation such as 16-QAM is used to increase the spectral efficiency, the effect of these schemes is limited.

2.2 Selected Mapping and Trellis Shaping
The idea of selected mapping (SLM) is to generate at the transmitter a set of candidate data blocks, all representing the same data, and to select the block with the lowest PAPR [5]. In practice, this is achieved by predefining a number NSLM of random phase sequences of length N. The data symbol vector [X0, X1, …, XN−1] is then element-wise multiplied with each phase vector to obtain the set of candidate data blocks. An IFFT operation is required for each candidate block, so that SLM has a relatively large complexity. To let the receiver know which phase vector was used for encoding, log2(NSLM) bits of side information need to be transmitted along with the payload data. Hence, SLM introduces redundancy and reduces the net rate to Rn = N log2 M − log2 NSLM bits per OFDM symbol. The CCDF shown in Fig. 1 was obtained for N = 256, NSLM = 16 and QPSK (i.e. M = 4). In this example, two subcarriers have to be reserved for the side information, so the net rate reduces to 512 − 4 = 508 bits per OFDM symbol.
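The SLM selection step described above can be sketched directly: one candidate per phase sequence, one IFFT per candidate, keep the best. The phase-sequence design below (random QPSK phases, with sequence 0 set to all-ones so the unmodified symbol is also a candidate) is one common convention assumed for illustration; the paper does not specify its sequences here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_SLM = 256, 16

# Predefined phase sequences, known to both transmitter and receiver.
phases = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=(N_SLM, N)))
phases[0, :] = 1.0  # keep the original symbol as candidate 0

def papr_db(freq_symbols, oversample=4):
    """PAPR (dB) of the 4x-oversampled time-domain OFDM symbol."""
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]
    padded[-(n - n // 2):] = freq_symbols[n // 2:]
    p = np.abs(np.fft.ifft(padded)) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_select(data):
    """One IFFT per candidate; return the chosen sequence index
    (the log2(N_SLM) side-information bits) and the resulting PAPR."""
    paprs = [papr_db(data * ph) for ph in phases]
    best = int(np.argmin(paprs))
    return best, paprs[best]

qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
best, papr_after = slm_select(qpsk)
```

Because candidate 0 is the unmodified symbol, the selected PAPR can never exceed the original one, which makes the cost of SLM purely the extra IFFTs and the side-information bits.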
Consequently, to ensure a constant net data rate in bit/s, the symbol rate needs to be increased by 1 − 254/256 ≈ 0.8%. In practice, the transmitted side information requires protection by powerful FEC codes. Therefore, SLM schemes in which no explicit side information is required have been proposed [11].
Trellis shaping (TS) is a coding method that is useful for various signal shaping purposes; its use for PAPR reduction in an optical OFDM context has been proposed in [10]. The required encoder and decoder are depicted in Fig. 2. The input bit sequence for one OFDM symbol is split into vectors s and b. The vector b consists of N(log2 M − 1) bits, and the i-th group of log2 M − 1 bits is used as the least significant bits (LSB) of the M-QAM constellation point in the i-th carrier. The remaining N most significant bits (MSB) will be used for shaping. The MSB of ns consecutive constellation points form one shaping symbol, hence there are N/ns shaping symbols. The input vector s consists of N(ns − 1)/ns bits, which are encoded to a vector z by using an (ns − 1) × ns inverse syndrome former matrix (H⁻¹)ᵀ of the convolutional shaping code Cs, i.e. z = s(H⁻¹)ᵀ. Consequently, the vector z consists of N bits, which can be used to select the MSB of the QAM constellation points. The original sequence s can be restored from z by using a syndrome former matrix Hᵀ of the code Cs according to s = zHᵀ. The shaping code Cs has rate R = 1/ns and is defined by its generator matrix G. The PAPR reduction capability is largely independent of the shaping code that is used. An arbitrary codeword c in Cs can now be added to z while leaving the restored sequence unchanged, s = (z ⊕ c)Hᵀ = zHᵀ ⊕ cHᵀ = zHᵀ, due to cHᵀ = 0. This property implies that no explicit side information is required at the receiver.

Figure 2. Encoder (left) and decoder (right) for trellis shaping.
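The key identity s = (z ⊕ c)Hᵀ = zHᵀ can be checked with a toy example. The sketch below substitutes a trivial rate-1/2 block repetition code for the real convolutional shaping code; the matrices G, Hᵀ and (H⁻¹)ᵀ are hypothetical, chosen purely to make the GF(2) algebra concrete.

```python
import numpy as np

# Toy shaping code Cs = {00, 11}: a rate-1/2 repetition code standing in
# for a convolutional code (illustrative assumption, not the paper's code).
G = np.array([[1, 1]])        # generator matrix of Cs
Ht = np.array([[1], [1]])     # syndrome former H^T; note G @ H^T = 0 mod 2
Hinv_t = np.array([[1, 0]])   # inverse syndrome former (H^-1)^T

def gf2(m):
    """Reduce a matrix/vector modulo 2."""
    return m % 2

assert gf2(G @ Ht)[0, 0] == 0          # codewords lie in the null space of H^T
assert gf2(Hinv_t @ Ht)[0, 0] == 1     # (H^-1)^T H^T = I, so s is recoverable

for s0 in (0, 1):                       # every possible input bit
    s = np.array([s0])
    z = gf2(s @ Hinv_t)                 # encode: z = s (H^-1)^T
    for c in (np.array([0, 0]), np.array([1, 1])):  # every codeword in Cs
        z_shaped = gf2(z + c)           # transmitter may add any codeword
        s_rec = gf2(z_shaped @ Ht)      # receiver: s = (z + c) H^T
        assert s_rec[0] == s0           # c H^T = 0, so s is always restored
```

The loop exercises every input and every codeword choice, confirming that the shaping freedom (the choice of c) is invisible to the receiver.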
The major task that remains to be solved is how to identify the codeword c that results in the lowest PAPR when added to z. One option is to evaluate the resulting PAPR for all possible codewords in Cs. However, this is a very time-consuming approach. Instead, a Viterbi-algorithm-based trellis search is used in conjunction with a frequency-domain metric which minimizes the subcarriers' autocorrelation sidelobes [10]. For large N, the computational complexity of calculating the full metric can become prohibitively large; a sub-optimal metric can be used which minimizes the sidelobes only within a given window [10]. As seen in Section 3.1, this even leads to lower PAPRs after a certain link distance. The general trellis shaping scheme as depicted in Fig. 2 can be universally applied for multiple purposes by using different metrics. An alternative metric for PAPR reduction was reported in [13]. More importantly, any metric that is directly related to the physical frequency-domain impairments (Kerr-induced FWM subcarrier crosstalk) could be readily applied.
Due to the redundancy included in each OFDM symbol, TS reduces the net rate to Rn = N(log2 M − 1/ns) bits per OFDM symbol. The reduction of the net rate becomes smaller for increasing constellation size and increasing size ns of the shaping symbol. The CCDF shown in Fig. 1 was obtained using ns = 8, corresponding to net rates of Rn = 480 bits per symbol for QPSK and Rn = 992 bits per symbol for 16-QAM, respectively.

3. SIMULATION RESULTS
3.1 PAPR evolution along the link
As shown in Fig. 3 (left), any PAPR reduction is partly undone along propagation as the chromatic dispersion decorrelates the subcarriers' phases. However, the PAPR remains well below that of an unshaped signal for the entire link irrespective of the data rate, as long as the cyclic prefix (CP) length exceeds the channel memory. For the 56.3 Gb/s signal (100G PolMux incl. overhead), this is the case only for approximately the first 800 km in the simulated configuration (20% CP). The average PAPR at the transmitter is only partly meaningful; the DCT-precoded signal starts off with the lowest PAPR, but it increases rapidly. In Fig. 3 (right), different window sizes of the TS metric are compared for QPSK modulation and a 10.7 Gb/s data rate. It appears that smaller window sizes (corresponding to "local" minimization of autocorrelation side lobes) yield higher PAPRs at the transmitter, but a steadier (and eventually better) PAPR performance along the link.

Figure 3. Average PAPR over link distance for different data rates and PAPR reduction schemes (left) and different window sizes of the metric used for trellis shaping (right).

3.2 Performance comparison
For a fair comparison, the various schemes should be compared using the Q-factor or BER at their respective optimum transmit powers for a link length of interest. In our simulation setup, we used N = 256, identical 80-km spans of SSMF (D = 16 ps/nm/km, S = 0.057 ps/nm²/km, α = 0.2 dB/km, γ = 1.3 /W/km, no PMD), an ideal MZM, no DCF, and an EDFA noise figure of 6 dB. The net rate is 10.7 Gb/s; the overhead of SLM and TS was allowed for by an increased total data rate. Fig. 4 shows the results for link lengths of 400 km (left) and 1600 km (right). The depicted effective Q-factor Qeff for 16-QAM was calculated from the BER, which in turn was estimated analytically from the SNR. It can be seen that TS (ns = 8, window size N/8) and SLM (NSLM = 16) can improve the maximum Q by > 1 dB, whereas clipping the signal (at the optimum clipping level) only improves the signal quality at suboptimal power levels. Because of the non-Gaussian symbol distribution (cf. Fig. 1), calculating a Q-factor for ACE is not sensible. For 16-QAM, a Qeff,ACE can be obtained from the inner (unmoved) constellation points. In our simulations, ACE brought no improvements according to this Qeff,ACE.
However, as Qeff,ACE cannot be directly related to the BER, ACE may still bring some gain. To judge this fairly, direct evaluation of the BER is required.

Figure 4. Q-factor over input power for 400 km (left) and 1600 km (right) link length and QPSK (black) and 16-QAM (green) subcarrier modulation.

4. CONCLUSIONS
We have introduced and characterized several PAPR reduction schemes proposed for coherent optical OFDM systems. These schemes differ significantly in terms of computational complexity, redundancy and reduction capability. All schemes yield the best performance at high signal power levels. At optimum levels, SLM and trellis shaping can improve the signal quality by decreasing the nonlinear penalty. The schemes differ considerably with respect to the PAPR evolution along the link. Hence, a good PAPR reduction scheme should guarantee low PAPR values for the entire link distance of interest.

REFERENCES
[1] S. L. Jansen et al.: 121.9-Gb/s PDM-OFDM transmission with 2-b/s/Hz spectral efficiency over 1000 km of SSMF, J. Lightwave Technol., vol. 27, pp. 177-188, Jan. 2008.
[2] Y. Ma et al.: 1-Tb/s per channel coherent optical OFDM transmission with subwavelength bandwidth access, in Proc. OFC 2009, San Diego, USA, March 2009, postdeadline paper PDPC1.
[3] R. Dischler, F. Buchali: Transmission of 1.2 Tb/s continuous waveband PDM-OFDM-FDM signal with spectral efficiency of 3. bits/s/Hz over 400 km of SSMF, in Proc. OFC 2009, San Diego, USA, March 2009, postdeadline paper PDPC2.
[4] J. Armstrong: OFDM for optical communications, J. Lightwave Technol., vol. 27, pp. 189-204, Feb. 2008.
[5] B. Goebel et al.: On the effect of FWM in coherent optical OFDM systems, in Proc. OFC 2008, San Diego, USA, Feb. 2008, paper JWA58.
[6] S. H. Han, J. H. Lee: An overview of peak-to-average power ratio reduction techniques for multicarrier transmission, IEEE Wireless Comm., vol. 12, pp. 56-65, April 2005.
[7] O. Bulakci et al.:
Precoding based peak-to-average power ratio reduction for optical OFDM demonstrated on compatible single-sideband modulation with direct detection, in Proc. OFC 2008, San Diego, USA, Feb. 2008, paper JThA56.
[8] O. Bulakci et al.: Reduced complexity precoding based peak-to-average power ratio reduction applied to optical direct-detection OFDM, in Proc. ECOC 2008, Brussels, Belgium, Sep. 2008, paper P.4.11.
[9] B. Krongold et al.: Fiber nonlinearity mitigation by PAPR reduction in coherent optical OFDM systems via active constellation extension, in Proc. ECOC 2008, Brussels, Belgium, Sep. 2008, paper P.4.13.
[10] S. Hellerbrand et al.: Trellis shaping for reduction of the peak-to-average power ratio in coherent optical OFDM systems, in Proc. OFC 2009, San Diego, USA, March 2009, paper JThA48.
[11] S. Y. Le Goff et al.: A novel selected mapping technique for PAPR reduction in OFDM systems, IEEE Trans. Comm., vol. 56, pp. 1775-1779, Nov. 2008.
[12] T. T. Nguyen, L. Lampe: On trellis shaping for PAR reduction in OFDM systems, IEEE Trans. Comm., vol. 55, pp. 1678-1682, Sep. 2007.
Florian Kurpicz

News Archive

• 2024/10: This winter term 2024/25 I have a lectureship (German: Lehrauftrag) for the course Text Indexing (in German) and will be responsible for the Stringology part of the lecture Algorithms 2 (in German)
• 2024/10: I will be on the PC of SEA 2025
• 2024/09: I will be on the Artifact Evaluation Committee of ALENEX 2025
• 2024/06: Our papers at ESA (Scalable Distributed Memory String Sorting) and SC (KaMPIng: Flexible and (Near) Zero-overhead C++ Bindings for MPI) have been accepted for publication
• 2024/06: We got two brief announcements accepted at this year's SPAA (ACM Symposium on Parallelism in Algorithms and Architecture): the first one introduces our new MPI wrapper KaMPIng and the second one presents new Scalable Distributed String Sorting algorithms using our KaMPIng wrapper
• 2024/04: This summer term 2024 I have a lectureship (German: Lehrauftrag) for the course Advanced Data Structures (in German)
• 2024/04: Finally, our MPI wrapper KaMPIng (Karlsruhe MPI next generation) has been revealed to the public.
Check out our preprint and, obviously, KaMPIng, which will speed up your MPI development
• 2024/04: I presented our paper on faster wavelet tree queries at DCC 2024 (slides, code)
• 2024/03: I will be on the PC of ESA 2024
• 2023/12: Our paper on faster queries on wavelet trees has been accepted at DCC 2024 (code)
• 2023/10: This winter term 2023/24 I have a lectureship (German: Lehrauftrag) for the course Text Indexing (in German) and will be responsible for the Stringology part of the lecture Algorithms 2 (in German)
• 2023/09: I presented our paper on block tree construction at ESA 2023 (slides, code)
• 2023/07: Our two papers on minimal perfect hash functions on the GPU (code) and block tree construction (code) have been accepted for presentation at ESA 2023
• 2023/06: I will be on the PC of ALENEX 2024
• 2023/04: This summer term 2023 I have a lectureship (German: Lehrauftrag) for the course Advanced Data Structures (in German)
• 2023/01: Finally, our open access book chapter, a survey of scalable text index construction, is available online
• 2023/01: I presented our paper on PaCHash, a packed and compressed hash table, at ALENEX 2023 (slides, code)
• 2023/01: Our paper on bit-parallel wavelet tree construction using vectorized instructions has been accepted at DCC 2023 (code)
• 2022/11: I presented our paper on rank and select data structures on bit vectors at SPIRE 2022 (slides, code)
• 2022/10: This winter term 2022/23 I have a lectureship (German: Lehrauftrag) for the course Text Indexing (in German)
• 2022/10: Our paper on PaCHash (packed and compressed hash tables) has been accepted at ALENEX 2023 (code)
• 2022/09: I gave a talk on our preprint about packed and compressed hash tables at the 1.
ACDA Workshop (slides, code)
• 2022/08: Our paper on bit vector rank and select data structures using SIMD has been accepted at SPIRE 2022 (code)
• 2022/06: I presented results on massive text indices obtained in the SPP Algorithms for Big Data at the final meeting of this project (slides)
• 2022/04: This summer term 2022 I have a lectureship (German: Lehrauftrag) for the course Advanced Data Structures (in German)
• 2022/04: I presented our group here at the Karlsruhe Institute of Technology to new Master students (slides)
• 2021/10: This winter term 2021/22 I am independently teaching the lecture Text Indexing and the Stringology part of the lecture Algorithms 2 (in German)
• 2021/07: Our article on practical wavelet tree construction has been published in the ACM Journal of Experimental Algorithmics (code)
• 2020/11: This winter term 2020/21 I am independently teaching the exercise for Text Indexing, the proseminar on Parallel Algorithms, and the Presentation Skills Course (in German)
• 2020/08: Our video presentation for our paper on practical longest common extension data structures accepted at ESA 2020 is online (code)
• 2020/07: Creating Teaching Videos (videos in German) more
• 2020/06: Our paper on practical longest common extension data structures has been accepted at ESA 2020 (code)
• 2020/04: Our paper on space efficient Lyndon array construction has been accepted at ICALP 2020 (code)
• 2020/02: I gave a talk on algorithm engineering bit vectors for pupils at the 2020 BwInf Workshop @ TU Dortmund (slides)
• 2020/01: Our paper on distributed memory wavelet tree construction has been accepted at ALENEX 2020 (code)
• 2019/10: Presenting our paper on benchmarking suffix array construction at SPIRE 2019 (slides, code)
• 2019/10: Our paper on external memory wavelet tree construction has been accepted at SPIRE 2019 (code)
• 2019/09: Tutorial on Working with the LiDO3 Cluster more
• 2019/01: Presenting our paper on distributed suffix array construction at
ALENEX 2019 (slides, code)
• 2018/10: Our paper on scalable text index construction using Thrill has been accepted at IEEE BigData 2018 (code)
• 2018/04: Presenting results on parallel wavelet tree construction at the 75. Workshop on Algorithms and Complexity (slides, paper, code)
• 2018/01: Presenting our paper on parallel wavelet tree construction at ALENEX 2018 (slides, code)
• 2017/11: Presenting results on parallel wavelet tree construction at the Workshop on Memory-Efficient Algorithms and their Application in Marine and Life Science (slides, paper, code)
• 2017/08: Presenting our paper on suffix sorting and LCP array construction using DivSufSort at PSC 2017 (slides, code)
• 2017/08: Our paper on the maximum common subgraph problem (IWOCA 2014) has been invited to a special issue of the European Journal of Combinatorics
• 2017/02: I am organizing this year's BwInf workshop at TU Dortmund University more
• 2017/01: Presenting our paper on distributed full-text indices at ALENEX 2017 (slides, code)
• 2016/06: Presenting our paper on parallel pattern matching at CPM 2016 (slides)
• 2016/02: I am organizing this year's BwInf workshop at TU Dortmund University more
• 2016/02: Presenting results on parallel pattern matching at the 71. Workshop on Algorithms and Complexity (slides, paper)
• 2015/04: I am organizing this year's BwInf workshop at TU Dortmund University more
• 2014/10: Presenting our paper on the maximum common subgraph problem at IWOCA 2014 (slides)
A linear bound for sliding-block decoder window size, II
IEEE Trans. Inf. Theory

An input-constrained channel is the set S of finite sequences of symbols generated by the walks on a labeled finite directed graph G (which is said to present S). We introduce a new construction of finite-state encoders for input-constrained channels. The construction is a hybrid of the state-splitting technique of Adler, Coppersmith, and Hassner and the stethering technique of Ashley, Marcus, and Roth. When S has finite memory, and p and q are integers where p/q is at most the capacity of S, the construction guarantees an encoder at rate p : q and having a sliding-block decoder (literally at rate q : p) with look-ahead that is linear in the number of states of the smallest graph G presenting S. This contrasts with previous constructions. The straight Adler, Coppersmith, and Hassner construction provides an encoder having a sliding-block decoder at rate q : p, but the best proven upper bound on the decoder look-ahead is exponential in the number of states of G. A previous construction of Ashley provides an encoder having a sliding-block decoder whose look-ahead has been proven to be linear in the number of states of G, but the decoding is at rate tq : tp, where t is linear in the number of states of G. ©1996 IEEE.
Sketching Solutions of Homogeneous Linear System II | JustToThePoint

"Quantity has its own quality," Joseph Stalin. "Now I am become Death, the destroyer of worlds," Robert Oppenheimer.

Differential equations
An algebraic equation is a mathematical statement that asserts the equality of two algebraic expressions. These expressions are constructed using:
1. Dependent and independent variables. Variables represent unknown quantities. The independent variable is chosen freely, while the dependent variable changes in response to the independent variable.
2. Constants. Fixed numerical values that do not change.
3. Algebraic operations. Operations such as addition, subtraction, multiplication, division, exponentiation, and root extraction.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x +5y$, $y' + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y+y$, etc. It involves (e.g., $\frac{dy}{dx} = 3x +5y$):
• Dependent variables: variables that depend on one or more other variables (y).
• Independent variables: variables upon which the dependent variables depend (x).
• Derivatives: rates at which the dependent variables change with respect to the independent variables, $\frac{dy}{dx}$.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ODEs. It states that if:
• the function f(x, y) (the right-hand side of the ODE y' = f(x, y)) is continuous in a neighborhood around a point $(x_0, y_0)$, and
• its partial derivative with respect to y, $\frac{∂f}{∂y}$, is also continuous near $(x_0, y_0)$,
then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point $(x_0, y_0)$.
A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x), where y′ is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y' + b(x)y = 0. The equation can also be written in the standard linear form as: y' + p(x)y = q(x), where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$

A second-order linear homogeneous differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0, where:
• y is the dependent variable (a function of the independent variable t),
• y′ and y′′ are the first and second derivatives of y with respect to t,
• t is the independent variable,
• A and B are constants.
This equation is homogeneous, meaning that there is no external forcing term (like a function of t) on the right-hand side.

Sketching Solutions of a Homogeneous Linear System with Complex Eigenvalues
Consider the system of differential equations: $\begin{cases} x' = -x -y \\ y' = 2x -3y \end{cases}$
Our goal is to solve this system, find the general solution, and sketch the trajectories of the solutions to understand the system's behavior over time.

Step 1: Representing the System in Matrix Form: $\vec{x}' = A\vec{x}$, where $\vec{x} = (\begin{smallmatrix}x\\ y\end{smallmatrix})$ and A = $(\begin{smallmatrix}-1 & -1\\ 2 & -3\end{smallmatrix})$

Step 2: Finding the Eigenvalues of Matrix A
To solve this system, we first find the eigenvalues and eigenvectors of the matrix A. The characteristic equation is derived from the determinant of A − λI, where I is the identity matrix and λ represents the eigenvalues.
To find the eigenvalues λ of matrix A, we solve the characteristic equation:
|A - λI| = $\vert\begin{smallmatrix}-1-λ & -1\\ 2 & -3-λ\end{smallmatrix}\vert = (−1−λ)(−3−λ) + 2 = (λ+1)(λ+3) + 2 = λ^2 + 4λ + 3 + 2 = λ^2 + 4λ + 5 = 0$

Solving the Characteristic Equation: $λ^2 + 4λ + 5 = 0 ⇒[\text{Using the quadratic formula}] λ = \frac{-4±\sqrt{16-20}}{2} = \frac{-4±\sqrt{-4}}{2} = -2±i$

Step 3: Finding the Eigenvectors
We need to solve: (A − λI)v = 0
• λ[1] = -2 + i. The matrix A − λI is: $(\begin{smallmatrix}−1−(−2+i) & -1\\ 2 & −3−(−2+i)\end{smallmatrix}) = (\begin{smallmatrix}1-i & -1\\ 2 & -1-i\end{smallmatrix})$
For λ = -2 + i, we solve $(\begin{smallmatrix}1-i & -1\\ 2 & -1-i\end{smallmatrix})(\begin{smallmatrix}a_1\\a_2\end{smallmatrix}) = 0$. From the first equation, (1-i)a[1] − a[2] = 0, so a[2] = (1-i)a[1]. Substituting a[2] into the second equation gives $2a_1 + (-1-i)a_2 = 2a_1 + (-1-i)(1-i)a_1 = 2a_1 + (-1+i-i+i^2)a_1 = 2a_1 - 2a_1 = 0$, which is automatically satisfied, so a[1] is a free parameter. Choosing a[1] = 1 gives a[2] = 1 - i.
Thus, an eigenvector corresponding to λ[1] = −2+i is: $\vec{α_1} = (\begin{smallmatrix}a_1\\a_2\end{smallmatrix}) = (\begin{smallmatrix}1\\1-i\end{smallmatrix})$

Step 4: Constructing the General Solution
Writing the eigenvector as $(\begin{smallmatrix}1\\1-i\end{smallmatrix}) = (\begin{smallmatrix}1\\1\end{smallmatrix}) + i(\begin{smallmatrix}0\\ -1 \end{smallmatrix})$, the corresponding complex solution is
$\vec{x} = e^{(-2+i)t}[(\begin{smallmatrix}1\\1\end{smallmatrix}) + i(\begin{smallmatrix}0\\ -1 \end{smallmatrix})] = e^{-2t}[(\begin{smallmatrix}1 \\1\end{smallmatrix}) + i(\begin{smallmatrix}0\\ -1 \end{smallmatrix})](\cos(t) + i\sin(t))$
so that we get, respectively, for the real and imaginary parts of $\vec{x}$:
$\vec{x}_1 = e^{-2t}[(\begin{smallmatrix}1\\1\end{smallmatrix})\cos(t) - (\begin{smallmatrix}0\\ -1\end{smallmatrix})\sin(t)]$
$\vec{x}_2 = e^{-2t}[(\begin{smallmatrix}1\\1\end{smallmatrix})\sin(t) + (\begin{smallmatrix}0\\ -1\end{smallmatrix})\cos(t)]$
The general solution is
$\vec{x}(t) = c_1 e^{-2t}[(\begin{smallmatrix}1\\1\end{smallmatrix})\cos(t) - (\begin{smallmatrix}0\\ -1 \end{smallmatrix})\sin(t)] + c_2 e^{-2t}[(\begin{smallmatrix}1\\1\end{smallmatrix})\sin(t) + (\begin{smallmatrix}0\\ -1 \end{smallmatrix})\cos(t)]$

What do these trajectories look like? Each bracketed term is bounded and periodic, while the exponential decay $e^{-2t}$ causes the amplitude of the trajectories to decrease over time, resulting in a spiral inward toward the origin. The system describes a spiral sink: as time progresses, the solutions are spiraling trajectories that move inward toward the origin.
How do you know that it goes around counterclockwise and not clockwise? By calculating the velocity vector at (1, 0) from the system's velocity field (our original system equations). Given x = 1, y = 0: x' = −x − y = −1 − 0 = −1 (motion to the left) and y′ = 2x − 3y = 2(1) − 0 = 2 (motion upwards), so the velocity there is (−1, 2). At a point on the positive x-axis, this indicates a counterclockwise rotation around the origin (Refer to Figure v for a visual representation and aid in understanding it).

• Sketch the following homogeneous linear system: A = $(\begin{smallmatrix}-2 & 3\\ -3 & -2\end{smallmatrix}), λ = -2 ± 3i$
The real part of the eigenvalues is Re(λ) = −2 < 0, which is negative. Since the real part of the eigenvalues is negative, the trajectories will spiral inward towards the origin. The presence of the imaginary part ±3i indicates that the trajectories will be spirals (oscillatory, bounded behaviour). To determine the direction of the spiral, you can compute the action of the matrix on a standard basis vector, such as $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$:
$(\begin{smallmatrix}-2 & 3\\ -3 & -2\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix} -2 \\ -3 \end{smallmatrix})$. This vector points downwards and to the left, indicating a clockwise rotation if you follow the arrow.
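These hand computations (eigenvalues and rotation direction) can be double-checked numerically. A minimal sketch with NumPy, using the matrix of the worked example:

```python
import numpy as np

# System from the worked example: x' = -x - y, y' = 2x - 3y.
A = np.array([[-1.0, -1.0],
              [2.0, -3.0]])

# Eigenvalues should be -2 +/- i: a spiral sink (negative real part,
# nonzero imaginary part).
eigvals = np.sort_complex(np.linalg.eigvals(A))
print(eigvals)

# Rotation direction: the velocity of the field at the point (1, 0).
# It points up and to the left on the positive x-axis, i.e. the
# trajectories rotate counterclockwise.
v = A @ np.array([1.0, 0.0])
print(v)
```

The same two checks (sign of the real part, direction of A applied to a basis vector) work for every example in this post.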
This homogeneous linear system is stable because the trajectories spiral inward towards the origin. The negative real part of the eigenvalues (Re(λ) = −2) causes the trajectories to move towards the origin, and the presence of the imaginary part (±3i) indicates that the trajectories are spirals. The system exhibits a stable spiral in a clockwise direction (Refer to Figure i for a visual representation and aid in understanding it).

• Sketch the following homogeneous linear system: A = $(\begin{smallmatrix}2 & 3\\ -3 & 2\end{smallmatrix}), λ = 2 ± 3i$
The real part of the eigenvalues is Re(λ) = 2 > 0, which is positive. Since the real part of the eigenvalues is positive, the trajectories will spiral outward, away from the origin. The presence of the imaginary part ±3i (oscillatory behaviour) indicates that the trajectories will be spirals. To determine the direction of the spiral, you can compute the action of the matrix on a standard basis vector, such as $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$:
$(\begin{smallmatrix}2 & 3\\ -3 & 2\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix} 2 \\ -3 \end{smallmatrix})$. This vector points downwards and to the right, indicating a clockwise rotation if you follow the arrow. This homogeneous linear system is unstable because the trajectories spiral outwards, away from the origin. The positive real part of the eigenvalues (Re(λ) = 2) causes the trajectories to move away from the origin, and the presence of the imaginary part (±3i) indicates that the trajectories are spirals.
The system exhibits an unstable spiral in a clockwise direction (refer to Figure ii for a visual representation and aid in understanding it). • Sketch the following homogeneous linear system A = $(\begin{smallmatrix}0 & 1\\ -5 & 0\end{smallmatrix})$ Step 1: Finding the Eigenvalues. We start by calculating the eigenvalues λ of A by solving the characteristic equation: det(A − λI) = 0 ↭ $det(\begin{smallmatrix}-λ & 1\\ -5 & -λ\end{smallmatrix}) = (−λ)(−λ)−(1)(−5) = λ^2+5 = 0$. Solve for λ: $λ^2 = -5 ↭ λ = ±\sqrt{5}i$, so the eigenvalues are purely imaginary. Purely imaginary eigenvalues indicate oscillatory behavior in the system. Since the eigenvalues have no real component, there will be no exponential growth or decay, meaning the trajectories will be closed orbits around the origin. This type of system is known as a center, and the trajectories are expected to be circles or ellipses centered at the origin. Step 2: Finding the Eigenvector for $λ = +\sqrt{5}i$. To find the eigenvector corresponding to $λ = +\sqrt{5}i$, we solve (A−λI)v = 0: $(\begin{smallmatrix}-\sqrt{5}i & 1\\ -5 & -\sqrt{5}i\end{smallmatrix})(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) = (\begin{smallmatrix}0\\ 0\end{smallmatrix})$ This gives the system of equations: $\begin{cases} -\sqrt{5}ia_1 + a_2 = 0 \\ -5a_1 -\sqrt{5}ia_2 = 0 \end{cases}$ From the first equation, we get $a_2 = \sqrt{5}ia_1$. Let $a_1 = 1$; then $a_2 = \sqrt{5}i$.
So, an eigenvector corresponding to $λ = \sqrt{5}i$ is $v = (\begin{smallmatrix}1\\ \sqrt{5}i\end{smallmatrix})$. Step 3: Constructing the Complex Solution. The complex solution is: $\vec{x}(t) = Ce^{λt}v = C(\begin{smallmatrix} 1 \\ \sqrt{5}i\end{smallmatrix})e^{\sqrt{5}it} =[\text{Using Euler’s formula}] C(\begin{smallmatrix} 1 \\ \sqrt{5}i\end{smallmatrix})(cos(\sqrt{5}t) + i\,sin(\sqrt{5}t)) = C[(\begin{smallmatrix} cos(\sqrt{5}t) \\ -\sqrt{5}sin(\sqrt{5}t)\end{smallmatrix}) + i(\begin{smallmatrix} sin(\sqrt{5}t) \\ \sqrt{5}cos(\sqrt{5}t)\end{smallmatrix})]$ Step 4: Separating Real and Imaginary Parts to Form Real Solutions. The real and imaginary parts of $\vec{x}(t)$ give us two linearly independent solutions: $\vec{x_1}(t) = (\begin{smallmatrix} cos(\sqrt{5}t) \\ -\sqrt{5}sin(\sqrt{5}t)\end{smallmatrix}), \vec{x_2}(t) = (\begin{smallmatrix} sin(\sqrt{5}t) \\ \sqrt{5}cos(\sqrt{5}t)\end{smallmatrix})$ Therefore, the general solution can be written as a linear combination of these two real solutions: $\vec{x}(t) = C_1(\begin{smallmatrix} cos(\sqrt{5}t) \\ -\sqrt{5}sin(\sqrt{5}t)\end{smallmatrix}) + C_2(\begin{smallmatrix} sin(\sqrt{5}t) \\ \sqrt{5}cos(\sqrt{5}t)\end{smallmatrix})$ Step 5: Interpreting the Solution and Phase Portrait. The eigenvalues are purely imaginary, which indicates that the system has oscillatory, periodic behavior ($cos(\sqrt{5}t)$ and $sin(\sqrt{5}t)$), and there is no exponential growth or decay (λ has no real part). This results in closed trajectories that form circular or elliptical orbits around the origin. The system represents a center, meaning that trajectories will be closed curves (circles or ellipses) around the origin. They neither converge to nor diverge from the origin; instead, they form closed orbits.
(Refer to Figure iii for a visual representation and aid in understanding it.) Step 6: Determining the Direction of Rotation. To determine whether the closed trajectories are rotating clockwise or counterclockwise, we examine the effect of A on a standard basis vector, such as $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$: $(\begin{smallmatrix}0 & 1\\ -5 & 0\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix}0\\ -5\end{smallmatrix})$. This result, $(\begin{smallmatrix}0\\ -5\end{smallmatrix})$, indicates that the vector $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$ is rotated into the vector $(\begin{smallmatrix}0\\ -5\end{smallmatrix})$, suggesting a clockwise rotation around the origin. General Homogeneous Linear System In the study of differential equations, particularly in advanced calculus or Calculus III, homogeneous linear systems play a crucial role. Specifically, we focus on systems of the form $\vec{x’} = A \vec{x}$, where $\vec{x}(t)$ is a vector of unknown functions, $\vec{x} = (\begin{smallmatrix}x(t)\\ y(t)\end{smallmatrix})$, A is a constant 2x2 matrix, and $\vec{x’}(t)$ denotes the derivative of $\vec{x}(t)$ with respect to t. Our goal is to find the general solution to this system, which involves understanding the concepts of linear independence, the Wronskian, and the fundamental matrix. General Solution of the System Theorem. The general solution to the system $\vec{x'} = A\vec{x}$ can be expressed as a linear combination of two linearly independent solutions: $\vec{x}(t) = c_1\vec{x_1} + c_2\vec{x_2}$, where $c_1$ and $c_2$ are constants, and $\vec{x_1}(t)$ and $\vec{x_2}(t)$ are linearly independent solutions. 1. These solutions $\vec{x_1}(t)$ and $\vec{x_2}(t)$ are called fundamental solutions because they form a basis for the solution space of the differential equation. Any solution $\vec{x}(t)$ can be written as a linear combination of these two solutions. 2.
Two solutions are linearly independent if one is not a scalar multiple of the other. This means they span the full space of solutions. 3. Constants $c_1$ and $c_2$: these constants are determined by initial conditions or specific requirements of the problem. Determining Linear Independence: The Wronskian To check whether two solutions $\vec{x_1}$ and $\vec{x_2}$ are linearly independent, we use the Wronskian. Definition. The Wronskian for two vector-valued functions is defined as: $W(\vec{x_1}(t), \vec{x_2}(t)) := |\vec{x_1} \vec{x_2}| = det(\begin{smallmatrix}x_1(t) & x_2(t)\\ y_1(t) & y_2(t)\end{smallmatrix})$ where $\vec{x_1}(t) = (\begin{smallmatrix}x_1(t)\\y_1(t)\end{smallmatrix})$ and $\vec{x_2}(t) = (\begin{smallmatrix}x_2(t)\\y_2(t)\end{smallmatrix})$. The Wronskian gives us valuable information about the relationship between the two solutions: if W ≠ 0 at some t, the solutions are linearly independent, and for solutions of $\vec{x'} = A\vec{x}$ the Wronskian is either identically zero or never zero. Fundamental Matrix Solution A powerful tool in solving linear systems is the fundamental matrix. A fundamental matrix X(t) for the system $\vec{x'} = A\vec{x}$ is a 2 x 2 matrix whose columns are linearly independent solutions of the system, $X := [\vec{x_1} \vec{x_2}] = (\begin{smallmatrix}x_1(t) & x_2(t)\\ y_1(t) & y_2(t)\end{smallmatrix})$. It provides a compact representation of the full solution space. Each column of X(t) is one of the independent solutions, so the matrix X(t) contains all the information about the general solution to the system. 1. The determinant of X(t), which is the Wronskian of the two solutions, is never zero. Since $\vec{x_1}(t)$ and $\vec{x_2}(t)$ are linearly independent, det(X(t)) ≠ 0 for all t. 2. The matrix X(t) satisfies the original differential equation: X' = AX.
This property follows from the fact that each column of X(t) is a solution to the differential equation: X’ = AX ↭ $[\vec{x_1’} \vec{x_2’}] = A[\vec{x_1} \vec{x_2}] =[\text{Simple matrix multiplication}] [A\vec{x_1} A\vec{x_2}] ↭ \vec{x_1’} = A\vec{x_1}, \vec{x_2’} = A\vec{x_2}$ This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].
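The center example above, with A having rows (0, 1) and (−5, 0), can be checked numerically. The following sketch is an illustration in plain Python (not part of the original notes): it recovers the eigenvalues from the characteristic polynomial, tests the rotation direction at (1, 0), and confirms that the Wronskian of the real solution pair obtained from $e^{\sqrt{5}it}(\begin{smallmatrix}1\\ \sqrt{5}i\end{smallmatrix})$ is the nonzero constant $\sqrt{5}$.

```python
import math

# A = [[0, 1], [-5, 0]]; characteristic polynomial: lam^2 - tr*lam + det = 0
tr, det = 0 + 0, 0 * 0 - 1 * (-5)      # trace = 0, determinant = 5
disc = tr * tr - 4 * det               # discriminant = -20 < 0 -> complex pair
re_part = tr / 2                       # real part of the eigenvalues
im_part = math.sqrt(-disc) / 2         # imaginary part = sqrt(5)
print(re_part, im_part)                # 0.0 2.236... -> purely imaginary: a center

# Rotation direction: apply A to the basis vector (1, 0)
v = (0 * 1 + 1 * 0, -5 * 1 + 0 * 0)    # A @ (1, 0) = (0, -5)
print(v)                               # points straight down -> clockwise

# Real solutions from e^{sqrt(5) i t} (1, sqrt(5) i):
#   x1(t) = ( cos(sqrt5 t), -sqrt5 sin(sqrt5 t) )   (real part)
#   x2(t) = ( sin(sqrt5 t),  sqrt5 cos(sqrt5 t) )   (imaginary part)
s5 = math.sqrt(5)

def wronskian(t):
    x1 = (math.cos(s5 * t), -s5 * math.sin(s5 * t))
    x2 = (math.sin(s5 * t), s5 * math.cos(s5 * t))
    return x1[0] * x2[1] - x2[0] * x1[1]

# The Wronskian equals sqrt(5)(cos^2 + sin^2) = sqrt(5) for every t, never zero,
# so x1 and x2 are linearly independent fundamental solutions.
print(wronskian(0.0), wronskian(1.3))  # both approximately 2.2360679...
```

Since the Wronskian is a nonzero constant, the two columns form a valid fundamental matrix X(t) for every t.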
How to find the limit of a piecewise function with piecewise complex fractions? | Hire Someone To Do Calculus Exam For Me How to find the limit of a piecewise function with piecewise complex fractions? This question is maybe a bit overwhelming that I can do anything I want for this question. I am going to attempt the following example: $$\Lambda(x,y) = \frac{x^2 + y^2}{2} = \frac{x^2 + 2x – \frac{y^2}{2}}{2}$$ where I have written this so that it can be easily seen that it’s a polynomial of degree $\left(\frac{6}{25}\right)$ with the logarithm. Actually, the property of this polynomial being of order $\left(\frac{1}{2}\right)$ will imply the value of the lowest binomial modulo 5. However, this is what makes my logic so complicated. Is there a library or calculator that could help me solve this example? Or maybe could I just use this to limit my search? A: If $$\Lambda'(x,y) = \frac{x^2 + y^2}{2} = \frac{x^2 + 2x – yx + 2y^2}{2} = 0, then \left(\frac{6}{25}\right) = \frac{1}{2} + \frac{1}{20} = \frac{1}{20} = \frac{1}{25}.$$ You just have to multiply by $10$. At this point the equation has a nicer form: $$x^2 + y^2 = 0 + \frac{1}{2}y + \frac{1}{20}y^2 + \frac{1}{25}.$$ How to find the limit of a piecewise function with piecewise complex fractions? Another way to tackle the puzzle was to use SUSY to find the limit of a piecewise function of each interest by putting a number of the terms in the middle of a formula on the right side of the equation. To get a figure, use x to project the equation to the right side of the figure, and then use the end for F to find the limit. After that, the first step is to find the limit of the piecewise function. I know I can get past the portion where a term occurs and plug into the equation to get the contour limit of the piecewise function. But how can I do it through other means? First you have to have the equation has no limits.
After that, you assume the ends of the terms on the right side of the solution aren’t on the end of the solution and you can work out the limit number via numerical integration. By replacing the half-infinite term using the half-infinite integral, you have the limit number obtained! Unfortunately it takes a few moments to apply the inverse transform to the integral, but it works! The reverse is much easier, and I’ve used it before when the solution had contained greater than the $1/n^{1/2}$ term that you initially thought was on the left on the right and returned to zero. The other way is to change the side condition on the equation and work out the limit number using the inverse transform! By doing this for every equation on the left side, from equation 3.8.6 up to equation 3.9, this will give the figure exactly to the right for approximately 1.5 hours for a piece of paper instead of 1.5 hours. Is there an additional solution that does the right thing for this problem? Of course, a solution for your problem would be very simple. Some of the common names for things like Newton’s law, ordinary differential equation, or others. How to find the limit of a piecewise function with piecewise complex fractions? In LSP, the answer to this question is yes whether I have to do a real analysis to find the limit of a piecewise function with piecewise complex fractions. So, say I have a piecewise function with piecewise complex fractions $\frac{I(s)}{s}$ and $I(0)$ with piecewise complex fractions. Then, I search “limit of a piecewise function $(s:s \rightarrow r)$ with piecewise complex fractions” with a function $f : I(I(s)) \rightarrow \{0,1\}$ $f(x) = q(x) \frac{\left(x-I(s)\right)^2}{(x-I(s))^3}$ I end up with a piecewise function with piecewise complex numerals $f_i= q(x_i)$ I can do $f_{i} = \cos(q(x_i) + \psi(x_i))$ $f_i$ is a positive real and then use this to get $I(s) = \sum_{i=0}^{\infty}f_{i}^q$ you get $I(0) = q(0)$.
My question is: are there any other methods for this? Thanks. A: One way to answer your question: I know it is difficult to actually find the limit of a piecewise function directly. What you could do is look at the fraction of a fraction or the singular values of your example. We know, as far as you’re aware, that the limit of a piecewise function is the limit of its Laurent series. Let’s make this clear: $$ \lim_{x\to 0+}\frac{q(x)}{x-1}=\lim_{x\to 0^{+}}\frac{q(x)}{\sqrt{x-1}} $$ You could also use a power series method, which has been a popular technique in discrete arithmetic since the beginning of the 1980s. Edit: Looking at the comments For example: $$u= \lim_{x\to 0}\frac{\left(\sqrt{x-1}-\frac{1}{\sqrt{x-1}}\right)}{(\sqrt{x-1}+1/x)^2} $$ $$u=\lim_{x\to 0}\frac{x-1}{\sqrt{(x-1)^2+x-1}}=\lim_{x\to \infty}\frac{x-1}{\sqrt{x^2+x+1}}=\
how to calculate water flow, volume and pressure In this blog we discuss the mathematics involved in water hydraulics to calculate flows, pressures, volumes, velocities, etc. This knowledge is key for water operators and municipal workers involved in water supply and wastewater management operations. This blog covers most of the major categories of maths calculations that are important to know for the daily operation of a water system. Let's deal with them one by one. Water pressure is measured in terms of pounds per square inch (psi) and feet of head (height of water column in feet). A 2.31 feet high water column creates a pressure of 1 psi. In this way the water pressure at the bottom of a storage tank can be used for determining the water level in the tank by using this relation of pressure units, i.e., head and psi. Centrifugal pumps used to pump the water are rated in feet of Total Dynamic Head (TDH) but the system pressures are measured in psi. Thus it is necessary for water operators to become familiar with converting pressure units from one to the other. If the pressure (psi) is known, the height of the water column (head) can be determined by multiplying the known value in psi by 2.31. psi x 2.31 = Feet of Head ----------- (a) • A pressure gauge at the bottom of a water storage tank reads 30 psi. What is the water level in the tank? Or, what is the head of the water column in feet? Formula given in (a) to convert psi to feet of head: psi x 2.31 = feet of head (put in known values) 30 psi x 2.31 = 69.3 feet of water above the gauge Hence, from the pressure reading at the bottom of a storage tank we can determine the height of the water column in feet, because the height of a water column in feet is linked with the magnitude of pressure it creates. In the example given above the pressure was known and we calculated the height of the water column, i.e., head in feet.
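The two conversions (a) and (b) are easy to script. Below is a small illustrative Python sketch; the function names are my own, added for illustration:

```python
def psi_to_feet_of_head(psi):
    """Formula (a): psi x 2.31 = feet of head."""
    return psi * 2.31

def feet_of_head_to_psi(feet):
    """Formula (b): feet of head / 2.31 = psi."""
    return feet / 2.31

# Gauge at the bottom of the tank reads 30 psi -> water level above the gauge
print(round(psi_to_feet_of_head(30), 1))   # 69.3 feet

# Reservoir level 115 feet above the pump -> discharge pressure
print(round(feet_of_head_to_psi(115), 1))  # 49.8 psi
```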
Similarly, if the height of a column of water is known, the pressure it exerts can be determined by dividing the feet of head by 2.31. Feet of Head / 2.31 = psi -------------------------- (b) • The level of a reservoir is 115 feet above the pump discharge. What is the discharge pressure on the pump? Formula given in (b) to convert feet of head to psi: 115 / 2.31 = 49.8 psi These were simple calculations of pressure (psi) and head (feet), done by putting known values of either of them into the given formulas. More advanced questions may require calculating the head (feet) or pressure (psi) before it can be converted. • If a pump is installed at 5320 feet above sea level and the overflow of the reservoir is stationed at 5460 feet above sea level, what will be the discharge pressure on the pump in psi? Neither the head (feet) nor the pressure (psi) is known. To determine the pressure (psi) we have to calculate the head (feet) first. For that, find the difference in the elevations given above: 5460 - 5320 = 140 feet of head Now that we know the value of head (feet), we can calculate the pressure (psi): 140 / 2.31 = 60.6 psi • If a discharge pressure gauge on a pump reads 72 psi when the pump is running, and the pressure gauge at the top of a hill, 40 feet above the pump, reads 45 psi, what is the friction loss in the pipe in feet of head?
We now have two pressure values, one at the pump and the other where the water is being pumped, plus the 40 feet of elevation the pump must overcome to the top of the hill, and we have to determine the friction loss in the pipe responsible for the difference in pressures. First of all we find the difference between the pressures: 72 psi - 45 psi = 27 psi Now convert it into feet of head: 27 x 2.31 = 62.37 feet of head This is the total head loss. Subtract the difference in elevation to find the friction loss: 62.37 feet - 40 feet = 22.37 feet of head Doing it the other way: convert the pressures to head 72 x 2.31 = 166.32 feet 45 x 2.31 = 103.95 feet Find the difference 166.32 - 103.95 = 62.37 feet This is the total head loss distance from pump to hill + friction loss in pipe = head loss -------(xx) We have to find the friction loss, so re-arrange (xx): friction loss = head loss - distance from pump to hill friction loss = 62.37 - 40 friction loss = 22.37 feet The amount of water moving through the system in unit time is called flow. It can be measured in one of three different units: gpm (gallons per minute), mgd (million gallons per day) and cfs or cusec (cubic feet per second). Here is how they are converted from one to the other (approximate factors): mgd = gpm / 700 , gpm = mgd x 700 cfs = gpm / 449 , gpm = cfs x 449 • A system uses 2 mgd. How many gallons per minute does it use? 1 mgd = 700 gpm, thus 2 mgd = 2 x 700 = 1400 gpm • If a pipeline has a carrying capacity of 3 cfs, how many gpm can it handle? 1 cfs = 449 gpm, thus 3 cfs = 3 x 449 = 1347 gpm • A well pumps 350 gpm. How many mgd will it pump? 1 mgd = 700 gpm, so 1 gpm = 1 / 700 mgd, thus 350 gpm = 350 / 700 mgd = 0.5 mgd For the calculations of the volume of circular tanks and velocities in circular pipes, the area of a circle is required. There are two formulae used to calculate the area of a circle: 1. Area = d^2 x 0.785 where 'd' is the diameter of the circle 2.
Area = r^2 x 3.1416 where 'r' is the radius of the circle For determining the volume of water, the area is multiplied by the height of the tank or the length of the pipe. • A sedimentation basin is 60 feet in diameter. What is the surface area of the tank? d = 60 , r = 30 Apply both formulae: 30 x 30 x 3.1416 = 2830 square feet 60 x 60 x 0.785 = 2830 square feet • A pipeline has a diameter of 12 inches. What is the area of the pipe? d = 12" , r = 6" Apply both formulae: 6 x 6 x 3.1416 = 113 sq.in 12 x 12 x 0.785 = 113 sq.in If the storage tank is rectangular instead of circular, then the area is determined by A = L x W, where 'L' is length and 'W' is width, and for calculating the volume of water in a rectangular tank, the third dimension, i.e., height (H), is multiplied in. The volume of a rectangular tank is calculated by multiplying the length, width and height of the tank. Volume of rectangular tank (cubic feet) = L x W x H • If a sedimentation basin is 60 feet long, 40 feet wide and 10' deep, what will be the volume of the tank in cubic feet? We have all three dimensions, so calculate the volume: 60' x 40' x 10' = 24,000 cft The volume of a circular tank can be calculated by multiplying the area by the height (depth) of the tank. Volume of circular tank (cubic feet) = r^2 x 3.1416 x H Volume of circular tank (cubic feet) = d^2 x 0.785 x H • If a sedimentation basin is 60' in diameter and 12' deep, what is the volume of the tank? d = 60' , r = 30' depth or height of tank = 12' Applying both formulae for area, multiplied by 'H': 30 x 30 x 3.1416 x 12 = 33,900 cubic feet 60 x 60 x 0.785 x 12 = 33,900 cubic feet The volume of a tank or pipe is calculated in cubic feet and also in gallons. If the volume is calculated in cubic feet (cft), it can be converted into gallons, and the same conversion can be made from gallons to cubic feet.
Let's discuss the conversion method between cubic feet and gallons: 1 cubic foot = 7.48 gallons; in other words, 1 cubic foot contains 7.48 gallons. • A sedimentation basin is 60 feet long, 40 feet wide and 10' deep. What is the volume of the tank in cubic feet? L = 60' , W = 40' and H (depth or height) = 10' Volume (cft) = 60 x 40 x 10 = 24000 cft Now apply the conversion from cubic feet to gallons: 1 cubic foot = 7.48 gallons 24000 cubic feet = 24000 x 7.48 gallons 24000 cubic feet = 179,500 gallons That is, a space of 24000 cubic feet contains 179,500 gallons. Thus when cubic feet are known, multiply the value in cubic feet by 7.48 to get gallons. Similarly, if gallons are known, they are divided by 7.48 to calculate cubic feet. • If a circular tank has a diameter of 40 feet and is 10 feet deep, how many gallons will it hold? diameter = 40 ft , radius = 20 ft H (depth or height) = 10 feet Calculate the volume in cubic feet: 20 x 20 x 3.1416 x 10 = 12,600 cubic feet 40 x 40 x 0.785 x 10 = 12,600 cubic feet We calculated the volume in cubic feet; convert to gallons: 12,600 x 7.48 = 94,200 gallons The number of gallons held in a one-foot section of pipe can be determined by squaring the diameter (in inches), then multiplying by 0.0408: gallons in one foot pipe section = d (inch) x d (inch) x 0.0408 For determining the gallons in a particular length of pipe, the same formula is further multiplied by the number of feet of pipe: Volume (gal) = d x d x 0.0408 x length (pipe) • A 12" line is 1100 feet long. How many gallons does the pipe hold? Solve it with the formula given above: Volume (gallons) = 12 x 12 x 0.0408 x 1100 = 6460 gallons • A 6" line is 654 feet long. How many gallons does the pipe hold? Volume (gallons) = 6 x 6 x 0.0408 x 654 = 960 gallons The velocity of water moving through a pipe can be calculated if the flow in cubic feet per second (cfs) and the diameter of the pipe (inches) are known. The area of the pipe is calculated in square feet (sq.ft) and the flow is then divided by the area.
Velocity (feet per second or fps) = Flow (cfs) / Area (sq.ft) • If a 24" pipe carries a flow of 11 cfs, what is the velocity in the pipe? Change the diameter from inches to feet: d = 24 / 12 = 2 feet Now find the area of the pipe in sq.ft: Area (sq.ft) = 2 x 2 x 0.785 = 3.14 sq.ft Now find the velocity in feet per second: Velocity (fps) = Flow / Area = 11 / 3.14 = 3.5 fps The flow through a pipe (cfs) can be determined if the velocity and pipe diameter are known. The area of the pipe is calculated in square feet and multiplied by the velocity (fps); the result is the flow (cfs). • A 12" pipe carries water at a velocity of 5 feet per second (fps). What is the flow in cfs? Convert the diameter from inches to feet: d = 12 / 12 = 1 foot Now calculate the area: Area (sq.ft) = 1 x 1 x 0.785 = 0.785 sq.ft Now calculate the flow. Since Velocity = Flow / Area, Flow = Area x Velocity Flow (cfs) = 0.785 x 5 = 3.925 cfs • A 12" pipe carries 1400 gpm at 4 fps velocity and reduces to a 6" pipe. What is the velocity in the 6" pipe? Convert the diameter from inches to feet: d = 6 / 12 = 0.5 foot Find the area of the pipe in sq.ft: Area (sq.ft) = 0.5 x 0.5 x 0.785 = 0.196 sq.ft Convert the flow from gpm to cfs: Flow = 1400 / 449 = 3.12 cfs Now that we have values for flow and area, find the velocity: Velocity (fps) = Flow (cfs) / Area (sq.ft) Velocity = 3.12 / 0.196 = 15.92 fps, say 16 feet per second The length of time water is held in a tank as it passes through is called 'Detention Time'. It is calculated by using the following formula: Detention time = Capacity of tank (gal) / Flow (gpm or gpd) = Volume (gal) / Flow (gpm or gpd) If the flow is taken in gpm, the detention time results in minutes; to change it to hours, divide the minutes by 60. If the flow is taken in gallons per day (gpd), the resulting detention time will be in days; to convert it to hours, multiply the result by 24. The formula for detention time can also be used for calculating the time to fill a tank.
• A 50,000 gallon tank receives a 250,000 gpd flow. What is the detention time in hours? Use the formula for detention time: Detention time = Volume / Flow = 50000 / 250000 Detention time = 0.2 days Convert to hours: 0.2 x 24 = 4.8 hours • A tank is 60' x 80' x 10' and the flow is 2 mgd. What is the detention time in hours? From the dimensions, it is evident that this is a rectangular tank. Find the volume of the tank: Volume (cft) = 60 x 80 x 10 = 48000 cubic feet Now change cubic feet to gallons: Volume (gal) = 48000 x 7.48 = 359000 gallons Now change the flow from mgd to gpd: 2 mgd x 1000000 = 2000000 gpd Now find the detention time (days): Detention time = Volume (gal) / Flow (gpd) D.T (days) = 359000 / 2000000 D.T (days) = 0.18 days Change days to hours: 0.18 x 24 = 4.3 hours • A tank is 100 feet in diameter and 22 feet deep. The flow into the tank is 1500 gpm and the flow out of the tank is 300 gpm. How long, in hours, will it take to fill the tank? diameter = 100 feet and depth = 22 feet Calculate the volume in cubic feet: Volume (cft) = 100 x 100 x 0.785 x 22 = 173000 cubic feet Change cubic feet to gallons: Volume (gal) = 173000 x 7.48 = 1290000 gallons Calculate the net inflow: 1500 gpm - 300 gpm = 1200 gpm Calculate the time to fill (D.T): D.T (min) = 1290000 / 1200 D.T (min) = 1075 minutes Change minutes to hours: D.T (hours) = 1075 / 60 D.T (hours) = 17.9 hours We will continue sharing such information for our readers and welcome your comments, to be incorporated in future blogs.
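To wrap up, the area, velocity and detention-time formulas above can be collected into a few helper functions. This is an illustrative Python sketch using the blog's rounded constants (0.785 for the circle factor, 7.48 gallons per cubic foot); the function names are my own:

```python
GALLONS_PER_CUBIC_FOOT = 7.48

def pipe_area_sqft(diameter_inches):
    """Cross-sectional area of a circular pipe, in square feet (d^2 x 0.785)."""
    d_feet = diameter_inches / 12
    return d_feet ** 2 * 0.785

def velocity_fps(flow_cfs, diameter_inches):
    """Velocity (fps) = Flow (cfs) / Area (sq.ft)."""
    return flow_cfs / pipe_area_sqft(diameter_inches)

def detention_time_hours(volume_gallons, flow_gpm):
    """Detention time = Volume (gal) / Flow (gpm), converted from minutes to hours."""
    return volume_gallons / flow_gpm / 60

# 24" pipe carrying 11 cfs
print(round(velocity_fps(11, 24), 1))                 # 3.5 fps

# 100 ft diameter, 22 ft deep tank filling at a net 1200 gpm
volume = 100 ** 2 * 0.785 * 22 * GALLONS_PER_CUBIC_FOOT
print(round(detention_time_hours(volume, 1200), 1))   # 17.9 hours
```

Both printed results match the worked examples above (the small differences from rounding intermediate values disappear when the full-precision volume is carried through).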
Curricular Unit: Advanced Physics Topics 1 - Experimental Particle and Astroparticle Physics (EPAP): Advanced Analysis Methods, Top Quark Physics, Standard Model and Beyond
Type: Lecture course
Contact hours: 20 (12 T, 8 P)
Professors/Researchers in charge: Juan Antonio Aguilar-Saavedra, Antonio Onofre, Nuno Castro (U. Minho)
Summary of Contents: This course involves the study of advanced analysis methods for PhD students within the field of Particle Physics. Following a theoretical revision of the current status of top quark physics, several applications are discussed. During the course, students are expected to be able to perform simple theoretical calculations related to top quark physics and explore the physics of its decay. The interplay between top quark physics and the recently discovered Higgs boson is exercised as an application. Students are expected to analyse dedicated samples of ttH Monte Carlo events (with a hands-on approach). A production cross section limit at the LHC is extracted using advanced statistical tools. Students are expected to attend at least 2/3 of the lectures, of both types, i.e., Theoretical (T) and Theoretical-Practical (TP). The grading plan involves attendance and participation in discussions, individual and team work, as well as a final exam. Coursework will be weighted as follows: Attendance 10%, Individual/Team work 35%, Quizzes 25%, Final Exam 30%.
2-Level Factorial Design (Default Generators) Printed Results None suppresses all of the output. Summary table The summary table displays the properties of the design, such as the number of factors and the number of experimental runs. The summary table also includes the design generators. When you fold the design, the generators are for the unfolded design. Alias table The alias table shows the alias structure for the design. For example, in a resolution III design you can use the alias table to see which two-factor interactions are aliased with each main effect. Design table The design table shows the factor settings for each experimental run in the design. Defining relation The defining relation is the total collection of terms that are held constant to define the fraction in a fractional factorial design. The defining relation is used to calculate the alias structure, which indicates which terms are aliased with each other. When you fold the design, the defining relation shows the defining relation for the folded design. Content of Alias Table Default interactions □ If the design has 7 or fewer factors, then the alias structure shows all the terms that are aliased. □ If the design has 8–10 factors, then the alias structure shows the aliasing of terms up to and including 3-factor interactions. □ If the design has 11–15 factors, then the alias structure shows the aliasing of terms up to and including 2-factor interactions. Interactions up through order You can choose to show only lower-order interactions. For example, for a resolution III design you can select 2 to show only the main effects and 2-factor interactions.
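As an illustration of how an alias structure like the one described here is computed (a plain-Python sketch of the underlying arithmetic, not Minitab output), consider a 2^(3-1) design with defining relation I = ABC. The alias of an effect is obtained by multiplying it by the defining word and cancelling squared letters, which amounts to a symmetric difference of letter sets:

```python
def alias(effect, word):
    """Multiply two effects, cancelling squared letters (e.g. A * ABC = BC)."""
    product = set(effect) ^ set(word)   # symmetric difference of the letter sets
    return "".join(sorted(product)) or "I"

DEFINING_WORD = "ABC"                   # defining relation I = ABC, i.e. C = AB

for effect in ["A", "B", "C", "AB"]:
    print(effect, "=", alias(effect, DEFINING_WORD))
# A = BC, B = AC, C = AB, AB = C
```

This reproduces the familiar resolution III pattern in which every main effect is aliased with a two-factor interaction; for longer defining relations with several words, each effect has one alias per word.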
Five Data Points Can Clinch a Business Case [article] - Hubbard Decision Research Five Data Points Can Clinch a Business Case [article] by Matt Millar | Articles, Bayesian vs. Frequentist?, How To Measure Anything Blogs, News, What's New Pop quiz: which of the following statements about decisions do you agree with: 1. You need at least thirty data points to get a statistically significant result. 2. One data point tells you nothing. 3. In a business decision, the monetary value of data is more important than its statistical significance. 4. If you know almost nothing, almost anything will tell you something. Believing the first two statements will limit your effectiveness in using statistics in a business decision. The second two statements capture one of the important points in Applied Information Economics: small data is often very useful in decision making when there is great uncertainty. This article presents three examples of how a sample of just five data points can tip the scales in a business decision. Example 1: length of employees’ commutes. Decision: management is deciding on a proposal and wants to measure the benefits of the proposed organizational transformation. In their business case, the variable “time spent commuting” has come back with a high information value. If the average time spent commuting is more than 20 minutes, then the decision has an acceptable ROI profile. They randomly select five people and ask them their commute times. Data collected: Taka 25 minutes Bob 20 minutes Frank 35 minutes Asim 55 minutes Jane 35 minutes Using our “rule of five” the 90% confidence interval for the median of our population of employees is 20-55 minutes. Our 90% confidence interval for the mean of the population is 21.2 to 46.8 minutes. This was calculated using our Small Sample calculator found here. [Wonk alert!] 
In the small sample calculator, we are using a simplifying assumption that the distribution is normally distributed, which obviously is not always the case. Even in the example given, it is unlikely that the distribution of drive times is normally distributed, but this still provides a reasonable approximation for a 90% range estimate for mean drive time. Example 2: minor league, major decision Decision: a baseball team manager needs to decide if he should send a player back to the minor leagues. The manager has brought a player up from the minor leagues, and the player has had 5 at bats and zero hits. The manager has a minimum required batting average of .215 for players in their first year in the majors. Are five at bats without a hit enough data to be 90% confident the player should be sent back to the minor leagues? For this type of data we would use an inverse beta distribution to calculate the 90th percentile of the distribution of batting averages. [Nerd panic! Note this isn’t quite the same as a 90% confidence interval which would be the range from the 5th percentile to the 95th percentile] Entering an alpha of 1 (no hits) and a beta of 6 (5 misses) returns a 90th percentile of .319. The manager can be 90% confident that the player’s batting average is below .319 but cannot be 90% confident that the player’s batting average will be less than .215. However, to get there requires just 4 more at bats with no hits. No pressure young man! Example 3: Big Dig on a small scale Decision: The Executive Team wants to improve project management by being better able to assess a 90% confidence range of development time based on engineers’ initial estimates. 
The company has carefully tracked original estimates for five projects and can now compare them to actual duration:

Software Development Time

Project      Initial Estimate   Actual
Project 1    8 weeks            17 weeks
Project 2    22 weeks           42 weeks
Project 3    4 weeks            5 weeks
Project 4    3 weeks            9 weeks
Project 5    11 weeks           11 weeks

If we want to get a 90% confidence interval for actual development time based on our data, how would we do that? We can start by plotting the 5 points on a scatter chart. Based on a linear regression of these five points, the actual time to completion is 177% of the initial estimate.

Next we estimate a 90% confidence interval on the range for actual versus initial estimate. The ratios between the actual durations and the initial estimates are: 213%, 191%, 125%, 300%, and 100%. Entering these values in the small sample calculator, we get a 90% confidence interval for the average ratio of 110% to 261%. So if the initial project estimate is 10 weeks, our best estimate would be 18 weeks and our 90% range would be 11 to 26 weeks.

Collecting data is all about resolving uncertainty. And in our busy work environment, we're often expected to make the best conclusions in a limited amount of time. However, if we target the right variable we can improve our judgment with just a few data points. So get out there and do some measurements! And reward yourself with better decisions.
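Examples 2 and 3 can also be reproduced from the standard library. This is a sketch under stated assumptions: the Beta quantile uses a closed form that is available only because alpha = 1 (zero hits; the general case would need a numerical quantile function such as SciPy's), and the ratio interval hardcodes the same df = 4 t critical value as before.

```python
import math
import statistics

# --- Example 2: 90th percentile of a Beta batting-average posterior ---
# With zero hits, alpha = 1, and the Beta(1, b) CDF is F(x) = 1 - (1 - x)**b,
# so the quantile inverts in closed form.
def beta1_quantile(q, b):
    return 1 - (1 - q) ** (1 / b)

print(round(beta1_quantile(0.90, 6), 3))   # 0.319 after 5 hitless at bats
print(round(beta1_quantile(0.90, 10), 3))  # 0.206 after 4 more -> below .215

# --- Example 3: 90% t-interval for the actual/initial ratio ---
initial = [8, 22, 4, 3, 11]
actual = [17, 42, 5, 9, 11]
ratios = [a / i for a, i in zip(actual, initial)]  # 213%, 191%, 125%, 300%, 100%

T_CRIT_90_DF4 = 2.132  # two-sided 90% t critical value, 4 degrees of freedom
mean = statistics.mean(ratios)
se = statistics.stdev(ratios) / math.sqrt(len(ratios))
lo, hi = mean - T_CRIT_90_DF4 * se, mean + T_CRIT_90_DF4 * se
print(f"{lo:.1%} to {hi:.1%}")  # ~110.5% to 260.8%

# A 10-week initial estimate then maps to roughly 11-26 weeks
print(f"{10 * lo:.0f} to {10 * hi:.0f} weeks")
```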
Doubt in Recognizing LTL formula + General Questions (11)

For CTL, it is quite easy to understand why a given formula is a CTL formula, but I am having some trouble recognizing an LTL formula. For example, ¬EF(¬E([1 U a] ∧ Fa)) is not a CTL formula, but why it is an LTL formula I don't understand. Also, in the lecture it is mentioned that LTL formulas don't allow nesting of path quantifiers; I am not sure about this statement either. I would like to see some examples of LTL formulas. I am adding the respective slide where we discuss CTL/LTL/CTL* as a reference.

You need to distinguish between syntax and semantics. From the syntax (as given by the above grammar rules), it is quite straightforward to decide whether a CTL* formula belongs to LTL: it just has to start with A, and then no further path quantifier A or E is allowed afterwards.

For the semantics, the problem is much more difficult: here, the question is whether the given CTL* formula can be rewritten to an equivalent one in LTL syntax. Here, you have to use some experience, and you also need to understand the formula. The following theorem may be of great help:

Theorem ([3]): Given a CTL formula φ, let ψ be the quantifier-free formula that is obtained by removing all path quantifiers from φ. Then, there is an equivalent LTL formula for φ iff Aψ is equivalent to φ.

Considering the formula ¬EF(¬E([1 U a] ∧ Fa)), we first have to drive the negation inwards, which first gives AG¬¬E([1 U a] ∧ Fa), and then AG E([1 U a] ∧ Fa). Next, I assume the U is a strong Until; if so, then [1 U a] is equivalent to Fa, so that we have AG E(Fa ∧ Fa), i.e., AG EFa. That is not an LTL formula, but it is a CTL formula. According to the above theorem, it would only be equivalent to an LTL formula iff it were equivalent to AGFa, but that is not the case.
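The syntactic test described above (the formula must start with A, with no further path quantifier afterwards) can be sketched as a toy string scan. This assumes the simplified setting where atomic propositions are lowercase, so any later capital A or E really is a path quantifier:

```python
def is_ltl_syntax(formula: str) -> bool:
    """Toy syntactic check: a CTL* formula is in LTL syntax iff it starts
    with the path quantifier A and no further path quantifier A or E
    occurs afterwards. Assumes atomic propositions are lowercase, so any
    capital A/E in the string really is a path quantifier."""
    stripped = formula.replace(" ", "")
    return stripped.startswith("A") and not any(c in "AE" for c in stripped[1:])

print(is_ltl_syntax("AGFa"))    # True: a single leading path quantifier
print(is_ltl_syntax("AG EFa"))  # False: nested path quantifier E
print(is_ltl_syntax("EFa"))     # False: does not start with A
```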
You can use the following counterexample to distinguish between AG EF a and AGFa: using CTL model checking, you can quickly check that EF a holds on {s0,s1}, and therefore AG EF a also holds on the same states. However, AGFa means that on all outgoing paths, a should hold infinitely often, which is not the case in state s0, since there we can loop in s0 forever. Hence, AG EFa and AGFa are not equivalent, and therefore there is no LTL formula equivalent to the CTL formula AG EF a.
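The Kripke structure of the counterexample is an image in the original post; the concrete structure below is a hypothetical one with the stated properties (a self-loop on s0, and a state s1 labelled a that is reachable from both states). With it, EF a can be checked mechanically by backward reachability:

```python
# Hypothetical Kripke structure matching the answer's description:
# s0 has a self-loop (so a path can avoid 'a' forever) and can reach s1,
# the only state labelled 'a'; s1 loops back to s0.
transitions = {"s0": {"s0", "s1"}, "s1": {"s0"}}
labels = {"s0": set(), "s1": {"a"}}

# EF a: least fixpoint of backward reachability from the 'a'-states
ef_a = {s for s, props in labels.items() if "a" in props}
changed = True
while changed:
    changed = False
    for s, succs in transitions.items():
        if s not in ef_a and succs & ef_a:
            ef_a.add(s)
            changed = True

print(sorted(ef_a))         # ['s0', 's1'] -> AG EF a holds in every state
print("a" in labels["s0"])  # False: the path s0 -> s0 -> ... never sees 'a',
                            # so GF a fails on that path, refuting AGFa
```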
Does Python Recognize Scientific Notation? - Mad Penguin

Python is a popular programming language known for its simplicity, flexibility, and powerful programming capabilities. As a programmer, you may have come across the need to work with scientific notation, a way of writing very large or very small numbers in a more compact and readable form. But does Python recognize scientific notation? In this article, we will explore the answer to this question and delve deeper into the world of scientific notation and its applications in Python.

What is Scientific Notation?

Before we dive into whether Python recognizes scientific notation, let's first understand what scientific notation is. Scientific notation is a way of writing numbers in a more compact and readable form, where a number is expressed as a decimal fraction multiplied by a power of 10. For example, the number 1234 can be written in scientific notation as 1.234 × 10^3. This form of notation is commonly used in scientific and mathematical calculations, as it makes it easier to express very large or very small numbers, such as astronomical distances or extremely small measurements.

Direct Answer: Yes, Python Recognizes Scientific Notation

The answer to the question "Does Python recognize scientific notation?" is a resounding yes. Python's built-in float type and the decimal module support scientific notation, making it easy to work with very large or very small numbers.

How Python Handles Scientific Notation

Python's Built-in Functions

Python has several built-in functions that handle scientific notation, making it easy to work with numbers in this format. For example:

• The format() function can be used to convert a number to scientific notation.
• The str() function can be used to convert a float to a string; for very large or very small floats, Python renders that string in scientific notation.
• The float() function can be used to parse a string written in scientific notation back into a number.

Example: Converting Numbers to Scientific Notation

Here's an example of how to use the format() function to convert a number to scientific notation:

number = 123456789
print(format(number, ".2e")) # Output: 1.23e+08

In this example, the format() function is used to convert the number 123456789 to scientific notation, with two digits after the decimal point. The output is 1.23e+08, which is the scientific notation equivalent of the original number.

Example: Converting Strings to Numbers

Here's an example of how to use the float() function to convert a string in scientific notation to a number:

s = "1.23e+08"
number = float(s)
print(number) # Output: 123000000.0

In this example, the float() function parses the string 1.23e+08 into the number 123000000.0. Note that this is not the original 123456789.0: the string only carries three significant digits, so the rest of the precision was lost when the number was formatted.

Significant Figures and Precision

When working with scientific notation, it's important to consider the concept of significant figures. Significant figures refer to the number of digits in a value that are known to be accurate. In scientific notation, the number of significant figures is the number of digits in the coefficient (the part before the exponent); the exponent (the part after the "e") only sets the magnitude of the number.

Example: Significant Figures and Precision

Here's an example of how to use the format() function to control the number of significant figures in scientific notation:

number = 123456789
print(format(number, ".5g")) # Output: 1.2346e+08

In this example, the format() function is used to convert the number 123456789 to scientific notation with 5 significant figures. The output is 1.2346e+08, which is the scientific notation equivalent of the original number, rounded to 5 significant figures.

In conclusion, Python recognizes scientific notation and provides several built-in functions to work with numbers in this format. By understanding how to use these functions, you can take advantage of Python's powerful scientific notation capabilities and apply them to your programming tasks. Whether you're working with astronomical distances, extremely small measurements, or any other large or small numbers, Python's built-in support for scientific notation makes it easy to handle these numbers with ease.

I hope you found this article helpful!
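The pieces above can be combined into one runnable sketch. The f-string form is equivalent to the format() call, and the decimal module is shown as a lossless alternative when every digit matters:

```python
number = 123456789

# format() and f-strings accept the same format specifiers
print(format(number, ".2e"))  # 1.23e+08
print(f"{number:.5g}")        # 1.2346e+08 (5 significant figures)

# Parsing scientific notation back into a float recovers only the
# digits that survived formatting
parsed = float("1.23e+08")
print(parsed)                 # 123000000.0

# The decimal module keeps every digit of the original notation
from decimal import Decimal
print(Decimal("1.23456789E+8"))  # 123456789
```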
What our customers say...

Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:

My sister bought your software program for my kids after she saw them doing their homework one night. As a teacher, she'd recognized the value in a program that provided step-by-step solutions and good explanations of the work. My kids love it.
Paola Randy, IN

After downloading the new program, this looks a lot easier to use and understand. Thank you so much.
Paola Randy, IN

I am a student at Texas State University. I bought your product Algebrator and I can honestly say it is the reason I am passing my math class!
Leslie Smith, MA

The new version is sooo cool! This is a really great tool, will have to tell the other parents about it... No more scratching my head trying to help the kids when I get home from work after a long day, especially when the old brain is starting to turn to mush after a 10 hour day.
A.R., Arkansas

Search phrases used on 2011-10-15:

Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?

• what is the code for solving quadratic equations in matlab?
• free range and mode practice for 2nd grade
• multiplication and division using mixed numerals
• complex trigonometric equations using MATLAB
• logbase 10 ti-89 titanium
• solving rational expressions calculator
• solving equations by adding and subtracting worksheet
• java print decimail without the e
• Trigonometry Cheat Sheets
• prime compound numbers problems kids
• how to solve logs on a ti-89
• printable third grade integrated worksheets
• free worksheets decimals to fractions
• 8th grade pre-algebra lessons
• free slope calculator
• graphing translations worksheets
• fraction from least to greatest
• solving equations with square & cube roots worksheet
• coordinates ks2
• solving single rational expression in lowest terms
• level 5 sheet on adding, subtracting, dividing, timesing
• chapters of book of algebra of 10 grade
• 6th grade taks test worksheets
• adding subtracting multiplying dividing fractions
• first grade printable math problems
• addition inside of a square root
• positive and negative fractions
• finding least common denominator calculator
• 3rd grade adding decimal
• FREE 2nd grade iowa practice test WORKSHEETS
• binomial expansion software
• graph a quadriatic formula in excel
• simplifying radicals expressions answers
• pre algebra quizzes free
• calculator in simplest form
• fun algebra
• how to calculate vertex quadratic
• download factorise calculator
• easy way to understand ratios
• math online tests on adding and subtracting fractions
• any problum calculator
• algebra 1 dividing fractions problems
• Sixth grade, permutations and combinations
• examples of math trivia mathematics
• simplifying radical expressions rationalizing the denominator worksheet
• worksheet M&M combinations permutations
• exponential expressions examples
• calculating density and volume worksheet + sixth grade level
• worksheets fractions word problems
• combining like terms worksheet
• finding the nth power worksheet
• how do you solve an equation by completing the square?
• mymaths formula worksheet
• java divisible
• biology exercise 6.grade
• rules for adding and subtracting positive and negative numbers
• online area and perimeter tests for grade 8 free
• Simplifying Algebraic Expressions Worksheets
• free printable blank graphs 1:100 scale grid
• find an equation for the hyperbola
• difference between solving a system of equations by the algebraic method and the graphical method
• free common denominator calculator
• year 11 maths test answers
• rational expression math games
• math algebra transforming formula worksheet
• algebra 2 manipulatives
• free online word problem solver
• relationship from graph linear parabolic hyperbolic
• solve square root calculator free
• positive and negative numbers practice sheets
• free australia mathematic worksheets
• free online algebra 2 help
• answers for excel 92 in 5th grade
• free factor trinomials
• free math worksheets for square numbers and square roots middle school
• square root calculator with variable
• What is an equation that equals 26 using the numbers two, nine, twelve, sixteen, and twenty only once
• formula using whole number and fraction
• free maths work booklets
• solving implicitly for y using graphing calculator
• how to find the order of fractions
• java program, square roots
• how to solve a subtraction problem with a negative number
• BOOLEAN ALGEBRA SOLVER FREE
• multiplying, division, addition, subtracting rules
• lewis and loftus programming projects
• free math equation simplifier
• editorials that teach bias worksheets
How to Calculate Telescope Magnification - Backyard Stargazers

One question we often ask is: what is the magnification of your telescope? If you are new to astronomy, you may not know how to calculate this. Don't panic; it's a straightforward calculation, and an important one to understand, as each telescope has a lowest useful magnification and a highest useful magnification.

To work out the telescope magnification, you need the focal length of the telescope and the focal length of the eyepiece. You then divide the focal length of the telescope by the focal length of the eyepiece; this gives you the telescope magnification. So as you can see it's an easy calculation, but there is a little more to it, and it's best shown with a real working example.

Telescope Magnification Working Example

Celestron NexStar 4SE computerized telescope
Focal length of the telescope: 1325mm
Focal length of eyepiece (included eyepiece): 25mm
1325mm / 25mm = 53x magnification

So, now we know that in our example, the Celestron NexStar 4SE comes with 53x magnification.

Telescope Highest Useful Magnification Working Example

However, let's say we want to up our magnification and push the power of our telescope. We need to find the highest useful magnification; this is easily found on the manufacturer's site. Let's keep using the NexStar 4SE as our example.

Highest useful magnification: 241x

At this point it's worth pointing out that the focal length of the eyepiece gets smaller for more magnification. For example, a 25mm eyepiece on a 1500mm focal-length telescope would yield a power rating of 60x (1500/25 = 60), while a 10mm eyepiece on the same telescope would give 150x (1500/10 = 150). This is something that trips up a lot of people new to astronomy: the smaller the eyepiece's focal length in mm, the more the magnification.

Let's get back to our example. We know we have a 241x highest useful magnification.
Right, we can take a big jump as we know we have a lot of space to work with; let's go with a 5mm eyepiece and apply our calculation.

Focal length of the telescope: 1325mm
Focal length of eyepiece: 5mm
1325mm / 5mm = 265x magnification

That is over our 241x highest useful magnification, so we need to come up a little; let's go with a 5.5mm eyepiece.

Focal length of the telescope: 1325mm
Focal length of eyepiece: 5.5mm
1325mm / 5.5mm ≈ 241x magnification

This is right on the edge of the highest useful magnification, so we know that the smallest eyepiece to use with the Celestron NexStar 4SE (our example) is a 5.5mm eyepiece. You can apply this process to your own telescope.

Again, it's worth pointing out that there is also a lowest useful magnification. Finding it is a similar process, but one for another post; this post is all about the power.

What Happens If I Go Above the Highest Useful Magnification?

The results will not be great; in short, the image through the eyepiece will become distorted and fuzzy, an effect tied to the telescope's "point spread function". If you want to do more reading on the point spread function, Wikipedia has a great resource explaining it all. So, avoid the disappointment: calculate the correct eyepiece and have much more fun.

Should I Always Go With Higher Magnification?

It's easy to think that higher magnification will give you a better view as long as you stay within the highest useful magnification. However, this is not always the case, as other factors can affect your views of the night sky.

Seeing Conditions

If there is turbulence in the atmosphere (an effect known as astronomical seeing), the view can be very unstable, even more so if you are using high magnification. If there is lots of turbulence in the atmosphere, you would be better off going with a smaller magnification. This steadies the view in your eyepiece, and I would only move up the magnification when the turbulence improves.
The Object That You Are Viewing

If you are viewing a star cluster in the eyepiece, for instance, it can look much better at lower magnification. With higher magnification, the view can look cropped in the eyepiece and show much less detail, leading to disappointment.

I hope this post shows you how to calculate telescope magnification, but also goes beyond that by giving you some food for thought when you come to buy a new eyepiece. Bigger is not always better, and I have had some of my brightest and sharpest views with low magnification. Experiment with different eyepieces, but don't exceed the highest useful magnification; you will just be wasting your money. I wish you clear skies and an enjoyable stargazing experience on your next trip out.
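The arithmetic from the worked examples can be collected into a short sketch. The helper names are mine; the focal length and magnification limit are the NexStar 4SE figures used in the article:

```python
def magnification(telescope_fl_mm: float, eyepiece_fl_mm: float) -> float:
    """Magnification = telescope focal length / eyepiece focal length."""
    return telescope_fl_mm / eyepiece_fl_mm

def smallest_useful_eyepiece(telescope_fl_mm: float, max_mag: float) -> float:
    """Shortest eyepiece focal length that stays within the highest
    useful magnification (the same formula, rearranged)."""
    return telescope_fl_mm / max_mag

TELESCOPE_FL = 1325   # mm, Celestron NexStar 4SE
MAX_USEFUL_MAG = 241  # from the manufacturer

print(magnification(TELESCOPE_FL, 25))    # 53.0x with the stock 25mm eyepiece
print(magnification(TELESCOPE_FL, 5))     # 265.0x -- over the limit
print(magnification(TELESCOPE_FL, 5.5))   # ~240.9x -- right on the edge
print(smallest_useful_eyepiece(TELESCOPE_FL, MAX_USEFUL_MAG))  # ~5.5mm
```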
The Stacks project

Lemma 42.47.6 (tag 0FAJ). In Lemma 42.47.1 let $f : Y \to X$ be locally of finite type and say $c \in A^*(Y \to X)$. Then

\[ c \circ P'_ p(E_2) = P'_ p(Lf_2^*E_2) \circ c \quad \text{resp.}\quad c \circ c'_ p(E_2) = c'_ p(Lf_2^*E_2) \circ c \]

in $A^*(Y_2 \to Y)$ where $f_2 : Y_2 \to X_2$ is the base change of $f$.