content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
How to Use Lambda Functions in Python | AppSignal Blog
Lambda functions in Python are a powerful way to create small, anonymous functions on the fly. These functions are typically used for short, simple operations where the overhead of a full function
definition would be unnecessary.
While traditional functions are defined using the def keyword, Lambda functions are defined using the lambda keyword and are directly integrated into lines of code. In particular, they are often used
as arguments for built-in functions. They enable developers to write clean and readable code by eliminating the need for temporary function definitions.
In this article, we'll cover what Lambda functions do and their syntax. We'll also provide some examples and best practices for using them, and discuss their pros and cons.
Lambda functions have been a part of Python since version 2.0, so you'll need:
• Minimum Python version: 2.0.
• Recommended Python version: 3.10 or later.
In this tutorial, we'll see how to use Lambda functions with the library Pandas: a fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation library. If you don't have it
installed, run the following:
Syntax and Basics of Lambda Functions for Python
First, let's define the syntax developers must use to create Lambda functions.
A Lambda function is defined using the lambda keyword, followed by one or more arguments and an expression:
lambda arguments: expression
Let's imagine we want to create a Lambda function that adds up two numbers:
Run the following:
result = add(3, 5)
This results in:
We've created an anonymous function that takes two arguments, x and y. Unlike traditional functions, Lambda functions don't have a name: that's why we say they are "anonymous."
Also, we don't use the return statement, as we do in regular Python functions. So we can use the Lambda function at will: it can be printed (as we did in this case), stored in a variable, etc.
Now let's see some common use cases for Lambda functions.
Common Use Cases for Lambda Functions
Lambda functions are particularly used in situations where we need a temporarily simple function. In particular, they are commonly used as arguments for higher-order functions.
Let's see some practical examples.
Using Lambda Functions with the map() Function
map() is a built-in function that applies a given function to each item of an iterable and returns a map object with the results.
For example, let's say we want to calculate the square roots of each number in a list. We could use a Lambda function like so:
# Define the list of numbers
numbers = [1, 2, 3, 4]
# Calculate square values and print results
squared = list(map(lambda x: x ** 2, numbers))
This results in:
We now have a list containing the square roots of the initial numbers.
As we can see, this greatly simplifies processes to use functions on the fly that don't need to be reused later.
Using Lambda Functions with the filter() Function
Now, suppose we have a list of numbers and want to filter even numbers.
We can use a Lambda function as follows:
# Create a list of numbers
numbers = [1, 2, 3, 4]
# Filter for even numbers and print results
even = list(filter(lambda x: x % 2 == 0, numbers))
This results in:
Using Lambda Functions with the sorted() Function
The sorted() function in Python returns a new sorted list from the elements of any iterable. Using Lambda functions, we can apply specific filtering criteria to these lists.
For example, suppose we have a list of points in two dimensions: (x,y). We want to create a list that orders the y values incrementally.
We can do it like so:
# Creates a list of points
points = [(1, 2), (3, 1), (5, -1)]
# Sort the points and print
points_sorted = sorted(points, key=lambda point: point[1])
And we get:
[(5, -1), (3, 1), (1, 2)]
Using Lambda Functions in List Comprehensions
Given their conciseness, Lambda functions can be embedded in list comprehensions for on-the-fly computations.
Suppose we have a list of numbers. We want to:
• Iterate over the whole list
• Calculate and print double the initial values.
Here's how we can do that:
# Create a list of numbers
numbers = [1, 2, 3, 4]
# Calculate and print the double of each one
squared = [(lambda x: x ** 2)(x) for x in numbers]
And we obtain:
Advantages of Using Lambda Functions
Given the examples we've explored, let's run through some advantages of using Lambda functions:
• Conciseness and readability where the logic is simple: Lambda functions allow for concise code, reducing the need for standard function definitions. This improves readability in cases where
function logic is simple.
• Enhanced functional programming capabilities: Lambda functions align well with functional programming principles, enabling functional constructs in Python code. In particular, they facilitate the
use of higher-order functions and the application of functions as first-class objects.
• When and why to prefer Lambda functions: Lambda functions are particularly advantageous when defining short, "throwaway" functions that don't need to be reused elsewhere in code. So they are
ideal for inline use, such as arguments to higher-order functions.
Limitations and Drawbacks
Let's briefly discuss some limitations and drawbacks of Lambda functions in Python:
• Readability challenges in complex expressions: While Lambda functions are concise, they can become difficult to read and understand when used for complex expressions. This can lead to code that
is harder to maintain and debug.
• Limitations in error handling and debugging: As Lambda functions can only contain a single expression, they can't include statements, like the try-except block for error handling. This limitation
makes them unsuitable for complex operations that require these features.
• Restricted functionality: Since Lambda functions can only contain a single expression, they are less versatile than standard functions. This by-design restriction limits their use to simple
operations and transformations.
Best Practices for Using Lambda Functions
Now that we've considered some pros and cons, let's define some best practices for using Lambda functions effectively:
• Keep them simple: To maintain readability and simplicity, Lambda functions should be kept short and limited to straightforward operations. Functions with complex logic should be refactored into
standard functions.
• Avoid overuse: While Lambda functions are convenient for numerous situations, overusing them can lead to code that is difficult to read and maintain. Use them judiciously and opt for standard
functions when clarity is fundamental.
• Combine Lambda functions with other Python features: As we've seen, Lambda functions can be effectively combined with other Python features, such as list comprehensions and higher-order
functions. This can result in more expressive and concise code when used appropriately.
Advanced Techniques with Lambda Functions
In certain cases, more advanced Lambda function techniques can be of help.
Let's see some examples.
Nested Lambda Functions
Lambda functions can be nested for complex operations.
This technique is useful in scenarios where you need to have multiple small transformations in a sequence.
For example, suppose you want to create a function that calculates the square root of a number and then adds 1. Here's how you can use Lambda functions to do so:
# Create a nested lambda function
nested_lambda = lambda x: (lambda y: y ** 2)(x) + 1
# Print the result for the value 3
You get:
Integration with Python Libraries for Advanced Functionality
Many Python libraries leverage Lambda functions to simplify complex data processing tasks.
For example, Lambda functions can be used with Pandas and NumPy to simplify data manipulation and transformation.
Suppose we have a data frame with two columns. We want to create another column that is the sum of the other two. In this case, we can use Lambda functions as follows:
# Create the columns' data
data = {'A': [1, 2, 3], 'B': [4, 5, 6]}
# Create data frame
df = pd.DataFrame(data)
# Create row C as A+B and print the dataframe
df['C'] = df.apply(lambda row: row['A'] + row['B'], axis=1)
And we get:
A B C
That's it for our whistle-stop tour of Lambda functions in Python!
Wrapping Up
In this article, we've seen how to use Lambda functions in Python, explored their pros and cons, some best practices, and touched on a couple of advanced use cases.
Happy coding!
P.S. If you'd like to read Python posts as soon as they get off the press, subscribe to our Python Wizardry newsletter and never miss a single post! | {"url":"https://blog.appsignal.com/2024/10/16/how-to-use-lambda-functions-in-python.html","timestamp":"2024-11-04T00:47:47Z","content_type":"text/html","content_length":"216418","record_id":"<urn:uuid:208ea8de-fb8b-4d58-80ba-8dfdf4fb005f>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00636.warc.gz"} |
ISSA Proceedings 2014 ~ A Formal Model Of Conductive Reasoning
No comments yet
Abstract: I propose a formal model of representation and numerical evaluation of conductive arguments. Such arguments consist not only of pro-premises supporting a claim, but also of contra-premises
denying this claim. Offering a simple and intuitive alternative to accounts developed in the area of computational models of argument, the proposed model recognizes internal structure of arguments,
allows infinitely many degrees of acceptability, reflects the cumulative nature of convergent reasoning, and enables to interpret attack relation.
Keywords: argument evaluation, argument structure, attack relation, conductive reasoning, logical force of argument, rebuttal.
1. Introduction
According to Wellman’s original definition (1971) the conclusion of any conductive argument is drawn inconclusively from its premises. Moreover, the premises and the conclusion are about one and the
same individual case, i.e. the conclusion is drawn without appeal to any other case. Wellman also gave three leading examples of conductive arguments, which determine three patterns of conduction:
(1) You ought to help him for he has been very kind to you.
(2) You ought to take your son to the movie because you promised, and you have nothing better to do this afternoon.
(3) Although your lawn needs cutting, you want to take your son to the movies because the picture is ideal for children and will be gone by tomorrow.
Wellman’s definition was an object of many interesting views, opinions and interpretations, mostly surveyed in (Blair & Johnson 2011). However, we do not discuss this issue here, but we simply follow
these authors who, as Walton & Gordon (2013), focus on the third pattern and propose to take conductive arguments to be the same as pro-contra arguments. Such arguments, except of a normal
pro-premise or premises (The picture is ideal for children; It will be gone by tomorrow), have also a con-premise or premises (Your lawn needs cutting).
In the next two chapters we analyze conductive arguments from the logical point of view. The conduction is regarded here as one act of reasoning, in which a conclusion is drawn by the same time from
both types of premises. In Chapter 2 we describe the structure and in Chapter 3 – a method of evaluation of conductive arguments. This method is based on the model of argument proposed in (Selinger
2014). In Chapter 4 we introduce a dialectical component of the analysis. Namely, by means of our model, we discuss definition of attack relation holding between arguments.
2. Structure of conductive arguments
There are many ways of expressing conductive arguments in natural language. Some of them are the following:
– Since A, even though B, therefore C.
– A, therefore C, although B.
– Although B, C because A.
– B, but (on the other hand) A, therefore C.
– Despite B, (we know that) A, therefore C.
In the above schemes the letter A represents a pro-premise (or pro-premises), B – a con-premise (or con-premises) and C – a conclusion. It is worth to note that pro-premises are presented as
overcoming con-premises, so that an argument can be accepted if they really do. There are two types of inference in conductive arguments: pro-premises support and con-premises deny (contradict,
attack) conclusions. They can be represented using the standard diagramming method. Figure 1 shows the diagram of Wellman’s third example.
Relation of support is represented by the solid and relation of contradiction – by the dashed line.[i] In order to reflect this duality in our formal model we follow Walton & Gordon’s idea involving
the assignment of Boolean values to these two types of inference, however, we propose to use simpler formal structures than the so-called argument graphs (cf. Walton & Gordon 2013).
Let L be a language, i.e. a set of sentences. Sequents are all the tuples of the form <P, c, d>, where P ⊆ L is a non-empty, finite set of sentences (premises), c ∈ L is a single sentence (conclusion
), and d is a Boolean value (1 in pro-sequents and 0 in con-sequents). An argument is simply any finite, non-empty set of sequents. If an argument consists of only one sequent then it will be called
an atomic argument.
The premises of an argument are all the premises of all its sequents. The conclusions of an argument are all the conclusions of all its sequents. The first premises are those premises, which are not
the conclusions, and the final conclusions are those conclusions, which are not the premises. Finally, the intermediate conclusions are those sentences, which are both the conclusions and the
premises. A typical (abstract) argument structure is presented in Figure 2 by the diagram corresponding to the set: {<{α1}, α5, 1>, <{α2}, α5, 0>, <{α3}, α5, 0>, <{α4}, α9, 1>, <{α5}, α13, 1>, <{α6},
α15, 1>, <{α7}, α15, 1>, <{α8}, α15, 0>, <{α9}, α16, 1>, <{α10}, α18, 1>, <{α11}, α18, 0>, <{α12, α13, α14}, α20, 1>, <{α15, α16}, α, 1>, <{α17}, α, 1>, <{α18, α19}, α, 0>, <{α20}, α, 0>}. This
argument consists of 16 different sequents (10 of them are pro- and 6 are con-sequents), so it is the sum of the same number of atomic arguments. The premises are all the sentences in the diagram
except of α, which is the final conclusion; the conclusions are: α5, α9, α13, α15, α16, α18, α20, α; the first premises: α1, α2, α3, α4, α6, α7, α8, α10, α11, α12, α14, α17, α19; the intermediate
conclusions: α5, α9, α13, α15, α16, α18, α20.
By the means of our formalism also atypical structures can be distinguished (cf. Selinger 2014). Some of them are illustrated by Figure 3. Circular arguments can have no first premises and/or no
final conclusion (two examples in Figure 3 have neither the first premises nor the final conclusion). They are interesting argument structures, e.g. for those who deal with antinomies, however, we do
not discuss them, since they are mostly regarded as faulty. On the other hand, divergent arguments and incoherent arguments can have more than one final conclusion. They are not faulty (unless from
some purely pragmatic point of view), but they can be represented as the sums of non-divergent and coherent arguments. Therefore, when discussing evaluation of conductive arguments in the next
chapter, we focus on typical argument structures like that shown in Figure 2.
3. Evaluation of conductive arguments
The central question to be considered in this section is: how to transform the values of first premises into the value of final conclusion? We answer this question in three steps concerning
evaluation of atomic, convergent and, finally, conductive arguments.
First we introduce some basic notions. Each partial function v: L’→[0, 1], where L’ ⊆ L, is an evaluation function. The value v(p) is the (degree of) acceptability of p. We consider also a predefined
function w: LχL→[0, 1]. The value w(c/p) is the acceptability of c under the condition that v(p) = 1, so that the function w will be called conditional acceptability.
We assume that L contains the negation connective. If the premises of some sequent deny its conclusion c then evaluation of c will be based on evaluation of the sentence ¬c in the corresponding
pro-sequent, in which the same premises support ¬c. Let us note that for a perfectly rational agent the condition v(¬c) = 1 – v(c) should be satisfied. This postulate will be useful to evaluate
Let v be a given evaluation function (we assume that v is fixed in the following part of our exposition). By ∧P we denote the conjunction of all the sentences belonging to a finite, non-empty set P
(if P is a singleton then ∧P is the sole element of P). We assume that L contains the conjunction connective, and if P⊆ dom(v) then ∧P ∈dom(v).[ii] The value w(c/∧P) will be called the internal
strength of a pro-sequent <P, c, 1>, and the value w(¬c/∧P) – the internal strength of a con-sequent <P, c, 0>.
Let A = {<P, c, d>} be an atomic argument, where P ∈ dom(v), c∉ dom(v), and d is a Boolean value. The function vA is the following extension of v to the set dom(v) ∪ {c}:
(4) If d = 1 then vA(c) = v(∧P)⋅w(c/∧P);
(5) If d = 0 then vA(c) = 1 – v(∧P)⋅w(¬c/∧P).
Thus the acceptability of the conclusion of an atomic argument under condition that its premises are fully acceptable is reduced proportionally to the actual acceptability of the premises. The value
vA(c) will be called the (logical) strength (or force) of an argument A. We will say that a pro-argument is acceptable iff its strength is greater than ½, and a con-argument is acceptable iff its
strength is smaller than ½.
In the next step we consider evaluation of convergent reasoning. Since convergent argumentation is used to cumulate the forces of different reasons supporting (or denying) a claim we have to add
these forces in a way adapted to our scale. Strengths of pro- and con-components will be added separately in each of both groups, independently of the other. Let A = A1 ∪ A2 , where both A1 and A2
are acceptable arguments and they either consist of only pro- or of only con-sequents having the same conclusion c. Let vA1(c) = a1 and vA2(c) = a2.
(6) If A1 and A2 are independent pro-arguments, and a1, a2 > ½, then vA(c) = a1 ⊕a2;
(7) If A1 and A2 are independent con-arguments, and a1, a2 < ½, then vA(c) = 1 – (1–a1)⊕ (1–a2), where x ⊕ y = 2•x + 2•y – 2•x•y ¬– 1.
In (Selinger 2014) we provide a justification of this algorithm, deriving it from the principle (satisfied also by the algorithms given in (4) and (5)) that can be called the principle of
proportionality, according to which the strength of argument should vary proportionally to the values assigned to its components. We also discuss properties of the operation ⊕ (here let us only
mention that it is both commutative and associative, therefore the strengths of any number of converging, independent arguments can be added in any order).
Finally, we consider conductive reasoning. In order to compute the final value of a conductive argument we will subtract the strength of its con- from the strength of its pro-components in a way
adapted to our scale. Let A = Apro Acon, where Apro consists only of pro-sequents and Acon only of con-sequents having the same conclusion c. We assume that both groups of arguments are acceptable,
i.e. vApro(c) > ½ and vAcon(c) < ½.
(8) If vApro(c) < 1, and vAcon(c) > 0, then vA(c) = vApro(c) + vAcon(c) ¬– ½;
The idea of this algorithm is illustrated by Figure 4. Since we want to know how much pro-arguments outweigh con-arguments (or vice versa), we subtract the value ½¬ –vAcon(c) represented by the
interval [vAcon(c), ½] in this figure from the value vApro(c) – ½ represented by the interval [½, vApro(c)]. In order to finally receive the acceptability of c we add this differential to ½. Let us
note that the considered value is directly proportional to the acceptability of pro- and reversely proportional to the acceptability of con-arguments, so that the algorithm satisfies the principle of
The algorithm given by (8) assumes that both pro- and con-arguments are, as defined by Wellman, inconclusive. However in real-life argumentation it happens, for example in mathematical practice, that
initial considerations concerning some hypothesis, which are based on subjective premonitions, analogies, incomplete calculations etc., are finally overcame by a mathematical proof. Then all the
objections raised originally are no longer significant, and the hypothesis becomes a theorem. Therefore, if either pro- or con-arguments are conclusive, then so the whole conductive argument is.
(9) If vApro(c) = 1, and vAcon(c) ≠ 0, then vA(c) = 1;
(10) If vApro(c) ≠ 1, and vAcon(c) = 0, then vA(c) = 0.
If both pro- and con-arguments happen to be conclusive then it is an evidence of a contradiction in underlying knowledge, and the initial evaluation function requires revision. Therefore we claim
that the values of such strongly antinomian arguments cannot be found.
(11) If vApro(c) = 1, and vAcon(c) = 0, then vA(c) is not computable.
Otherwise, the strength of weakly antinomian arguments, which consist of equal inconclusive components, can be computed as ½ using the algorithm given by (8).
In order to complete this section let us add that the acceptability of the conclusions of complex, multilevel argument structures, as the one represented by Figure 3, can be calculated level by level
using the algorithms (4) – (10). An analogous process concerning only pro-arguments is described in (Selinger 2014).
4. Attack relation
Our goal is to define attack relation, which holds between arguments. For the sake of simplicity we consider only attack relation restricted to the set of atomic arguments. There are three components
of atomic arguments that can be an object of a possible attack: premises, inferences and conclusions. The latter is the case of conduction. If we take into account a pro- and a con-argument, which
have the same conclusion, then the stronger of them attacks the weaker one (in the case of an antinomy both arguments attack each other, so that it can be called the mutual attack case).
(12) An argument A attacks (the conclusion of) an argument B iff A = {<P1, c, d>}, B = {<P2, c, 1 – d >}, and either d = 0 and 1 – vA(c) ≤ vB(c), or d = 1 and 1 – vA(c) ≤ vB(c).
The second kind of attack is the attack on a premise. Obviously, it is effective if (i) some premise of an attacked argument is shown to be not acceptable on the basis of the remaining knowledge.
(13) An argument A attacks (a premise of) an argument B iff A = {<P1, c1, 0>}, B = {<P2, c2, d>}, c1 < P2, and v’A(c1) ≤ ½, where v’ is the function obtained from v by deleting c1 from its domain,
i.e. dom(v’) = dom(v) – {c1}.
However, with respect to the proposed method of evaluation, two further situations are possible: (ii) the premises of an attacked argument considered separately are acceptable, however their
conjunction is not; (iii) the conjunction of the premises of an attacked argument is acceptable and the internal strength of its constituent (pro- or con-) sequent is greater than ½, but the product
of these values is not. Thus, in view of the evaluation method proposed here, merely weakening a premise can cause an effective attack, and the definition (13) should be replaced by the following
broader one.
(13’) An argument A attacks (a premise of) an argument B iff A = {<P1, c1, 0>}, B={<P2, c2, d>}, c1 ∉P2, v’A(c1) ≤v(c1), and either d = 1 and v’A(∧P2)∧w(c2/∧P2)∧ ½, or d = 0 and v’A(∧P2) w(~c2/∧P2) ⊆
½, where v’ is the function obtained from v by deleting c1 from its domain.
In order to consider attack on the relationship between the premises and the conclusion of an attacked argument, let us take into account the following Pollock’s example of an undercutting defeater:
(14) The object looks red, thus it is red unless it is illuminated by a red light.
Following Toulmin’s terminology, the sentence The object is illuminated by a red light will be called rebuttal. Let us note, that rebuttals are not con-premises, since they do not entail the negation
of the conclusion (the fact that the object is illuminated by a red light does not imply that the object is not red). Thus Pollock’s example cannot be diagrammed like conductive arguments. Since it
is an arrow that represents the inference, which is denied by the rebuttal, rather the diagram shown by Figure 5 seems to be relevant here.
However, structures such as the one in Figure 5 have no direct representation within the formalism introduced in this paper to examine conductive reasoning. In order to fill this gap we propose to
add the fourth element, namely the set of rebuttals, to the sequents considered so far. Such extended sequents will have the form <P, c, d, R>, where R is the set of (linked) rebuttals.
Since our goal is to define attack relation as holding between arguments, we propose to take an argument without rebuttals (i.e. with the empty set of rebuttals) as being attacked by the argument
with the same premises and conclusion, but with a rebuttal added. For example (14) can be regarded as an attacker of the simple argument
(15) The object looks red, thus it is red.
This argument (15) has the following representation: {<{The object looks red}, The object is red, 1, ∅>}, and its attacker (14): {<{The object looks red}, The object is red, 1, {The object is
illuminated by a red light}>}. In general, an argument of the form {<P, c, d,∅>} can be attacked by any argument of the form {<P, c, d, R>}. Effectiveness of this sort of attack depends on evaluation
of such arguments. It is not the aim of this paper to develop an evaluation method for arguments with rebuttals systematically, however, let us note that the strength of an argument {<P, c, d, R>},
where R ≠∅, seems to be strictly connected with the strength of the corresponding argument {<P∪{~∧R}, c, d, ∅>}, which has an empty set of rebuttals. For example, the strength of (14) depends on the
strength of the argument:
(16) The object looks red, and it is not illuminated by a red light, thus it is red.
If this argument is acceptable then so is its second premise (The object is not illuminated by a red light), which is the negation of the rebuttal in (14). By the same the rebuttal is not acceptable
so that the attack on (15) cannot be effective. Thus (16) cannot be acceptable if (14) attacks the inference of (15). In general, if A = {<P, c, d, R>} attacks (the inference of) B = {<P, c, d,∅>},
then R≠∅ and A’ = {<P∪{~∧R}, c, d, ∅>} is not acceptable. Obviously, the converse does not hold, because not any acceptable set of sentences can be a good rebuttal. If the attack is to be effective
the set R must be relevant to deny the inference in B. A test of relevance that we propose is based on an observation concerning (15) and (16). Intuitively, the inference in (16) is stronger than the
inference in (15), i.e. the internal strength of the sequent in (16) is greater than the internal strength of the sequent in (15). This is because (16) assumes that a possible objection against the
inference in (15) has been overcome. Thus, the condition w(c/∧P∧~∧R) > w(c/∧P) can be proposed to determine the relevance of the rebuttal in A. Following these intuitions we recognize arguments
overcoming rebuttals as hybrid arguments in the sense defined by Vorobej (1995). Such arguments contain a premise that strengthens them, but this premise does not work alone so that it cannot be
taken as the premise of a separate convergent reasoning (in (16) such a premise is the sentence The object is not illuminated by a red light).
Summing up, we claim that (a) non-acceptability of the hybrid counterparts corresponding to arguments having rebuttals and (b) relevance of rebuttals are necessary for attack on inference to be
effective. However, we leave open the question whether they are sufficient.
5. Conclusion
We showed how the model of representation and evaluation of arguments elaborated in (Selinger 2014) can be enriched in order to cover the case of conductive reasoning. The extended model allowed us
to define in formal terms two kinds of attack relation, namely attack on conclusion and attack on premise. However, the definition of attack on inference requires further extension of the model. In
order to initiate more profound studies, we outlined a possible direction of making such an extension.
I would like to thank Professor David Hitchcock for his inspiring remarks concerning my ideas, and for his helpful terminological suggestions.
i. Let us note that Walton & Gordon (2013) interpret both pro-premises as supporting the claim independently of each other, and they draw separate arrows connecting each pro-premise with the
conclusion, which represent convergent reasoning. However, it seems to be problematic whether the premise The picture will be gone tomorrow alone (i.e. without any further information about the
movie) actually supports the conclusion.
ii. In order to avoid this assumption the acceptability of an independent set of sentences can be calculated as the product of the values of its elements. Thus the acceptability of a conjunction can
be smaller than the acceptability of its components considered separately (cf. Selinger 2014).
Blair, J. A., & Johnson, R. H. (2011). Conductive argument: an overlooked type of defeasible reasoning. London: King’s College Publications.
Selinger, M. (2014). Towards formal representation and evaluation of arguments. Argumentation, 26(3), 379-393. (K. Budzynska, & M. Koszowy (Eds.), The Polish School of Argumentation, the special the
issue of the journal.)
Vorobej, M. (1995). Hybrid arguments. Informal Logic, 17(2), 289-296.
Wellman, C. (1971). Challenge and response: justification in ethics. Carbondale: Southern Illinois University Press.
Walton, D., & Gordon, T. F. (2013). How to formalize informal logic. In M. Lewiński, & D. Mohammed (Eds.), Proceedings of the 10th OSSA Conference at the University of Windsor, May 2013 (pp. 1-13).
Windsor: Centre for Research in Reasoning, and the University of Windsor. | {"url":"https://rozenbergquarterly.com/issa-proceedings-2014-a-formal-model-of-conductive-reasoning/","timestamp":"2024-11-09T03:03:56Z","content_type":"application/xhtml+xml","content_length":"104047","record_id":"<urn:uuid:bc299108-e3c7-4b0d-aaf1-74226fd18594>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00140.warc.gz"} |
finds an x that solves the matrix equation m.x==b.
finds an x that solves the array equation a.x==b.
open allclose all
Basic Examples (3)
Scope (16)
Basic Uses (9)
Solve a case where is a matrix:
Find a solution for an exact, rectangular matrix:
Compute a solution at arbitrary precision:
Solve the system when is a matrix:
Solve for CenteredInterval matrices:
Find random representatives mrep and brep of m and b:
Verify that sol contains LinearSolve[mrep,brep]:
Solve for when is a matrix of different dimensions:
When no right‐hand side for is given, a LinearSolveFunction is returned:
This contains data to solve the problem quickly for a few values of :
Special Matrices (6)
As the result is typically not sparse, the result is returned as an ordinary list:
Sparse methods are used to efficiently solve sparse matrices:
Solve a system with structured matrices:
Use a different type of matrix structure:
An identity matrix always produces a trivial solution:
Solve a linear system whose coefficient matrix is a Hilbert matrix:
Solve a system whose coefficients are univariate polynomials of degree :
Options (7)
Method (6)
"Cholesky" (1)
"Krylov" (2)
The following suboptions can be specified for the method "Krylov":
Possible settings for "Method" include:
Possible settings for "Preconditioner" include:
Possible suboptions for "Preconditioner" include:
"Multifrontal" (1)
Applications (11)
Spans and Linear Independence (3)
The following three vectors are not linearly independent:
The equation with a generic right-hand side does not have a solution:
Equivalently, the equation with the identity matrix on the right-hand side has no solution:
The following three vectors are linearly independent:
The equation with a generic right-hand side has a solution:
Equivalently, the equation with the identity matrix on the right-hand side has a solution:
The solution is the inverse of :
Determine if the following vectors are linearly independent or not:
As does not have a solution for an arbitrary , they are not linearly independent:
Equation Solving and Invertibility (6)
Solve the following system of equations:
Rewrite the system in matrix form:
Use LinearSolve to find a solution:
Show that the solution is unique using NullSpace:
Verify the result using SolveValues:
Find all solutions of the following system of equations:
First, write the coefficient matrix , variable vector and constant vector :
LinearSolve gives a particular solution:
NullSpace gives a basis for solutions to the homogeneous equation :
Define to be an arbitrary linear combination of the elements of :
The general solution is the sum of and :
Determine if the following matrix has an inverse:
Since the system has no solution, does not have an inverse:
Verify the result using Inverse:
Determine if the following matrix has a nonzero determinant:
Since the system has a solution, 's determinant must be nonzero:
Confirm the result using Det:
Find the inverse of the following matrix:
To find the inverse, first solve the system :
Verify the result using Inverse:
Solve the system , with several different by means of computing a LinearSolveFunction:
Perform the computation by inverting the matrix and multiplying by the inverse:
The results are practically identical, even though LinearSolveFunction is multiple times faster:
Calculus (2)
Newton's method for finding a root of a multivariate function:
Compare with the answer found by FindRoot:
Approximately solve the boundary value problem using discrete differences:
Properties & Relations (9)
For an invertible matrix , LinearSolve[m,b] gives the same result as SolveValues for the corresponding system of equations:
Create the corresponding system of linear equations:
Confirm that SolveValues gives the same result:
LinearSolve always returns the trivial solution to the homogenous equation :
Use NullSpace to get the complete spanning set of solutions if is singular:
Compare with the result of SolveValues:
If is nonsingular, the solution of is the inverse of when is the identity matrix:
In this case there is no solution to :
Use LeastSquares to minimize :
Compare to general minimization:
If can be solved, LeastSquares is equivalent to LinearSolve:
For a square matrix, LinearSolve[m,b] has a solution for a generic b iff Det[m]!=0:
For a square matrix, LinearSolve[m,b] has a solution for a generic b iff m has full rank:
For a square matrix, LinearSolve[m,b] has a solution for a generic b iff m has an inverse:
For a square matrix, LinearSolve[m,b] has a solution for a generic b iff m has a trivial null space:
Possible Issues (3)
Solution found for an underdetermined system is not unique:
All solutions are found by Solve:
LinearSolve gave the solution corresponding to :
With ill-conditioned matrices, numerical solutions may not be sufficiently accurate:
The solution is more accurate if sufficiently high precision is used:
Some of the linear solvers available are not deterministic. Set up a system of equations:
The "Pardiso" solver is not deterministic:
The Automatic solver method is deterministic:
Wolfram Research (1988), LinearSolve, Wolfram Language function, https://reference.wolfram.com/language/ref/LinearSolve.html (updated 2024).
Wolfram Research (1988), LinearSolve, Wolfram Language function, https://reference.wolfram.com/language/ref/LinearSolve.html (updated 2024).
Wolfram Language. 1988. "LinearSolve." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2024. https://reference.wolfram.com/language/ref/LinearSolve.html.
Wolfram Language. (1988). LinearSolve. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/LinearSolve.html | {"url":"https://reference.wolfram.com/language/ref/LinearSolve?view=all","timestamp":"2024-11-03T22:43:33Z","content_type":"text/html","content_length":"228946","record_id":"<urn:uuid:ef3a640d-1c78-496c-a1e8-f86b60ab372e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00546.warc.gz"} |
Key Concepts & Glossary
Key Equations
Average rate of change [latex]\frac{\Delta y}{\Delta x}=\frac{f\left({x}_{2}\right)-f\left({x}_{1}\right)}{{x}_{2}-{x}_{1}}[/latex]
Key Concepts
• A rate of change relates a change in an output quantity to a change in an input quantity. The average rate of change is determined using only the beginning and ending data.
• Identifying points that mark the interval on a graph can be used to find the average rate of change.
• Comparing pairs of input and output values in a table can also be used to find the average rate of change.
• An average rate of change can also be computed by determining the function values at the endpoints of an interval described by a formula.
• The average rate of change can sometimes be determined as an expression.
• A function is increasing where its rate of change is positive and decreasing where its rate of change is negative.
• A local maximum is where a function changes from increasing to decreasing and has an output value larger (more positive or less negative) than output values at neighboring input values.
• A local minimum is where the function changes from decreasing to increasing (as the input increases) and has an output value smaller (more negative or less positive) than output values at
neighboring input values.
• Minima and maxima are also called extrema.
• We can find local extrema from a graph.
• The highest and lowest points on a graph indicate the maxima and minima.
absolute maximum
the greatest value of a function over an interval
absolute minimum
the lowest value of a function over an interval
average rate of change
the difference in the output values of a function found for two values of the input divided by the difference between the inputs
decreasing function
a function is decreasing in some open interval if [latex]f\left(b\right)<f\left(a\right)[/latex] for any two input values [latex]a[/latex] and [latex]b[/latex] in the given interval where [latex]
increasing function
a function is increasing in some open interval if [latex]f\left(b\right)>f\left(a\right)[/latex] for any two input values [latex]a[/latex] and [latex]b[/latex] in the given interval where [latex]
local extrema
collectively, all of a function’s local maxima and minima
local maximum
a value of the input where a function changes from increasing to decreasing as the input value increases.
local minimum
a value of the input where a function changes from decreasing to increasing as the input value increases.
rate of change
the change of an output quantity relative to the change of the input quantity | {"url":"https://courses.lumenlearning.com/odessa-collegealgebra/chapter/key-concepts-glossary-54/","timestamp":"2024-11-07T14:14:45Z","content_type":"text/html","content_length":"50050","record_id":"<urn:uuid:8c6d114a-2aa0-417a-8309-5f7db8a3d57c>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00054.warc.gz"} |
Hz to Watts Conversion Calculator - GEGCalculators
Hz to Watts Conversion Calculator
Hertz (Hz) and watts (W) are distinct units measuring different properties. Hz denotes frequency, while watts represent power. Converting Hz to watts requires additional information about the
specific electrical system and equipment involved. The two units are not directly interchangeable, and conversion depends on voltage, current, and the type of electrical load.
Hz to Watts Conversion Calculator
Apparent Power (VA):
1. How do you convert Hz to watts?
□ Estimation: Hz (Hertz) and watts (W) measure different properties. Hz measures frequency, while watts measure power. The conversion depends on the specific electrical system and equipment.
2. What is 60 Hz in watts?
□ Estimation: 60 Hz is a measure of frequency, not power (watts). It is commonly used for alternating current (AC) electrical systems.
3. Are Hz and watts the same?
□ Estimation: No, Hz and watts are not the same. Hz measures frequency, while watts measure power.
4. What is watts per hertz?
□ Estimation: Watts per Hertz is not a standard unit. It is not commonly used for electrical measurements.
5. Does 60Hz mean 60 watts?
□ Estimation: No, 60 Hz does not mean 60 watts. Hz measures the frequency of AC power, while watts measure the amount of power consumed or generated.
6. How many Hz is a 60 watt bulb?
□ Estimation: The frequency (Hz) of a bulb is not typically associated with its power rating in watts. A 60-watt bulb operates at the standard AC frequency of the electrical system (e.g., 60 Hz
in the U.S.).
7. What is 60 Hz equal to?
□ Estimation: 60 Hz is equal to 60 cycles or oscillations per second. It is a common frequency for AC electrical power in many countries.
8. Is 60Hz a lot of electricity?
□ Estimation: Hz (Hertz) measures the frequency of AC power, not the quantity of electricity (watts). 60 Hz is a standard frequency used in many electrical systems.
9. What is 50 Hz power?
□ Estimation: 50 Hz power refers to electrical power with a frequency of 50 Hertz. It is commonly used in many countries, including parts of Europe and Asia.
10. How do you calculate watts?
□ Estimation: Watts (W) are calculated by multiplying the voltage (V) by the current (I) in an electrical circuit. The formula is W = V * I.
11. Does Hz affect wattage?
□ Estimation: Hz (frequency) does not directly affect wattage (power). Wattage depends on voltage, current, and the type of electrical load.
12. What is the formula for calculating watts?
□ Estimation: The formula for calculating watts is W = V * I, where W is the power in watts, V is the voltage in volts, and I is the current in amperes.
13. What is the relationship between watts and frequency?
□ Estimation: Watts (power) and frequency (Hz) are independent properties in electrical systems. Frequency affects the behavior of AC circuits but does not directly determine power.
14. Can you convert Hz to amps?
□ Estimation: Hz (frequency) cannot be directly converted to amperes (amps). Amperes depend on current, which is related to voltage and resistance in AC circuits.
15. What is watt in simple words?
□ Estimation: In simple words, a watt (W) is a measure of how much energy is used or produced per second. It is a unit of power and commonly used to quantify electrical energy.
16. How do you calculate watts per hour?
□ Estimation: Watts per hour is not a standard unit. To calculate energy usage, you can multiply the power in watts by the number of hours the device is active to get watt-hours (Wh).
17. What happens if I use a 50 Hz appliance in 60 Hz?
□ Estimation: Using a 50 Hz appliance on a 60 Hz power supply may lead to increased operation speed and potential overheating or damage due to the difference in frequency. Compatibility should
be checked.
18. Why is power at 60 Hz?
□ Estimation: The choice of 60 Hz for power frequency in the U.S. and some other countries was historically based on a balance between technical considerations and economic factors.
19. What does 60Hz mean on a LED light bulb?
□ Estimation: 60 Hz mentioned on an LED light bulb typically indicates that it is designed to operate on a 60 Hz AC power supply, which is common in the United States.
20. What is the conversion for 60 watt bulbs?
□ Estimation: There is no direct conversion for 60-watt bulbs to Hz or frequency. The wattage of a bulb indicates its power consumption or brightness, not its frequency.
21. What is a 60 watt bulb equal to?
□ Estimation: A 60-watt bulb is a common incandescent light bulb that produces approximately 800 lumens of light. It is a measure of its brightness, not frequency.
22. What is 60 Hz in volts?
□ Estimation: 60 Hz refers to the frequency of AC electrical power, not voltage. The voltage can vary depending on the electrical system (e.g., 120V or 240V in the U.S.).
23. What does 60Hz mean on an appliance?
□ Estimation: 60 Hz mentioned on an appliance indicates that it is designed to operate on a 60 Hz AC power supply, which is common in many parts of the world.
24. Why does America use 110V 60Hz?
□ Estimation: The U.S. uses a 110V (or 120V) 60 Hz power supply due to historical reasons and a choice made by electrical engineers and industry standards.
25. How many hertz is 110 volts?
□ Estimation: Voltage (e.g., 110V) and frequency (e.g., 60 Hz) are independent electrical properties. 110V refers to the voltage level, while 60 Hz refers to the frequency of AC power.
26. What is the best frequency for electricity?
□ Estimation: There is no single “best” frequency for electricity. Different regions use different frequencies (e.g., 50 Hz or 60 Hz) based on historical and technical considerations.
27. What kind of current runs at 60Hz?
□ Estimation: A 60 Hz current is typically associated with alternating current (AC) used in many countries, including the United States.
28. How many volts is 50 Hz?
□ Estimation: The voltage (e.g., 230V) and frequency (e.g., 50 Hz) of electrical power are independent. 50 Hz power is commonly used in countries with various voltage levels.
29. How many amps is 50 Hz?
□ Estimation: The current (amps) in an electrical circuit depends on the voltage and the resistance or impedance of the circuit. Frequency (Hz) does not directly determine current.
30. Which is better 50Hz or 60Hz electricity?
□ Estimation: The choice between 50 Hz and 60 Hz electricity depends on regional standards and requirements. Neither is inherently better; they serve their respective regions effectively.
31. How much is 1000 watts of power?
□ Estimation: 1000 watts (1 kilowatt) is equivalent to 1,000 joules of energy per second. It is a common unit of electrical power.
32. Is 240 watts a lot of electricity?
□ Estimation: Whether 240 watts is considered a lot of electricity depends on the context. It is relatively low for many appliances but can be significant for others.
33. How many watts does a fridge use?
□ Estimation: The power consumption of a refrigerator can vary widely depending on its size, efficiency, and usage. A typical household fridge may use around 100-800 watts.
34. What is the relationship between Hz and power?
□ Estimation: The relationship between Hz (frequency) and power (watts) is that the frequency of AC power affects the behavior of electrical devices and circuits, but it does not directly
determine power.
35. How does Hz affect electricity?
□ Estimation: Hz (frequency) affects electricity by determining the rate at which the direction of current in AC circuits alternates. It influences the behavior of devices like motors and
36. Is 50 Hz safer than 60 Hz?
□ Estimation: Neither 50 Hz nor 60 Hz is inherently safer than the other. Electrical safety depends on various factors, including proper grounding, insulation, and electrical standards.
37. How do you calculate watts from volts and Hertz?
□ Estimation: You cannot directly calculate watts from volts and Hertz. Watts depend on voltage, current, and the type of load in an electrical circuit.
38. How many watts is 230V 50Hz?
□ Estimation: The power (watts) of a 230V 50Hz electrical system depends on the current and the type of load connected. It is calculated using the formula W = V * I.
39. What is the formula for 1 watt of power?
□ Estimation: 1 watt of power is equal to the product of 1 volt and 1 ampere. The formula is W = 1V * 1A.
40. How do you convert frequency to electricity?
□ Estimation: Frequency (Hz) and electricity are related but distinct concepts. Frequency refers to the rate of alternating current oscillations, while electricity involves the flow of
electrons in a circuit.
41. Does higher frequency mean more power?
□ Estimation: No, higher frequency (Hz) does not necessarily mean more power (watts). Power depends on voltage, current, and the type of electrical load.
42. Does higher frequency use more power?
□ Estimation: Not necessarily. While some devices may consume more power at higher frequencies, power consumption depends on the specific device and its design.
43. How do you calculate amps from hertz?
□ Estimation: You cannot directly calculate amperes (amps) from Hertz (frequency). Amperes depend on voltage and the resistance or impedance of the circuit.
44. How many amps is 208V 60Hz?
□ Estimation: The current (amps) in a circuit with 208V and 60Hz depends on the electrical load and resistance. It is calculated using Ohm’s law (I = V / R).
45. Can voltage be in Hz?
□ Estimation: Voltage (V) and Hertz (Hz) are distinct units used to describe different aspects of an electrical system. Voltage cannot be in Hz, as they measure different properties.
46. Does higher watts mean more power?
□ Estimation: Yes, higher watts (W) typically indicate more power in an electrical device. It represents the rate at which energy is used or produced.
47. What does 750 watts mean?
□ Estimation: 750 watts (W) represent a power level of 750 joules of energy per second. It is a unit commonly used for electrical appliances.
48. Is 1000 watts a lot?
□ Estimation: Whether 1000 watts is considered a lot depends on the context. It is a moderate amount of power and can vary from being relatively low for some applications to substantial for
49. What is 1 watt for 1 hour?
□ Estimation: 1 watt-hour (Wh) represents the consumption or production of 1 watt of power for 1 hour. It is a unit of energy.
50. How many watts does a TV use?
□ Estimation: The power consumption of a TV varies depending on its size and type (LED, LCD, plasma, etc.). A typical LED TV may use around 30-100 watts.
51. How much does a 100 watt bulb use in an hour?
□ Estimation: A 100-watt bulb consumes 100 watt-hours (Wh) of energy in 1 hour of operation.
52. Can I use 220V 50Hz in the USA?
□ Estimation: The U.S. primarily uses a 120V 60Hz power supply. Using a 220V 50Hz appliance in the USA may require a voltage converter and compatibility check.
53. Can I use a 60Hz appliance on 50Hz power supply?
□ Estimation: Using a 60Hz appliance on a 50Hz power supply may result in the appliance running slightly slower, potentially affecting its performance. Compatibility should be checked.
54. What happens if you use a 220V 50Hz appliance in a 220V 60Hz power supply?
□ Estimation: Using a 220V 50Hz appliance on a 220V 60Hz power supply may lead to increased operation speed and potential overheating or damage due to the difference in frequency. Compatibility
should be checked.
GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs. With over 300 calculators covering finance, health, science, mathematics, and
more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable
for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and
up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations.
Leave a Comment | {"url":"https://gegcalculators.com/hz-to-watts-conversion-calculator/","timestamp":"2024-11-09T13:18:12Z","content_type":"text/html","content_length":"176486","record_id":"<urn:uuid:83dfa283-8d2f-418d-82a6-7d8aa3599852>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00601.warc.gz"} |
Essential Mathematics for Economic Analysis (6th Edition) - eBook
Acquire the critical mathematical skills you need to master and succeed in Economics.
Essential Mathematics for Economic Analysis, 6th edition, (PDF) by Sydsaeter, Hammond, Strøm, and Carvajal is a global best-selling textbook providing an extensive introduction to all the
mathematical resources you need to study economics at an intermediate level.
This ebook has been applauded for covering various mathematical knowledge, techniques, and tools, progressing from elementary calculus to more advanced topics.
With a plethora of practice examples, questions, and solutions integrated throughout, this latest edition provides you a wealth of opportunities to apply them in specific economic situations, helping
you develop key mathematical skills as your course progresses.
Key features:
Numerous exercises and worked examples throughout each chapter allow you to practice skills and improve techniques.
Review exercises at the end of each chapter test your understanding of a topic, allowing you to progress confidently.
Solutions to exercises are provided in the book and online, showing you the steps needed to arrive at the correct answer.
Pair this text with MyLab® Math
MyLab® is the teaching and learning platform that empowers you to reach every student. By combining trusted author content with digital tools and a flexible platform, MyMathLab personalises the
learning experience and improves results for each student.
If you would like to purchase both the physical text and MyMathLab, search for:
9781292359342 Essential Mathematics for Economic Analysis, 6th edition with MyMathLab
The package (not all available) consists of:
978-1292359281 Essential Mathematics for Economic Analysis, 6E
978-1292359311 Essential Mathematics for Economic Analysis 6th edition MyMathLab
978-1292359335 Essential Mathematics for Economic Analysis Sixth Edition Pearson eText
MyLab® Math is not included. If MyLab is a recommended/mandatory component of the course, please ask your instructor for the correct ISBN. MyLab should only be purchased when required by an
instructor. Instructors, contact your Pearson representative for more information. We don’t sell MyLab.
Additional ISBNs: 9781292359281, 9781292359298, 9781292359328, 2021006079, 2021006080, 978-1292359281, 978-1292359298, 978-1292359328, 9781292359311, 9781292359335, 9781292359328, 978-1292359328,
NOTE: This sale only includes the eBook Essential Mathematics for Economic Analysis, 6th edition, in PDF. No access codes are included.
There are no reviews yet.
Be the first to review “Essential Mathematics for Economic Analysis (6th Edition) – eBook”
You must be logged in to post a review. | {"url":"https://textbooks.dad/product/essential-mathematics-for-economic-analysis-6th-edition-ebook/","timestamp":"2024-11-15T01:16:24Z","content_type":"text/html","content_length":"109927","record_id":"<urn:uuid:68b0e0a9-f4e2-48c4-b042-24567bed4f58>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00059.warc.gz"} |
HP 39gII considered annoying, part 1 - Printable Version
HP 39gII considered annoying, part 1 - Pete Wilson - 05-20-2013
My first thought after (finally) getting a 39gII was that it was really worrying as a basis for the HP Prime, a new flagship for HP.
I intended to post some issues I had, but I had opportunity to run a quick calculation and compared my (emulated) 48gx (though my 50g would have been similar) to the 39gII. While involving a unit
calculation is probably extremely unfair (I can't imagine improving the 50g model much), it does hilight other issues I have along the way. I would welcome any suggestions on improving the use/flow
of the 39gII as well.
So the (lunch) problem was to figure out if the first atomic bomb, apparently based around 100 lbs of uranium, was terribly inefficient given the quoted fact that no six inch pure uranium nuggets are
floating around in space, since that size would go critical (as opposed to the potential for other pure nuggets). (These figures aren't exact, but that was the remembered facts.)
A quick check of The Elements says the density of Uranium is 19.05 g/cm^3. A quick Google says the volume of a sphere is 4/3*pi*r^3.
On the 48gx:
100 rShift Units Mass LB
19.05 g rShift Units VOL rShift cm^3
/ lShift Units UBASE
gives 2.381E-3_m^3 as the volume of 100 lbs of Uranium. Switching modes to fix 6:
rShift MODES CHOOSE dwn OK right 6 OK OK
yields 0.002381_m^3. Compute the diameter with:
3 * 4 / lShift pi / lShift >Num 3 rShift x_root_y 2 *
so 0.165675_m in diameter
rShift UNITS LENG lShift IN
or 6.522644_in in diameter. So not too wasteful.
Contrast with doing this on a 39gII:
Turn it on and wait the two seconds to boot up.
100 Math Units dwn*7 right dwn*3 OK / 19.05 Math right dwn OK
* 1 Math up*3 right dwn OK ENTER
(Couldn't figure out best way to enter 19.05_g/cm^3.)
Giving us 5.249_(lb*g^-1*cm^3) which we base with:
Math up*4 right dwn*4 OK shift ANS ENTER
which gives us 2.381E-3_m^3 then we change modes to Fix 6:
shift Modes dwn Choose dwn OK right 6 OK HOME
Since the display doesn't change the old answers (!) we do:
shift ANS ENTER
and see 0.00238106_m^3 (what happened to fix 6???) of uranium. Then we compute the diameter of a sphere of that volume:
2 * ( 3 shift NTHROOT ( 3 / 4 * up COPY bs*4 / shift pi ENTER
(We lost precision, but we can't take NTHROOT of a unit and I couldn't figure out a way to strip units from ANS.)
So now we have 0.165675 meters which we convert to inches
Math right dwn*2 OK up COPY Math dwn*2 right OK , 1 Math right dwn*15 OK ENTER
And we have 6.522638_inch.
I think you can see some annoyances :)
Re: HP 39gII considered annoying, part 1 - Chris Smith - 05-21-2013
Hmm that does look a little frustrating.
I assume the Prime is probably the same. Looks pretty but functionally retarded for the sake of education *choke* clear my throat: I mean government mandated training.
Time to start stockpiling 50g's with the ammo and canned goods?
Perhaps I'm getting cynical in my not so old age...
Edited: 21 May 2013, 7:35 a.m.
Re: HP 39gII considered annoying, part 1 - Gilles Carpentier - 05-21-2013
Quote: I assume the Prime is probably the same
At least the Prime have a 'Units' key and a '_' key on its keyboard (look the pictures) , so I suppose there are no need to navigate in complex menus for work with units. And a "touch screen" allows
many new possibilities in human-calc-interface. For example, I don't know if it is the case, but you can easily imagine that pushing 'units' key displays all possibilities of units on the touch
so wait a see...
I also notice that the Prime has a 'non-shifted' EEX key ;) The shifted EEX on the 39Gii is annoying
And the Prime have "user keyboard" So I suppose you can use this to personalize the keyboard.
Edited: 21 May 2013, 8:16 a.m.
Re: HP 39gII considered annoying, part 1 - Chris Smith - 05-21-2013
Having used and written software for touch screen devices going back to the 90's, I still don't buy it. Without physical tactile feedback, its hard to do anything in a deterministic fashion when
using it.
Look at the Prime interface - it has menus and icons on the same screen. Its going to be really hard to use them accurately, just like its impossible to draw on an iPad past child-like splodging
despite all those terrible adverts to the contrary.
The non shifted EEX is good though. I don't know why that was shifted to start with as even the cheap no brand scientific calculators you can get in the UK "PoundLand" chain for £1 don't have shifted
EEX :)
For ref the nSpire has units and _ right there, but the implementation is still horrid compared to the 50g.
Re: HP 39gII considered annoying, part 1 - Gilles Carpentier - 05-21-2013
Contrast with doing this on a 39gII: Turn it on and wait the two seconds to boot up.
100 Math Units dwn*7 right dwn*3 OK / 19.05 Math right dwn OK
* 1 Math up*3 right dwn OK ENTER
(Couldn't figure out best way to enter 19.05_g/cm^3.)
Giving us 5.249_(lb*g^-1*cm^3) which we base with:
Math up*4 right dwn*4 OK shift ANS ENTER
which gives us 2.381E-3_m^3
Less keystroke is:
Math Unit 1 2 100 Math 8 4 / 19.05 Math -> 2 <- ( -> / Math 5 2 ) ENTER
-> 2.381E-3_m^3
But I prefer the 48/50 way to do and the RPN entry and to see all intermediate results in an interactive way
Edited: 21 May 2013, 11:38 a.m.
Re: HP 39gII considered annoying, part 1 - Pete Wilson - 05-21-2013
Unless you memorize the shortcuts, the 8 4 and 1 2 are really incredibly poor. At the very least, they should display in front of each line, so you aren't counting lines or arrowing down to see what
it was - that way you would stand a chance of using some of them. Of course, no pagedown keys. I don't think anyone could say learning Math 5 G for inch (oops! I meant 3 G) is reasonable.
Re: HP 39gII considered annoying, part 1 - Gilles Carpentier - 05-22-2013
I totally agree ...
I hope a huge improvment for this with Prime and perhaps with future ROM for the 39GII (?). The way HP39GII works for this is not ergonomic at all
I also noticed that you can use the first letter in the choose box... But it does not help so much | {"url":"https://archived.hpcalc.org/museumforum/printthread.php?tid=243945","timestamp":"2024-11-06T16:40:51Z","content_type":"application/xhtml+xml","content_length":"10403","record_id":"<urn:uuid:12822832-3490-4a9c-8f48-5398634a5461>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00757.warc.gz"} |
VISVX: Vanguard Small Cap Value Index Fund | Logical Invest
What do these metrics mean?
'The total return on a portfolio of investments takes into account not only the capital appreciation on the portfolio, but also the income received on the portfolio. The income typically consists of
interest, dividends, and securities lending fees. This contrasts with the price return, which takes into account only the capital gain on an investment.'
Which means for our asset as example:
• The total return, or increase in value over 5 years of Vanguard Small Cap Value Index Fund is 74%, which is lower, thus worse compared to the benchmark SPY (109.2%) in the same period.
• Compared with SPY (33.3%) in the period of the last 3 years, the total return, or performance of 21.3% is lower, thus worse.
'The compound annual growth rate (CAGR) is a useful measure of growth over multiple time periods. It can be thought of as the growth rate that gets you from the initial investment value to the ending
investment value if you assume that the investment has been compounding over the time period.'
Using this definition on our asset we see for example:
• The compounded annual growth rate (CAGR) over 5 years of Vanguard Small Cap Value Index Fund is 11.7%, which is smaller, thus worse compared to the benchmark SPY (15.9%) in the same period.
• Looking at annual return (CAGR) in of 6.7% in the period of the last 3 years, we see it is relatively smaller, thus worse in comparison to SPY (10.1%).
'Volatility is a statistical measure of the dispersion of returns for a given security or market index. Volatility can either be measured by using the standard deviation or variance between returns
from that same security or market index. Commonly, the higher the volatility, the riskier the security. In the securities markets, volatility is often associated with big swings in either direction.
For example, when the stock market rises and falls more than one percent over a sustained period of time, it is called a 'volatile' market.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (20.9%) in the period of the last 5 years, the volatility of 26.6% of Vanguard Small Cap Value Index Fund is higher, thus worse.
• During the last 3 years, the historical 30 days volatility is 20.4%, which is greater, thus worse than the value of 17.6% from the benchmark.
'Risk measures typically quantify the downside risk, whereas the standard deviation (an example of a deviation risk measure) measures both the upside and downside risk. Specifically, downside risk in
our definition is the semi-deviation, that is the standard deviation of all negative returns.'
Which means for our asset as example:
• The downside deviation over 5 years of Vanguard Small Cap Value Index Fund is 19%, which is higher, thus worse compared to the benchmark SPY (14.9%) in the same period.
• Looking at downside risk in of 14.1% in the period of the last 3 years, we see it is relatively higher, thus worse in comparison to SPY (12.3%).
'The Sharpe ratio (also known as the Sharpe index, the Sharpe measure, and the reward-to-variability ratio) is a way to examine the performance of an investment by adjusting for its risk. The ratio
measures the excess return (or risk premium) per unit of deviation in an investment asset or a trading strategy, typically referred to as risk, named after William F. Sharpe.'
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (0.64) in the period of the last 5 years, the Sharpe Ratio of 0.35 of Vanguard Small Cap Value Index Fund is lower, thus worse.
• During the last 3 years, the Sharpe Ratio is 0.2, which is lower, thus worse than the value of 0.43 from the benchmark.
'The Sortino ratio improves upon the Sharpe ratio by isolating downside volatility from total volatility by dividing excess return by the downside deviation. The Sortino ratio is a variation of the
Sharpe ratio that differentiates harmful volatility from total overall volatility by using the asset's standard deviation of negative asset returns, called downside deviation. The Sortino ratio takes
the asset's return and subtracts the risk-free rate, and then divides that amount by the asset's downside deviation. The ratio was named after Frank A. Sortino.'
Which means for our asset as example:
• Compared with the benchmark SPY (0.9) in the period of the last 5 years, the downside risk / excess return profile of 0.49 of Vanguard Small Cap Value Index Fund is smaller, thus worse.
• During the last 3 years, the excess return divided by the downside deviation is 0.29, which is lower, thus worse than the value of 0.62 from the benchmark.
'The Ulcer Index is a technical indicator that measures downside risk, in terms of both the depth and duration of price declines. The index increases in value as the price moves farther away from a
recent high and falls as the price rises to new highs. The indicator is usually calculated over a 14-day period, with the Ulcer Index showing the percentage drawdown a trader can expect from the high
over that period. The greater the value of the Ulcer Index, the longer it takes for a stock to get back to the former high.'
Using this definition on our asset we see for example:
• The Ulcer Index over 5 years of Vanguard Small Cap Value Index Fund is 12 , which is higher, thus worse compared to the benchmark SPY (9.32 ) in the same period.
• Looking at Ulcer Ratio in of 8.99 in the period of the last 3 years, we see it is relatively lower, thus better in comparison to SPY (10 ).
'A maximum drawdown is the maximum loss from a peak to a trough of a portfolio, before a new peak is attained. Maximum Drawdown is an indicator of downside risk over a specified time period. It can
be used both as a stand-alone measure or as an input into other metrics such as 'Return over Maximum Drawdown' and the Calmar Ratio. Maximum Drawdown is expressed in percentage terms.'
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (-33.7 days) in the period of the last 5 years, the maximum reduction from previous high of -45.4 days of Vanguard Small Cap Value Index Fund is lower, thus worse.
• Compared with SPY (-24.5 days) in the period of the last 3 years, the maximum drop from peak to valley of -21.3 days is larger, thus better.
'The Maximum Drawdown Duration is an extension of the Maximum Drawdown. However, this metric does not explain the drawdown in dollars or percentages, rather in days, weeks, or months. It is the
length of time the account was in the Max Drawdown. A Max Drawdown measures a retrenchment from when an equity curve reaches a new high. It’s the maximum an account lost during that retrenchment.
This method is applied because a valley can’t be measured until a new high occurs. Once the new high is reached, the percentage change from the old high to the bottom of the largest trough is
Which means for our asset as example:
• Compared with the benchmark SPY (488 days) in the period of the last 5 years, the maximum days under water of 527 days of Vanguard Small Cap Value Index Fund is larger, thus worse.
• During the last 3 years, the maximum days below previous high is 527 days, which is greater, thus worse than the value of 488 days from the benchmark.
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks
(equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum duration it does not measure only one drawdown event but calculates the average of
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (123 days) in the period of the last 5 years, the average time in days below previous high water mark of 147 days of Vanguard Small Cap Value Index Fund is larger,
thus worse.
• Compared with SPY (176 days) in the period of the last 3 years, the average days under water of 198 days is higher, thus worse. | {"url":"https://logical-invest.com/app/mutual_fund/visvx/vanguard-small-cap-value-index-fund","timestamp":"2024-11-11T08:25:38Z","content_type":"text/html","content_length":"59386","record_id":"<urn:uuid:aa258aac-5124-451c-aab6-a902a0fcf003>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00435.warc.gz"} |
Shunting Puzzles
Transum's Tram Shunting Puzzles
These puzzles are Transum's versions of the shunting or switching puzzles made popular by train enthusiasts and puzzle solvers.
The objective is to move the trams, represented by coloured circles, onto their parking spots, represented by circles of the same colour but a dashed border. There are buttons to switch the points
and make the tram move to the left, to the right, or to stop.
The green buttons control the green tram
Left Stop Right
There is a certain amount of tolerance at the parking spaces so the tram does not have to stop directly over the parking spot but it has to overlap most of the parking space.
Trams can go over parking spaces during the course of the action but you should be careful that two trams don't crash into each other. A crash will require the operation to start again from the
When all of the trams are parked in the correct places you will have the option of claiming a Transum virtual trophy. On the trophy will be recorded the time it took you to complete the shunting
puzzle. You can try as often as you wish and look for time saving strategies so that you can improve on your personal best for each level.
If identifying the colours of the trams is difficult you can add colour names.
You might also like to look at our collection of Classic Shunting Puzzles.
There is are example solutions to these puzzles but they are only available to those who have a Transum Subscription.
Best Times
As the objective is to complete the puzzle in the shortest amount of time you might be interested to know what the current fastest times are. Those people who have claimed a Transum Trophy for
completing a puzzle have their times entered into our database. You can see the current leaders on our Best Shunting Times page.
Friday, July 18, 2014
"Though this activity is not explicitly mathematical it does require abstract thought, logical deduction and the generation of a strategy. The real skill is to have more than one tram moving at the
same time to minimise the time taken to complete the task. Currently the record for level 1 is 27 seconds! Can you beat that?"
Rob, Hull, England
Wednesday, February 22, 2017
"Could you put letters or numbers in the circles, being colour blind makes what could be an entertaining puzzle a nigh on impossible task beyond the simple solutions.
[Transum: Thanks for the prompt Rob. There is now the option to add colour names to the trams, destination circles and buttons. You will find the link just above these comments at the bottom of the
left column of text.]"
Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your
If you liked this ... Classic dynamic program A number puzzle A strategy game
Car Park Puzzle Online Logo Go Figure Remainder Race
Can you get your car out of the very crowded car park An online version of the Logo programming Arrange the digits one to nine (with the help of A game involving chance and choice requiring
by moving other cars forwards or backwards? language with 30 mathematical challenges. tags) to make the four calculations correct. an ability to calculate remainders.
The short web address is: The short web address is: The short web address is: The short web address is:
Transum.org/go/?to=carpark Transum.org/go/?to=logo Transum.org/go/?to=gofigure Transum.org/go/?to=remainder | {"url":"https://www.transum.org/Software/Shunting/","timestamp":"2024-11-03T06:19:21Z","content_type":"text/html","content_length":"43753","record_id":"<urn:uuid:ba995b04-351e-4bb8-94a9-c081218c1098>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00327.warc.gz"} |
Advanced Model Recursions
Advanced Model Recursions
The surrogate and nested model constructs admit a wide variety of multi-iterator, multi-model solution approaches. For example, optimization within optimization (for hierarchical multidisciplinary
optimization), uncertainty quantification within uncertainty quantification (for interval-valued probability, second-order probability, or Dempster-Shafer approaches to mixed aleatory-epistemic UQ),
uncertainty quantification within optimization (for optimization under uncertainty), and optimization within uncertainty quantification (for uncertainty of optima) are all supported, with and without
surrogate model indirection. Three important examples are highlighted: mixed aleatory-epistemic UQ, optimization under uncertainty, and surrogate-based UQ.
In addition, concurrency can be exploited across sub-iteration instances. For example, multiple inner loop UQ assessments can be performed simultaneously within optimization under uncertainty or
mixed aleatory-epistemic UQ studies, provided the outer loop algorithm supports concurrency in its evaluations. Both meta-iterators and nested models support iterator_servers, processors_per_iterator
, and iterator_scheduling specifications which can be used to define a parallel configuration that partitions servers for supporting sub-iteration concurrency.
Mixed Aleatory-Epistemic UQ
Mixed UQ approaches employ nested models to embed one uncertainty quantification (UQ) within another. The outer level UQ is commonly linked to epistemic uncertainties (also known as reducible
uncertainties) resulting from a lack of knowledge, and the inner UQ is commonly linked to aleatory uncertainties (also known as irreducible uncertainties) that are inherent in nature. The outer level
generates sets of realizations of the epistemic parameters, and each set of these epistemic parameters in used within a separate inner loop probabilistic analysis over the aleatory random variables.
In this manner, ensembles of aleatory statistics are generated, one set for each realization of the epistemic parameters.
In Dakota, we support interval-valued probability (IVP), second-order probability (SOP), and Dempster-Shafer theory of evidence (DSTE) approaches to mixed uncertainty. These three approaches differ
in how they treat the epistemic variables in the outer loop: they are treated as intervals in IVP, as belief structures in DSTE, and as subjective probability distributions in SOP. This set of
techniques provides a spectrum of assumed epistemic structure, from strongest assumptions in SOP to weakest in IVP.
Interval-valued probability (IVP)
In IVP (also known as probability bounds analysis [AP07, FT06, KKVA09]), we employ an outer loop of interval estimation in combination with an aleatory inner loop. In interval analysis, it is assumed
that nothing is known about the uncertain input variables except that they lie within certain intervals. The problem of uncertainty propagation then becomes an interval analysis problem: given inputs
that are defined within intervals, what are the corresponding intervals on the outputs?
Starting from a specification of intervals and probability distributions on the inputs, the intervals may augment the probability distributions, insert into the probability distributions, or some
combination (refer to the Nested Models section for more information). We generate an ensemble of cumulative distribution functions (CDF) or Complementary Cumulative Distribution Functions (CCDF),
one CDF/CCDF result for each aleatory analysis. Plotting an entire ensemble of CDFs or CCDFs in a “horsetail” plot allows one to visualize the upper and lower bounds on the family of distributions
(see Fig. 56).
Given that the ensemble stems from multiple realizations of the epistemic uncertainties, the interpretation is that each CDF/CCDF instance has no relative probability of occurrence, only that each
instance is possible. For prescribed response levels on the CDF/CCDF, an interval on the probability is computed based on the bounds of the ensemble at that level, and vice versa for prescribed
probability levels. This interval on a statistic is interpreted simply as a possible range, where the statistic could take any of the possible values in the range.
A sample input file is shown in Listing 62, in which the outer epistemic level variables are defined as intervals. Samples will be generated from these intervals to select means for \(X\) and \(Y\)
that are employed in an inner level reliability analysis of the cantilever problem.
Listing 63 shows excerpts from the resulting output. In this particular example, the outer loop generates 50 possible realizations of epistemic variables, which are then sent to the inner loop to
calculate statistics such as the mean weight, and cumulative distribution function for the stress and displacement reliability indices. Thus, the outer loop has 50 possible values for the mean
weight, but since there is no distribution structure on these observations, only the minimum and maximum value are reported. Similarly, the minimum and maximum values of the CCDF for the stress and
displacement reliability indices are reported.
When performing a mixed aleatory-epistemic analysis, response levels and probability levels should only be defined in the (inner) aleatory loop. For example, if one wants to generate an interval
around possible CDFs or CCDFS, we suggest defining a number of probability levels in the inner loop (0.1, 0.2, 0.3, etc). For each epistemic instance, these will be calculated during the inner loop
and reported back to the outer loop. In this way, there will be an ensemble of CDF percentiles (for example) and one will have interval bounds for each of these percentile levels defined. Finally,
although the epistemic variables are often values defining distribution parameters for the inner loop, they are not required to be: they can just be separate uncertain variables in the problem.
# Dakota Input File: cantilever_uq_sop_rel.in
top_method_pointer = 'EPISTEMIC'
id_method = 'EPISTEMIC'
samples = 50 seed = 12347
model_pointer = 'EPIST_M'
id_model = 'EPIST_M'
sub_method_pointer = 'ALEATORY'
primary_variable_mapping = 'X' 'Y'
secondary_variable_mapping = 'mean' 'mean'
primary_response_mapping = 1. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 1.
variables_pointer = 'EPIST_V'
responses_pointer = 'EPIST_R'
id_variables = 'EPIST_V'
continuous_interval_uncertain = 2
num_intervals = 1 1
interval_probabilities = 1.0 1.0
lower_bounds = 400.0 800.0
upper_bounds = 600.0 1200.0
descriptors 'X_mean' 'Y_mean'
id_responses = 'EPIST_R'
response_functions = 3
descriptors = 'mean_wt' 'ccdf_beta_s' 'ccdf_beta_d'
id_method = 'ALEATORY'
mpp_search no_approx
response_levels = 0.0 0.0
num_response_levels = 0 1 1
compute reliabilities
distribution complementary
model_pointer = 'ALEAT_M'
id_model = 'ALEAT_M'
interface_pointer = 'ALEAT_I'
variables_pointer = 'ALEAT_V'
responses_pointer = 'ALEAT_R'
id_variables = 'ALEAT_V'
continuous_design = 2
initial_point 2.4522 3.8826
descriptors 'w' 't'
normal_uncertain = 4
means = 40000. 29.E+6 500. 1000.
std_deviations = 2000. 1.45E+6 100. 100.
descriptors = 'R' 'E' 'X' 'Y'
id_interface = 'ALEAT_I'
analysis_drivers = 'cantilever'
deactivate evaluation_cache restart_file
id_responses = 'ALEAT_R'
response_functions = 3
descriptors = 'weight' 'stress' 'displ'
Statistics based on 50 samples:
Min and Max values for each response function:
mean_wt: Min = 9.5209117200e+00 Max = 9.5209117200e+00
ccdf_beta_s: Min = 1.7627715524e+00 Max = 4.2949468386e+00
ccdf_beta_d: Min = 2.0125192955e+00 Max = 3.9385559339e+00
As compared to aleatory quantities of interest (e.g., mean, variance, probability) that must be integrated over a full probability domain, we observe that the desired minima and maxima of the output
ranges are local point solutions in the epistemic parameter space, such that we may employ directed optimization techniques to compute these extrema and potentially avoid the cost of sampling the
full epistemic space.
In dakota/share/dakota/test, test input files such as dakota_uq_cantilever_ivp_exp.in and dakota_uq_short_column_ivp_exp.in replace the outer loop sampling with the local and global interval
optimization methods. In these cases, we no longer generate horse tails and infer intervals, but rather compute the desired intervals directly.
Second-order probability (SOP)
SOP is similar to IVP in its segregation of aleatory and epistemic uncertainties and its use of nested iteration. However, rather than modeling epistemic uncertainty with a single interval per
variable and computing interval-valued statistics, we instead employ subjective probability distributions and compute epistemic statistics on the aleatory statistics (for example, probabilities on
probabilities – the source of the “second-order” terminology [GN99]). Now the different hairs of the horsetail shown in Fig. 56 have a relative probability of occurrence and stronger inferences may
be drawn. In particular, mean, 5\(^{th}\) percentile, and 95\(^{th}\) percentile probability values are a common example. Second-order probability is sometimes referred to as probability of frequency
(PoF) analysis, referring to a probabilistic interpretation of the epistemic variables and a frequency interpretation of the aleatory variables. The PoF terminology is used in a recent National
Academy of Sciences report on the Quantification of Margins and Uncertainties (QMU) [NationalRCotNAcademies08].
Rather than employing interval estimation techniques at the outer loop in SOP, we instead apply probabilistic methods, potentially the same ones as used for the aleatory propagation on the inner
loop. The previous example in Listing 62 can be modified to define the epistemic outer loop using uniform variables instead of interval variables (annotated test #1 in dakota/share/dakota/test/
dakota_uq_cantilever_sop_rel.in). The process of generating the epistemic values is essentially the same in both cases; however, the interpretation of results is quite different. In IVP, each “hair”
or individual CDF in the horsetail plot in Fig. 56 would be interpreted as a possible realization of aleatory uncertainty conditional on a particular epistemic sample realization. The ensemble then
indicates the influence of the epistemic variables (e.g. by how widespread the ensemble is). However, if the outer loop variables are defined to be uniformly distributed in SOP, then the outer loop
results will be reported as statistics (such as mean and standard deviation) and not merely intervals. It is important to emphasize that these outer level output statistics are only meaningful to the
extent that the outer level input probability specifications are meaningful (i.e., to the extent that uniform distributions are believed to be representative of the epistemic variables).
In dakota/share/dakota/test, additional test input files such as dakota_uq_cantilever_sop_exp.in and dakota_uq_short_column_sop_exp.in explore other outer/inner loop probabilistic analysis
combinations, particulary using stochastic expansion methods.
Dempster-Shafer Theory of Evidence
In IVP, we estimate a single epistemic output interval for each aleatory statistic. This same nested analysis procedure may be employed within the cell computations of a DSTE approach. Instead of a
single interval, we now compute multiple output intervals, one for each combination of the input basic probability assignments, in order to define epistemic belief and plausibility functions on the
aleatory statistics computed in the inner loop. While this can significantly increase the computational requirements, belief and plausibility functions provide a more finely resolved epistemic
characterization than a basic output interval.
The single-level DSTE approach for propagating epistemic uncertainties is described in this section. An example of nested DSTE for propagating mixed uncertainties can be seen in dakota/share/dakota/
test in the input file dakota_uq_ishigami_dste_exp.in.
Optimization Under Uncertainty (OUU)
Optimization under uncertainty (OUU) approaches incorporate an uncertainty quantification method within the optimization process. This is often needed in engineering design problems when one must
include the effect of input parameter uncertainties on the response functions of interest. A typical engineering example of OUU would minimize the probability of failure of a structure for a set of
applied loads, where there is uncertainty in the loads and/or material properties of the structural components.
In OUU, a nondeterministic method is used to evaluate the effect of uncertain variable distributions on response functions of interest (refer to the main UQ section for additional information on
nondeterministic analysis). Statistics on these response functions are then included in the objective and constraint functions of an optimization process. Different UQ methods can have very different
features from an optimization perspective, leading to the tailoring of optimization under uncertainty approaches to particular underlying UQ methodologies.
If the UQ method is sampling based, then three approaches are currently supported: nested OUU, surrogate-based OUU, and trust-region surrogate-based OUU. Additional details and computational results
are provided in [EGWojtkiewiczJrT02].
Another class of OUU algorithms is called reliability-based design optimization (RBDO). RBDO methods are used to perform design optimization accounting for reliability metrics. The reliability
analysis capabilities provide a rich foundation for exploring a variety of RBDO formulations. [EAP+07] investigated bi-level, fully-analytic bi-level, and first-order sequential RBDO approaches
employing underlying first-order reliability assessments. [EB06] investigated fully-analytic bi-level and second-order sequential RBDO approaches employing underlying second-order reliability
When using stochastic expansions for UQ, analytic moments and analytic design sensitivities can be exploited as described in [EWC08]. Several approaches for obtaining design sensitivities of
statistical metrics are discussed here.
Finally, when employing epistemic methods for UQ, the set of statistics available for use within optimization are interval-based. Robustness metrics typically involve the width of the intervals, and
reliability metrics typically involve the worst case upper or lower bound of the interval.
Each of these OUU methods is overviewed in the following sections.
Nested OUU
In the case of a nested approach, the optimization loop is the outer loop which seeks to optimize a nondeterministic quantity (e.g., minimize probability of failure). The uncertainty quantification
(UQ) inner loop evaluates this nondeterministic quantity (e.g., computes the probability of failure) for each optimization function evaluation. Fig. 57 depicts the nested OUU iteration where \(\
mathit{\mathbf{d}}\) are the design variables, \(\mathit{\mathbf{u}}\) are the uncertain variables characterized by probability distributions, \(\mathit{\mathbf{r_{u}(d,u)}}\) are the response
functions from the simulation, and \(\mathit{\mathbf{s_{u}(d)}}\) are the statistics generated from the uncertainty quantification on these response functions.
Listing 64 shows a Dakota input file for a nested OUU example problem that is based on the textbook test problem. In this example, the objective function contains two probability of failure
estimates, and an inequality constraint contains another probability of failure estimate. For this example, failure is defined to occur when one of the textbook response functions exceeds its
threshold value. The environment keyword block at the top of the input file identifies this as an OUU problem. The environment keyword block is followed by the optimization specification, consisting
of the optimization method, the continuous design variables, and the response quantities that will be used by the optimizer. The mapping matrices used for incorporating UQ statistics into the
optimization response data are described here.
The uncertainty quantification specification includes the UQ method, the uncertain variable probability distributions, the interface to the simulation code, and the UQ response attributes. As with
other complex Dakota input files, the identification tags given in each keyword block can be used to follow the relationships among the different keyword blocks.
# Dakota Input File: textbook_opt_ouu1.in
top_method_pointer = 'OPTIM'
id_method = 'OPTIM'
## (NPSOL requires a software license; if not available, try
## conmin_mfd or optpp_q_newton instead)
convergence_tolerance = 1.e-10
model_pointer = 'OPTIM_M'
id_model = 'OPTIM_M'
sub_method_pointer = 'UQ'
primary_response_mapping = 0. 0. 1. 0. 0. 1. 0. 0. 0.
secondary_response_mapping = 0. 0. 0. 0. 0. 0. 0. 0. 1.
variables_pointer = 'OPTIM_V'
responses_pointer = 'OPTIM_R'
id_variables = 'OPTIM_V'
continuous_design = 2
initial_point 1.8 1.0
upper_bounds 2.164 4.0
lower_bounds 1.5 0.0
descriptors 'd1' 'd2'
id_responses = 'OPTIM_R'
objective_functions = 1
nonlinear_inequality_constraints = 1
upper_bounds = .1
method_source dakota
interval_type central
fd_step_size = 1.e-1
id_method = 'UQ'
model_pointer = 'UQ_M'
samples = 50 sample_type lhs
seed = 1
response_levels = 3.6e+11 1.2e+05 3.5e+05
distribution complementary
id_model = 'UQ_M'
interface_pointer = 'UQ_I'
variables_pointer = 'UQ_V'
responses_pointer = 'UQ_R'
id_variables = 'UQ_V'
continuous_design = 2
normal_uncertain = 2
means = 248.89 593.33
std_deviations = 12.4 29.7
descriptors = 'nuv1' 'nuv2'
uniform_uncertain = 2
lower_bounds = 199.3 474.63
upper_bounds = 298.5 712.
descriptors = 'uuv1' 'uuv2'
weibull_uncertain = 2
alphas = 12. 30.
betas = 250. 590.
descriptors = 'wuv1' 'wuv2'
id_interface = 'UQ_I'
analysis_drivers = 'text_book_ouu'
# fork asynch evaluation_concurrency = 5
id_responses = 'UQ_R'
response_functions = 3
Latin hypercube sampling is used as the UQ method in this example problem. Thus, each evaluation of the response functions by the optimizer entails 50 Latin hypercube samples. In general, nested OUU
studies can easily generate several thousand function evaluations and gradient-based optimizers may not perform well due to noisy or insensitive statistics resulting from under-resolved sampling.
These observations motivate the use of surrogate-based approaches to OUU.
Other nested OUU examples in the directory dakota/share/dakota/test include dakota_ouu1_tbch.in, which adds an additional interface for including deterministic data in the textbook OUU problem, and
dakota_ouu1_cantilever.in, which solves the cantilever OUU problem with a nested approach. For each of these files, the “1” identifies formulation 1, which is short-hand for the nested approach.
Surrogate-Based OUU (SBOUU)
Surrogate-based optimization under uncertainty strategies can be effective in reducing the expense of OUU studies. Possible formulations include use of a surrogate model at the optimization level, at
the uncertainty quantification level, or at both levels. These surrogate models encompass both data fit surrogates (at the optimization or UQ level) and model hierarchy surrogates (at the UQ level
only). Fig. 58 depicts the different surrogate-based formulations where \(\mathbf{\hat{r}_{u}}\) and \(\mathbf{\hat{s}_{u}}\) are approximate response functions and approximate response statistics,
respectively, generated from the surrogate models.
SBOUU examples in the dakota/share/dakota/test directory include dakota_sbouu2_tbch.in, dakota_sbouu3_tbch.in, and dakota_sbouu4_tbch.in, which solve the textbook OUU problem, and
dakota_sbouu2_cantilever.in, dakota_sbouu3_cantilever.in, and dakota_sbouu4_cantilever.in, which solve the cantilever OUU problem. For each of these files, the “2,” “3,” and “4” identify formulations
2, 3, and 4, which are short-hand for the “layered containing nested,” “nested containing layered,” and “layered containing nested containing layered” surrogate-based formulations, respectively. In
general, the use of surrogates greatly reduces the computational expense of these OUU study. However, without restricting and verifying the steps in the approximate optimization cycles, weaknesses in
the data fits can be exploited and poor solutions may be obtained. The need to maintain accuracy of results leads to the use of trust-region surrogate-based approaches.
Trust-Region Surrogate-Based OUU (TR-SBOUU)
The TR-SBOUU approach applies the trust region logic of deterministic SBO to SBOUU. Trust-region verifications are applicable when surrogates are used at the optimization level, i.e., formulations 2
and 4. As a result of periodic verifications and surrogate rebuilds, these techniques are more expensive than SBOUU; however they are more reliable in that they maintain the accuracy of results.
Relative to nested OUU (formulation 1), TR-SBOUU tends to be less expensive and less sensitive to initial seed and starting point.
TR-SBOUU examples in the directory dakota/share/dakota/test include dakota_trsbouu2_tbch.in and dakota_trsbouu4_tbch.in, which solve the textbook OUU problem, and dakota_trsbouu2_cantilever.in and
dakota_trsbouu4_cantilever.in, which solve the cantilever OUU problem.
Computational results for several example problems are available in [EGWojtkiewiczJrT02].
Bi-level and sequential approaches to reliability-based design optimization (RBDO) and their associated sensitivity analysis requirements are described in the Optimization Under Uncertainty theory
A number of bi-level RBDO examples are provided in dakota/share/dakota/test. The dakota_rbdo_cantilever.in, dakota_rbdo_short_column.in, and dakota_rbdo_steel_column.in input files solve the
cantilever, short column, and steel column OUU problems using a bi-level RBDO approach employing numerical design gradients. The dakota_rbdo_cantilever_analytic.in and
dakota_rbdo_short_column_analytic.in input files solve the cantilever and short column OUU problems using a bi-level RBDO approach with analytic design gradients and first-order limit state
approximations. The dakota_rbdo_cantilever_analytic2.in, dakota_rbdo_short_column_analytic2.in, and dakota_rbdo_steel_column_analytic2.in input files also employ analytic design gradients, but are
extended to employ second-order limit state approximations and integrations.
Sequential RBDO examples are also provided in dakota/share/dakota/test. The dakota_rbdo_cantilever_trsb.in and dakota_rbdo_short_column_trsb.in input files solve the cantilever and short column OUU
problems using a first-order sequential RBDO approach with analytic design gradients and first-order limit state approximations. The dakota_rbdo_cantilever_trsb2.in, dakota_rbdo_short_column_trsb2.in
, and dakota_rbdo_steel_column_trsb2.in input files utilize second-order sequential RBDO approaches that employ second-order limit state approximations and integrations (from analytic limit state
Hessians with respect to the uncertain variables) and quasi-Newton approximations to the reliability metric Hessians with respect to design variables.
Stochastic Expansion-Based Design Optimization
For stochastic expansion-based approaches to optimization under uncertainty, bi-level, sequential, and multifidelity approaches and their associated sensitivity analysis requirements are described in
the Optimization Under Uncertainty theory section.
In dakota/share/dakota/test, the dakota_pcbdo_cantilever.in, dakota_pcbdo_rosenbrock.in, dakota_pcbdo_short_column.in, and dakota_pcbdo_steel_column.in input files solve cantilever, Rosenbrock, short
column, and steel column OUU problems using a bi-level polynomial chaos-based approach, where the statistical design metrics are reliability indices based on moment projection (see the Mean Value
section in Reliability Methods theory section). The test matrix in the former three input files evaluate design gradients of these reliability indices using several different approaches: analytic
design gradients based on a PCE formed over only over the random variables, analytic design gradients based on a PCE formed over all variables, numerical design gradients based on a PCE formed only
over the random variables, and numerical design gradients based on a PCE formed over all variables. In the cases where the expansion is formed over all variables, only a single PCE construction is
required for the complete PCBDO process, whereas the expansions only over the random variables must be recomputed for each change in design variables. Sensitivities for “augmented” design variables
(which are separate from and augment the random variables) may be handled using either analytic approach; however, sensitivities for “inserted” design variables (which define distribution parameters
for the random variables) must be
computed using \(\frac{dR}{dx} \frac{dx}{ds}\) (refer to Stochastic Sensitivity Analysis section in the Optimization Under Uncertainty theory section).
Additional test input files include:
• dakota_scbdo_cantilever.in, dakota_scbdo_rosenbrock.in, dakota_scbdo_short_column.in, and dakota_scbdo_steel_column.in input files solve cantilever, Rosenbrock, short column, and steel column OUU
problems using a bi-level stochastic collocation-based approach.
• dakota_pcbdo_cantilever_trsb.in, dakota_pcbdo_rosenbrock_trsb.in, dakota_pcbdo_short_column_trsb.in, dakota_pcbdo_steel_column_trsb.in, dakota_scbdo_cantilever_trsb.in,
dakota_scbdo_rosenbrock_trsb.in, dakota_scbdo_short_column_trsb.in, and dakota_scbdo_steel_column_trsb.in input files solve cantilever, Rosenbrock, short column, and steel column OUU problems
using sequential polynomial chaos-based and stochastic collocation-based approaches.
• dakota_pcbdo_cantilever_mf.in, dakota_pcbdo_rosenbrock_mf.in, dakota_pcbdo_short_column_mf.in, dakota_scbdo_cantilever_mf.in, dakota_scbdo_rosenbrock_mf.in, and dakota_scbdo_short_column_mf.in
input files solve cantilever, Rosenbrock, and short column OUU problems using multifidelity polynomial chaos-based and stochastic collocation-based approaches.
Epistemic OUU
An emerging capability is optimization under epistemic uncertainty. As described in the section on nested models, epistemic and mixed aleatory/epistemic uncertainty quantification methods generate
lower and upper interval bounds for all requested response, probability, reliability, and generalized reliability level mappings. Design for robustness in the presence of epistemic uncertainty could
simply involve minimizing the range of these intervals (subtracting lower from upper using the nested model response mappings), and design for reliability in the presence of epistemic uncertainty
could involve controlling the worst case upper or lower bound of the interval.
We now have the capability to perform epistemic analysis by using interval optimization on the “outer loop” to calculate bounding statistics of the aleatory uncertainty on the “inner loop.”
Preliminary studies [ES09] have shown this approach is more efficient and accurate than nested sampling, which was described in the example from this section. This approach uses an efficient global
optimization method for the outer loop and stochastic expansion methods (e.g. polynomial chaos or stochastic collocation on the inner loop). The interval optimization is described here. Example input
files demonstrating the use of interval estimation for epistemic analysis, specifically in epistemic-aleatory nesting, are: dakota_uq_cantilever_sop_exp.in, and dakota_short_column_sop_exp.in. Both
files are in dakota/share/dakota/test.
Surrogate-Based Uncertainty Quantification
Many uncertainty quantification (UQ) methods are computationally costly. For example, sampling often requires many function evaluations to obtain accurate estimates of moments or percentile values of
an output distribution. One approach to overcome the computational cost of sampling is to evaluate the true function (e.g. run the analysis driver) on a fixed, small set of samples, use these sample
evaluations to create a response surface approximation (e.g. a surrogate model or meta-model) of the underlying “true” function, then perform random sampling (using thousands or millions of samples)
on the approximation to obtain estimates of the mean, variance, and percentiles of the response.
This approach, called “surrogate-based uncertainty quantification” is easy to do in Dakota, and one can set up input files to compare the results using no approximation (e.g. determine the mean,
variance, and percentiles of the output directly based on the initial sample values) with the results obtained by sampling a variety of surrogate approximations. Example input files of a standard UQ
analysis based on sampling alone vs. sampling a surrogate are shown in textbook_uq_sampling.in and textbook_uq_surrogate.in in the dakota/share/dakota/examples/users directory.
Note that one must exercise some caution when using surrogate-based methods for uncertainty quantification. In general, there is not a single, straightforward approach to incorporate the error of the
surrogate fit into the uncertainty estimates of the output produced by sampling the surrogate. Two references which discuss some of the related issues are [GMSE06] and [SSG06]. The first reference
shows that statistics of a response based on a surrogate model were less accurate, and sometimes biased, for surrogates constructed on very small sample sizes. In many cases, however, [GMSE06] shows
that surrogate-based UQ performs well and sometimes generates more accurate estimates of statistical quantities on the output. The second reference goes into more detail about the interaction between
sample type and response surface type (e.g., are some response surfaces more accurate when constructed on a particular sample type such as LHS vs. an orthogonal array?) In general, there is not a
strong dependence of the surrogate performance with respect to sample type, but some sample types perform better with respect to some metrics and not others (for example, a Hammersley sample may do
well at lowering root mean square error of the surrogate fit but perform poorly at lowering the maximum absolute deviation of the error). Much of this work is empirical and application dependent. If
you choose to use surrogates in uncertainty quantification, we strongly recommend trying a variety of surrogates and examining diagnostic goodness-of-fit metrics.
Known Issue: When using discrete variables, there have been sometimes significant differences in data fit surrogate behavior observed across computing platforms in some cases. The cause has not yet
been fully diagnosed and is currently under investigation. In addition, guidance on appropriate construction and use of surrogates with discrete variables is under development. In the meantime, users
should therefore be aware that there is a risk of inaccurate results when using surrogates with discrete variables. | {"url":"https://snl-dakota.github.io/docs/6.20.0/users/usingdakota/advanced/advancedmodelrecursions.html","timestamp":"2024-11-01T19:10:25Z","content_type":"text/html","content_length":"66787","record_id":"<urn:uuid:5b0b4d2c-90c8-4398-836a-348b1429f0ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00889.warc.gz"} |
A vague description of topics covered each class:
Lecture 1: Basic Topology and Graphs
Lecture 2: Simplicial Complexes
Lecture 3: Cell complexes
Lecture 4: Homotopy of maps
Lecture 5: Homotopy equivalence of spaces
Lecture 6: The simplicial approximation theorem
Lecture 7: The simplicial approximation theorem 2
Lecture 8: The fundamental group: Basics
Lecture 9: Functoriality of the fundamental group
Lecture 10: Edge loop group
Lecture 11: The fundamental group of the circle and the fundamental theorem of algebra
Lecture 12: Free groups: 3 definitions
Lecture 13: Free groups: Equivalence of definitions
Lecture 14: Group presentations: Basic properties
Lecture 15: Group presentations: Tietze transformations and van Dyk's lemma
Lecture 16: Free products and their universal property
Lecture 17: Push outs: Presentations and universal properties
Lecture 18: The Seifert van Kampen Theorem: Statement and applications
Lecture 19: A sketch of the proof of the Seifert van Kampen theorem
Lecture 20: Review of homework problems
Lecture 21 & 22: Covering spaces: Definitions and statement of properties
Lecture 23: Covering spaces: Proof of path lifting and uniquesness of lifts
Lecture 24: Covering spaces: Homotopy lifting and applications | {"url":"http://www.robertkropholler.com/algebraictopology/schedule.html","timestamp":"2024-11-03T23:12:56Z","content_type":"text/html","content_length":"3236","record_id":"<urn:uuid:913d3c8e-f0ee-424e-a032-272eee943eae>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00210.warc.gz"} |
Problem solving aptitude questions
problem solving aptitude questions
Related topics:
divisible examples | 6th grade algebra problems with answer key | distributive property worksheets algebra | hard math equasions | sum of error equation | poems
Home related to math | solving for x in algebra with multiple variables | algebra lesson for 1st graders | divisible examples
Rational Expressions and Their
Radical Expressions and Equations Author Message
Algebraic Expressions
Simplifying Algebraic Expressions ROT Posted: Saturday 30th of Dec 10:05
Rational Expressions and Functio I am taking an online problem solving aptitude questions course. For me it’s a bit hard to study this course all by myself. Is there some
Rational Expressions and Functions one studying online? I really need some guidance .
Radical Expressions
Rational Expressions Worksheet
Adding and Subtracting Rational Registered:
Expressions 03.08.2003
Rational Expressions From:
Multiplying and Dividing Rational
Dividing Fractions, Mixed Numbers, and
Rational Expressions AllejHat Posted: Sunday 31st of Dec 09:16
Multiplying and Dividing Rational Can you please be more descriptive as to what sort of service you are expecting to get. Do you want to understand the principles and solve
Expressions your math questions by yourself or do you require a software that would offer you a step-by-step solution for your math problems?
Multiplying and Dividing Rational
Simplifying Rational Expressions Registered:
Complex Rational Expressions 16.07.2003
Rational Expressions and Equations From: Odense, Denmark
Integration of Polynomial Rational
Algebraic Expressions
Radical Expressions & Radical Outafnymintjo Posted: Monday 01st of Jan 19:00
Functions I have also used Algebrator quite a few times to solve math assignments . I must say that it has significantly improved my problem solving
Rational Class and Expression skills. You should give it a try and see if it helps.
Adding and Subtracting Rational
Expressions Registered:
Rational Expressions 22.07.2002
Radical Expressions From: Japan...SUSHI
Multiplying Rational Expressions TIME!
Rational Expressions and Common
rational expressions
Polynomial Expressions chemel Posted: Tuesday 02nd of Jan 10:40
Rational Functions, and Multiplying To begin with, thanks for replying guys ! I’m interested in this software . Can you please tell me how to purchase this software? Can we
and Dividing Rational Expressions order it online , or do we buy it from some retail store?
Simplifying Radical Expressions
Adding and Subtracting Rational
Expressions Registered:
Rational Expressions and Equations 09.03.2005
Rational Expressions From:
Simplifying Expressions
Quadratic Expressions,Equations and
Functions fveingal Posted: Tuesday 02nd of Jan 18:44
RATIONAL EXPRESSIONS I remember having problems with adding fractions, monomials and rational inequalities. Algebrator is a truly great piece of math software.
Absolute Value and Radical I have used it through several algebra classes - Intermediate algebra, College Algebra and Basic Math. I would simply type in the problem
Expressions,Equations and Functions and by clicking on Solve, step by step solution would appear. The program is highly recommended.
Rational Expressions & Functions
From: Earth
Mibxrus Posted: Thursday 04th of Jan 11:47
Accessing the program is simple . All you want to know about it is available at https://algebra-expression.com/algebraic-expressions.html.
You are assured satisfaction. And in addition , there is a money-back guarantee. Hope this is the end of your hunt.
From: Vancouver,
Home Rational Expressions and Their Simplification Radical Expressions and Equations Algebraic Expressions Simplifying Algebraic Expressions Rational Expressions and Functio Rational Expressions and
Functions Radical Expressions Rational Expressions Worksheet Adding and Subtracting Rational Expressions Rational Expressions Multiplying and Dividing Rational Expressions Dividing Fractions, Mixed
Numbers, and Rational Expressions Multiplying and Dividing Rational Expressions Multiplying and Dividing Rational Expressions Simplifying Rational Expressions Complex Rational Expressions Rational
Expressions and Equations Integration of Polynomial Rational Expressions Algebraic Expressions Radical Expressions & Radical Functions Rational Class and Expression Evaluator Adding and Subtracting
Rational Expressions Rational Expressions Radical Expressions Multiplying Rational Expressions Rational Expressions and Common Denominators rational expressions Polynomial Expressions Rational
Functions, and Multiplying and Dividing Rational Expressions Simplifying Radical Expressions Adding and Subtracting Rational Expressions Rational Expressions and Equations Rational Expressions
RATIONAL EXPRESSIONS II Simplifying Expressions Quadratic Expressions,Equations and Functions RATIONAL EXPRESSIONS Absolute Value and Radical Expressions,Equations and Functions Rational Expressions
& Functions
Author Message
ROT Posted: Saturday 30th of Dec 10:05
I am taking an online problem solving aptitude questions course. For me it’s a bit hard to study this course all by myself. Is there some one studying online? I really need some
guidance .
AllejHat Posted: Sunday 31st of Dec 09:16
Can you please be more descriptive as to what sort of service you are expecting to get. Do you want to understand the principles and solve your math questions by yourself or do
you require a software that would offer you a step-by-step solution for your math problems?
From: Odense, Denmark
Outafnymintjo Posted: Monday 01st of Jan 19:00
I have also used Algebrator quite a few times to solve math assignments . I must say that it has significantly improved my problem solving skills. You should give it a try and
see if it helps.
From: Japan...SUSHI
chemel Posted: Tuesday 02nd of Jan 10:40
To begin with, thanks for replying guys ! I’m interested in this software . Can you please tell me how to purchase this software? Can we order it online , or do we buy it from
some retail store?
fveingal Posted: Tuesday 02nd of Jan 18:44
I remember having problems with adding fractions, monomials and rational inequalities. Algebrator is a truly great piece of math software. I have used it through several algebra
classes - Intermediate algebra, College Algebra and Basic Math. I would simply type in the problem and by clicking on Solve, step by step solution would appear. The program is
highly recommended.
From: Earth
Mibxrus Posted: Thursday 04th of Jan 11:47
Accessing the program is simple . All you want to know about it is available at https://algebra-expression.com/algebraic-expressions.html. You are assured satisfaction. And in
addition , there is a money-back guarantee. Hope this is the end of your hunt.
From: Vancouver,
Posted: Saturday 30th of Dec 10:05
I am taking an online problem solving aptitude questions course. For me it’s a bit hard to study this course all by myself. Is there some one studying online? I really need some guidance .
Posted: Sunday 31st of Dec 09:16
Can you please be more descriptive as to what sort of service you are expecting to get. Do you want to understand the principles and solve your math questions by yourself or do you require a software
that would offer you a step-by-step solution for your math problems?
Posted: Monday 01st of Jan 19:00
I have also used Algebrator quite a few times to solve math assignments . I must say that it has significantly improved my problem solving skills. You should give it a try and see if it helps.
Posted: Tuesday 02nd of Jan 10:40
To begin with, thanks for replying guys ! I’m interested in this software . Can you please tell me how to purchase this software? Can we order it online , or do we buy it from some retail store?
Posted: Tuesday 02nd of Jan 18:44
I remember having problems with adding fractions, monomials and rational inequalities. Algebrator is a truly great piece of math software. I have used it through several algebra classes -
Intermediate algebra, College Algebra and Basic Math. I would simply type in the problem and by clicking on Solve, step by step solution would appear. The program is highly recommended.
Posted: Thursday 04th of Jan 11:47
Accessing the program is simple . All you want to know about it is available at https://algebra-expression.com/algebraic-expressions.html. You are assured satisfaction. And in addition , there is a
money-back guarantee. Hope this is the end of your hunt. | {"url":"https://algebra-expression.com/algebra-expression-simplifying/graphing-equations/problem-solving-aptitude.html","timestamp":"2024-11-12T03:37:11Z","content_type":"text/html","content_length":"90916","record_id":"<urn:uuid:40491326-9b24-401b-a292-3157679b9503>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00650.warc.gz"} |
Huber Loss: Why Is It, Like How It Is?
Huber Loss: Why Is It, Like How It Is?
Photo by Federico Respini on Unsplash
On the day I was introduced to Huber loss by Michal Fabinger, the very first thing that came to my mind was the question: “How did someone joined these two functions in a mathematical way?”. Apart
from that, the usage of Huber loss was pretty straightforward to understand when he explained. However, he left me with enough curiosity to explore a bit more about its derivation. This poor excuse
for an article is a note on how I explained my self the answers I obtained for my questions. I sincerely hope this article might assist you in understanding the derivation of Huber loss a bit more
Huber loss function is a combination of the mean squared error function and the absolute value function. The intention behind this is to make the best of both worlds. Nevertheless, why is it exactly
like this? (For the moment just forget about getting the mean).
How it should be.
If we are combining the MSE and absolute value function, why do we need all those other terms, like the -δ² and the co-efficient 2δ . Can’t we keep the function as follows ?
How it shouldn’t be.
Well, the short answer is, No. Why? Because we need to combine those two functions in a way that they are differentiable. It’s also necessary to keep their derivatives continuous obviously because
its use cases are associated with derivations (E.g.- Gradient Descent). So let’s dig in a little bit more and find out why does it have to be in this form?
Our intention is to keep the junctions of the functions differentiable right? So let’s take a look at the joint as shown in the following figure.
Huber loss function compared against Z and Z²
The joint can be figured out by equating the derivatives of the two functions. Our focus is to keep the joints as smooth as possible. This becomes the easiest when the two slopes are equal. So let’s
differentiate both functions and equalize them.
Equalizing the derivatives
(More information about the signum / sign function can be found at the end)
This is useful to understand the concept. But, in order to make some use of it, we need to have a parameter in the function to control the point where we need to switch from one function to the
other. In other words, we need to have a handle on the junction of the two functions. That’s why we need to introduce δ . For the convenience of this calculations, let’s keep the z² function
unchanged and change the second case. Therefore δ should be introduced to the second function(associating the absolute value function). I guess the other way around is also possible. But there could
be reasons why it’s not preferred. Your feedback on this is appreciated.
So hereafter I would be referring to the second case of the Huber loss as the second function since that’s what we are trying to generate here.
For easier comprehension, let’s consider for the case of z>0 only at this time. Let’s say we need to join the two functions at an arbitrary point δ. Two main conditions which need to be satisfied
here are as follows.
At z= δ ,
1. the slope of z² should be equal to the slope of the second function.
2. value of z² should be equal to the value of the second function.
Based on condition (1), let’s find out the slope of function z² at z= δ .
Derivative at δ for z²
Now we need to find a linear function with this gradient( 2 δ ) so that we could extend z² from z= δ to ∞ . The function with this gradient can be found by integration.
Integration of 2δ
Here c is an arbitrary constant. So in order to find c let’s use condition (2).
Finding value of c
So there we have it ! The second function can be conclusively written as 2δz- δ²for the case of z>0 . We can do the same calculation when z<0 as well. Since the functions are symmetrical along z=0 ,
we can use the absolute value function for the second case. Then the final Huber loss function can be written as follows.
There we have it !
In Wikipedia you can see the same formula written in a different format:
Format of the Huber loss in Wikipedia.
Here you can think about it as they have started the process by keeping (1/2)z²unchanged, opposed to z² which I used in this derivation. As an exercise you can try to derive the second function by
starting from,
The signum/sign function (sgn)
Here the function sgn() is the derivation of absolute value function. It is also known as signum or sign function. Intuition behind this is very simple. If you differentiate the two sides (from z=0
axis) of the absolute value function, it would result in 1 with the sign of z as shown in the following figure. However the derivative at z=0 doesn’t exist. But here we don’t have to worry too much
about that. You can find more information about the sgn() function in its wikipedia page.
Absolute value function and signum function compared.
For further discussions on how to join functions, here’s a Stackoverflow question.
Special thanks to Michal Fabinger (Tokyo Data Science) for explaining neural network essentials in a clear and concise way from where I obtained necessary the knowledge to compose this article. | {"url":"https://www.cantorsparadise.org/huber-loss-why-is-it-like-how-it-is-dcbe47936473/","timestamp":"2024-11-12T03:24:06Z","content_type":"text/html","content_length":"39376","record_id":"<urn:uuid:0d538bfa-4854-4c72-b7dc-fdc38e038a64>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00276.warc.gz"} |
Book Reviews of "The Essential Exponential" by A Bartlett and "Truth About Oil" by C J Campbell
The Essential Exponential! For the Future of Our Planet
by Albert A. Bartlett
University of Nebraska
294 pages, $25 paperback
(Visit the Al Bartlett website for articles, presentations, and videos by Prof. Al Bartlett.)
The Truth About Oil and the Looming Energy Crisis
by C. J. Campbell
Ireland Eagle Print
56 pages, $30
If we are going to respond intelligently to oil extraction's peak and decline, and to the broader problem of population growth in a world of depleting nonrenewable resources, it is imperative that
the public be educated about our predicament. Two recent books by prominent scientists are outstanding resources for this task.
In articles and oral presentations, University of Colorado physicist Albert A. Bartlett has worked for decades to explain the exponential function, exponential growth, its manifestations, and the
momentous implications. He maintains that "The greatest shortcoming of the human race is its inability to understand the exponential function." The Essential Exponential! brings together his papers
on this vitally important but obscure phenomenon.
An exponential function is one in which a variable increases at a fixed rate (percent) per time period, as opposed to a linear or arithmetic function, in which growth is by a fixed amount per period.
An example of exponential growth is the doubling sequence 1, 2, 4, 8, 16, 32, ..., whereas 1, 2, 3, 4, 5, 6, ..., illustrates arithmetic growth. An exponential function may be expressed as
Nt = N0ekt
where Nt is the value at time t, N0 is the initial value, k is the growth per unit time, t is time, and e is the base of natural logarithms, 2.71828.
The time required for a variable growing exponentially to double is constant. It turns out that the doubling time (T2) may be calculated by dividing 70 by P, the growth rate per unit time, or
T2 = 70/P
That's really all there is to it. Although the mathematics may look unfamiliar, exponential growth, as Bartlett shows, is going on all around us. An important example is oil consumption, which he
addresses in his classic 1978 article, "Forgotten Fundamentals of the Energy Crisis," which alone is worth the price of the book. After concisely explaining exponential growth and doubling times,
Bartlett argues convincingly that protracted growth of both population and per-capita energy use are driving our energy problem. The steady growth of oil consumption is analogous to the reproduction
of bacteria by fission in a bottle, with the bottle representing fixed oil supply, and the steady growth of oil consumption analogous to bacterial population growth. If more oil is discovered (more
bottles are added) the reprieve is illusory; if resource use is growing exponentially, quadrupling the amount of the resource extends its lifetime by only two doubling times!
Bartlett draws the moral that: "The question of how long our resources will last is perhaps the most important question that can be asked in a modern industrial society." Resource lifetime depends on
the resource endowment and how fast its consumption is growing. Consumption growth makes oil's lifetime much shorter than most people realize. After warning that "Modern agriculture is the use of
land to convert petroleum into food," and adding that he is not trying to predict the future, just illustrate what steady growth in energy consumption implies, Bartlett advocates education in the
"forgotten fundamental" of the arithmetic of growth; conserving; recycling; researching alternative energies, and shifting to a decentralized, humane-scale economy.
In articles on coal and fossil fuel lifetimes, Bartlett uses the exponential function to deflate widely publicized, large estimates of fossil fuels' lifetimes at current rates of consumption. He
points out, correctly, that consumption is growing, and mathematically demonstrates the consequences. His more general treatment, "Expert Predictions of the Lifetimes of Non-renewable Resources,"
should be required reading for energy analysts. Citing a 1992 claim that world fossil fuel reserves will last 600 years at current rates of consumption, Bartlett shows that even modest steady growth
in consumption causes startlingly large declines in resource lifetime. If consumption increases just one percent a year, the estimated lifetime for world fossil fuels drops from 600 years to 195; at
two percent annual growth, it drops to 128 years; and if consumption grows three percent annually, fossil fuels will last just 98 years.
Turning to oil peak and decline, Bartlett presents a mathematically rigorous analysis of production data available as of 1995 in "An Analysis of U.S. and World Oil Production Patterns Using
Hubbert-Style Curves" (2000). A Gaussian error curve best fits the data, and implies that if the world's total estimated oil recovery is about two trillion barrels, about half had been extracted as
of 1995, and annual production will peak in 2004. Different estimates generate different peak forecasts: with three trillion barrels of oil, peak occurs in 2019; with four trillion, extraction peaks
in 2030. Put another way, every additional billion barrels of oil recovered delays peak about five and a half days. (It would have been interesting had this article been updated.) Bartlett concludes
somberly that current rates of oil consumption are unsustainable, and that "a society cannot be sustainable as long as it remains vitally dependent on oil." Given his analysis, his conclusion is
A main driver of energy use is population growth. Using simple arithmetic and elementary algebra, "Zero Growth of the Population of the United States" presents the combinations of births and
immigration per year which would halt our population growth. This implies a tradeoff whereby more births require less immigration, and vice versa, to maintain a given growth rate. ZPG is desirable on
resource preservation, environmental protection, and other grounds, Bartlett argues, and both lowering fertility and stopping or reducing immigration are essential to national survival.
"Democracy Cannot Survive Overpopulation" argues convincingly that overpopulation, by raising the number of constituents per elected official, makes it harder for individuals to gain access to
representatives and have a voice in politics. Also, overpopulation breeds government regulation to cope with problems caused by population pressure.
Bartlett observes with dismay that evasion of Malthus's warnings about population growth is widespread. This evasion takes two forms: denial of the problem, and diversion of attention from the
arithmetic of population growth to other things, by invoking other causes of environmental problems (e.g., high personal consumption), arguing that sustainable development is the answer, or asserting
that overpopulation is a problem in developing countries, not in America. Bartlett rebuts these claims, pointing out that population growth is at the heart of environmental problems, that immigration
contributes substantially to population growth, and that our high resource use makes America one of the world's most overpopulated countries.
Indeed, resource use receives much attention. "Sustained Availability: A Management Program for Nonrenewable Resources" is a rigorous mathematical derivation of the depletion rate which would allow a
nonrenewable resource to last forever. It turns out to be negative -- less is used each successive year. Since America is already descending the Hubbert curve, such a management plan doesn't make
sense for oil, Bartlett argues, but it does for coal. Some "experts" recommend rapid depletion of nonrenewable resources on the assumption that we can always develop alternatives. He advocates the
prudent course -- the one which will leave us least badly off if it turns out to be wrong -- which is conservation.
In a long, thoughtful essay on sustainability and population growth, making many valid points, Bartlett notes that sustainable means able to persist for an indefinitely long time, therefore
"sustainable growth is an oxymoron." He warns that population growth can devour resource savings from improved efficiency, and that if we do not stop the growth of population and resource
consumption, nature will.
Some articles by other scientists are included, the best being M. King Hubbert's "Exponential Growth as a Transient Phenomenon in Human History" (1976). Surveying the exponential growth in population
and in coal, oil, and iron extraction, Hubbert asks if this is sustainable -- put another way, how many doublings of these phenomena are possible. He invokes grains of wheat placed on chessboard
squares in geometric sequence, 1, 2, 4, 8, etc., to argue that Earth cannot tolerate many doublings. Rapid growth of population and industrial output, then, "must be a transient and ephemeral
phenomenon of temporary duration."
The book presents the exponential function itself, doubling times, and real-world applications, etc., in two series of articles, "The Arithmetic of Growth" and "The Exponential Function." Population
applications are sobering. Steady 1.9 percent annual growth (i.e., the 1976 world population growth rate) implies, Bartlett points out, that population doubles in just 36 years -- and that food
production must also double in 36 years just to hold constant the population share of people dying of hunger, whose number would also double. To have fewer hunger victims, food production must
greatly outrun population growth. "Thus, before we have done any serious calculations we can see that the population explosion is the most serious problem facing mankind!" Using this growth rate,
Bartlett also shows that if humanity began with a single couple, they must have lived in 849 A.D. Since they obviously didn't, population must have grown faster than exponentially. (The graph of
population growth since 8000 B.C., resembling a hockey stick, bears this out.)
Bartlett's use of the exponential function to demolish one of Julian Simon's loopier bloviations is not to be missed. Simon claimed in 1995 that we had enough technology to adequately support "an
ever-growing population for the next 7 billion years." Challenged, Simon said that "7 billion" should read "7 million." Starting from 1995's population of 5.7 billion and assuming an annual growth
rate of one percent, Bartlett calculates that in 7 million years, the world's population would be 2.3 x 1030410 people. This number, he adds, is roughly 30,000 times larger than the estimated number
of atoms in the known universe. Imagine what we'd have been spared if only Julian Simon had been required to study the exponential function under Al Bartlett!
Some general observations stand out in Bartlett's book. In just a few doubling times, exponential growth can yield huge quantities. Even modest growth rates can generate large numbers surprisingly
quickly. Estimates of resource lifetimes at current consumption rates are worthless, given growing resource use. While non-physical things, such as compound interest, can grow for long periods, the
finitude of matter makes growth of physical phenomena such as populations and resource extraction unsustainable.
Because understanding Bartlett's book depends on your grasp of the exponential function, how to approach it does, too. Readers familiar with the function can start with "Forgotten Fundamentals of the
Energy Crisis." If you've forgotten it since high school or college, you should start with "The Arithmetic of Growth: Methods of Calculation," "The Exponential Function -- Part I," or both (placed,
perhaps mistakenly, toward the back of the book). If need be, consult an algebra or calculus textbook. The payoffs for your time investment will come throughout your study of the book; and prose
pieces aside, Bartlett requires not reading, but study: following his mathematical derivations, working his examples, thinking things through. Make the effort, and you will be astonished at how the
mathematics make the principles starkly clear, and pleased with yourself at your grasp of Bartlett's message.
Bartlett is renowned as a teacher, and in these clear, well-structured articles, we see why. His expositions are well supported with numerical examples, his prose crisp and readable. The articles
make a nice blend of differing levels or rigor.
Working through The Essential Exponential! is like getting a solid, college-level tutorial on the exponential function by a first-rate teacher. It's a worthwhile challenge. If politicians,
businessmen, and opinion leaders mastered this book, we'd all be a lot better off.
Our situation would also improve if everybody read petroleum geologist Colin Campbell's short book The Truth About Oil & the Looming Energy Crisis, written for general readers. Campbell argues that
oil depletion is the "most critical but least understood of subjects," and that we all need to understand it, both to plan our lives, and "to give the politicians the mandate for the unpopular
actions they will be obliged to take." Our enormous dependence on a steady, cheap oil supply means depletion will force radical restructuring of the way we live. While difficult, this "is not a
hopeless cause. We have perhaps twenty years to adapt before oil production need fall below present levels, and even then we face no more than a gentle decline." Unfortunately, given our long
conditioning to believe in markets and technology, oil peak will be a profoundly traumatic shock.
Written in the form of an imaginary public inquiry into depletion, Campbell's book presents previous oil discovery and production, forecasts future discovery and production, and explains depletion's
consequences. Tables giving country-by-country production and reserves data, production forecasts, and so on, accompany. Campbell points out that oil is very unevenly distributed, present in large
quantities in only a few locations. Moreover, the planet has been thoroughly explored.
Muddles over data and definitions are serious, especially regarding "reserves." As an oil field is developed, estimates of reserves get revised upward, creating a misleading impression that reserves
are growing. In fact, Campbell observes, all the oil was found when the field was discovered, so accuracy demands backdating reserve revisions to when the field was found. His chart of discovery and
production trends since 1930 shows that world oil discovery peaked in the 1960s and that since the mid-1980s, annual production has exceeded discovery by a growing margin. Discovery is projected to
decline to virtually nothing by 2050.
As Campbell sensibly points out, oil must be found before it can be extracted, therefore "falling discovery must in due time be reflected in falling production." So extrapolation from past discovery
is a good way to forecast future production. Resource finitude implies extraction peak and decline, but Campbell acknowledges that economic and political factors, especially price, will affect
Campbell breaks down oil by physical characteristics and the nature of its location: "regular" (i.e., conventional), shale, heavy, deepwater, and polar oil, as well as liquids obtained from
extracting and processing natural gas. Factoring in oil price shocks when ceilings on production capacity are hit, and resulting recessions and lowered demand for oil, he tentatively forecasts a peak
for regular oil in 2005, with all liquids peaking before this decade's end, after which supply will start dropping by about 2.5 percent a year.
Oil's peak and decline will greatly disrupt economic activity, especially trade and food production. However, reduced carbon emissions may relieve climate-change concerns, and less energy-intensive
fishing may enable fish stocks to recover. Campbell sees three possible responses to peak and decline profiteering by the oil-producing countries, which could trigger devastating recessions; seizure
of oil by consuming countries, who might accelerate extraction to reduce prices, bringing on an earlier peak and faster subsequent decline; and restrained consumption. Only the last makes sense, he
argues, and could be managed by an international Depletion Protocol, whereby importers cut their oil imports at the same rate as global depletion, keeping price reasonably linked to production cost
and eliminating profiteering with its destabilizing international shifts of liquidity.
Given the unreliability of publicly available data, the "most urgent need," he rightly maintains, is to get an accurate picture of the oil and gas situation, through well-funded research accessing
industry data or collecting data firsthand. Once obtained, accurate data should be disseminated to the public, which needs to understand that depletion is a geological phenomenon and that shortages
and rising prices "do not necessarily speak of fraud, conspiracy, gouging and profiteering," though these may occur.
We must also minimize oil waste, using fiscal incentives and penalties such as high prices for "gas-guzzler" vehicles and revising corporate taxation so transport costs may no longer be charged
against taxable income. While free markets could operate within the depletion provisions, rationing may eventually be necessary to ensure everyone a minimum supply. Unpleasant? Maybe, but there is no
blinking Campbell's point that when things get scarce, "the most obvious response is to use less of them."
Finally, we should shift to renewable energy. Renewables have made little headway, Campbell observes, because they are competing with "fossil fuels being dumped onto the market at far below
replacement cost -- that being, in fact, infinitely high." Shifting to renewables should be undertaken at the local level, which may, he concludes, also enhance our sense of solidarity and community.
Compact and informative, this book is a good education in oil depletion basics. It includes a CD-Rom containing ten PowerPoint presentations, for Windows and Macintosh, on discovery of oil to date,
how much remains to be found, depletion modeling, myths about oil, examples of depletion, and more, with numerous charts on, e.g., production and discovery trends and worldwide location of regular
oil. Thanks to the CD-Rom, Campbell's book is suitable for lectures and slideshows, greatly enhancing its power as a teaching tool.
Exponential growth and oil depletion are two of the most powerful forces shaping the future of every reader of this page. They imply that our growth-based way of life is doomed. Our existence in
ignorance of them is a night march toward a precipice.
Mastering these wise books turns on searchlights in the darkness. | {"url":"https://www.thesocialcontract.com/artman2/publish/tsc1601/article_1342.shtml","timestamp":"2024-11-06T17:51:46Z","content_type":"application/xhtml+xml","content_length":"38674","record_id":"<urn:uuid:446792d1-3aa6-49b0-b535-77b4cc91b4c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00665.warc.gz"} |
Resource Library
This site gives a definition and an example of numerical summaries. Topics include mean, median, quantiles, variance, and standard deviation.
This site gives a definition and an example of normal distributions. Topics include assessing normality and normal probability plots.
This site gives a definition and an example of categorical data. Topics include two-way tables, bar graphs, and segmented bar graphs.
This site gives an explanation, a definition and an example of inference in linear regression. Topics include confidence intervals for intercept and slope, significance tests, mean response, and
prediction intervals.
This site gives an explanation, a definition and an example of multiple linear regression. Topics include confidence intervals, tests of significance, and squared multiple correlation.
This site gives an explanation, a definition and an example of ANOVA for regression. Topics include analysis of variance calculations for simple and multiple regression, and F-statistics.
This site gives an explanation, a definition of and an example using experimental design. Topics include experimentation, control, randomization, and replication.
This site gives an explanation, a definition and an example of sampling in statistical inference. Topics include parameters, statistics, sampling distributions, bias, and variability.
This site gives an explanation, a definition and an example of probability models. Topics include components of probability models and the basic rules of probability.
This site gives an explanation, a definition and an example of conditional probability. Topics include the probabilities of intersections of events and Bayes' formula. | {"url":"https://causeweb.org/cause/resources/library?field_material_type_tid=99&page=46","timestamp":"2024-11-04T01:42:15Z","content_type":"text/html","content_length":"66864","record_id":"<urn:uuid:9050a785-c796-4662-a03a-309877684dd5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00368.warc.gz"} |
CBSE Class 10 Maths - MCQ Questions and Online Tests - Unit 12 - Areas
Related to Circles
CBSE Class 10 Maths – MCQ and Online Tests – Unit 12 – Areas Related to Circles
Every year CBSE conducts board exams for 10th standard. These exams are very competitive to all the students. So our website provides online tests for all the 10th subjects. These tests are also very
effective and useful for those who preparing for competitive exams like NEET, JEE, CA etc. It can boost their preparation level and confidence level by attempting these chapter wise online tests.
These online tests are based on latest CBSE Class 10 syllabus. While attempting these our students can identify the weak lessons and continuously practice those lessons for attaining high marks. It
also helps to revise the NCERT textbooks thoroughly.
CBSE Class 10 Maths – MCQ and Online Tests – Unit 12 – Areas Related to Circles
Question 1.
Perimeter of a sector of a circle whose central angle is 90° and radius 7 cm is
(a) 35 cm
(b) 25 cm
(c) 77 cm
(d) 7 cm
Answer: (b) 25 cm
Question 2.
The area of a circle that can be inscribed in a square of side 10 cm is
(a) 40π cm²
(b) 30π cm²
(c) 100π cm²
(d) 25π cm²
Answer: (d) 25π cm²
Question 3.
The perimeter of a square circumscribing a circle of radius a units is
(a) 2 units
(b) 4α units
(c) 8α units
(d) 16α units
Answer: (c) 8α units
Question 4.
The perimeter of the sector with radius 10.5 cm and sector angle 60° is
(a) 32 cm
(b) 23 cm
(c) 41 cm
(d) 11 cm
Answer: (a) 32 cm
Question 5.
In a circle of diameter 42 cm, if an arc subtends an angle of 60° at the centre, where π = \(\frac{22}{7}\) then length of arc is
(a) 11 cm
(b) \(\frac{22}{7}\) cm
(c) 22 cm
(d) 44 cm
Answer: (c) 22 cm
Question 6.
The perimeter of a sector of radius 5.2 cm is 16.4 cm, the area of the sector is
(a) 31.2 cm²
(b) 15 cm²
(c) 15.6 cm²
(d) 16.6 cm²
Answer: (c) 15.6 cm²
Question 7.
If the perimeter of a semicircular protractor is 72 cm where π = \(\frac{22}{7}\), then the diameter of protractor is
(a) 14 cm
(b) 33 cm
(c) 28 cm
(d) 42 cm
Answer: (c) 28 cm
Question 8.
If the radius of a circle is doubled, its area becomes
(a) 2 times
(b) 4 times
(c) 8 times
(d) 16 times
Answer: (b) 4 times
Question 9.
If the sum of the circumferences of two circles with radii R[1] and R[2] is equal to circumference of a circle of radius R, then
(a) R[1] + R[2] = R
(b) R[1] + R[2] > R
(c) R[1] + R[2] < R
(d) Can’t say;
Answer: (a) R[1] + R[2] = R
Question 10.
The perimeter of a circular and square fields are equal. If the area of the square field is 484 m² then the diameter of the circular field is
(a) 14 m
(b) 21 m
(c) 28 m
(d) 7 m
Answer: (c) 28 m
Question 11.
The radius of sphere is r cm. It is divided into . two equal parts. The whole surface area of two parts will be
(a) 8πr² cm²
(b) 6πr² cm²
(c) 4πr² cm²
(d) 3πr² cm²
Answer: (b) 6πr² cm²
Question 12.
If the diameter of a semicircular protractor is 14 cm, then its perimeter is .
(a) 27 cm
(b) 36 cm
(c) 18 cm
(d) 9 cm
Answer: (b) 36 cm
Question 13.
A race track is in the form of a circular ring whose outer and inner circumferences are 396 m and 352 m respectively. The width of the track is
(a) 63 m
(b) 56 m
(c) 7 m
(d) 3.5 m
Answer: (c) 7 m
Question 14.
The area of the largest square that can be inscribed in a circle of radius 12 cm is
(a) 24 cm²
(b) 249 cm²
(c) 288 cm²
(d) 196√2 cm²
Answer: (c) 288 cm²
Question 15.
The area of the largest triangle that can be inscribed in a semicircle of radius r is
(a) r²
(b) 2r²
(c) r³
(d) 2r³
Answer: (a) r²
Question 16.
The area (in cm²) of the circle that can be inscribed in a square of side 8 cm is
(a) 64 π
(b) 16 π
(c) 8 π
(d) 32 π
Answer: (b) 16 π
Question 17.
If the perimeter of a circle is equal to that of a square, then the ratio of their areas is
(a) 22 : 7
(b) 14 : 11
(c) 7 : 22
(d) 11 : 14
Answer: (b) 14 : 11
Question 18.
The circumference of two concentric circles forming a ring are 88 cm and 66 cm. Taking π = \(\frac{22}{7}\), the width of the ring is
(a) 14 cm
(b) 7 cm
(c) \(\frac{7}{2}\) cm
(d) 21 cm
Answer: (c) \(\frac{7}{2}\) cm
Question 19.
A steel wire when bent in the form of a square encloses an area of 121 cm². If the same wire is bent in the form of a circle, then the circumference of the circle is
(a) 88 cm
(b) 44 cm
(c) 22 cm
(d) 11 cm
Answer: (b) 44 cm
Question 20.
The diameter of a circle whose area is equal to sum of the areas of the two circles of radii 40 cm and 9 cm is
(a) 41 cm
(b) 49 cm
(c) 82 cm
(d) 62 cm
Answer: (c) 82 cm
0 comments: | {"url":"https://www.cbsetips.in/2021/01/cbse-class-10-maths-mcq-questions-and_77.html","timestamp":"2024-11-06T21:40:46Z","content_type":"application/xhtml+xml","content_length":"151583","record_id":"<urn:uuid:236f2c20-0a14-47c1-9c03-bed40cacf2ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00768.warc.gz"} |
Floating FIX Mode
As discussed in a forum thread, it's often valuable to see the maximum number of digits with a meaningful contribution to the result value. The idea for the 41 was to use the I/O_SVC interrupt
polling point to adjust the FIX setting in a dynamic fashion, depending on the value stored in the stack register X.
Some folks call this a FIX ALL mode, but I favor the Floating FIX terminology - after all a fix ALL would always be a static FIX_9, for it isn't about ALL digits but ALL NEEDED ones. Bur semantics
aside, the code below shows the core of the routine, i.e. the actual determination of the FIX settings.
The formulas used are as follows:
Let x be represented by the following convention used in the 41 platform, with one digit for the mantissa sign, 10 digits for the mantissa, one for the exponent sign and two for the exponent. This
enables a numeric range between +/-9,999999999 E99, with a "whole" around zero defined by the interval +/-1E-99.
Then the fix setting to use is a function of the number in X , represented as follows:
1. If number >=1 (or x="0")
let z# = number of mantissa digits equal to zero, starting from the most significant one (i.e. from PT= 3 to PT=12), and XP = value of exponent (yz). Then we have:
FIX = max { 0 , [(9-z#) + XP }
2. if number < 1 (or x="9")
let |XP| = 100 - xyz, and z# as defined above. Then we have:
FIX = min { 9 , [(9-z#) + |XP| }
And here is the code to be executed by the OS upon each qualifying I/O_SVC event:
384 CLRF 0 default is XP > 0
0F8 READ 3(X)
2FA ?C#0 M is it zero??
15B JNC +43d (has 10 zeroes in mantissa)
2A0 SETDEC decimal math
01C PT= 3
006 A=0 S&X
2E2 ?C#0 @PT
037 JC +06 [EXIT]
166 A=A+1 S&X # of zero digits in A[S&X]
354 ?PT= 12
01F JC +03 [EXIT]
3DC PT=PT+1
3D3 JNC -06
130 LDI S&X
009 CON: mantissa field length
0A6 A<>C S&X #z in C[S&X]
1C6 A=A-C S&X mant# = (9 - z#)
086 B=A S&X keep copy in B[S&X]
0F8 READ 3(X)
106 A=C S&X put it in A[S&X]
356 ?A#0 XS lower than one?
03B JNC +07 no, jump to section
388 SETF 0 marks XP < 0
016 A=0 XS yes, remove sign
130 LDI S&X
100 CON: normalize constant
0A6 A<>C S&X
1C6 A=A-C S&X |XP| = (100 - XP) in A[S&X]
130 LDI S&X
009 CON: maximum FIX setting
260 SETHEX HEX mode
306 ?A<C S&X is |XP| < 9 ?
063 JNC +12d no, out of bounds
0A6 A<>C S&X put |XP| in C[S&X]
066 A<>B S&X put (9-z#) in A[S&X]
38C ?FSET 0
01B JNC +03
206 C=C+A S&X mant# + |XP|
03B JNC +07
246 C=A-C S&X mant# - |XP|
2F6 ?C#0 XS zeros in tens, hundreds...
023 JNC +04 no, stay put
046 C=0 S&X yes, it was integer!
013 JNC +02 skip next
0C6 C=B S&X put (9-z#) in C[S&X]
0FC RCR 10 puts it in C<4>
10E A=C ALL save result in A<4>
3B8 READ 14(d) read flag register
158 M=C ALL save it for later
05C PT= 4
0A2 A<>C @PT get fix# to C<4>
01C PT= 3
210 LD@PT- 8 FIX mode
3A8 WRIT 14(d) temporary settings
0F8 READ 3(X) puts value in C
099 ?NC XQ Sends C to display - sets HEX
02C ->0B26 [DSPCRG]
198 C=M ALL recall original FIX settings
205 ?NC GO Set MSG flag (from C)
00E ->0381 [STMSF]+3 | {"url":"https://www.hpmuseum.org/forum/archive/index.php?thread-3099.html","timestamp":"2024-11-08T02:13:30Z","content_type":"application/xhtml+xml","content_length":"16992","record_id":"<urn:uuid:9d6ab359-e57f-4a55-b61c-a5e20f32b018>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00167.warc.gz"} |
Difference between simple and compound interest - Termscompared
Difference between simple and compound interest
The fee that a lender charges for an amount of money lent to a borrower is known as interest. Interest is usually charged in percentage terms.
Definitions and explanations:
Simple Interest:
If interest is charged only on the principal amount of loan given to the borrower, the interest in known as simple interest.
Compound Interest:
If interest is charged on principal amount as well as the interest already charged on the principal amount such that this interest amount is added to the principal amount is known as compound
interest. Thus as the word compound suggests, the interest is charged on two elements i.e., principle amount and any interest already earned thereon.
Difference between simple and compound interest:
The main points of difference between simple interest and compound interest are given below:
1. Effective rate of interest:
For the same percentage/rate of interest, simple interest is always lower than the compound interest for the same principal amount. The reason being simple interest is only charged on the principal
amount of loan, whereas compound interest is charged on the principal amount plus accumulative amount of interest already charged. This makes the effective interest rate of compound interest higher
than simple interest.
2. Calculations and formulas:
The calculations of simple interest are easy to understand and straight forward. The calculations of compound interest can become difficult and complicated especially if the calculations include
long-term loans.
Formula for simple interest:
I = p × r × n
• I = Simple interest
• p = Principal Amount
• r = Rate of interest
• n = number of years of loan
Formula for compound interest:
I = p × [1-(1 + r/t)^n]
• I = Compound interest
• p = Principal Amount
• r = Rate of interest
• n = number of years of loan
• t= number of times of compounding
A borrower takes a loan of $50,000 on simple interest rate at 6% per annum. If the duration of loan is three years, the effective charge to the borrower would be:
I = p × r × n
= $50,000 × (6/100) × 3
= $50,000 × 0.06 × 3
= $50,000 × 0.18
Total amount to be repaid = Principle + Interest
= $50,000 + $9,000
= $59,000
If the loan is borrowed on the same terms but compound interest is applied, then the effective charge to the borrower or profit for the lender would be computed as below:
I = p × [1-(1 + r/t)^n]
= $50,000 × [1-(1 + (6/100))^3]
= $50,000 × [1- (1.06)^3]
= $50,000 × (1-1.191016)
= $50,000 × 0.191016
= $9,550.8
Total amount to be repaid = Principle + Interest
= $50,000 + $9,550.8
= $59,550.8
So, in above example, if simple interest is applied the borrower will have to repay a sum of $59,000, while if compound interest is applied the borrower will pay a sum of $59,550.8.
3. Application:
The simple interest is usually applied in car loans and leases. Borrowers also use and prefer to borrow loans on simple interest rates because in this way they can avoid the risk of overpaying their
interest and also a predetermined liability creates certainty. Simple interest is also applied to normal investments including risk free investment accounts and certificate deposits. The compound
interest is used mostly for the purposes of savings especially by pension funds, insurance companies etc. This type of interest provides benefit to the depositor or lender. Compound interest is
applied by the banks upon the fixed savings deposits of lenders where the lender cannot withdraw their deposits for a certain span of time during which their investment is compounded based on the
borrowing rate of bank. Credit card companies apply compound interest rate and charge the card holder on principal plus the accumulated interest amount.
Simple interest versus compound interest – tabular comparison
A tabular comparison of simple interest and compound interest is given below:
Simple interest vs Compound interest
Effective interest Rate
The effective interest rate of compound interest is higher The effective interest rate of simple interest is lower
Calculations and formulas
Compound interest calculations can become complicated P× [1-(1 + R/t)^n] Simple interest is easy to calculate P × R× n
Compound interest is used mostly by lenders for growth in their investments Simple interest is usually preferred by borrowers to increase certainty and decrease their borrowing cost
Conclusion – simple interest vs compound interest:
In practice, simple interest is applied where a business needs to minimize its risk of borrowing. Simple interest rate is in nature favorable for the borrower because of a fixed cash outflow.
Compound interest is suitable for savings because it has a multiplier effect therefore investments in compound interest rate are normally for longer periods. However, in essence whichever type of
interest is applied it will end up increasing the wealth of the lender.
Leave a reply | {"url":"https://www.termscompared.com/difference-between-simple-and-compound-interest/","timestamp":"2024-11-12T10:21:17Z","content_type":"text/html","content_length":"63471","record_id":"<urn:uuid:0d7b9947-2eeb-4267-b2b3-f62eafebef9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00568.warc.gz"} |
Mini-Workshop: Particle Systems with Several Conservation Laws: Fluctuations and Hydrodynamic Limit | EMS Press
Mini-Workshop: Particle Systems with Several Conservation Laws: Fluctuations and Hydrodynamic Limit
• Christian Klingenberg
Universität Würzburg, Germany
• Gunter M. Schütz
Forschungszentrum Jülich GmbH, Germany
• Bálint Tóth
Technical University of Budapest, Hungary
“Particle Systems with Several Conservation Laws: Fluctuations and Hydrodynamic Limit” connects different fields where intimate connections are just emerging. In many applications (like traffic flow,
dust models in astrophysics, compressible fluid models) very natural microscopic descriptions of the stochastic dynamics of interacting particles can often be related to macroscopic continuum
descriptions using nonlinear evolutionary PDEs. It is hard to rigorously relate these two levels of modelling of the same physical or biological phenomena, though.
Scientific progress in the area of hyperbolic conservation laws for systems of one and two equations has been used in rigorously proving the hydrodynamic limit of corresponding interacting particle
systems. Techniques like the theory of compensated compactness in PDEs are emerging as powerful tools in the interacting particle system community. Many other original ideas were and are currently
being developed within this second context for systems with two and more conservation laws. This had lead the organizers to believe that time has come to devote a high profile meeting to this subject
which is situated at the intersection between nonlinear hyperbolic pde theory, probability theory of interacting particle systems, nonequilibrium statistical physics.
More specifically, the choice of the topic was motivated by the following closely related issues: As it is well-known solutions of systems of hyperbolic PDEs develop shocks and this fact causes major
difficulties in the mathematical analysis as well as in the physical interpretation of the microscopic particle structure of a shock. Moreover, in the presence of macroscopic currents, boundary
conditions in finite systems determine the bulk behaviour of stationary solutions both of PDEs and particle systems. This has been shown to lead to boundary-induced nonequilibrium analogs of phase
transitions which are novel phenomena of particular importance in applications which usually deal with effectively finite systems. It raises the question how microscopic laws of interaction find an
appropriate description in terms of boundary conditions of an associated hyperbolic PDE. In our current but not fully developed understanding, the hydrodynamic limit, existence of shocks, and the
nature of boundary conditions appear to be very intricately linked problems which require investigation within a common framework. In this context the workshop was concerned with the following
• Derivation of hydrodynamic limit
• Microscopic structure of the shocks
• Open boundary problems
• Dynamical phase transitions
• Large deviations
• Treatment of the theory of conservation laws with entropies coming form microscopic models
The participants, coming from the US, France, Hungary and Germany, were mathematicians from PDE theory and probability theory and physicists working in the field of nonequilibrium statistical
mechanics. With all of them being specialists coming from different fields, but sharing a common research interest, this miniworkshop turned out to be a highly fruitful “joint venture”. A number of
very successful expository lectures on recent progress in the field helped to bridge the gaps between the different communities. More specialized talks, partly on open problems, led the participants
to leave the confines of their respective communities and to interact with each other. All of us enjoyed enormously the externally tranquil, but scientifically vivid and stimulating atmosphere of
Cite this article
Christian Klingenberg, Gunter M. Schütz, Bálint Tóth, Mini-Workshop: Particle Systems with Several Conservation Laws: Fluctuations and Hydrodynamic Limit. Oberwolfach Rep. 2 (2005), no. 2, pp.
DOI 10.4171/OWR/2005/22 | {"url":"https://ems.press/journals/owr/articles/827","timestamp":"2024-11-07T17:22:31Z","content_type":"text/html","content_length":"90300","record_id":"<urn:uuid:b96db8af-ae1b-499e-bf2d-6587f9af182d>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00540.warc.gz"} |
Is there more friction in laminar or turbulent flow?
It is clear that velocity gradient near the surface for laminar flow is smaller than for the turbulent one, thus wall shear stress for the laminar flow is smaller than for the turbulent one. This
means that laminar flow has smaller skin friction drag than the turbulent flow due to faster velocities near the surface.
What is the coefficient of friction for laminar flow?
it is mainly used to find out an object’s normal force and frictional force. thus, it is equal to 16/re.
Does laminar flow have less friction?
The laminar boundary is a very smooth flow, while the turbulent boundary layer contains swirls or “eddies.” The laminar flow creates less skin friction drag than the turbulent flow, but is less
What is the difference between turbulent and laminar flow?
Laminar flows are smooth and streamlined, whereas turbulent flows are irregular and chaotic. A low Reynolds number indicates laminar flow while a high Reynolds number indicates turbulent flow. The
flow behavior drastically changes if it is laminar vs. turbulent.
Why is friction factor higher in turbulent flows?
In the turbulent flow, the boundary layer has a much greater degree of mixing when compared to a laminar boundary layer. where the large eddies in a turbulent flow promote much more rapid and
thorough mixing which increases the mass transfer which results higher values of friction factor.
How does friction factor vary in laminar flow?
When the fluid flow is laminar (Re < 2000), the friction factor has a direct relationship on the Reynolds number, such that: f m = 64 / Re or f f = 16 / Re . where: fm = Moody friction factor (fm =
4.0 ff)
Does laminar flow have friction?
Therefore friction will occur between layers within the fluid. Fluids with a high viscosity will flow more slowly and will generally not support eddy currents and therefore the internal roughness of
the pipe will have no effect on the frictional resistance. This condition is known as laminar flow.
Does turbulent flow reduce drag?
Pressure drag is more significant than skin friction drag on large bodies – like your fuselage and nacelles. And since a turbulent boundary layer has more energy to oppose an adverse pressure
gradient, engineers often force the boundary layer to turn turbulent over fuselages to reduce overall drag.
Which is faster laminar or turbulent flow?
Fluid flow that is slow tends to be laminar. As it speeds up a transition occurs and it crinkles up into complicated, random turbulent flow. But even slow flow coming from a large orifice can be
turbulent; this is the case with smoke stacks.
How do you determine whether the flow is laminar or turbulent?
For practical purposes, if the Reynolds number is less than 2000, the flow is laminar. If it is greater than 3500, the flow is turbulent. Flows with Reynolds numbers between 2000 and 3500 are
sometimes referred to as transitional flows. | {"url":"https://rattleinnaustin.com/is-there-more-friction-in-laminar-or-turbulent-flow/","timestamp":"2024-11-13T18:05:48Z","content_type":"text/html","content_length":"72731","record_id":"<urn:uuid:17c7aabb-da32-4566-880a-830100e3c7c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00022.warc.gz"} |
Delaunay Triangulation -
QGIS Delaunay Triangulation
Delaunay Triangulation - What is it and what can we do with it?
In mathematics and computational geometry, a Delaunay Triangulation (DT) for a set P of points (Let P = {P1,…Pn}) in a plane is a triangulation DT(P) such that no point in P is inside the
circumcircle of any triangle in DT(P). Delaunay triangulations maximize the minimum angle of all the angles of the triangles in the triangulation; they tend to avoid skinny triangles. - Wikipedia
Page on Delaunay Triangulation
But what does that mean?
Delaunay Triangulation Convex Hull
For this demo, the point data was points of interest in the Bay Area. Many of those points happen to be shopping centers, so only the shopping centers were selected and used that to create these
maps, that way we can see what Convex Hull does - it triangulates an area based on points. We could think of this as a Shopping Center Area.
A Triangulation creates a plane out of triangles. The effect of Delaunay Triangulation is a more-detailed version of the Convex Hull. It takes point data, and it triangulates each of the points
(creating triangles between the points). So it takes the premise of Convex Hull - putting a rubber band around the area that contains all the points, and it uses more rubber bands, putting a band
around every 3 points and creating a series of triangles within the Convex Hull. If you compare the two images above, you'll note that the Delaunay Triangulation creates the same outside shape as the
Convex Hull, but on top of that it creates a triangle between all of the points.
But that's just if you're using an x,y coordinate system. If we have point data using the x,y,z coordinate system - where x is longitude, y is latitude and z is depth - not only can you still use
Delaunay Triangulation, but it is helpful in showing elevation and other 3D effects (They have a really good explanation here: http://www.mathworks.com/help/matlab/math/delaunay-triangulation.html).
Primary Sources
http://www.cs.uu.nl/docs/vakken/ga/slides9alt.pdf (Added by PT) | {"url":"http://djjr-courses.wikidot.com/soc128:qgis-delaunay-triangulation","timestamp":"2024-11-14T16:49:11Z","content_type":"application/xhtml+xml","content_length":"23544","record_id":"<urn:uuid:575ade41-d354-4050-9c84-603f67b85f46>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00351.warc.gz"} |
scipy.cluster.hierarchy.dendrogram(Z, p=30, truncate_mode=None, color_threshold=None, get_leaves=True, orientation='top', labels=None, count_sort=False, distance_sort=False, show_leaf_counts=True,
no_plot=False, no_labels=False, leaf_font_size=None, leaf_rotation=None, leaf_label_func=None, show_contracted=False, link_color_func=None, ax=None, above_threshold_color='C0')[source]#
Plot the hierarchical clustering as a dendrogram.
The dendrogram illustrates how each cluster is composed by drawing a U-shaped link between a non-singleton cluster and its children. The top of the U-link indicates a cluster merge. The two legs
of the U-link indicate which clusters were merged. The length of the two legs of the U-link represents the distance between the child clusters. It is also the cophenetic distance between original
observations in the two children clusters.
The linkage matrix encoding the hierarchical clustering to render as a dendrogram. See the linkage function for more information on the format of Z.
pint, optional
The p parameter for truncate_mode.
truncate_modestr, optional
The dendrogram can be hard to read when the original observation matrix from which the linkage is derived is large. Truncation is used to condense the dendrogram. There are several modes:
No truncation is performed (default). Note: 'none' is an alias for None that’s kept for backward compatibility.
The last p non-singleton clusters formed in the linkage are the only non-leaf nodes in the linkage; they correspond to rows Z[n-p-2:end] in Z. All other non-singleton clusters are
contracted into leaf nodes.
No more than p levels of the dendrogram tree are displayed. A “level” includes all nodes with p merges from the final merge.
Note: 'mtica' is an alias for 'level' that’s kept for backward compatibility.
color_thresholddouble, optional
For brevity, let \(t\) be the color_threshold. Colors all the descendent links below a cluster node \(k\) the same color if \(k\) is the first node below the cut threshold \(t\). All
links connecting nodes with distances greater than or equal to the threshold are colored with de default matplotlib color 'C0'. If \(t\) is less than or equal to zero, all nodes are
colored 'C0'. If color_threshold is None or ‘default’, corresponding with MATLAB(TM) behavior, the threshold is set to 0.7*max(Z[:,2]).
get_leavesbool, optional
Includes a list R['leaves']=H in the result dictionary. For each \(i\), H[i] == j, cluster node j appears in position i in the left-to-right traversal of the leaves, where \(j < 2n-1\)
and \(i < n\).
orientationstr, optional
The direction to plot the dendrogram, which can be any of the following strings:
Plots the root at the top, and plot descendent links going downwards. (default).
Plots the root at the bottom, and plot descendent links going upwards.
Plots the root at the left, and plot descendent links going right.
Plots the root at the right, and plot descendent links going left.
labelsndarray, optional
By default, labels is None so the index of the original observation is used to label the leaf nodes. Otherwise, this is an \(n\)-sized sequence, with n == Z.shape[0] + 1. The labels[i]
value is the text to put under the \(i\) th leaf node only if it corresponds to an original observation and not a non-singleton cluster.
count_sortstr or bool, optional
For each node n, the order (visually, from left-to-right) n’s two descendent links are plotted is determined by this parameter, which can be any of the following values:
Nothing is done.
'ascending' or True
The child with the minimum number of original objects in its cluster is plotted first.
The child with the maximum number of original objects in its cluster is plotted first.
Note, distance_sort and count_sort cannot both be True.
distance_sortstr or bool, optional
For each node n, the order (visually, from left-to-right) n’s two descendent links are plotted is determined by this parameter, which can be any of the following values:
Nothing is done.
'ascending' or True
The child with the minimum distance between its direct descendents is plotted first.
The child with the maximum distance between its direct descendents is plotted first.
Note distance_sort and count_sort cannot both be True.
show_leaf_countsbool, optional
When True, leaf nodes representing \(k>1\) original observation are labeled with the number of observations they contain in parentheses.
no_plotbool, optional
When True, the final rendering is not performed. This is useful if only the data structures computed for the rendering are needed or if matplotlib is not available.
no_labelsbool, optional
When True, no labels appear next to the leaf nodes in the rendering of the dendrogram.
leaf_rotationdouble, optional
Specifies the angle (in degrees) to rotate the leaf labels. When unspecified, the rotation is based on the number of nodes in the dendrogram (default is 0).
leaf_font_sizeint, optional
Specifies the font size (in points) of the leaf labels. When unspecified, the size based on the number of nodes in the dendrogram.
leaf_label_funclambda or function, optional
When leaf_label_func is a callable function, for each leaf with cluster index \(k < 2n-1\). The function is expected to return a string with the label for the leaf.
Indices \(k < n\) correspond to original observations while indices \(k \geq n\) correspond to non-singleton clusters.
For example, to label singletons with their node id and non-singletons with their id, count, and inconsistency coefficient, simply do:
# First define the leaf label function.
def llf(id):
if id < n:
return str(id)
return '[%d %d %1.2f]' % (id, count, R[n-id,3])
# The text for the leaf nodes is going to be big so force
# a rotation of 90 degrees.
dendrogram(Z, leaf_label_func=llf, leaf_rotation=90)
# leaf_label_func can also be used together with ``truncate_mode``,
# in which case you will get your leaves labeled after truncation:
dendrogram(Z, leaf_label_func=llf, leaf_rotation=90,
truncate_mode='level', p=2)
show_contractedbool, optional
When True the heights of non-singleton nodes contracted into a leaf node are plotted as crosses along the link connecting that leaf node. This really is only useful when truncation is
used (see truncate_mode parameter).
link_color_funccallable, optional
If given, link_color_function is called with each non-singleton id corresponding to each U-shaped link it will paint. The function is expected to return the color to paint the link,
encoded as a matplotlib color string code. For example:
dendrogram(Z, link_color_func=lambda k: colors[k])
colors the direct links below each untruncated non-singleton node k using colors[k].
axmatplotlib Axes instance, optional
If None and no_plot is not True, the dendrogram will be plotted on the current axes. Otherwise if no_plot is not True the dendrogram will be plotted on the given Axes instance. This can
be useful if the dendrogram is part of a more complex figure.
above_threshold_colorstr, optional
This matplotlib color string sets the color of the links above the color_threshold. The default is 'C0'.
A dictionary of data structures computed to render the dendrogram. Its has the following keys:
A list of color names. The k’th element represents the color of the k’th link.
'icoord' and 'dcoord'
Each of them is a list of lists. Let icoord = [I1, I2, ..., Ip] where Ik = [xk1, xk2, xk3, xk4] and dcoord = [D1, D2, ..., Dp] where Dk = [yk1, yk2, yk3, yk4], then the k’th link
painted is (xk1, yk1) - (xk2, yk2) - (xk3, yk3) - (xk4, yk4).
A list of labels corresponding to the leaf nodes.
For each i, H[i] == j, cluster node j appears in position i in the left-to-right traversal of the leaves, where \(j < 2n-1\) and \(i < n\). If j is less than n, the i-th leaf node
corresponds to an original observation. Otherwise, it corresponds to a non-singleton cluster.
A list of color names. The k’th element represents the color of the k’th leaf.
It is expected that the distances in Z[:,2] be monotonic, otherwise crossings appear in the dendrogram.
>>> import numpy as np
>>> from scipy.cluster import hierarchy
>>> import matplotlib.pyplot as plt
A very basic example:
>>> ytdist = np.array([662., 877., 255., 412., 996., 295., 468., 268.,
... 400., 754., 564., 138., 219., 869., 669.])
>>> Z = hierarchy.linkage(ytdist, 'single')
>>> plt.figure()
>>> dn = hierarchy.dendrogram(Z)
Now, plot in given axes, improve the color scheme and use both vertical and horizontal orientations:
>>> hierarchy.set_link_color_palette(['m', 'c', 'y', 'k'])
>>> fig, axes = plt.subplots(1, 2, figsize=(8, 3))
>>> dn1 = hierarchy.dendrogram(Z, ax=axes[0], above_threshold_color='y',
... orientation='top')
>>> dn2 = hierarchy.dendrogram(Z, ax=axes[1],
... above_threshold_color='#bcbddc',
... orientation='right')
>>> hierarchy.set_link_color_palette(None) # reset to default after use
>>> plt.show() | {"url":"http://scipy.github.io/devdocs/reference/generated/scipy.cluster.hierarchy.dendrogram.html","timestamp":"2024-11-05T04:34:17Z","content_type":"text/html","content_length":"53516","record_id":"<urn:uuid:b441bb9b-b791-4f58-842e-1b5e65b199a2>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00435.warc.gz"} |
GATE Biotechnology Syllabus (BT) 2024 - Download GATE 2024 Syllabus
GATE Biotechnology Syllabus (BT) 2024 – Download GATE Syllabus in PDF
Candidates need to review the GATE syllabus for the stream that they wish to appear. The syllabus defines the topics or subtopics that the candidates need to prepare for appearing in the examination.
It’s a very important part of the examination as it helps the students know what they have to study.
This article will give you complete information regarding the GATE Biotechnology Syllabus 2023.
GATE Biotechnology Syllabus (BT) 2024
GATE Biotechnology Syllabus (BT) Consists of 5 Sections, Engineering Mathematics, General Biotechnology, Recombinant DNA Technology, Plant and Animal Biotechnology, Bioprocess Engineering, and
Process Biotechnology.
Section 1: Engineering Mathematics
Linear Algebra
Algebra of matrices: Inverse and rank of a matrix; System of linear equations; Symmetric, skew-symmetric and orthogonal matrices; Determinants; Eigenvalues and eigenvectors, Diagonalisation of
matrices; Cayley-Hamilton Theorem.
Functions of a single variable: Limit, continuity, and differentiability; Mean value theorems, Indeterminate forms, and L’Hospital’s rule; Maxima and minima; Taylor’s theorem, Fundamental theorem and
mean value-theorems of integral calculus; Evaluation of definite and improper integrals; Applications of definite integrals to evaluate areas and volumes.
Functions of two variables: Limit, continuity, and partial derivatives; Directional derivative, Total derivative; Tangent plane and normal line; Maxima, minima and saddle points, Method of Lagrange
multipliers; Double and triple integrals, and their applications.
Sequence and series: Convergence of sequence and series; Tests for convergence, Power series; Taylor’s series; Fourier Series; Half range sine and cosine series.
Vector Calculus
Gradient, divergence, and curl; Line and surface integrals; Green’s theorem, Stokes theorem, and Gauss divergence theorem (without proofs).
Complex Variable
Analytic functions; Cauchy-Riemann equations; Line integral, Cauchy’s integral theorem and integral formula (without proof); Taylor’s series and Laurent series; Residue theorem (without proof) and
its applications.
Ordinary Differential Equation
First-order equations (linear and nonlinear); Higher order linear differential equations with constant coefficients; Second-order linear differential equations with variable coefficients; Method of
variation of parameters; Cauchy-Euler equation; Power series solutions; Legendre polynomials, Bessel functions of the first kind and their properties.
Partial Differential Equation
Classification of second-order linear partial differential equations; Method of separation of variables; Laplace equation; Solutions of one-dimensional heat and wave equations.
Axioms of probability; Conditional probability; Bayes’ Theorem; Discrete and continuous random variables: Binomial, Poisson, and normal distributions; Correlation and linear regression.
Numerical Methods
The solution of systems of linear equations using LU decomposition, Gauss elimination and Gauss-Seidel methods; Lagrange and Newton’s interpolations, Solution of polynomial and transcendental
equations by Newton-Raphson method; Numerical integration by trapezoidal rule, Simpson’s rule and Gaussian quadrature rule; Numerical solutions of first-order differential equations by Euler’s method
and 4th order Runge-Kutta method.
Section 2: General Biotechnology
Biochemistry: Biomolecules-structure and functions; Biological membranes, structure, action potential and transport processes; Enzymes- classification, kinetics, and mechanism of action; Basic
concepts and designs of metabolism (carbohydrates, lipids, amino acids, and nucleic acids) photosynthesis, respiration, and electron transport chain; Bioenergetics
Microbiology: Viruses- structure and classification; Microbial classification and diversity(bacterial, algal, and fungal); Methods in microbiology; Microbial growth and nutrition; Aerobic and
anaerobic respiration; Nitrogen fixation; Microbial diseases and host-pathogen interaction.
Cell Biology: Prokaryotic and eukaryotic cell structure; Cell cycle and cell growth control; Cell-Cell communication, Cell signalling, and signal transduction
Molecular Biology and Genetics: Molecular structure of genes and chromosomes; Mutations and mutagenesis; Nucleic acid replication, transcription, translation, and their regulatory mechanisms in
prokaryotes and eukaryotes; Mendelian inheritance; Gene interaction; Complementation; Linkage, recombination, and chromosome mapping; Extra chromosomal inheritance; Microbial genetics (plasmids,
transformation, transduction, conjugation); Horizontal gene transfer and Transposable elements; RNA interference; DNA damage and repair; Chromosomal variation; Molecular basis of genetic diseases.
Analytical Techniques: Principles of microscopy-light, electron, fluorescent and confocal; Centrifugation- high speed and ultra; Principles of spectroscopy-UV, visible, CD, IR, FTIR, Raman, MS, NMR;
Principles of chromatography- ion exchange, gel filtration, hydrophobic interaction, affinity, GC, HPLC, FPLC; Electrophoresis; Microarray
Immunology: History of Immunology; Innate, humoral and cell-mediated immunity; Antigen; Antibody structure and function; Molecular basis of antibody diversity; Synthesis of antibody and secretion;
Antigen-antibody reaction; Complement; Primary and secondary lymphoid organ; B and T cells and macrophages; Major histocompatibility complex (MHC); Antigen processing and presentation; Polyclonal and
monoclonal antibody; Regulation of immune response; Immune tolerance; Hypersensitivity; Autoimmunity; Graft versus host reaction.
Bioinformatics: Major bioinformatic resources and search tools; Sequence and structure databases; Sequence analysis (biomolecular sequence file formats, scoring matrices, sequence alignment,
phylogeny); Data mining and analytical tools for genomic and proteomic studies; Molecular dynamics and simulations (basic concepts including force fields, protein-protein, protein-nucleic acid,
protein-ligand interaction)
Section 3: Recombinant DNA Technology
Restriction and modification enzymes; Vectors; plasmid, bacteriophage, and other viral vectors, cosmids, Ti plasmid, yeast artificial chromosome; mammalian and plant expression vectors; cDNA and
genomic DNA library; Gene isolation, cloning, and expression; Transposons and gene targeting; DNA labeling; DNA sequencing; Polymerase chain reactions; DNA fingerprinting; Southern and northern
blotting; In-situ hybridization; RAPD, RFLP; Site-directed mutagenesis; Gene transfer technologies; Gene therapy.
Section 4: Plant and Animal Biotechnology
Totipotency; Regeneration of plants; Plant growth regulators and elicitors; Tissue culture and Cell suspension culture system: methodology, the kinetics of growth and, nutrient optimization;
Production of secondary metabolites by plant suspension cultures; Hairy root culture; transgenic plants; Plant products of industrial importance
Animal cell culture; media composition and growth conditions; Animal cell and tissue preservation; Anchorage and non-anchorage dependent cell culture; Kinetics of cell growth; Micro & macro-carrier
culture; Hybridoma technology; Stem cell technology; Animal cloning; Transgenic animals
Section 5: Bioprocess Engineering and Process Biotechnology
Chemical engineering principles applied to a biological system, Principle of reactor design, ideal and non-ideal multiphase bioreactors, mass and heat transfer; Rheology of fermentation fluids,
Aeration, and agitation; Media formulation and optimization; Kinetics of microbial growth, substrate utilization, and product formation; Sterilization of air and media; Batch, fed-batch and
continuous processes; Various types of microbial and enzyme reactors; Instrumentation control and optimization; Unit operations in solid-liquid separation and liquid-liquid extraction; Process
scale-up, economics, and feasibility analysis.
Engineering principle of bioprocessing- Upstream production and downstream; Bioprocess design and development from lab to industrial scale; Microbial, animal and plant cell culture platforms;
Production of biomass and primary/secondary metabolites; Biofuels, Bioplastics, industrial enzymes, antibiotics; Large scale production and
purification of recombinant proteins; Industrial application of chromatographic and membrane-based bioseparation methods; Immobilization of biocatalysts (enzymes and cells) for bioconversion
processes; Bioremediation-Aerobic and anaerobic processes for stabilization of solid/liquid wastes.
GATE is one of the most competitive examinations that needs complete focus and a good understanding of basic concepts. In order to clear the exam, the candidates must have a thorough preparation.
Every subject requires a keen understanding and practice. You must devote a good amount of time to every subject based on your comfort and knowledge. Before starting the preparations, you must create
a timetable that covers all your subjects/topics.
Try to appear for as many mock tests as possible.
Good luck with your future!
People are also reading: | {"url":"https://learndunia.com/gate-biotechnology-syllabus/","timestamp":"2024-11-05T09:44:14Z","content_type":"text/html","content_length":"126756","record_id":"<urn:uuid:620dd713-01c7-4b59-bc83-3408d2cc4f19>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00358.warc.gz"} |
Data Exploration with just 1 line of Python
In this post, you’ll see getting all your standard data analysis done in less than 30 seconds with just 1 line of Python. The wonders of Pandas Profiling.
The vanilla pandas way (the boring way)
Anyone working with data in Python will be familiar with the pandas package. If you’re not, pandas is the go-to package for most rows-&-columns formatted data. If you don’t have pandas make sure to
install it using pip install in your terminal:
pip install pandas
Now, let’s see what the default methods can do for us:
Pretty decent, but also bland… And where did the “method” column go?
For those unaware of what’s happening above:
Any pandas DataFrame has a .describe()method which returns the output above. However, unnoticed in this method are categorical variables. In our example above the “method” column is completely
omitted from the output.
Let’s see if we can do any better. (hint: we can!)
Pandas Profiling (the fancy way)
This is just the beginning of the report.
How would you like it if I told you I could produce the following statistics with just 3 lines of Python…? Actually just 1 line if we don’t count our imports.
• Essentials: type, unique values, missing values
• Quantile statistics like minimum value, Q1, median, Q3, maximum, range, interquartile range
• Descriptive statistics like mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
• Most frequent values
• Histogram
• Correlations highlighting of highly correlated variables, Spearman, Pearson and Kendall matrices
• Missing values matrix, count, heatmap and dendrogram of missing values
(List of features are directly from the Pandas Profiling GitHub)
Well we can using the Pandas Profiling package! To install the Pandas Profiling package simply use pip install in your terminal:
pip install pandas_profiling
Seasoned data analysts might scoff at this at first glance for being fluffy and flashy, but it can definitely be useful for getting a quick first-hand impression of your data:
See, 1 line, just as I promised! #noclickbait
The first thing you’ll see it the Overview (see the picture above) which gives you some very high-level statistics on your data and variables as well as warnings like high correlation between
variables, high skewness and more.
But this isn’t even close to everything. Scrolling down we find that there are multiple parts to the report, but simply showing the output of this 1-liner with pictures wouldn’t do it any justice, so
I’ve made a GIF instead:
I highly recommend you to explore the features of this package yourself, after all, it’s just one line of code and you might find it useful in your future data analysis.
import pandas as pd
import pandas_profiling
Closing thoughts
This was just a really quick and short one. I just discovered Pandas Profiling myself and thought I would share!
#python #pandas #machine-learning #data-science #data-analysis | {"url":"https://morioh.com/a/e5fb23d1b0d6/data-exploration-with-just-1-line-of-python","timestamp":"2024-11-04T02:48:52Z","content_type":"text/html","content_length":"83696","record_id":"<urn:uuid:2d0646b4-a0d2-4ffb-8099-cd5fef885c80>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00411.warc.gz"} |
PPT - Finite-size scaling in complex networks PowerPoint Presentation, free download - ID:3301056
1. The 2nd KIAS Conference on Statistical Physics (NSPCS06), July 3-6, 2006, KIAS Finite-size scaling in complex networks Meesoon Ha (KIAS) in collaboration with Hyunsuk Hong (Chonbuk Nat’l Univ.)
and Hyunggyu Park (KIAS) Korea InstituteforAdvancedStudy
2. Outline • Controversial issues of critical behavior in CP on scale-free (SF) networks MF vs. Non-MF? Network cutoff dependence? • Mean-field (MF) approach and FSS for the Ising modelin regular
lattices • FSS exponents in SF networks (from Ising to directed percolation, CP and SIS) • Numerical results (with two different types of network cutoff) • Summary
4. Current controversial issues • Non-MF Critical Behavior of Contact Process (CP) in SF networks? Castellano and Pastor-Satorras (PRL `06) claimed that the critical behavior of CP is non-MF in SF
networks, based on the discrepancy between numerical results and their MF predictions. They pointed out the large density fluctuations at highly connected nodes as a possible origin for such a
non-MF critical behavior. However, it turns out thatall of their numerical results can be explained well by the proper MF treatment.In particular, the unbounded density fluctuations are not
critical fluctuations, which are justdue to the multiplicative nature of the noise in DP systems(Ha, Hong, and Park, cond-mat/0603787). • Cutoff dependence of FSS exponents? Natural cutoff vs.
Forced sharp cutoff
6. For well-known equilibrium models and some nonequilibrium models, it is known that this thermodynamic droplet length scale competes with system size in high dimensions and governs FSS as .
-Binder, Nauenberg, Privman, and Young, PRB (1985): 5D Ising model test - Luebeck and Jassen, PRE (2005): 5D DP model test -Botet, Jullien, and Pfeuty, PRL (1982): FSS in infinite systems Why do
we care this droplet length?
8. Conjecture: FSS in SF networks with Note that our conjecture is independent of the type of network cutoffs!!
11. Numerical Results • Extensive simulations are performed on two different types of network cutoff • Based on independent measurements two exponents and critical temperature are determined. • Our
conjecture is perfectly confirmed well in terms of data collapse with our numerical finding. (Goh et al. PRL `01 for static; Cantanzaro et al. PRE `05 for UCM)
14. CP on UCM: Ours (cond-mat/0603787) vs. Castellano and Pastor-Satorros (PRL ’06)
15. Summary & Ongoing Issue • The heterogeneity-dependent MF theoryis still valid in SF networks! • No cutoff dependence on critical behavior, if it is not too strong. • We conjecture the FSS
exponent value for the Ising model and DP systems (CP, SIS), which is numerically confirmed perfectly well. • Heterogeneous FSS exponents for Synchronization? Thank you !!!
17. Unbounded density fluctuations of CP: Not only at criticalityBut also everywhere on SF networks!!! | {"url":"https://www.slideserve.com/delu/finite-size-scaling-in-complex-networks-powerpoint-ppt-presentation","timestamp":"2024-11-10T01:57:07Z","content_type":"text/html","content_length":"92872","record_id":"<urn:uuid:9d6c1620-d296-4a57-b9b7-ec782751183b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00256.warc.gz"} |
The function PerformMannWhitneyUTest returns the probability associated with a 2-sample U-test, that is the probability which indicates the likelihood that two samples have come from the same two
underlying populations having the same median.
The parameters Data1 and Data2 contain the data to be compared, the boolean flag OneSided determines whether the problem is a one-sided or a two-sided one. The variable parameter UStatistic contains
the value of the U statistic on return (if no error ocurred).
Returned error codes:
≥0 ... everything is OK, the returned value represents the p-value
-1 ... at least one of the data field is empty | {"url":"https://imagelab.at/help/ilabpas_performmannwhitneyutest.htm","timestamp":"2024-11-04T07:25:38Z","content_type":"text/html","content_length":"3555","record_id":"<urn:uuid:903d68ec-740d-48e3-b077-e33b876f7392>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00178.warc.gz"} |
Lesson 17
Two Related Quantities, Part 2
Let’s use equations and graphs to describe stories with constant speed.
17.1: Walking to the Library
Lin and Jada each walk at a steady rate from school to the library. Lin can walk 13 miles in 5 hours, and Jada can walk 25 miles in 10 hours. They each leave school at 3:00 and walk \(3\frac14\)
miles to the library. What time do they each arrive?
17.2: The Walk-a-thon
Diego, Elena, and Andre participated in a walk-a-thon to raise money for cancer research. They each walked at a constant rate, but their rates were different.
1. Complete the table to show how far each participant walked during the walk-a-thon.
│time in hours │miles walked│miles walked│miles walked│
│ │ by Diego │ by Elena │ by Andre │
│ 1 │ │ │ │
│ 2 │6 │ │ │
│ │12 │11 │ │
│ 5 │ │ │17.5 │
2. How fast was each participant walking in miles per hour?
3. How long did it take each participant to walk one mile?
4. Graph the progress of each person in the coordinate plane. Use a different color for each participant.
5. Diego says that \(d=3t\) represents his walk, where \(d\) is the distance walked in miles and \(t\) is the time in hours.
1. Explain why \(d=3t\) relates the distance Diego walked to the time it took.
2. Write two equations that relate distance and time: one for Elena and one for Andre.
6. Use the equations you wrote to predict how far each participant would walk, at their same rate, in 8 hours.
7. For Diego’s equation and the equations you wrote, which is the dependent variable and which is the independent variable?
1. Two trains are traveling toward each other, on parallel tracks. Train A is moving at a constant speed of 70 miles per hour. Train B is moving at a constant speed of 50 miles per hour. The trains
are initially 320 miles apart. How long will it take them to meet? One way to start thinking about this problem is to make a table. Add as many rows as you like.
2. How long will it take a train traveling at 120 miles per hour to go 320 miles?
3. Explain the connection between these two problems.
│ │train A │ train B │
│starting position │0 miles │320 miles │
│ after 1 hour │70 miles│270 miles │
│ after 2 hours │ │ │
Equations are very useful for solving problems with constant speeds. Here is an example.
A boat is traveling at a constant speed of 25 miles per hour.
1. How far can the boat travel in 3.25 hours?
2. How long does it take for the boat to travel 60 miles?
We can write equations to help us answer questions like these.
Let's use \(t\) to represent the time in hours and \(d\) to represent the distance in miles that the boat travels.
When we know the time and want to find the distance, we can write: \(\displaystyle d = 25t\)
In this equation, if \(t\) changes, \(d\) is affected by the change, so we \(t\) is the independent variable and \(d\) is the dependent variable.
This equation can help us find \(d\) when we have any value of \(t\). In \(3.25\) hours, the boat can travel \(25(3.25)\) or \(81.25\) miles.
When we know the distance and want to find the time, we can write: \(\displaystyle t = \frac{d}{25}\) In this equation, if \(d\) changes, \(t\) is affected by the change, so we \(d\) is the
independent variable and \(t\) is the dependent variable.
This equation can help us find \(t\) when for any value of \(d\). To travel 60 miles, it will take \(\frac{60}{25}\) or \(2 \frac{2}{5}\) hours.
These problems can also be solved using important ratio techniques such as a table of equivalent ratios. The equations are particularly valuable in this case because the answers are not round numbers
or easy to quickly evaluate.
We can also graph the two equations we wrote to get a visual picture of the relationship between the two quantities:
• coordinate plane
The coordinate plane is a system for telling where points are. For example. point \(R\) is located at \((3, 2)\) on the coordinate plane, because it is three units to the right and two units up.
• dependent variable
The dependent variable is the result of a calculation.
For example, a boat travels at a constant speed of 25 miles per hour. The equation \(d=25t\) describes the relationship between the boat's distance and time. The dependent variable is the
distance traveled, because \(d\) is the result of multiplying 25 by \(t\).
• independent variable
The independent variable is used to calculate the value of another variable.
For example, a boat travels at a constant speed of 25 miles per hour. The equation \(d=25t\) describes the relationship between the boat's distance and time. The independent variable is time,
because \(t\) is multiplied by 25 to get \(d\). | {"url":"https://curriculum.illustrativemathematics.org/MS/students/1/6/17/index.html","timestamp":"2024-11-04T10:45:15Z","content_type":"text/html","content_length":"98356","record_id":"<urn:uuid:558191ca-d153-4ba0-a13a-a8a8600377b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00561.warc.gz"} |
Problem E
John is on a mission to get two people out of prison. This particular prison is a one-story building. He has managed to get hold of a detailed floor plan, indicating all the walls and doors. He also
knows the locations of the two people he needs to set free. The prison guards are not the problem – he has planned a diversion that should leave the building practically void.
The doors are his main concern. All doors are normally opened remotely from a control room, but John can open them by other means. Once he has managed to open a door, it remains open. However,
opening a door takes time, which he does not have much of, since his diversion will only work for so long. He therefore wants to minimize the number of doors he needs to open. Can you help him plan
the optimal route to get to the two prisoners?
On the first line one positive number: the number of test cases, at most 100. After that per test case:
• one line with two space-separated integers $h$ and $w$ ($2 \leq h,w \leq 100$): the width and height of the map.
• $h$ lines with $w$ characters describing the prison building:
□ ‘.’ is an empty space.
□ ‘*’ is an impenetrable wall.
□ ‘#’ is a door.
□ ‘$’ is one of the two people to be liberated.
John can freely move around the outside of the building. There are exactly two people on the map. For each person, a path from the outside to that person is guaranteed to exist.
Per test case:
• one line with a single integer: the minimum number of doors John needs to open in order to get to both prisoners.
Sample Input 1 Sample Output 1
*$*.*.*.*.* 4
*...*...*.* 0
*********.* 9 | {"url":"https://open.kattis.com/contests/k9ofpj/problems/jailbreak","timestamp":"2024-11-05T22:33:55Z","content_type":"text/html","content_length":"31115","record_id":"<urn:uuid:ce0744d3-a8bc-4b7e-9157-60bec95b5164>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00578.warc.gz"} |
Multiplication toss
Stage 1 to 3 – a thinking mathematically context for practise resource focused on using and developing multiplicative thinking.
This task is from Dianne Siemon.
Syllabus outcomes and content descriptors from Mathematics K–10 Syllabus (2022) © NSW Education Standards Authority (NESA) for and on behalf of the Crown in right of the State of New South Wales,
Collect resources
You will need:
• different coloured pencils or markers
• two spinners
• paper clip for spinner.
Multiplication toss – part 1
Watch the Multiplication toss – part 1 video (5:38).
OK, everybody, welcome back!
We're here today to have a look at the game multiplication toss, which some people also call how close to 100.
To play today I'm using a spinner, and I just made it by printing out a decagon and drawing lines across the opposite angles and labelling it from 0 to 9.
[Screen shows 10 by 10 grid paper, one 0 to 9 spinner made from a decagon, and coloured markers. Presenter points to decagon and traces the lines which make the segments in the spinner.]
And I'm going to use my paper clip that I found in the drawer, and a pen and I can flick it...
[Screen shows presenter placing a large paper clip in the middle of the spinner, and placing the point of a pen at one end of the paper clip to keep it in place in the centre of the spinner.
Presenter flicks the paper clip with her finger and spins it.]
And that will give me the numbers that I'm going to use.
And in fact, I could start with 5, and I now also have a 0 which is no good for me because what I know is that 5 times 0 or 0 fives is 0.
[The paper clip spins and lands on the number 5. Presenter spins again and lands on 0.]
So, for my first recording of my game, I can't block out anything because 5 zeros is the same as 5 times 0, which is equivalent to 0.
[Presenter points to grid, then writes 5 zeros equals 5 times 0 equals 0 next to the grid.]
So, fingers crossed my next go is more lucky!
Ah 0 and 2, so this time I could say 0 twos is equivalent to 0 times 2, which is also 0.
[Presenter spins and lands on 0, then spins again and lands on 2. Presenter writes 0 twos equals 0 times 2 equals 0.]
Okay, third time lucky!
Come on, spinner!
Excellent, so this time I got an 8 and ah...I think that's a 5 so I can actually now get to colour in my board here and because I got an 8 and a 5, I can choose to make 8 fives or 5 eights.
[Presenter spins and lands on 8, then spins again and lands on 5. Presenter points to grid.]
So, I'm just going to go with 5 eights because I like them better.
So, I need 8 in my rows, so 1 2 3 4 5 6 7 8 and I need 5 down here so that's 2 3 4 5.
[Presenter counts 8 squares across the top row of the grid, then 5 squares down the first column of the grid.]
So, I get to draw a border all around this area of my game board.
And I'm going to record this as 5 eights.
[Presenter uses a green marker to draw a border around the area, outlining an area of 5 squares down and 8 squares across, which are rows 1 to 5 and columns 1 to 8 of the grid. Presenter writes 5
eights within the green outlined area of the grid.]
And I'm also going to record it over here.
So, 5 eights is equivalent in value to 5 times 8, which is equivalent to 40.
[Presenter continues to write next to the grid, writing 5 eights equals 5 times 8 equals 40.]
Now if I wasn't sure I could use the grid here to help me work out how many squares are encased in my green section.
And because mathematicians like to code and keep a record of their ideas, I might also put a green marker here to say that corresponds to this section on my game board.
[Presenter points to grid indicating the 40 squares outlined in green marker. Presenter puts a green dot next to writing about 5 eights and points to the area of 40 squares, showing how this writing
corresponds to the section outlined in green on the game board.]
Alright, let's see. I've had a disastrous start, but I could have a successful finish. I'm going to call that a 3.
And a 0. I got too excited so I could say 0 threes or 3 zeros, but I know they're the same as 0, so 3 zeros is equivalent to 3 times zero, which is 0. OK.
[Presenter spins and lands on 3, then spins again and lands on 0. Presenter writes 3 zeros equals 3 times 0 equals 0.]
Come on, spinner!
Four... Fives, so I could do 4 fives so that would be across here like this. Or I could do 5 fours which would...ok, go like this.
[Presenter spins and lands on 4, then spins again and lands on 5. Presenter points to 4 rows and 5 columns, which are rows 6 to 9 and columns 1 to 5 of the grid. Presenter then points to 5 rows and 4
columns. These are rows 6 to 10 and columns 1 to 4 of the grid.]
And I might actually do that. I'm going to use a different colour mark at this time so I know this is 4 because, actually I can subitise that many.
And that takes me all the way down to here.
[Presenter uses a blue marker to draw a line around rows 6 to 10 and columns 1 to 4 of the grid, which is an area of 4 squares across and 5 squares down.]
I realised I didn't actually have to count those 'cause I know my board is 10 by 10. Five fours.
[Presenter writes 5 fours within the blue outlined area of the grid.]
I am I going to record that, over here. Five fours is equivalent in value to 5 times 4, which is 20.
And I actually know that because that's the same as saying 10 twos and you just rename that as 20.
Like this, you could say that's 10 twos which is the same as 2 tens.
[Presenter marks a blue dot next to the grid and writes 5 fours equals 5 times 4 equals 20 equals 10 twos equals 2 tens.]
We could just keep going, but we won't.
[Presenter spins and lands on 2, then spins again and lands on 4. Presenter uses a pink marker to draw a line around rows 1 to 4 and columns 9 to 10, which is an area of 4 squares down and 2 squares
across. Presenter writes 4 twos within the pink outlined area of the grid. Presenter marks a pink dot next to the grid and writes 4 twos equals 4 times 2 equals 8.
Next, presenter spins and lands on 0, then spins again and lands on 6. Presenter writes next to the grid 0 sixes equals 0 times 6 equals 0.
Next, presenter spins and lands on 6, then spins again and lands on 4. Presenter uses a yellow marker to draw a line around columns 5 to 10 and rows 6 to 9, which is an area of 6 squares across and 4
squares down. Presenter writes 6 fours within the yellow outlined area of the grid. Presenter marks a yellow dot next to the grid and writes 6 fours equals 6 times 4 equals 24.]
I could write that 0 sevens or 7 zeros. Zero times 7 which equals 0.
[Presenter spins and lands on 0, then spins again and lands on 7. Presenter writes next to the grid 0 sevens equals 0 times 7 equals 0.]
Let's try.... come on 1! Six...and a 9... now I definitely know I can't go here because I've got 1, 2, 3, 4, 5, 6...one row of 6 left that I could use or one row of 2.
[Presenter spins and lands on 6, then spins again and lands on 9. Presenter points to the bottom row on the grid where 6 squares in columns 5 to 10 are available. Presenter then points to the only
other squares available on the grid which are 2 squares in row 5, columns 9 and 10.]
So, in this case I have to record 6 nines ...but I couldn't go.
[Presenter writes next to the grid 6 nines, then an arrow, then the words couldn’t go.]
So, they were my 10 goes and I have 8 squares remaining and I covered 92 centimetres squared. How did you go in your game?
[End of transcript]
• Players take turns to spin the spinners. If a 3 and 6 are spun, players can enclose either a block out of 3 rows of 6 (3 sixes) or 6 rows of 3 (6 threes).
• The game continues with no overlapping areas.
• The winner is the player with the largest area blocked out after 10 spins.
• Eventually the space on the grid paper gets really small.
• Then, you have to think:
□ What if my 3 sixes won’t fit as 3 sixes or as 6 threes?
□ Players can partition to help them! So, for example, I can rename 3 sixes as 2 sixes and 1 six (if that helps me fit the block into my game board).
Multiplication toss – part 2
Watch the Multiplication toss part 2 video (2:22). This will show you a strategy to help you when your board starts to get full.
Ok. So, I have been playing multiplication toss again and I have found myself in a pickle.
[Screen shows 10 by 10 grid paper, two 0 to 9 spinners made from decagons, and a large paper clip on each spinner.
The grid has areas outlined in colour. There is a purple line around rows 1 to 3 and columns 1 to 6, which is an area of 3 squares down and 6 squares across. The text 3 sixes is written in this area.
There is a red line around rows 1 to 2 and columns 7 to 10, which is an area of 2 squares down and 4 squares across. The text 2 fours is written in this area. There is a pink line around rows 4 to 10
and columns 1 to 7, which is an area of 7 squares across and 6 squares down. The text 7 sixes is written in this area. There is a green line around rows 3 and 4 and columns 7 to 9, which is an area
of 2 squares down and 3 squares across. The text 2 threes is written in this area. There is an orange line around rows 5 to 7 and column 7, which is an area of 3 squares down and 1 square across. The
text 3 ones is written in this area. There is a dark blue line around 1 square in row 10, column 10. The text 1 one is written in this area.
To the right of the grid is a list written in different-coloured markers. The list is:
3 sixes equals 18, in purple marker.
2 fours equals 8, in red marker.
42 equals 7 sixes, in pink marker.
6 equals 2 threes, in green marker.
3 ones equals 3 times 1 equals 3, in orange marker.
1 one equals 1, in dark blue marker.
The two 0 to 9 spinners are below the grid. The paper clip on one spinner has landed on the number 6, and the paper clip on the other spinner has landed on the number 3.]
I have a bit of a mathematical conundrum because this is my game board that I've been playing with and here are the areas I blocked out.
[Presenter points to the game board, then to the written list which shows the areas outlined, or blocked out, on the game board.]
And I've spun on my 2 spinners, this time, a 6 and a 3. I know that I have more than 18 squares over here, but I don't have an array of 6 threes or 3 sixes, exactly, that I could use.
[Presenter points to spinner with the paper clip on the number 6, then the spinner with the paper clip on the number 3. Presenter then points to the area on the grid which has not been blocked out.
This area is 20 squares, in 6 rows. In this area, rows 1 to 3 have 3 squares, rows 4 and 5 have 4 squares, and row 6 has 3 squares.]
So, if I had...I almost have it because I have some threes across here, but 1, 2, 3, 4, 5, 6...would mean that this square here is in the way.
[Presenter uses a pencil to count 6 squares down in the area that has not been blocked out. The presenter includes these squares when tracing around an area of 18 squares. However, the square in row
10, column 10 which says 1 one is included in this area.]
And if I did it down here, I have this section up here that's in my way.
[Presenter points to the area that has not been blocked out below the area outlined in orange which is an area of 3 squares down and 1 square across.]
So, what I need to do now is to try to partition or break apart my 6 threes.
So, what I could think about is, I could think about using 3 of my threes here, and the other 3 of my threes down here, and that would fit! So, let's draw that in.
[Presenter outlines 9 squares in pencil. The 9 squares are in rows 5 to 7 and columns 8 to 10, which is an area of 3 squares down and 3 squares across. The presenter then outlines another 9 squares
in pencil. The 9 squares are in rows 8 to 10 and columns 7 to 9, which is an area of 3 squares down and 3 squares across. Presenter then outlines the 2 areas of 9 squares in light blue marker.]
And so I have 3 threes. And 3 threes.
[Presenter writes 3 threes in the 2 areas outlined in light blue marker.]
And I know that 6 threes is 18, and I know that actually 'cause I had this turn up here and so 3 sixes is 18, which also means that 6 threes is 18.
[Presenter points to list where 3 sixes equals 18 is written in purple marker and to the area on the grid where 3 squares down and 6 squares across is outlined in purple. Presenter adds to list so it
reads 3 sixes equals 18 equals 6 threes.]
And so here now I have 9 and 9 and when you join 9 and 9 together, that still makes 18.
[Presenter points to 2 areas outlined in light blue marker which are marked 3 threes. Each area is 9 squares.]
So, the area is equivalent in value. I've just partitioned it slightly, so I'm going to record it by saying something like 6 threes is equivalent to 3 threes combined with 3 threes.
And we could also write that as 3 times 3 plus 3 times 3, which is equivalent in value to 18. I just partitioned it, so it looks a little bit different.
[Presenter adds to the end of the list in light blue marker:
6 threes equals 3 threes plus 3 threes
equals 3 times 3 plus 3 times 3
equals 18.
Presenter then points again to the 2 areas of 3 threes on the grid.]
I wonder if you could use this strategy to help you out with some of your games.
[End of transcript]
Multiplication toss – part 3
Feeling skeptical about Michelle's thinking? Watch the Multiplication toss – part 3 video (1:51) to see how she proves 6 threes = 3 threes + 3 threes = 18. It's a strategy you can then use to prove
your thinking too!
So, we were thinking further about this idea down here that you could partition an array into a different array, and still be able to cover the same area.
[Screen shows 10 by 10 grid paper, two 0 to 9 spinners made from decagons, and a large paper clip on each spinner.
The grid has areas outlined in colour. There is a purple line around rows 1 to 3 and columns 1 to 6, which is an area of 3 squares down and 6 squares across. The text 3 sixes is written in this area.
There is a red line around rows 1 to 2 and columns 7 to 10, which is an area of 2 squares down and 4 squares across. The text 2 fours is written in this area. There is a pink line around rows 4 to 10
and columns 1 to 7, which is an area of 7 squares across and 6 squares down. The text 7 sixes is written in this area. There is a green line around rows 3 and 4 and columns 7 to 9, which is an area
of 2 squares down and 3 squares across. The text 2 threes is written in this area. There is an orange line around rows 5 to 7 and column 7, which is an area of 3 squares down and 1 square across. The
text 3 ones is written in this area. There is a light blue line around rows 5 to 7 and columns 8 to 10, which is an area of 3 squares down and 3 squares across. The text 3 threes is written in this
area. There is also a light blue line around rows 8 to 10 and columns 7 to 9, which is an area of 3 squares down and 3 squares across. The text 3 threes is written in this area. There is a dark blue
line around 1 square in row 10, column 10. The text 1 one is written in this area.
To the right of the grid is a list written in different-coloured markers. The list is:
3 sixes equals 18 equals 6 threes, in purple and light blue marker.
2 fours equals 8, in red marker.
42 equals 7 sixes, in pink marker.
6 equals 2 threes, in green marker.
3 ones equals 3 times 1 equals 3, in orange marker.
1 one equals 1, in dark blue marker.
6 threes equals 3 threes plus 3 threes equals 3 times 3 plus 3 times 3 equals 18, in light blue marker.
Presenter points to 2 areas outlined in light blue marker which are marked 3 threes, to indicate each area of 9 squares.]
So, we thought we'd use some evidence to show you how this works.
So, I just made a copy of my game board.
[Presenter holds a second game board. This game board is on blue paper and has the same areas of squares outlined as on the first game board. The areas of squares on the blue game board have the same
labels and are outlined in the same colours as on the first game board.]
You can see that they're exactly the same, except that we now have run out of white paper, so we're using blue, and so if I cut out this area, which I'm saying is the same... that 3 sixes is the same
as 6 threes, which is the same as 3 threes combined with 3 threes more.
It makes sense why you might go: “Oh my gosh! What are you talking about?”
So, there's my 3 sixes.
[Presenter then cuts out the area of squares labelled 3 sixes on the blue game board. This is an area of 18 squares. Presenter places this next to the first game board.]
And then here is 1 lot of 3 threes.
[Presenter cuts out an area of squares labelled 3 threes from the blue game board. This is an area of 9 squares. The presenter then cuts out the second area of squares labelled 3 threes from the blue
game board.]
And so, here's my 3 sixes from my game board and here's one of my 3 threes. And here's the other 3 threes. And we can see that they match my game board.
[Presenter puts the cut out of 3 sixes on top of the area of 3 sixes on the first game board. Presenter then puts each cut out of 3 threes on top of each area of 3 threes on the first game board.]
And now if I take them and lay them over the top of each other like this....
[Presenter picks up the cut out of 3 sixes and the 2 cut outs of 3 threes. Presenter puts the 2 cut outs of 3 threes, which amounts to an area of 18 squares, on top of the cut out of 3 sixes, which
is an area of 18 squares.]
I can also see that they have the exact same area and so whilst we're naming it differently and it looks a bit different when it's cut up, this is how I can see that 3 sixes is equivalent in value in
area of 3 threes and 3 threes.
[Presenter lays the cut out of 3 sixes on the table, then puts the 2 cut outs of 3 threes beneath the cut out of 3 sixes.]
And in fact, what it's making me think about too is how many other ways could I partition 3 sixes and name them so that I still have an area of 18 squares, but I can start to think about all the
different ways that that area could be composed.
[Presenter picks up the cut out of 3 sixes to show the area of 18 squares.]
Over to you, mathematicians!
[End of transcript]
Share your work with your class on your digital platform. You may like to:
• write comments
• share pictures of your work
• comment on the work of others. | {"url":"https://education.nsw.gov.au/teaching-and-learning/curriculum/mathematics/mathematics-curriculum-resources-k-12/thinking-mathematically-resources/mathematics-s1-s3-multiplication-toss","timestamp":"2024-11-14T17:11:08Z","content_type":"text/html","content_length":"209423","record_id":"<urn:uuid:99e6794d-5d1a-4be8-8dc3-c75df24d61f5>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00226.warc.gz"} |
What is harder than Calculus? - Is Discrete Math hard?
What is harder than Calculus? – Is Discrete Math hard?
Calculus is a branch of mathematics that deals with solving differential calculus. Calculus is to be considered as the entry point if the student wants to get further study that includes physics,
chemistry, biology, economics, and finance. calculus is used for tracking space shuttles and for noting the pressure developing in the dam that makes the water rise.
Quick Takeaways
□ Of course, calculus is a hard subject especially if you are a beginner in mathematics.
□ However, if you are thinking of doing a degree in mathematics, you must have knowledge of it.
□ Also, calculus is different from discrete maths. Thus many of the students find discrete maths more difficult than Calculus.
However, you must read till the end so that you get a better understanding of if discrete maths is harder than calculus
What is harder than Calculus?
Yes, calculus is considered to be hard as it is the advanced form of mathematics that acts as a bridge between two courses of math such as high school math and advanced math in college. If a student
wants to pursue further math he/she must know calculus calculation. Choosing math for higher education in which calculus is the introduction part of mathematics. High school calculus is divided into
two parts:
• Pre-calculus has topics like series, sequences, probability, limits, statistics, and derivatives.
• Calculus is divided into more parts in college degrees such as calculus I, calculus II, calculus III, and calculus IV.
The student who wants to pursue a degree in math for that calculus is considered to be an easier part of math as the student already has the concepts built up in higher education. Harder topic in
math from calculus is:
• real-analysis
• abstract-algebra
• complex analysis
• Topology
• differential geometry.
Is discrete math hard?
No discrete math is not as hard as the student believes but for the student who is taking discrete math with calculus and linear algebra. Discrete math mainly focuses on the concepts of mathematics
given by various mathematicians. But for students who have not taken a discrete math class in proofs, it will be hard for them to study discrete. Student who finds discrete math hard can study online
and cover the topic in which they are having doubts. Many reasons show discrete math is not hard to study:
1. Discrete is practical:
Discrete is a practical subject and other classes are theoretical which is boring when compared to the practical class. For example, linear algebra is a theoretical class and when we come to a
real-life situation we can use it. As discrete math can be used in computer science also.
2. Creative Thinking:
In calculus or algebra, the student needs to learn a lot of formulas and solve the questions. But in discrete math, you don’t need to memorize any formula; it enables the student to think
mathematically and solve the questions. The student whose creativity in thinking math is high will be enabled to solve the question. Here we can say discrete math is not hard.
3. Discrete Math Gives You An Added Advantage
Some topics in discrete math are used further such as probability and counting in discrete math which questions are asked in the contest or interview.
4. Learning makes exciting :
There are some topics in discrete math that make the students learn with excitement such as probability, number theory, and counting. It makes the teacher and the student learn them by not getting
bored. But in calculus or algebra, their student has to sit for an hour watching the teacher. There is no such excitement as compared to discrete math.
Where can discrete math be required?
If you are interested in studying computer science but not hardware study where discrete math is used. The commonly used discrete math in the following topic or subject in computer science-
• Compiler design
• Databases
• Security
• Operating systems
• Automata theory
• Functional programming
• Algorithms
• Computer architecture
• Machine learning
• Networks
• Distributed systems
Topic going to cover under discrete math:
The following are some main topics that you are going to learn under discrete math:
• Probability
• Boolean algebra
• Matrices
• Mathematical Reasoning
• Induction and Recursion
• Permutation and Combination
• Relations
• Trees
• Modeling Computation
What is harder, discrete, or calculus?
When compared discrete math to calculus they are different when we talk calculus is based on algebraical calculation, is more theoretical based, and needs to learn formulas but discrete math focuses
on questions like puzzle-solving and is more practical based, need not needs to learn a lot of formulas. If a student has taken both math i.e discrete math the student needs to have basic knowledge
and understanding for solving problems and for calculus the student needs to learn the formula and have to memorize them with lots of practice.
Is calculus needed to be learned for discrete?
Calculus which is used to teach the student during high school education is enough to learn discrete math. No need to learn calculus at the college level.
We can conclude that discrete math includes the concept that deals with distinct values. but calculus math includes the form of continuous mathematics in which the student has to thoroughly learn the
formula otherwise the student will not be able to solve the questions. | {"url":"https://robotopicks.com/what-is-harder-than-calculus-is-discrete-math-hard/","timestamp":"2024-11-04T10:51:59Z","content_type":"text/html","content_length":"92133","record_id":"<urn:uuid:3943ab97-e9fc-4276-b08e-62cc020c6956>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00331.warc.gz"} |
The Great Pyramid: a stone book that hides a great secret
Recently (specifically, in March 2022) Iván Martínez, from VM Gran Misterio, has conducted an interview with me, in which, with the help of my colleague Diego Méndez, we have presented some of the
fundamental ideas that are collected in our books Ecos de la Atlántida and El árbol de los mitos. Basically, in addition to comparing myths and symbols from different parts of the world, as regards
the so-called "myth of origins", we compile a series of numerical constants that emerge from the detailed study, both geometric and arithmetical, of the measurements of the Great Pyramid of Giza.
These were established by Sir Flinders Pétrie, at the end of the 19th century, and are still in force (see below).
Basically, these measurements show that the builders of the Great Pyramid had knowledge of the number Pi (Π), the number Fi (Φ) and the meter. We will demonstrate this with the following operations:
Π / 6 = 0.5236 (Giza royal cubit, basic measure of the Great Pyramid).
Φ^2 / 5 = 0.5236 (Giza royal cubit).
It is noteworthy that the first of these arithmetic operations (Π / 6 = 0.5236) is equivalent to drawing a circumference of 3.14159 meters, with a diameter of 1 meter. This implies that the builders
of the Great Pyramid knew the meter, which is equivalent to one ten-millionth of the distance between the Pole and the terrestrial Equator.
Moreover, if we divide the circumference of the earth (40,030 kilometers, if we look only at the diameter, and do not take into account the slight flattening of the poles) by the perimeter of the
Great Pyramid at its base (926.1 meters), we will obtain a relation 1:43,200 (approximately). If we multiply 432 by 60 we obtain the number of years of the stellar cycle known as the Precession of
the Equinoxes (25,920). I talk about all this in annexes 1 and 2.
Annex 1: Pi, Fi, the meter, and squaring the circle (below).
Annex 2: Interview with Iván Martínez, from VM Gran Misterio (below).
I recently had the opportunity to see an audiovisual document of great importance. This is the documentary La revelación de las pirámides. I advise the reader to take a look at it, as it provides a
large amount of serious and verified information. Well, in that documentary I found a third relationship that I hadn't noticed:
Π - Φ^2 = 0.5236 (Giza royal cubit).
Again, we get the royal cubit of the Great Pyramid. Which means that we can obtain this through the measurement of Pi, Fi, the standard meter, and the difference between Pi and the square of Fi.
Amazing! This makes me think that not only is Fi a divine number (or golden, as it is called), but also the royal cubit of the Great Pyramid.
Viewing the documentary The Revelation of the Pyramids made me think of an idea that I had already expressed in Ecos de la Atlántida: What if the Great Pyramid is a "stone book" in which, in addition
to encrypting a series of universal measurements, refers to a fact, or event, that took place in the past? The proximity of the Sphinx, which pointed to Regulus (in the constellation of Leo) when
that star was located just in front of the monument, a little over 11,000 years ago (perhaps pointing to the Universal Deluge), makes me think of the possibility, mentioned in the aforementioned
documentary, that the Great Pyramid is also a commemorative monument that intends to preserve -for later generations- a warning: perhaps the event that took place 11,000 years ago, and that is likely
to happen again in the future, as a consequence of the great sidereal cycle called Precession of the Equinoxes. Note that the Great Pyramid incorporates the ratio 1:43,200, which if we reduce it to
432 (and multiply it by 60) gives us the exact number of the Great Precessional Year: 25,920 years, divided into 12 Zodiacal eras.
Thus, the Great Pyramid would be a scale model of the dimensions of the Earth (1:43,200) and the duration of the Great Precessional Year (25,920 years).
Another documentary, with which I have some discrepancies in detail, but which from my point of view contributes valuable ideas regarding the Forbidden History, has an impact on this aspect: the
Great Pyramid warns us that a great catastrophe is happening suddenly. periodic on Earth, at a certain moment of the Great Precessional Cycle. This great catastrophe would have coincided, in the
past, with the so-called Younger Dryas, which happened about 12,800 years ago. As I said above, in Ecos de la Atlántida I mention a similar idea, put forward by Martin B. Sweatman and Dimitrios
Tsikritsis of the University of Edinburgh in their article “Decoding Göbleki Tepe with Archaeoastronomy: what does the fox say?” .
In the Abstract of this work it is literally stated: “We have interpreted much of the symbolism at Göbekli Tepe in terms of astronomical events. Comparing the bas-reliefs on some of the pillars at
Göbekli Tepe with asterisms of stars we find clear evidence that the famous 'vulture stone' marks the date of 10950 BC. C. +/- 250 years, which corresponds to the Younger Dryas event [see above],
estimated around 10890 BC. We have also found evidence that Göbekli Tepe's primary function was to observe meteor showers and signal comet encounters. In fact, the people of Göbekli Tepe had a
special interest in the Taurid meteor stream, the same meteor stream that is proposed to be responsible for the Younger Dryas event. Is Göbekli Tepe the smoking gun [the definitive proof] of the
planetary encounter that caused the Younger Dryas, and as a consequence, of a coherent catastrophism?
Now I think that like the builders of Göbleki Tepe, dated around 11,500 before the present (approximately), those who built the Great Pyramid wanted to make future generations (with respect to their
times) of an event, perhaps recurrent in time, which took place in the Age of Leo, and could happen again in the future. This event would be related to the so-called Taurid meteors (according to
Martin B. Sweatman and Dimitrios Tsikritsis), or to the so-called Planet X and the Dead Sun, located beyond the Kuiper belt, on the outer edge of our solar system (according to the authors of the
documentary Forbidden History).
The measurements of the Great Pyramid according to Flinders Pétrie
To finish, I would like to present here some quotations from two works by Sir Flinders Pétrie, in which he documents his precise and exhaustive measurement of the Great Pyramid. These works are:
I) The Pyramids and Temples of Gizeh, published in 1883.
II) The Pyramids and Temples of Gizeh (extended version), published in 1883.
He says (I, page 28): “The four sides there yield a mean value of 20'632 [inches] +/- 004, and this is certainly the best determination of the cubit [real cubit] that we can hope from the Great
Pyramid”. According to his measurements, the tomb in the King's Chamber had a height of 2 cubits (II, 86), or 41.31 inches. From this measurement of the cubit, established from English inches, he
gives a commonly accepted measurement of the Great Pyramid of 440 royal cubits on a side by 280 royal cubits in height (II, 183). Furthermore, he considers that these measurements (440 cr versus 280
cr) are confirmed by what he calls “Pi number theory” (II, 199).
And what does this theory of the number Pi consist of? He explains it himself: “For the whole form the Π proportion (height is the radius of a circle = circumference of Pyramid) has been very
generally accepted of late years, and is a relation strongly confirmed by the presence of the numbers 7 and 22 in the number of cubits in height and base respectively; 22:7 being one of the best
known approximations to Π” (I, 93). One of the fruits of this relationship 440 cr (base of one side of the pyramid) by 280 cr (height of the pyramid) is its inclination (51^o51') (II, 184), which
bases the rest of the measures and proportions of the monument (see below), both inside and outside (II, 222).
In addition to pointing out the close precision of the monument, which has Pi as its unit of reference or measurement (from which, as we have seen, we obtain the actual cubit of the Great Pyramid),
Flinders Pétrie echoes the perfection in the carving of the blocks, so well fitted that not even a hair fits between block and block: “In fact, the means employed for placing and cementing the blocks
of soft limestone, weighing a dozen to twenty tons each, with such hair-like joints, are almost inconceivable at present” (I, 86). The aforementioned metrologist and scholar also marvels at the
elaborate drilling of stone vessels found in Egypt during the Old Kingdom, or in other pieces of hard stone such as granite, in which mechanical pressure (not manual) of up to two tons has been made.
Not to mention the regularity and perfection of each of the curves of this drilling, or the incredible speed of execution, in stones as hard as feldspar or quartz, which excludes manual work (I, 78;
II, 177 ).
Flinders Pétrie also exposes his ignorance of the Great Pyramid builders' methods of raising the stone blocks to great heights (I, 84), and alludes to the curious circumstance that inside, except for
the markings of the discharge chamber (above the Chamber of the King), no inscriptions of any kind are found (I, 90). Be that as it may, he reaches the following conclusion: “[The Great Pyramid]
There may be seen the very beginning of architecture, the most enormous piles of building ever raised, the most accurate construction known, the finest masonry, and the employment of the most
ingenious tools” (I, 1).
Annex 1: Pi, Fi, the meter, and squaring the circle
Here I present a passage from my article The great pyramid was not built in the time of Cheops: all the evidence.
Jean Pierre Adam writes: “It is necessary to know that, of all the civilizations of Antiquity, from China to Rome, Egypt has certainly been the most indifferent to research in general, and especially
to mathematics ... Egypt, in all its history, has transmitted only seven documents dealing with the subject, of which only one, the Rhind papyrus, is of some importance. This is how we know that
Egypt was satisfied with a number Pi = 3, like many other peoples, and that it never managed to overcome the multiplication table by two. A single exception appears in the form of calculating the
surface of the circle, using the elevation to the square of the 8/9 of the diameter, which would give a value of Pi = 3.16, but nothing proves that the author of the exercise has thought never to get
a value of Pi, which also would have been unable to write ”(page 168).
In my book Ecos de la Atlántida we find the following reasoning: “The number Pi (Π) [3,14159] is the door of Knowledge (hence it is represented with its Greek sign, which has the shape of a door).
It is the fundamental key of Geometry, and at the same time of Gnosis. What relationship does it have with the Great Pyramid? Next I am about to show that this enormous construction represents
another method of preserving memory. This 'stone book' is not only a 'memorial' of a past event (which it is), but also the receptacle of Sacred Knowledge (Gnosis) ... We have the proof in the basic
measure of the entire set of the Great Pyramid: the royal cubit of the pyramid of Cheops. This, which measures 0.5236 meters (later we will check how we can obtain this figure from the study of the
geometry of the monument), is the result of dividing Π by 6 (the result is 0.523598). Later on we will verify that Pi is used by the builders of the pyramid to obtain, from a given inclination of its
faces (51 degrees 51 minutes), the 'squaring of the circle', having the perimeter of its four faces as the base of the square , and the height as the radius of the circle. But from the royal cubit
(0.5236) we also obtain the so-called 'golden number': Fi (Φ). This is evident if we perform the following operation: Φ2 / 5 (the result is 0.5236). This sacred number (Φ) is also found in the
geometry of the pyramid, as we will see in due course ”(page 382).
The Fi number, also called the "golden ratio," or "golden number," is expressed by the number 1.618033. In geometric terms it constitutes "the existing relationship in the equation 'AB is to AC as AC
is to BC.', Being C an interior point of the segment that joins A and B". This proportion, which during the 16th century was known in Italy as Divine Proportion, is also related to another irrational
number well known to geometers and builders: the number Pi (see above). Although the Fi number was discovered by the mathematician Leonardo de Pisa, better known as Fibonacci (1175-1250), it was
presumably used since ancient times (in this case, it is implicit in the proportions of the Great Pyramid).
Not only that. Those who built the Great Pyramid knew the measurement today called the standard meter: the ten millionth part of the distance that separates the Pole from the terrestrial Equator,
according to the definition of the French Academy of Sciences (from the late 18th century). This is easy to demonstrate.: If we draw a line one meter long, which makes the diameter of a circle, its
circumference will measure 3.14159 meters. Well, an arc equivalent to 1/6 of said circumference (marked by a hexagon inscribed in it) measures exactly 0.5236 meters; again, the royal cubit of the
Great Pyramid. What does that mean, besides the obvious fact that the builders of this monument knew the exact dimensions of the Earth (the geodetic meter), and also such complex numbers (and at the
same elementary) such as the number Pi and the number Fi?
Obtaining the royal cubit from the meter as a standard measure
André Pochan (page 148) accepts the value given by Flinders Pétrie of the royal cubit of the pyramid of Cheops: “From the twelve measurements made on the walls of the chamber, Pétrie obtains the
value of the cubit, that is 0.52367 meters, value which must be considered as the best and closest to the cubit that was used during the IV Dynasty ”. And what values in cubits do we find in the
Great Pyramid? They are numerous: the width of the Great Gallery (including the side benches) is 4 cr (royal cubits); the length of the King's Chamber is 20 cr; the width of the King's Chamber is 10
cr; the height of the sarcophagus of the King's Chamber is 2 cr; the dimensions of the Queen's Chamber are: length (11 cr), width (10 cr), height (9 cr), top (12 cr); plinth of the pyramid (1 cr).
And let us especially note the base of a face (440 cr) and the height of the pyramid (280 cr, to which we must add a real elbow of the socket). It so happens that if we add the length of the base
(440 cr) and the diameter of the circle that has the height as a radius (280 cr x 2 = 560 cr) we obtain a length of 1,000 royal cubits. This is no accident, of course. And it gives us an idea that
the builders of the Great Pyramid knew the decimal number base.
Now we have to ask ourselves. Is there evidence that the builders of the Great Pyramid knew the meter? The answer is yes. To give an example, the King's Chamber of the pyramid of Cheops is exactly 43
meters high above the base; the diagonal of its main wall measures 12 meters, and its volume is 321 m. However, the King's Chamber is calculated in royal cubits, which comply with the so-called
"isiac triangle" (or Pythagoras, with the proportions 3/4/5): the diagonal of its smaller wall (15 cr), the length (20 cr) and the interior diagonal (25 cr) form a triangle of proportions 15/20/25
that, when dividing its lengths by 5, turns out to be the triangle 3/4/5 (Miquel Pérez-Sánchez Pla).
Miquel Pérez-Sánchez Pla writes: “Among the particularities that the monument already presented at the beginning there are four especially important: an approximation to the squaring of the circle;
the presence of the number Fi - or golden number -, equal to 1.6180 and considered the number of beauty; its orientation with the four cardinal points; and the proportion between the height of the
monument and the distance to the sun ”. Next we will deal with the first three.
Pi and Fi are clearly recognizable in the pyramid of Cheops. To obtain these numbers we will use the measurements made by Flinders Pétrie, which are usually employed as reference data: 440 royal
cubits as the side of the base of the pyramid, and 280 royal cubits as its height. As regards to Pi, it is enough to calculate the perimeter of its base (1,760 royal cubits) and compare it with the
result of multiplying the height (280 royal cubits, equivalent to the radius) by 2Π (2 x 3.14159 = 6, 28318). This gives us 1,759.3 royal cubits. Ultimately, we obtain the number Pi by dividing the
perimeter of the base of the pyramid of Cheops by twice its height.
Obtaining the number Pi from the measurements of the Great Pyramid. Here the approximation to the "squaring of the circle" is fulfilled in a geometric way (source: Peter Tompkins)
Obtaining arithmecally the number Pi
Here we find two "coincidences". First, the result of Pi (3.14286) coincides with the product of the division between 22 and 7 (which is at the base of all pyramid calculations, as Flinders Pétrie
indicates). Second, the base of the pyramid is a square whose perimeter is equal to the circumference of a circle whose radius is the height of the pyramid. In short, in the Great Pyramid, the
so-called “squaring of the circle” is fulfilled - to a great extent. In order for this result to be obtained, the constructors had to know the number Pi with fairly considerable precision, which
contradicts our view of Egyptian mathematics (expressed above by Jean Pierre Adam).
(Flinders Pétrie makes it quite clear. Considered that the Egyptians calculated the number Pi as the result of dividing 22 by 7. Doing the corresponding calculation [3.142857 x 2 x 280], the result
is 1,759.99, which coincides with the area of the base of the pyramid [1760 royal cubits]. André Pochan, page 148.)
As regards the number Fi, its calculation is very simple, given the proportions of the Great Pyramid: we have to draw the apothem of one of its faces, which divides its base in half (440: 2 = 220
royal cubits). If the value of the base of a half-face is 1 (220 royal cubits), the apothem is given by Fi (220 x 1.618033 = 355.97 royal cubits), and the height takes on the root value of Fi (220 x
1.27 = 279.84 royal cubits). In short, we obtain a good approximation of Fi (1.61818) dividing the apothem (356 royal cubits) by the half face of the base (220 royal cubits).
Obtaining the Fi number from the measurements of the Great Pyramid (source: Peter Tompkins)
Obtaining arithmecally the number Fi
How can we obtain these results? (The number Pi, the number Fi, and the squaring of the circle). For this, it is necessary that the inclination of the pyramid is exactly 51 degrees and 51 minutes.
Only in this way can the “square of the circle” be achieved (in which the ratio of its height to the perimeter of its base is equal to the ratio of the radius of a circle to its circumference). But
also the royal cubit must have a very precise length: exactly 0.5236 cm. And how did the builders of the Great Pyramid achieve this measure? They did so by applying a simple rule, which presupposes
knowledge of the geodetic meter: if we draw a line one meter long, which makes the diameter of a circle, the circumference will measure 3.14159 meters (equal to the number Pi). Well, an arc
equivalent to 1/6 of said circumference (marked by a hexagon inscribed in it) measures exactly 0.5236 meters: it is the real cubit of the Great Pyramid. In this way, the real cubit of the pyramid of
Cheops is the result of the knowledge: 1) of the meter and 2) of the number Pi (again).
All this suggests that the builders of the Great Pyramid were familiar with the shape and size of the Earth, since the meter (which is the basis for the measurement of the Egyptian royal cubit) is
the ten-millionth part of a meridian arc between the pole and the equator (see above). An indication of this "sublime science" is in the perfect orientation with respect to true North, and its
incredible precision: the base is uniform, with a deviation of only 2.1 cm; the mean deviation of the sides from the cardinal points is an arc of 3 minutes 6 seconds; and the largest difference in
length of the sides (for an average of 230.41 m) is 4.4 cm. Peter Tompkins writes: "The Great Pyramid was so precisely aligned with the cardinal points of the compass that it surpassed in precision
any human construction made to date [in the time of Flinders Pétrie]".
Note that Jean Pierre Adam attributes to the poor precision of the number Pi of the ancient Egyptians (which he figures as 3) the reason “why the inclination of the sides of the pyramid of Cheops
gives a value of 51º50 '” (page 168 ), when in fact the opposite happens. It is precisely this inclination that ensures that the fundamental measurements of Pi, Fi, the royal cubit and the square of
the circle (and even the geodetic meter) are fulfilled, implicit in a very clear and evident way in the proportions of the Great Pyramid. This is not a minor error in the work of Jean Pierre Adam, as
we have seen.
Annex 2: Interview with Iván Martínez, from VM Gran Misterio (March 2022)
IVÁN MARTÍNEZ: There is a very important part in this, but very few people know about it. In Egypt, next to the Sphinx, there is a small temple, a temple that is not touristic, it is not walkable, it
is closed. There I was able to take some impressive photos of some giant stone blocks… They are gigantic blocks in one piece; but what is impressive is that they were behind other blocks that are
more modern, covering these older ones. And the photos speak for themselves. They are two types of stone that exist in Egypt, in terms of its ancient construction, that do not fit together; more
similar to Sacsahuaman [in Peru] than to what we have in Egypt itself, as if there had been two types of constructions: one ultra-ancient, and another from the new Egypt that was built on top...
There is nothing like it in Egypt, only in that temple.
JOSÉ LUIS ESPEJO: We should also talk about the Osireion of Abydos, which has very similar stones, for example, to those of the temple of the Sphinx...
IVÁN MARTÍNEZ: Right, … in the Osireion there is also that type of construction. What happens is that many times it is flooded with water, they don't let you go down. But the times they let you down
you see it. And there was also a very strange symbol on the wall...
JOSÉ LUIS ESPEJO: The Flower of Life. This symbol is very curious. I give it a certain meaning, because note that it is a sexifolia, and this appears, for example, in the rural symbology of the
Iberian Peninsula… What does sexifolia mean? It is called the Flower of Life. I think that the fundamental part of this symbol is the number 6, because one of the things that I think we have
highlighted in this book [Ecos de la Atlántida] as in El árbol de los mitos is the importance of 6 and 60… The Egyptians they had a numeral system of base 10, practically from the fourth millennium
before Christ. However, the pyramid of Cheops has a base 6 numeral system, which is the one that must be attributed, for example, to Sumeria. Why?... One of the things that is not usually talked
about is that the royal cubit of the pyramid of Cheops, which is 0.5236 meters, is found in the base of the pyramid, but you also find it in what it is the tomb of the king's chamber, which is two
royal cubits high. It is easy to distinguish and calculate. In fact, it was done by Flinders Pétrie, who was a measurement specialist. By the way, the word pyramid has the word middá [The name
“pyramid” derives from the Greek πυρα (pyre, “fire”) and —paradoxically— from the Hebrew הדמ (midda, “measuring”)]. In short, this real cubit is obtained by dividing the number Pi [Π] by 6. If you
divide the number Π, which is 3.14159, by 6 you have exactly the royal cubit. And not only that… If you make a circumference with a diameter of one meter you have exactly a circumference of 3.14159
[meters]. Which means that the Egyptians not only knew the number Π, but also knew the meter. And if they knew the meter they knew the size of the Earth. Because a meter is one ten-millionth of the
distance between the Equator and the Pole. But not only that. They also knew the number Fi [Φ], which is so important in mathematics, and it is the number - let's say - of gold, which is called. How
can the number Φ be obtained? You just have to see the next operation. Φ^2/5 [Fi squared divided by five] is exactly the real cubit: 0.5236. I encourage viewers to do these calculations with a
calculator and you will see. What does all this mean? That inside the great pyramid there is a series of measurements and figures of high mathematics that are incomprehensible in a Chalcolithic
society, in the year 2500 before Christ, which is -more or less- when the pyramid was dated... They are incomprehensible. What this means is that the Great Pyramid is hiding, encrypting, a series of
measurements and knowledge that come -let's say- from a very technological society, with high mathematical knowledge, which are not compatible with a society from the copper era , from the time of
Cheops. And another very curious thing too, which must be said... This book, Ecos de la Atlántida, we have divided into two parts. The first part is what I call The Legend, where we basically talk
about comparative mythology and comparative symbology. The second part is what I call, what we call, The Inheritance, where we talk about both the material legacy and the immaterial legacy. Both the
Sphinx and the Great Pyramid are part of the so-called Material Legacy, and that "material legacy" encrypts a series of knowledge and secrets; of measures. But later, if we talk about the Immaterial
Legacy, we can talk, for example, about what the Map of the Sky is, which is the Firmament. We could talk about that, and we could expand a lot. For example, the meaning of the Swastika, in relation
to the constellation of Hercules, which at that time was called "the knees of Hercules". 11,000 years ago the Pole was in the middle of the knees of Hercules. If we rotate the knees of Hercules we
have the Trinacria, which is the primitive Swastika. We are talking about an antiquity of 11,000 years. But then we can see other things. For example, what I call the “seal”; the "seal of the
Ancients". When we see for example the constellation of Canis Maioris, the constellation of Orion and the constellation of Taurus we see three symbols or signs. The first would be the "rosa canina",
the "dog star" [Sirius], which the Freemasons turned into the "rosa canina"; Keep in mind that the rose is a symbol of "secret". When we talk about Orion's belt we are talking about the three stars,
or three points, which is a Masonic symbol. And when we talk about Taurus we are talking about the Hyades, which is a stellar group... in the shape of a compass or the letter A. We are talking here
about the three fundamental symbols of universal Freemasonry. When do I consider this map of Heaven, which includes Hercules, which includes Draco, to be made? Hercules kills the Dragon… If we look
at a star map we will see that in the northern part Hercules kills the Dragon, which is like saying that the hero kills the beast. I would only like to mention one example of this myth, which is that
of Indra killing the celestial serpent Vritra. This appears in the Rig Veda. Literally it is said: Indra killed the celestial serpent, and then the waters began to flow. What does the celestial
serpent represent, that is, Draco? Represents the boreal glaciers. What does the fact that Indra, or the Hero, may already be Gilgamesh, may already be Hercules, may already be Orion, or different
heroes that appear in universal mythology, kill the dragon represent? It means that the waters begin to run. And that's kind of a mythical explanation of the Flood. When the boreal glaciers melted,
the waters began to circulate and flow. That was obviously a great catastrophe. And then also, to end this topic because it is getting a bit long, I want to talk about the fact that, within the
intangible heritage, or the intangible legacy, we would have to talk about the name of the deity, within the Judeo-Christian cultural area, in what I am registered, to which I belong… We are talking
about Yahweh and Elohim. Yahweh is how the deity was called in Judah, in southern Israel. Elohim is how the deity was called in northern Israel, in Samaria. Well, I think it's no coincidence that "I
am who I am", that famous phrase that the deity said to Moses, appears in Exodus 3:14. Chapter 3, verse 14. Curiously, 3.14 is the number Pi [Π]. It is also not by chance that Elohim is written as
Alhim, and has the value of the numerical Qabalah [Gematria] of 3.1415. And I am going to read it to you: Elohim [Alhim] is made up of Aleph, which has the number 1, Lamed, the number 30, He, the
number 5, Yod, the number 10, and Mem, the number 40. This is the numeric Kabbalah; that is, the numerical value of each of these letters. If we rearrange them [if it is an anagram] we have the
number 3.1415 [using for this the first digit of each figure]. Again the number Π. Is it by chance? Obviously not. That means that at a certain moment, what I call a "committee of wise men",
encrypted certain information, which in this case is the number Pi, in the name of the deity. Both in the South, that is, in Judah, that is, the name of Yahweh, and in the North, which is Elohim.
What do I mean by that? That when we are talking about... When the Freemasons, for example, talk about the deity having the... representative letter G, we are talking about Gnosis, that is,
knowledge, the G for Geometry, and the G for God. We are talking about the Great Architect. What I mean by this? Now to finish. That knowledge is very old, it has been transmitted from generation to
generation, that it is visible to all… You just have to know how to read it; and to read it you have to have the key to the symbology; know the symbols to be able to read, let's say, the hidden
knowledge, which is visible to everyone. And that there is a "committee of wise men" that has somehow preserved that knowledge from generation to generation, since ancient times. Both in an
immaterial way, that is, representing it in the heavens, in the Firmament, with the Map of the Sky, and in great works, such as the pyramid of Cheops. And with this I finish.
IVÁN MARTÍNEZ: In fact, aside from what you've said, the G is also a spiral. It is the golden number with the Fibonacci sequence… The secret of mathematics hidden in those symbols that secret
societies have left us, now discreet. And of course, ancient man, those cultures, whoever discovered mathematics, was knowledge for a select few, for a few who could transmit it and were suspicious
of it. Just like secret societies; that is to say, the ancients formed the secret societies without having the names of today, but stories also resonate with me, such as the tablets in Sumer, which
spoke of the Apkallu, the Seven Wise Men, also known as seven individuals, characters, or beings, from outside, which were distributed throughout the world and generated, or copied, a part of the
mother culture, in their own way, with symbols that coincided. One of them was the story of Oannes, the fish-man, who came out of the water, although there are certain variations in older stories
that told us that he came out in a bucket in the water. They are older stories, but definitely there was someone or something, with a mother knowledge, who copied it in other places.
JOSÉ LUIS ESPEJO: … That Oannes was converted, in the Jewish tradition, into Ioannes, John.
IVÁN MARTÍNEZ: Yes, I saw that in the Vatican. In the Vatican yo see Oannés, big. I was pulling the thread, and it is indeed what you say.
JOSÉ LUIS ESPEJO: The ancient equator [Jim Alison's theory] would have as a pole [a point located] in the area of Alaska, but that is impossible, because 18,000 years ago the Pole was in the
approximate area of Greenland… What does that mean? Whatever the ancient Ecuador is, it has nothing to do with an inclination of the earth's axis... The ancient Ecuador has an angle, with respect to
the current earth's axis, of 30 degrees. It is as if there had been a displacement of the earth's axis of 30 degrees; but it is known that 18,000 years ago, which was the height of the last ice age,
the Earth's axis was in the Greenland area. It was not in Alaska… Which means that this ancient Ecuador marks something that is not a climatological or geophysical issue, but rather marks something
else. There must be something in Alaska or its surroundings that explains the reason for that ancient Ecuador. An Ecuador of 40,000 exact kilometers, which divides the Earth in two. That is something
unknown. What is there to explain the fact that this ancient equator exists if climatologically, and by means of geomorphology and geophysics it is not explainable?... For there to have been an
ancient equator with that contour [with a slope 30 degrees from today's equator, according to Jim Alison's theory], there would have to have been a shift of the continents, or a 30-degree axis shift.
And that has not been scientifically proven. The wobble of the earth's axis is what explains the seasons, for example... The fact that there is spring and summer, and that we don't always have the
same climate... What is the "precession of the equinoxes" thing? Simply this wobble of the terrestrial axis [inclined 23.5 degrees] produces a displacement of the constellations, on a specific point,
at the vernal equinox, that is, at the spring equinox, of approximately 1 degree every 72 years; and this lasts 25,920 years. And this displacement occurs contrary to the movement of the planet,
movement of translation and rotation. That is to say, it has a clockwise direction, it does not have an anticlockwise direction… But it is purely visual. We simply see the constellations, the stars,
move across the horizon at a rate of 1 degree every 72 years. In total, 25,920 years [72 x 360 = 25,920].
IVÁN MARTÍNEZ: Purely visual, but in the past it could have governed the constructions as well.
JOSÉ LUIS ESPEJO: Yes. What does this mean? Well, that "committee of wise men" established that each zodiacal age would have its own gods. There are the bull gods, the ram gods, the fish gods, the
lion gods… And that brings me to the subject of the Sphinx. The Sphinx is oriented perfectly to the East. The Sphinx is named Horus on the Horizon. What does that mean? It just so happens that… in
9000 BC, specifically on May 28, 9000 BC. [according to the Stellarium program], the star Regulus, which is the Heart of the Lion, has that name, it was perfectly facing the Sphinx. The Lion [the
Sphinx] was facing the Lion [the Leo constellation]. Moreover, with Regulus, which is the only star in Leo that coincides with the Ecliptic. And what is the Ecliptic? It is, say, the apparent path of
the Sun on the Horizon. Well, exactly 11,000 years ago, just over 11,000 years ago, well that happened in 9000 BC, in what was the spring equinox of 9000 BC, which happened on May 28, there was a
correspondence between the Sphinx and the Regulus star. That happened only once. It happens only once every 25,920 years. And it so happens that at that time the Pole was between the knees of
Hercules. If we look at a sky chart we will see that Hercules is on top of Draco, and between his knees, between his legs, we see that there is an empty space. The Pole was there 11,000 years ago.
That is the origin of the Swastika, which is one of the oldest symbols in existence. And that obviously has nothing to do with the Nazis. On the other hand, the usual Swastika is dextrorotatory, that
is, it turns to the right, and that of the Nazis was counterclockwise, which turns to the left. What does this mean? Well, those who built the Sphinx knew perfectly well the Precession of the
Equinoxes. That is, that time cycle of 25,920 years. Moreover, we can also say one thing. Those who built the pyramids turned the pyramid of Cheops into a scale model of the Earth. Why? Because if we
compare the perimeter of the pyramid, including the plinth, which is 927.1 meters [926.1 meters actually], with the perimeter of the Earth, the Equator, which is 40,030 kilometers [if we calculate it
from the diameter , without taking into account a slight flattening of the Poles, as actually happens], and we do the division, we get a scale of 1:43,200 [approximately], which leads us to the
number 432, which together with 72 and 60 are sacred numbers around the world. If we multiply 432 by 60, and there we return to the issue of sexagesimal numbering used by the builders of the Great
Pyramid... If we multiply 432 by 60 we have exactly 25,920, which is the number of years of the precession. In other words, the Great Pyramid, at the same time that tells us the number Pi, the number
Fi, and the meter, it is also telling us that it is a scale model of the Earth at a scale of 1:43,200, and it is also telling us the exact duration of the Great Precessional Cycle, which is 25,920
meters [years]. It is one more example that in the Great Pyramid, the pyramid of Cheops, there are many figures, many proportions, which are encrypted, that you only have to see through, let's say...
make a few rules of three, a few multiplications and divisions. It is not too complicated to establish these correlations.
DIEGO MÉNDEZ: Much of this material that you are giving, we have also based on a cum laude doctoral thesis by Miquel Pérez Sánchez... from the Universitat Politècnica de Catalunya, of which I was
lucky enough to attend some of its conferences, and thanks to the book [by Miguel Pérez Sánchez] I have some data that are surprising, amazing. For example, the number Pi not only appears in these
measurements that José Luis Espejo has given us, but if we add up all the visible surfaces of the Great Pyramid, it gives us in real cubits precisely the number of 314,159, that is, 314159. Again the
number Pi is also there. And for example, meters. They measure in meters. How can we know that they knew the meter, apart from the fact that Π/6 of a circumference of a meter in diameter gives us the
real cubit, which is precisely the height of the base of the Great Pyramid. It seems as if they were indicating the measurement standard of the Great Pyramid, both the meter and the royal cubit. The
volume of the king's chamber is 321.00 cubic meters. The height of the King's chamber is also 43.00 meters [from the floor of the Giza plateau], or the edge of the Great Pyramid is also given in
exact meters, 218.00. In other words, we have a series of figures that are there, that are verifiable, that are simply there. Now, the conclusion that we can draw from all this, well, let each one
think for himself. The meter, its measure was defined at the end of the 18th century, I seem to remember. As José Luis Espejo had already told us, it is one ten-millionth of half of a terrestrial
IVÁN MARTÍNEZ: Not even our machines are that perfect. 0.0001 margin of error. We can't make a Great Pyramid. I'd say it's almost impossible.
DIEGO MENDEZ: Exactly. That question was asked by José Luis Espejo. He asked me as an engineer. With today's machinery and technology, would we be able to build the Great Pyramid? I say, well, it
would be one of the greatest monuments made today in the world, but we have pulleys, we have cranes, we can move tons and tons. But the fact is also in designing it. Design it with all this that we
are commenting on, all this numerology. And all this knowledge. We are talking…
IVÁN MARTÍNEZ: Diego, how long did it take to build the Great Pyramid? How many years does history leave us?
DIEGO MÉNDEZ: Of course, there is no more data, I think I remember, what Herodotus gives us regarding the years of construction of the Great Pyramid. He tells us about 20 years.
IVÁN MARTÍNEZ: The new Egyptian museum, which was in Cairo and now they are putting it on the Giza plain, has taken more than 30 years. So you can see. Today's technology is slower when it comes to
building... Sure, we can't compare it to the Great Pyramid, but... It's that key point that today we may not be able to copy what they did in the past...
DIEGO MÉNDEZ: Only the design of the Great Pyramid is brutal. It is not only the external part, but we already know that the internal part has different chambers... The underground chamber, you told
me the other day, I don't know if you noticed that there were some gamma symbols, like L's but inverted... This is also discussed in the book. I think it's the beginning of the book [Ecos de la
Atlántida]. That gamma, somehow, this G that we were talking about, of Geometry, or of God, or of the Freemasons, perhaps is also related. In the book, of course, what is tried is to look for
connections with all these anomalies, with all these symbolic elements... I think that the work of comparative mythology that Ecos de la Atlántida has is very important. Hence, it has been a kind of
springboard to think about the new book, a new book called El árbol de los mitos. It is an atlas of comparative mythology, and one of the few, if not the only one, that speaks and compares
mythologies from all over the world, without exception. I don't know if there were about 300 different cultures, contrasted with each other, with some tables that will help the reader understand all
these analogies and coincidences between different cultures, and somehow I think that Ecos de la Atlántida [El árbol de los mitos] has focused above all on some of these mythemes, understanding
mytheme as those common elements in the myth… There are two or three myths that are discussed in Ecos de la Atlántida, which would be the Deluge… The Deluge is one of those myths spread by
everywhere, and then the Civilizing Hero. You spoke, you commented, Oannes, for example, would be a Civilizing Hero. We have Quetzalcoatl, we have Osiris, and so, scattered throughout the world, in
all its mythologies, there is always some wise man, or some god, who favors knowledge. We already have Prometheus, for example, who is the giver of fire, and in different cultures we also have a
giver of fire who is a god. That is to say, that encrypted knowledge, that knowledge somehow encapsulated in the myths, in the symbology, is what Ecos de la Atlántida is about. Try to elucidate what
knowledge these gods brought us, where these gods came from... Hence Ecos de la Atlántida. Where do these gods come from? What is the original earth? The origin of these divinities… Many of these
myths tell us that these gods come by sea, they were navigators, therefore in some way also in the myth when we see the catastrophes that it speaks of, they end humanity, by divine punishment.
Normally, these catastrophes are by fire and water, and somehow something falls from the sky as well. The sky is cloudy, the sun is black. There is no light. And then what happens? A rise in sea
JOSÉ LUIS ESPEJO: I would like to make a note there, when you talk about the sun turning black. That appears both in Greek mythology, in the myth of Phaethon, and in Egyptian mythology [the myth of
Sekhmet], as well as in Mayan mythology. , in the Popol Vuh. I would like to talk about the myth of blind Orion. Or of the blind Horus, whose eyes are gouged out. Horus, when he fights Seth, Horus
tears off his testicles and part of his thigh, hence the myth of Mekhestiu, in Egyptian mythology, while Seth tears out [Horus'] eyes. Whenever this great conflagration happens, which is at the same
time a battle, it is a fight, the hero is blinded. Orion also remains blind in the myth, after a series of adventures he has with Atlas and his daughters, for example with Merope; Merope disappears…
I mean, one way [the myths] have of symbolizing this darkness is the “blind hero”. It is very curious and it is also something universal. I also want to make a note on the subject of heroes. Before
you spoke of the Apkallu, who were seven. The Annedotus were also seven [in Sumeria], the Rishi were also seven [in India], the Shebtiu were also seven [in Egypt]... The Shebtiu were the builder gods
of the beginning of time. The number 7 appears constantly when we talk about the Civilizing Heroes... We also talk about the Maruts (in India), the Nommos (in Sudan)... These Civilizing Heroes, who
are not only one, but who come as a team, in company... something universal, they appear in all the myths of the world... | {"url":"https://www.joseluisespejo.com/index.php/500-the-great-pyramid-a-stone-book-that-hides-a-great-secret","timestamp":"2024-11-08T09:22:20Z","content_type":"application/xhtml+xml","content_length":"73556","record_id":"<urn:uuid:67eb5cb2-61dc-4f50-b891-3465b5f7808e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00041.warc.gz"} |
Euler, Leonhard
EULER, LEONHARD (1707-1783), Swiss mathematician, was born at Basel on the 15th of April 1707, his father Paul Euler, who had considerable attainments as a mathematician, being Calvinistic pastor of
the neighbouring village of Riechen. After receiving preliminary instructions in mathematics from his father, he was sent to the university of Basel, where geometry soon became his favourite study.
His mathematical genius gained for him a high place in the esteem of Jean Bernoulli, who was at that time one of the first mathematicians in Europe, as well as of his sons Daniel and Nicolas
Bernoulli. Having taken his degree as master of arts in 1723, Euler applied himself, at his father's desire, to the study of theology and the Oriental languages with the view of entering the church,
but, with his father's consent, he soon returned to geometry as his principal pursuit. At the same time, by the advice of the younger Bernoullis, who had removed to St Petersburg in 1725, he applied
himself to the study of physiology, to which he made a happy application of his mathematical knowledge; and he also attended the medical lectures at Basel. While he was engaged in physiological
researches, he composed a dissertation on the nature and propagation of sound, and an answer to a prize question concerning the masting of ships, to which the French Academy of Sciences adjudged the
second rank in the year 1727.
In 1727, on the invitation of Catherine I., Euler took up his residence in St Petersburg, and was made an associate of the Academy of Sciences. In 1730 he became professor of physics, and in 1733 he
succeeded Daniel Bernoulli in the chair of mathematics. At the commencement of his new career he enriched the academical collection with many memoirs, which excited a noble emulation between him and
the Bernoullis, though this did not in any way affect their friendship. It was at this time that he carried the integral calculus to a higher degree of perfection, invented the calculation of sines,
reduced analytical operations to a greater simplicity, and threw new light on nearly all parts of pure mathematics. In 1735 a problem proposed by the academy, for the solution of which several
eminent mathematicians had demanded the space of some months, was solved by Euler in three days, but the effort threw him into a fever which endangered his life and deprived him of the use of his
right eye. The Academy of Sciences at Paris in 1738 adjudged the prize to his memoir on the nature and properties of fire, and in 1740 his treatise on the tides shared the prize with those of Colin
Maclaurin and Daniel Bernoulli - a higher honour than if he had carried it away from inferior rivals.
In 1741 Euler accepted the invitation of Frederick the Great to Berlin, where he was made a member of the Academy of Sciences and professor of mathematics. He enriched the last volume of the Mélanges
or Miscellanies of Berlin with five memoirs, and these were followed, with an astonishing rapidity, by a great number of important researches, which are scattered throughout the annual memoirs of the
Prussian Academy. At the same time he continued his philosophical contributions to the Academy of St Petersburg, which granted him a pension in 1742. The respect in which he was held by the Russians
was strikingly shown in 1760, when a farm he occupied near Charlottenburg happened to be pillaged by the invading Russian army. On its being ascertained that the farm belonged to Euler, the general
immediately ordered compensation to be paid, and the empress Elizabeth sent an additional sum of four thousand crowns.
In 1766 Euler with difficulty obtained permission from the king of Prussia to return to St Petersburg, to which he had been originally invited by Catherine II. Soon after his return to St Petersburg
a cataract formed in his left eye, which ultimately deprived him almost entirely of sight. It was in these circumstances that he dictated to his servant, a tailor's apprentice, who was absolutely
devoid of mathematical knowledge, his Anleitung zur Algebra (1770), a work which, though purely elementary, displays the mathematical genius of its author, and is still reckoned one of the best works
of its class. Another task to which he set himself immediately after his return to St Petersburg was the preparation of his Lettres à une princesse d'Allemagne sur quelques sujets de physique et de
philosophie (3 vols., 1768-1772). They were written at the request of the princess of Anhalt-Dessau, and contain an admirably clear exposition of the principal facts of mechanics, optics, acoustics
and physical astronomy. Theory, however, is frequently unsoundly applied in it, and it is to be observed generally that Euler's strength lay rather in pure than in applied mathematics.
In 1755 Euler had been elected a foreign member of the Academy of Sciences at Paris, and some time afterwards the academical prize was adjudged to three of his memoirs Concerning the Inequalities in
the Motions of the Planets. The two prize-questions proposed by the same academy for 1770 and 1772 were designed to obtain a more perfect theory of the moon's motion. Euler, assisted by his eldest
son Johann Albert, was a competitor for these prizes, and obtained both. In the second memoir he reserved for further consideration several inequalities of the moon's motion, which he could not
determine in his first theory on account of the complicated calculations in which the method he then employed had engaged him. He afterwards reviewed his whole theory with the assistance of his son
and W.L. Krafft and A.J. Lexell, and pursued his researches until he had constructed the new tables, which appeared in his Theoria motuum lunae (1772). Instead of confining himself, as before, to the
fruitless integration of three differential equations of the second degree, which are furnished by mathematical principles, he reduced them to the three co-ordinates which determine the place of the
Moon; and he divided into classes all the inequalities of that planet, as far as they depend either on the elongation of the Sun and Moon, or upon the eccentricity, or the parallax, or the
inclination of the lunar orbit. The inherent difficulties of this task were immensely enhanced by the fact that Euler was virtually blind, and had to carry all the elaborate computations it involved
in his memory. A further difficulty arose from the burning of his house and the destruction of the greater part of his property in 1771. His manuscripts were fortunately preserved. His own life was
only saved by the courage of a native of Basel, Peter Grimmon, who carried him out of the burning house.
Some time after this an operation restored Euler's sight; but a too harsh use of the recovered faculty, along with some carelessness on the part of the surgeons, brought about a relapse. With the
assistance of his sons, and of Krafft and Lexell, however, he continued his labours, neither the loss of his sight nor the infirmities of an advanced age being sufficient to check his activity.
Having engaged to furnish the Academy of St Petersburg with as many memoirs as would be sufficient to complete its Acta for twenty years after his death, he in seven years transmitted to the academy
above seventy memoirs, and left above two hundred more, which were revised and completed by another hand.
Euler's knowledge was more general than might have been expected in one who had pursued with such unremitting ardour mathematics and astronomy as his favourite studies. He had made very considerable
progress in medical, botanical and chemical science, and he was an excellent classical scholar, and extensively read in general literature. He was much indebted to an uncommon memory, which seemed to
retain every idea that was conveyed to it, either from reading or meditation. He could repeat the Aeneid of Virgil from the beginning to the end without hesitation, and indicate the first and last
line of every page of the edition which he used. Euler's constitution was uncommonly vigorous, and his general health was always good. He was enabled to continue his labours to the very close of his
life. His last subject of investigation was the motion of balloons, and the last subject on which he conversed was the newly discovered planet Herschel (Uranus). He died of apoplexy on the 18th of
September 1783, whilst he was amusing himself at tea with one of his grandchildren.
Euler's genius was great and his industry still greater. His works, if printed in their completeness, would occupy from 60 to 80 quarto volumes. He was simple and upright in his character, and had a
strong religious faith. He was twice married, his second wife being a half-sister of his first, and he had a numerous family, several of whom attained to distinction. His éloge was written for the
French Academy by the marquis de Condorcet, and an account of his life, with a list of his works, was written by Von Fuss, the secretary to the Imperial Academy of St Petersburg.
The works which Euler published separately are: Dissertatio physica de sono (Basel, 1727, in 4to); Mechanica, sive motus scientia analytice exposita (St Petersburg, 1736, in 2 vols. 4to); Einleitung
in die Arithmetik (ibid., 1738, in 2 vols. 8vo), in German and Russian; Tentamen novae theoriae musicae (ibid. 1739, in 4to); Methodus inveniendi lineas curvas, maximi minimive proprietate gaudentes
(Lausanne, 1744, in 4to); Theoria motuum planetarum et cometarum (Berlin, 1744, in 4to); Beantwortung, etc., or Answers to Different Questions respecting Comets (ibid., 1744, in 8vo); Neue Grundsatze
, etc., or New Principles of Artillery, translated from the English of Benjamin Robins, with notes and illustrations (ibid., 1745, in 8vo); Opuscula varii argumenti (ibid., 1746-1751, in 3 vols.
4to); Novae et correctae tabulae ad loca lunae computanda (ibid., 1746, in 4to); Tabulae astronomicae solis et lunae (ibid., 4to); Gedanken, etc., or Thoughts on the Elements of Bodies (ibid. 4to);
Rettung der gottlichen Offenbarung, etc., Defence of Divine Revelation against Free-thinkers (ibid., 1747, in 4to); Introductio in analysin infinitorum (Lausanne, 1748, in 2 vols. 4to); Scientia
navalis, seu tractatus de construendis ac dirigendis navibus (St Petersburg, 1749, in 2 vols. 4to); Theoria motus lunae (Berlin, 1753, in 4to); Dissertatio de principio minimae actionis, una cum
examine objectionum cl. prof. Koenigii (ibid., 1753, in 8vo); Institutiones calculi differentialis, cum ejus usu in analysi Infinitorum ac doctrina serierum (ibid., 1755, in 4to); Constructio lentium
objectivarum, etc. (St Petersburg, 1762, in 4to); Theoria motus corporum solidorum seu rigidorum (Rostock, 1765, in 4to); Institutiones calculi integralis (St Petersburg, 1768-1770, in 3 vols. 4to);
Lettres à une Princesse d'Allemagne sur quelques sujets de physique et de philosophie (St Petersburg, 1768-1772, in 3 vols. 8vo); Anleitung zur Algebra, or Introduction to Algebra (ibid., 1770, in
8vo); Dioptrica (ibid., 1767-1771, in 3 vols. 4to); Theoria motuum lunae nova methodo pertractata (ibid., 1772, in 4to); Novae tabulae lunares (ibid., in 8vo); Théorie complète de la construction et
de la manœuvre des vaisseaux (ibid., 1773, in 8vo); Eclaircissements sur établissements en faveur tant des veuves que des morts, without a date; Opuscula analytica (St Petersburg, 1783-1785, in 2
vols. 4to).
See Rudio, Leonhard Euler (Basel, 1884); M. Cantor, Geschichte der Mathematik.
Note - this article incorporates content from Encyclopaedia Britannica, Eleventh Edition, (1910-1911)
About Maximapedia | Privacy Policy | Cookie Policy | GDPR | {"url":"https://maximapedia.com/e/euler-leonhard.html","timestamp":"2024-11-05T07:25:39Z","content_type":"text/html","content_length":"16109","record_id":"<urn:uuid:14edf3ca-5450-482e-9926-97f7238bcbc7>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00842.warc.gz"} |
protons and ions traversing a shielding material
The current web calculator for electronic stopping powers exploits a) the "SRIM Module.exe" (with an upper energy limit of 5 GeV/amu) included in SRIM 2013 code (SRIM Tutorials) - whose maximum
available energy is 10 GeV/amu -, i.e., the used electronic stopping power tables are those provided by "SRIM Module.exe" code with a low energy limit of 1 eV; and b) the energy-loss equation (i.e.,
Eq. (2.18) in Sect. 2.1.1 of [Leroy and Rancoita (2016)]) as discussed here. The overall approach is referred to as SR-treatment framework.
The following link give access to the Web Applications for the calculation of the residual spectral fluence or residual energy for protons and ions traversing an absorber:
- Web Calculator for residual spectral fluences or residual energies of uniderectioal protons and ions traversing an absorber up to high energies
How to use this Calculator for particle spectral fluences or mono-energetic particles traversing an absorber
For an incoming particle spectral fluence or mono-energetic particles, this tool allows one to calculate the residual spectral fluence or residual energy after traversing a shielding material.
The input parameters and options for the tool are described below. When the input form has been completed, pressing the "CALCULATE" button will start the calculation and open the "Results" page
(allow for pop-up in your browser settings). The result page will be also linked at the bottom of the calculator page.
Input Parameters:
- Input type
- Incident particle
- Target material
- Traversed path
- Number of steps
- Particle spectral fluence.
Input type
In the web Calculator, using the selector at the top of th calculator panel, the user can select the calculation of the residual spectral fluence or the residual energy for incoming protons or ions.
Spectral fluence is the default option:
User has to change selection for mono-energetic particles:
Incident Particle
In the web Calculator, using the pull down menu, the user can select the species of incident particles, i.e., protons or any other elemental ions.
Except for proton and alpha particle masses, the user can also modify the mass (in amu) of the incident particle (e.g., for all isotopes one can refer to this page): the default mass is that of the
most abundant isotope (MAI). Further information are available at the following webpage.
Target Material
In the section "Target Selection" it is possible to specify an User Defined target material or a predefined Compound material. User can also select the target as gas, this is allowed only for single
element and natural gas target (H, He, N, O, F, Ne, Cl, Ar, Kr, Xe, Rn).
The stopping power in target gases is usually higher than that in an equivalent solid target. The Gas/ Solid correction disappears for higher velocity ions with energies above 2 MeV/amu. But at lower
velocities the effect can be quite large - almost a 2 times change in stopping bacause of the Phase effect near the Bohr velocity, 25 keV/amu.
in the User Defined section individual elements can be selected as well as the composition of the target material choosing the number of elements in the compound. The required parameters for each
element are:
- Atomic number (Z)/Chemical symbol
- Stoichiometric index or element fraction
Electronic Stopping Power for User Defined Compounds
Electronic Stopping Power for User Defined Compounds can be determined by means of Bragg's additivity rule, i.e., the overall Electronic Stopping Power in units of MeV cm^2/g (i.e., the mass
electronic stopping power) is obtained as a weighted sum in which each material contributes proportionally to the fraction of its atomic weight. For instance, in case of a GaAs medium ones obtains
(e.g., Eq. (2.20) at page 15 in [ICRUM (1993)]):
where A[Ga] [A[As]] are the Electronic Stopping Power (in units of MeV cm^2/g) and the atomic weight of Gallium [Arsenic], respectively.
As discussed in SRIM. (see help of "The Stopping and Range in Compounds" in SRIM-2013), the Compound Correction is usually zero for compounds containing heavy atoms, Al(Z>=13) or greater. All
experiments with compounds such as Al[2]O[3], SiO[2], Fe[2]O[3], Fe[3]O[4], SiC, Si[3]N[4], ZnO, and many more, show less than 2% deviation from Bragg's additivity rule which estimates the stopping
by the sum of the stopping in the elemental constituents. That is, the stopping in Al[2]O[3] is the same as the sum of the stopping in 2 Al + 3 O target atoms. For these compounds there is no need
for a Compound Correction. This correction should be accounted for in compounds containing mostly H, C, N, O and F for ion stopping below 2 MeV per atomic mass unit and is negligible above 5 MeV per
atomic mass unit. In the current calculator, no correction is applied for target atoms lighter than Al. Further details are available at SRIM Compound, and SRIM Compound Theory.
Predefined compounds
In the Compoud section it is possible to select a predefined compound including the SRIM compound corrections in the stopping power calculation.
For instance, in the following plot, it is shown the percentage difference of the stopping power of H[2]O (selected as User Defined material) and Water_Liquid (selected as a Compound) as a function
of the incoming proton energy in MeV:
Traversed Path
This input define the traversed path by a particle fluence. The traversed path is expressed in [g cm^-2] ad is given by:
(absorber thickness in cm) x (absorber density in g cm^-3) .
The lower limit of traversed path is equivalent to about 5 μm of Si (1.16x10^-3 [g cm^-2]), e.g., 0.96 cm in Dry Air at sea level with density equal to 1.20484x10^-3 g/cm^3 (as implemented in SRIM
from ICRU-37 table 5.5).
Number of Steps
In the web Calculator, using the pull down menu, the user can select the number of steps of the calculation - i.e., the traversed path is divided by the number of the steps.
The results of each steps is used as input for the following one to obtain the final result for the total traversed path.
In each step, the minimum traversed path is equivalent to about 5 μm of Si (1.16x10^-3 [g cm^-2]), e.g., 0.96 cm in Dry Air at sea level with density equal to 1.20484x10^-3 g/cm^3 (as implemented in
SRIM from ICRU-37 table 5.5). The number of steps will be accordingly modified to keep each step above the minimum.
Particle Spectral Fluence
This section define the points of the spectral fluence as a function of energy.
The input format is one point per line (Energy - Flux , separated by a space or tab); it is also possible to copy and paste values. The minimum value of the particle spectral fluence is 1 keV.
Mono-Energetic Particle
This section define the list of energy of the incoming protons or ions.
The input format is one energy per line; it is also possible to copy and paste values. The minimum value of the particle energy is 1 keV.
The result page contains the graph of the input spectral fluence and the spectral fluence after traversing the absorber. The table provides the values of the spectral fluence (above 1 keV) and the
residual spectral fluence (above 1 keV) after traversing the absorber. In addition, the incoming and residual spectral fluences are provided. In case of mono-energetic particles a table with the
incoming energy and residual energy is provided.
Residual energies were obtained disregarding multiple scattering effect, i.e., directions of flight of emerging particles were unchaged with respect to those of impinging directions.
Extension for high energy particles
As discussed in this webpage the mass electronic stopping power is derived at sufficiently high energies by mean of energy-loss equation (i.e., Eq. (2.18) in Sect. 2.1.1 of [Leroy and Rancoita (2016)
]) while, at low energies, SRIM treatment has to be employed.
• Electronic stopping power for single elements.
For every ions passing through any elemental medium up top uranium - with the exception of Z = 85 and Z = 87 for which no data are avalable to account the density effect -, the transition energy at
which the SRIM treatment is replaced by that employing Eq. (2.18) (from Sect. 2.1.1 of [Leroy and Rancoita (2016)]) is such that i) protons and ions (from He up to U) are considered almost
fully-ionized and the term accounting for the non-participation of inner electrons of the medium (with atomic number Z) in the collision loss process is negligible (as discussed in this web page),
ii) difference among the mass electronic stopping powers (derived foloowing the two approaches) typically does not exceed 5%.
In Figure 3, It is shown the percentage difference among the two approaches at the upper limit of the chosen transition energy range, for every incident ion in each elemental target. The overall
average difference for every ions in every target is 2% (± 1.4%). The worst case (5.41 ±0.38%) occours for incident ions with z = 91, the best case (0.34 ±0.33%) for for incident ions with z = 34.
Figure 3. Percentage difference between SRIM and energy-loss equation at the upper limit of the transition energy range, for every incident ion in each elemental target . The mass of the incoming
particle corresponds to that of the most aboundant isotope.
In Figures 4 (protons in silicon medium) and 5 (iron-ions in silcon medium) , the so finally mass electronic stopping powers within sr-niel frameworth are shown together with the corresponding SRIM
Figure 4. Mass electronic stopping power as a function of energy for protons in Silicon. Black solid curve is SR-NIEL treatment, red dashed curve is SRIM calculation.
Figure 5. Mass electronic stopping power as a function of energy for Fe ions in Silicon. Black solid curve is SR-NIEL treatment, red dashed curve is SRIM calculation.
• Electronic stopping power for compounds
For every ions passing through a compound reported here, the electronic stopping power is derived means of the SRIM treatment at low energies and bty that from SR-framework at high energies,
similarly to what already discussed for elemental media. For few compounds belonging to the ICRU list the parameters employed for the energy loss formula (including those for the densiity effect) are
reported in Table II of Sternheimer et al. (1984).
In Figure 6, It is shown the percentage difference among the two approaches at the upper limit of the chosen transition energy range, for every incident ion in each of compound media (see compound
list webpage). The overall average difference for every ions in every target is 2.25% (± 1.24%). The worst case (4.81 ±0.91%) occours for incident ions with = 91, the best case (0.65 ±0.95%) for
incident ions with = 1. In about 0.3% of all possible combinations among incident particles and compound media, such a percentage difference exceeds the value of 7%. In those cases, only the
electronic stopping power from SRIM is availble.
Figure 6. Percentage difference between SRIM and energy-loss equation at the upper limit of the transition energy range, for every incident ion in each of compound media: the compound number is
provived in the compound list webpage. The mass of the incoming particle corresponds to the one of the most aboundant isotope
In Figures 7 (protons in propane medium) and 8 (iron-ions in propane medium) , the so finally mass electronic stopping powers within sr-niel frameworth are shown together with the corresponding SRIM
Figure 7. Mass electronic stopping power as a function of energy for protons in Propane. Black solid curve is SR-NIEL treatment, red dashed curve is SRIM calculation.
Figure 8. Mass electronic stopping power as a function of energy for Fe ions in Propane. Black solid curve is SR-NIEL treatment, red dashed curve is SRIM calculation. | {"url":"https://www.sr-niel.org/index.php/sr-niel-web-calculators/web-calculator-for-determine-the-residual-spectral-fluence-or-the-residual-energy-up-to-high-energy-for-protons-and-ions-traversing-a-shielding-material","timestamp":"2024-11-14T14:57:03Z","content_type":"text/html","content_length":"41933","record_id":"<urn:uuid:8f8be44f-d571-4ae6-acc7-105ab1dd710b>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00061.warc.gz"} |
Multiple Axis Bar Charts In R 2024 - Multiplication Chart Printable
Multiple Axis Bar Charts In R
Multiple Axis Bar Charts In R – You could make a Multiplication Graph or chart Pub by labeling the columns. The still left line need to say “1” and signify the amount increased by a single. On the
right hand side of the dinner table, brand the posts as “2, 4, 6 and 8 and 9”. Multiple Axis Bar Charts In R.
Suggestions to learn the 9 times multiplication table
Discovering the nine occasions multiplication dinner table is not an easy task. Counting down is one of the easiest, although there are several ways to memorize it. Within this secret, you place both
your hands around the table and amount your hands one by one from one to 10. Fold your 7th finger to be able to start to see the ones and tens on it. Then count the amount of hands and fingers left
and proper of the folded away finger.
When studying the desk, children may be afraid of bigger numbers. This is because incorporating larger numbers continuously turns into a task. However, you can exploit the hidden patterns to make
learning the nine times table easy. One of many ways is to create the 9 occasions table with a cheat page, read through it out loud, or exercise composing it lower commonly. This method can make the
table more memorable.
Habits to look for on the multiplication graph
Multiplication chart night clubs are perfect for memorizing multiplication details. You will find the product of two phone numbers by exploring the columns and rows of the multiplication graph or
chart. For instance, a line that is all twos as well as a row that’s all eights ought to satisfy at 56. Patterns to search for with a multiplication graph club are like individuals in a
multiplication table.
A pattern to look for with a multiplication chart is definitely the distributive residence. This home may be observed in all of the columns. As an example, a product or service x two is equal to five
(occasions) c. This exact same home relates to any line; the sum of two columns equates to the price of one other line. For that reason, an odd amount instances a much quantity is surely an even
number. A similar applies to the items of two odd numbers.
Creating a multiplication graph from memory
Developing a multiplication graph from storage will help kids discover the diverse amounts within the occasions desks. This simple physical exercise allows your son or daughter to commit to memory
the numbers and see the best way to multiply them, that helps them later on once they get more information challenging math concepts. To get a enjoyable and fantastic way to memorize the phone
numbers, it is possible to prepare shaded buttons to ensure each corresponds to a particular periods table quantity. Ensure that you tag every row “1” and “” to help you swiftly determine which
number comes very first.
As soon as children have enhanced the multiplication graph or chart bar from storage, they should dedicate them selves to the task. This is why it is better try using a worksheet instead of a
conventional notebook computer to rehearse. Colourful and computer animated character templates can interest the feelings of your respective kids. Before they move on to the next step, let them color
every correct answer. Then, exhibit the chart within their study area or bedrooms to work as a prompt.
Employing a multiplication graph or chart in your everyday living
A multiplication chart shows you how to flourish figures, one to 10. It also reveals the item of two amounts. It may be useful in everyday life, like when splitting up funds or accumulating
information on individuals. The subsequent are one of the methods use a multiplication graph or chart. Make use of them to help your child know the idea. We have now pointed out just a few of the
most prevalent uses of multiplication desks.
You can use a multiplication graph or chart to assist your kids learn how to decrease fractions. The secret to success is usually to stick to the denominator and numerator left. Using this method,
they will see that a fraction like 4/6 may be reduced to a fraction of 2/3. Multiplication charts are particularly helpful for youngsters mainly because they assist them to identify variety habits.
You can find FREE computer versions of multiplication chart night clubs on the web.
Gallery of Multiple Axis Bar Charts In R
Solved ggplot Multiple Stacked Bar Charts For Large X axis Dataset R
R Graph Gallery RG 8 Multiple Arranged Error Bar Plot trallis Type
Multiple Bar Charts In R Data Tricks
Leave a Comment | {"url":"https://www.multiplicationchartprintable.com/multiple-axis-bar-charts-in-r/","timestamp":"2024-11-02T11:53:12Z","content_type":"text/html","content_length":"53616","record_id":"<urn:uuid:26c85b92-76b4-4188-b24e-c1c133e2154c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00344.warc.gz"} |
Efficient Exact Inference in Planar Ising Models
N. N. Schraudolph and D. Kamenetsky. Efficient Exact Inference in Planar Ising Models. Technical Report 0810.4401, arXiv, 2008.
Short version
We give polynomial-time algorithms for the exact computation of lowest-energy (ground) states, worst margin violators, log partition functions, and marginal edge probabilities in certain binary
undirected graphical models. Our approach provides an interesting alternative to the well-known graph cut paradigm in that it does not impose any submodularity constraints; instead we require
planarity to establish a correspondence with perfect matchings (dimer coverings) in an expanded dual graph. We implement a unified framework while delegating complex but well-understood subproblems
(planar embedding, maximum-weight perfect matching) to established algorithms for which efficient implementations are freely available. Unlike graph cut methods, we can perform penalized
maximum-likelihood as well as maximum-margin parameter estimation in the associated conditional random fields (CRFs), and employ marginal posterior probabilities as well as maximum a posteriori (MAP)
states for prediction. Maximum-margin CRF parameter estimation on image denoising and segmentation problems shows our approach to be efficient and effective. A C++ implementation is available from
BibTeX Entry
author = {Nicol N. Schraudolph and Dmitry Kamenetsky},
title = {\href{http://nic.schraudolph.org/pubs/SchKam08.pdf}{
Efficient Exact Inference in Planar {I}sing Models}},
number = {\href{http://arxiv.org/abs/0810.4401}{0810.4401}},
institution = {\href{http://arxiv.org/}{arXiv}},
year = 2008,
b2h_type = {Other},
b2h_topic = {Ising Models},
b2h_note = {<a href="b2hd-SchKam09.html">Short version</a>},
abstract = {
We give polynomial-time algorithms for the exact computation
of lowest-energy (ground) states, worst margin violators, log
partition functions, and marginal edge probabilities in certain
binary undirected graphical models. Our approach provides an
interesting alternative to the well-known graph cut paradigm
in that it does not impose any submodularity constraints; instead
we require planarity to establish a correspondence with perfect
matchings (dimer coverings) in an expanded dual graph. We
implement a unified framework while delegating complex but
well-understood subproblems (planar embedding, maximum-weight
perfect matching) to established algorithms for which efficient
implementations are freely available. Unlike graph cut methods,
we can perform penalized maximum-likelihood as well as
maximum-margin parameter estimation in the associated conditional
random fields (CRFs), and employ marginal posterior probabilities
as well as \emph{maximum a posteriori} (MAP) states for prediction.
Maximum-margin CRF parameter estimation on image denoising and
segmentation problems shows our approach to be efficient and
effective. A C++ implementation is available from
Generated by | {"url":"https://schraudolph.org/bib2html/b2hd-SchKam08.html","timestamp":"2024-11-13T12:49:45Z","content_type":"text/html","content_length":"5924","record_id":"<urn:uuid:c3fc0f37-6850-49f2-91b0-9ba9bbb9f543>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00364.warc.gz"} |
Thomas’ Calculus 13th Edition Chapter 14: Partial Derivatives - Section 14.2 - Limits and Continuity in Higher Dimensions - Exercises 14.2 - Page 796 42
Work Step by Step
Consider first approach : $(x,y) \to (0,0)$ along $y=0$ This implies that $\lim\limits_{x \to 0}\dfrac{x^4}{x^4+(0)^2}=1$ Next, let us consider our second approach : $(x,y) \to (0,0)$ along $y=x^2$
This implies that $\lim\limits_{x \to 0}\dfrac{x^4}{x^4+(x^2)^2}=\dfrac{1}{2}$ This shows that there are different limit values for different approach, so, the limit does not exist at the point (0,0)
for the function $f(x,y)=\dfrac{x^4}{x^4+y^2}$. | {"url":"https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-14-partial-derivatives-section-14-2-limits-and-continuity-in-higher-dimensions-exercises-14-2-page-796/42","timestamp":"2024-11-07T23:36:00Z","content_type":"text/html","content_length":"85921","record_id":"<urn:uuid:96101ca2-432e-42ce-83ef-5ad2fabcc464>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00203.warc.gz"} |
The truth value of an array with more than one element is ambiguous. - sopython
The truth value of an array with more than one element is ambiguous.
>>> x = np.arange(10)
>>> if x<5: print('Small!')
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
That x<5 is not actually a boolean value, but an array of 10 bools, indicating which values are under 5.
If what the user wants is (x<5).any() or (x<5).all() and they just failed to read the error, close as a dup (separate questions for numpy and pandas).
If what the user wanted was to use x<5 as a mask array to do further array processing instead of whatever loop with an if or while they were attempting, that could be a good question, or a “I don’t
want to read the numpy tutorial, I want you to write me a new numpy tutorial instead” question, but probably not a dup. | {"url":"https://sopython.com/canon/119/the-truth-value-of-an-array-with-more-than-one-element-is-ambiguous/","timestamp":"2024-11-12T07:14:11Z","content_type":"text/html","content_length":"7414","record_id":"<urn:uuid:9a34ca74-7377-47d8-9fbb-6f6b604efb4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00697.warc.gz"} |
The Giza Complex - The Vanishing Point - John Legon 2005
John Legon's Vanishing Point Update - September 2005
Giza Pyramid Vanishing Point Circle relationships
(This page is for archival purposes, please access the Mr. Legon's existing page here.)
Following a recent discussion with Stephen Goodfellow, I've been taking another look at the Giza Vanishing Point, and have been checking my original calculations with the help of a CAD program. I
have also been looking again at the anomalous boundary wall to the south of the Third Pyramid, and its connection with the two encompassing circles of the three pyramids.
In my original computation of the Vanishing Point, I used analytical geometry to determine the centres of the circles, based upon the coordinates of the pyramid-corners through which the
circumferences of the two circles had to pass. Those coordinates were expressed in terms of Egyptian royal cubits, measured southwards and westwards from the north-east corner of the Great Pyramid,
as determined by my analysis of the Giza Site Plan. Essentially, the coordinate points represented a conversion into cubits of Petrie's survey data in inches, although Petrie himself did not think
that a connection between the three pyramids had been planned. The centres of the circles were obtained from the points of intersection of the perpendicular bisectors of the notional chords which can
drawn between the respective corners of the three pyramids, as detailed in my original diagram below:
With the assistance of CAD, it is a relatively simple matter to determine the dimensions of circles passing through any three points, given the coordinates, and hence to confirm that the results of
the calculations which I carried out back in April 1987 are correct. Note that the above diagram was sent to Stephen for his own information. If I had known that it would later appear on his web site
(and now on my own), I might have taken more care with my handwriting....
The following diagram shows where the centres of the two circles are located in relation to the three pyramids. Only a portion of the larger circle has been drawn, since the dimensions are too vast
for the full circumference to be represented effectively on the same scale as the pyramids themselves.
Now with regard to the dimensions of these circles, I had always taken the view that they were coincidental by-products of the Giza Site Plan, without any real significance of their own. After all,
it seemed highly unlikely that the architect of the site plan could have calculated the radii and the centres, let alone have configured the layout to obtain specific results. In any case, I had
already ascribed the exact dimensions and relative positions of the three pyramids to a highly logical and coherent design, in which every measurement had been explained and often with reference to
more than one requirement. There was little reason to think that any further factors had to be taken account - least of all two circles of enormous size.
However, there was a nagging suspicion that my preconceptions were not entirely justified. From the outset, one or two of the dimensions were clearly significant, as I mentioned to Stephen in a
letter many years ago. It was exceedingly strange that the centre of the large circle was just 11,000 cubits southwards from the vanishing point itself, with a computed discrepancy of only 0.13
cubit. Not only did the chances of randomly obtaining such a round number with such accuracy seem rather slight, but the number was significant in its own right. In the Giza plan, as we have seen,
the modular design placed the south side of the Second Pyramid 1100 cubits southwards from the north side of the Great Pyramid, so that the north-south dimension encompassing these two pyramids was
just 5/2 times the base of the Great Pyramid of 440 cubits. In addition, the coordinates of the vanishing point and the radius of the small circle corresponded to whole numbers of cubits to within
0.05 cubit, and although not particularly interesting, conformed to the cubit system.
It was only recently, however, that some further relationships came to light, when Stephen pointed out that the circumference of the large circle was just 20 times the diameter of the small circle.
The exact factor of 20.04 was close enough to a round figure to suggest deliberate intent. Now for the reasons outlined above, I hadn't paid much attention to the dimensions of the circles, and
cannot recall having calculated the circumferences. I knew that the radius of the large circle corresponded to a round 17,500 cubits, with a discrepancy of only 0.04%, but had declined to draw any
further conclusions. The diameter of the large circle is, however, practically just 35,000 cubits, so that given the approximation to p of 22/7, the circumference will be 110,000 cubits. Not only is
it inherently quite surprising to obtain these simple multiples of 10,000 cubits, but the circumference is also just 10 times the distance northwards from the centre of the circle to the Vanishing
Point - through which, by definition, the circumference must pass.
Once again, therefore, it is not just the whole numbers of thousands of cubits which are significant, but the fact that these dimensions are mathematically meaningful. Being a multiple of 7000
cubits, the diameter of 35,000 cubits would have been an ideal choice - if indeed it was chosen - since the circumference would also contain a round number of thousands of cubits, according to the
'classic' value for p of 22/7. The same reasoning applies also to the Great Pyramid, which is thought to embody the p-proportion through its height of 280 cubits and side-length of 440 cubits.
Furthermore, the circumference of the large circle is just 250 times the side-length of the Great Pyramid, and the dimensions of 440 and 250 cubits are consecutive in the Giza plan.
It must be noted that the diameter of the large circle is highly sensitive to the exact placing of the pyramid corners through which the circumference passes, owing to the flatness of the curve which
connects them. Indeed, if the three corners had been placed in a straight line, then that line would belong to the circumference of a circle with infinite radius. Although the slight bend in the line
reduced the diameter to a 'sensible' dimension, the dimension changes rapidly with slight adjustments of the pyramid corner positions. It turns out - and I still find this hard to believe - that the
precise diameter of 35,000 cubits can be obtained for the large circle by shifting the south-east corner of the Second Pyramid a mere 0.006 of a cubit from the position as defined by the site-plan
coordinates in whole cubits!
If it had ever been intended to define a circle with a diameter of 35,000 cubits by means of the three pyramid-corners, therefore, then it would have been virtually impossible to achieve a more
accurate result than that actually obtained by the site-plan dimensions. It is true that the circumference of the circle will not be exactly 110,000 cubits if the exact value of p is used instead of
22/7, yet the relationship with the Great Pyramid still stands, since the dimensions of this pyramid arguably reflect p with greater accuracy than 22/7.
Now turning to the small circle, my computations had shown that the radius was 2742 cubits, while the centre was 2740 cubits eastwards from the Vanishing Point. This number was significant to me as
being practically ten times the height of the Second Pyramid, which the survey-data and theoretical factors had shown to be 274 cubits. Whilst the dimensions of the large circle seemed to refer to
the Great Pyramid, therefore, a comparable relationship existed between the small circle and the Second Pyramid. At the same time, the diameter of the small circle is a fair approximation to
one-twentieth of the circumference of the large circle, as Stephen suggested - this requiring a radius of about 2748 cubits.
The Boundary Wall
We have already referred to the anomalous boundary wall to the south of the Third Pyramid, which seems to run straight over the Vanishing Point. Stephen wanted to know whether the wall described a
large circle, and whether it was possible - given the fragment that exists - to accurately determine the size of the circle. It might be interesting at this point to quote from a letter I sent
Stephen on 16th April 1987:
There are a few strange things about this wall, the first being that while all the other boundary walls at Giza are aligned north-south or east-west, this wall diverges by about 7 degrees from an
east-west alignment - in such as way as to just encompass the vanishing point within the boundary. Secondly, this wall was not built in a straight line but in fact represents the arc of a very large
circle, the radius of curvature of which is in the region of 11000 cubits - or comparable to the larger of your two ground circles. A chord joining the ends of this curved section of boundary wall
actually falls precisely on the point of intersection of the two ground circles, which point would therefore have been covered over if the wall had been built in a straight line!
Although at that time I had answered Stephen's question, I now find that the details are not quite correct. The rough figure of 11,000 cubits which I gave for the radius of curvature of the wall,
actually referred to the diameter! Apart from this simple mistake, much depends on the accuracy of Petrie's plan of the boundary wall, upon which I had based my calculations. In a letter dated 27th
May 1987, I wrote:
These walls were all surveyed more than a hundred years ago by Flinders Petrie, who was also puzzled by the line of the southern wall and wrote: "it is impossible to suppose its skew and bowing line
to have been laid out along with the very regular lines of the other parts." Yet this wall is of the same construction, and its curvature must have been produced by a deliberate effort of the
builders - so I think it must definitely have had a special significance.
Looking again at Petrie's work, it is clear that he took great trouble to determine the exact lines of the walls around the Third Pyramid, as he says: "They were all fixed in the survey by
triangulation, with more accuracy than the wall-surface can be defined." In my reassessment of the curvature of the south boundary wall, I took a scan of Petrie's plan, and determined the coordinates
in pixels of three points along the wall - at either end and at the approximate mid-point. These coordinates were then related to the size and position of the Third Pyramid, scaled to the dimensions
of the Giza plan in cubits, and entered into the CAD program. This gave a radius of curvature of around 4800 cubits.
As a check on this estimation, I imported the scan into the CAD program, and scaled and positioned it so that the base of the Third Pyramid matched the site-plan location. By this means, I could not
only determine the centre and radius of the circle, but also superimpose the accurately-plotted circumference on Petrie's plan, in order to make a comparison with the curvature of the wall as it was
actually built. As can be seen in the diagram below, in which the arc of the circle is shown in magenta, the agreement with the line of the wall is rather close, being in fact practically
This new evaluation also confirmed that the Vanishing Point is extremely close to the inner (north) side of the wall. It now appears that if the wall had been built in an exact straight line between
the ends of its curvature, then the vanishing point would have been just outside the enclosure. Not only was the anomalous curve similar to that of the pyramid ground circles, therefore, but the
position of the wall and the effect of the curve were significant in relation to the vanishing point. At this stage, however, the radius of curvature and the centre-point of the circle appeared to be
arbitrary. Clearly, the circle of the wall did not make contact with the pyramids, or with any other structures as far as I could see, so the question arose as to whether the size and position had
any significance of their own.
Again with the assistance of CAD, it was easy to plot the entire circle and relate the dimensions to the vanishing point circles. The result of this exercise is shown in the diagram below. Much to my
surprise, the circle of the wall was bounded to the east and west by tangents to the small and large circles drawn parallel to the north-south axis of the plan. Consequently, the size and position of
the wall circle were entirely a function of the pyramid ground circles. First, the diameter and east-west position of the wall circle are defined by the eastward extent of the small circle, and by
the westward extent of the large circle, thus being equally dependent upon both circles. Second, the north-south position of the wall circle is such that the circumference intersects the vanishing
point, which is itself defined by the intersection of the pyramid ground circles:
It will noted that there is a slight discrepancy between the circumference of the wall circle and the tangents as drawn to the small and large circles. Given, however, that the wall circle had to be
extrapolated from a fairly short segment, the agreement seems remarkably good. It is, of course, possible to construct a theoretical circle which is defined exactly by the tangents to the pyramid
circles; and when overlaid on the wall as built, the departure is no greater than the thickness of the wall itself.
Now as Stephen will confirm, I have always been extremely skeptical about the idea that the pyramid-builders had intended to lay out the Giza pyramids in such a way as to define anything resembling a
vanishing point. However, when all the evidence is considered, it does seem to me that there is quite a strong argument to support the idea that the builders were aware of the fact that the three
pyramids were bounded by circles which defined two points by their intersection - one being comparable to the artist's conception of a vanishing point. Perhaps after the three pyramids had been
built, the architects turned their attention to the location of the boundary walls, which can be shown to have been laid out on a definite plan. It was then, perhaps, that the dimensions of the
enclosing circles were considered, and found to be of sufficient interest to justify the enhancement of the unified plan by setting out the southern enclosure wall in a manner which was entirely
contrary to their usual practice. For to construct a wall with such a continuous but slight curve can only have been the result of conscious effort...
John Legon, 17/09/05
( This page is for archival purposes - please access the existing page here.)
John Legon's Home Page | {"url":"http://goodfelloweb.com/giza/legon_2005.html","timestamp":"2024-11-05T16:17:53Z","content_type":"text/html","content_length":"22809","record_id":"<urn:uuid:807241b8-1974-4c45-ae55-596d8728c283>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00784.warc.gz"} |
5 Centimeters to Inches
5 cm to in conversion result above is displayed in three different forms: as a decimal (which could be rounded), in scientific notation (scientific form, standard index form or standard form in the
United Kingdom) and as a fraction (exact result). Every display form has its own advantages and in different situations particular form is more convenient than another. For example usage of
scientific notation when working with big numbers is recommended due to easier reading and comprehension. Usage of fractions is recommended when more precision is needed.
If we want to calculate how many Inches are 5 Centimeters we have to multiply 5 by 50 and divide the product by 127. So for 5 we have: (5 × 50) ÷ 127 = 250 ÷ 127 = 1.9685039370079 Inches
So finally 5 cm = 1.9685039370079 in | {"url":"https://unitchefs.com/centimeters/inches/5/","timestamp":"2024-11-04T05:43:03Z","content_type":"text/html","content_length":"23044","record_id":"<urn:uuid:33df3b94-68bd-4e4f-a555-c3606cdada07>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00430.warc.gz"} |
Future Value Calculator
A calculator for working out the future value of an investment, taking compound interest into account.
Enter amounts as whole numbers, or in the case of decimals only include the decimal point.
Interest rate
Interest rate must be greater than 0 and less than 100
[Daily ] | {"url":"https://software.safish.com/tools/future-value-calculator","timestamp":"2024-11-07T19:46:36Z","content_type":"text/html","content_length":"88333","record_id":"<urn:uuid:d9e1cf56-e4e5-4536-b82e-2841ffa1a22a>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00837.warc.gz"} |
Celebratio Mathematica is an open-access publication of Mathematical Sciences Publishers.
Design & Software ©2012–2024 MSP. All rights reserved.
The Bibliographic Data, being a matter of fact and not creative expression, is not subject to copyright. To the extent possible under law, Mathematical Sciences Publishers has
waived all copyright and related or neighboring rights to the Bibliographies on Celebratio Mathematica, in their particular expression as text, HTML, BibTeX data or otherwise.
The Abstracts of the bibliographic items may be copyrighted material whose use has not been specifically authorized by the copyright owner. We believe that this not-for-profit,
educational use constitutes a fair use of the copyrighted material, as provided for in Section 107 of the U.S. Copyright Law. If you wish to use this copyrighted material for purposes
that go beyond fair use, you must obtain permission from the copyright owner.
Website powered by ProCelebratio 0.5 from MSP. | {"url":"https://celebratio.org/Bing_RH/bibf/28/49/173/","timestamp":"2024-11-12T18:21:06Z","content_type":"text/html","content_length":"17019","record_id":"<urn:uuid:0408dc74-ee2d-40ba-86a9-20556080b600>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00616.warc.gz"} |
A balanced lever has two weights on it, the first with mass 72 kg and the second with mass 9 kg. If the first weight is 4 m from the fulcrum, how far is the second weight from the fulcrum? | HIX Tutor
A balanced lever has two weights on it, the first with mass #72 kg # and the second with mass #9 kg#. If the first weight is # 4 m# from the fulcrum, how far is the second weight from the fulcrum?
Answer 1
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To find the distance of the second weight from the fulcrum, you can use the principle of moments:
( \text{Moment}_1 = \text{Moment}_2 )
Where ( \text{Moment}_1 = \text{force}_1 \times \text{distance}_1 ) and ( \text{Moment}_2 = \text{force}_2 \times \text{distance}_2 ).
Given: ( \text{force}_1 = 72 \text{ kg} ), ( \text{distance}_1 = 4 \text{ m} ), ( \text{force}_2 = 9 \text{ kg} ), ( \text{distance}_2 = ? )
Rearrange the equation to solve for ( \text{distance}_2 ): ( \text{distance}_2 = \frac{\text{Moment}_1}{\text{force}_2} - \text{distance}_1 )
Substitute the given values: ( \text{distance}_2 = \frac{(72 \text{ kg} \times 4 \text{ m})}{9 \text{ kg}} - 4 \text{ m} )
( \text{distance}_2 = 32 \text{ m} - 4 \text{ m} )
( \text{distance}_2 = 28 \text{ m} )
Therefore, the second weight is 28 meters from the fulcrum.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/a-balanced-lever-has-two-weights-on-it-the-first-with-mass-72-kg-and-the-second--8f9af8b224","timestamp":"2024-11-07T22:14:01Z","content_type":"text/html","content_length":"580196","record_id":"<urn:uuid:1c13097d-8ed5-4e40-b888-debe4f746a26>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00659.warc.gz"} |
LM 21.2 Electrical forces Collection
21.2 Electrical forces by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license.
21.2 Electrical forces
“Charge” is the technical term used to indicate that an object has been prepared so as to participate in electrical forces. This is to be distinguished from the common usage, in which the term is
used indiscriminately for anything electrical. For example, although we speak colloquially of “charging” a battery, you may easily verify that a battery has no charge in the technical sense, e.g., it
does not exert any electrical force on a piece of tape that has been prepared as described in the previous section.
Two types of charge
We can easily collect reams of data on electrical forces between different substances that have been charged in different ways. We find for example that cat fur prepared by rubbing against rabbit fur
will attract glass that has been rubbed on silk. How can we make any sense of all this information? A vast simplification is achieved by noting that there are really only two types of charge. Suppose
we pick cat fur rubbed on rabbit fur as a representative of type A, and glass rubbed on silk for type B. We will now find that there is no “type C.” Any object electrified by any method is either
A-like, attracting things A attracts and repelling those it repels, or B-like, displaying the same attractions and repulsions as B. The two types, A and B, always display opposite interactions. If A
displays an attraction with some charged object, then B is guaranteed to undergo repulsion with it, and vice-versa.
The coulomb
Although there are only two types of charge, each type can come in different amounts. The metric unit of charge is the coulomb (rhymes with “drool on”), defined as follows:
One Coulomb (C) is the amount of charge such that a force of `9.0×10^9` N occurs between two point-like objects with charges of 1 C separated by a distance of 1 m.
The notation for an amount of charge is `q`. The numerical factor in the definition is historical in origin, and is not worth memorizing. The definition is stated for point-like, i.e., very small,
objects, because otherwise different parts of them would be at different distances from each other.
A model of two types of charged particles
Experiments show that all the methods of rubbing or otherwise charging objects involve two objects, and both of them end up getting charged. If one object acquires a certain amount of one type of
charge, then the other ends up with an equal amount of the other type. Various interpretations of this are possible, but the simplest is that the basic building blocks of matter come in two flavors,
one with each type of charge. Rubbing objects together results in the transfer of some of these particles from one object to the other. In this model, an object that has not been electrically
prepared may actually possesses a great deal of both types of charge, but the amounts are equal and they are distributed in the same way throughout it. Since type A repels anything that type B
attracts, and vice versa, the object will make a total force of zero on any other object. The rest of this chapter fleshes out this model and discusses how these mysterious particles can be
understood as being internal parts of atoms.
Use of positive and negative signs for charge
Because the two types of charge tend to cancel out each other's forces, it makes sense to label them using positive and negative signs, and to discuss the total charge of an object. It is entirely
arbitrary which type of charge to call negative and which to call positive. Benjamin Franklin decided to describe the one we've been calling “A” as negative, but it really doesn't matter as long as
everyone is consistent with everyone else. An object with a total charge of zero (equal amounts of both types) is referred to as electrically neutral.
Criticize the following statement: “There are two types of charge, attractive and repulsive.”
(answer in the back of the PDF version of the book)
Coulomb's law
A large body of experimental observations can be summarized as follows:
Clever modern techniques have allowed the `1"/"r^2` form of Coulomb's law to be tested to incredible accuracy, showing that the exponent is in the range from `1.9999999999999998` to
Note that Coulomb's law is closely analogous to Newton's law of gravity, where the magnitude of the force is `Gm_1m_2"/"r^2`, except that there is only one type of mass, not two, and gravitational
forces are never repulsive. Because of this close analogy between the two types of forces, we can recycle a great deal of our knowledge of gravitational forces. For instance, there is an electrical
equivalent of the shell theorem: the electrical forces exerted externally by a uniformly charged spherical shell are the same as if all the charge was concentrated at its center, and the forces
exerted internally are zero.
Conservation of charge
An even more fundamental reason for using positive and negative signs for electrical charge is that experiments show that charge is conserved according to this definition: in any closed system, the
total amount of charge is a constant. This is why we observe that rubbing initially uncharged substances together always has the result that one gains a certain amount of one type of charge, while
the other acquires an equal amount of the other type. Conservation of charge seems natural in our model in which matter is made of positive and negative particles. If the charge on each particle is a
fixed property of that type of particle, and if the particles themselves can be neither created nor destroyed, then conservation of charge is inevitable.
As shown in figure b, an electrically charged object can attract objects that are uncharged. How is this possible? The key is that even though each piece of paper has a total charge of zero, it has
at least some charged particles in it that have some freedom to move. Suppose that the tape is positively charged, c. Mobile particles in the paper will respond to the tape's forces, causing one end
of the paper to become negatively charged and the other to become positive. The attraction between the paper and the tape is now stronger than the repulsion, because the negatively charged end is
closer to the tape.
What would have happened if the tape was negatively charged?
(answer in the back of the PDF version of the book)
Discussion Questions
• A - If the electrical attraction between two point-like objects at a distance of 1 m is `9×10^9` N, why can't we infer that their charges are `+1` and `-1` C? What further observations would we
need to do in order to prove this?
• B - An electrically charged piece of tape will be attracted to your hand. Does that allow us to tell whether the mobile charged particles in your hand are positive or negative, or both?
21.2 Electrical forces by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license. | {"url":"https://www.vcalc.com/collection/?uuid=1e852652-f145-11e9-8682-bc764e2038f2","timestamp":"2024-11-06T23:47:43Z","content_type":"text/html","content_length":"59292","record_id":"<urn:uuid:c32a4fd1-c209-466e-a380-8b466fe44faf>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00210.warc.gz"} |
CWG Issue 299
This is an unofficial snapshot of the ISO/IEC JTC1 SC22 WG21 Core Issues List revision 115d. See http://www.open-std.org/jtc1/sc22/wg21/ for the official list.
299. Conversion on array bound expression in new
Section: 7.6.2.8 [expr.new] Status: CD1 Submitter: Mark Mitchell Date: 19 Jul 2001
[Voted into WP at October 2005 meeting.]
In 7.6.2.8 [expr.new], the standard says that the expression in an array-new has to have integral type. There's already a DR (issue 74) that says it should also be allowed to have enumeration type.
But, it should probably also say that it can have a class type with a single conversion to integral type; in other words the same thing as in 8.5.3 [stmt.switch] paragraph 2.
Suggested resolution:
In 7.6.2.8 [expr.new] paragraph 6, replace "integral or enumeration type (6.8.2 [basic.fundamental])" with "integral or enumeration type (6.8.2 [basic.fundamental]), or a class type for which a
single conversion function to integral or enumeration type exists".
Proposed resolution (October, 2004):
Change 7.6.2.8 [expr.new] paragraph 6 as follows:
Every constant-expression in a direct-new-declarator shall be an integral constant expression (7.7 [expr.const]) and evaluate to a strictly positive value. The expression in a
direct-new-declarator shall [DEL:have:DEL] integral type, [DEL:or:DEL] enumeration type[DEL: (3.9.1):DEL][DEL:with a:DEL] non-negative[DEL: value:DEL]. [Example: ...
Proposed resolution (April, 2005):
Change 7.6.2.8 [expr.new] paragraph 6 as follows:
Every constant-expression in a direct-new-declarator shall be an integral constant expression (7.7 [expr.const]) and evaluate to a strictly positive value. The expression in a
direct-new-declarator shall [DEL:have integral or enumeration type (6.8.2 [basic.fundamental]) with a non-negative value:DEL]. [Example: ... | {"url":"https://cplusplus.github.io/CWG/issues/299.html","timestamp":"2024-11-02T18:14:50Z","content_type":"text/html","content_length":"4862","record_id":"<urn:uuid:d312a81c-eab0-4e07-b01e-ae716800edfc>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00735.warc.gz"} |
Concept information
gaseous alpha-endosulfan mass concentration
• Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be
described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chmical formula for alpha-endosulfan is C9H6Cl6O3S. It is a member of the group of halogenated
organics. The IUPAC name is (1S,2R,8S,9S)-1,9,10,11,12,12-hexachloro-4,6-dioxa-5λ4-thiatricyclo[7.2.1.02,8]dodec-10-ene 5-oxide
• alpha-endosulfan mass concentration
{{#each properties}}
{{toUpperCase label}}
{{#each values }} {{! loop through ConceptPropertyValue objects }} {{#if prefLabel }}
{{#if vocabName }}
{{ vocabName }}
{{/if}} {{/each}} | {"url":"https://vocabulary.actris.nilu.no/skosmos/actris_vocab/en/page/alpha-endosulfanmassconcentration","timestamp":"2024-11-08T06:25:12Z","content_type":"text/html","content_length":"21503","record_id":"<urn:uuid:10f3d37a-fb35-4110-b66d-13e2da73d3c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00068.warc.gz"} |
Lists 2010 : StackExchange sites
One of the trends of 2010 was the proliferation of StackExchange sites. I guess by now most of us visit MathOverflow along with the arXiv daily. But, there are plenty of other StackExchange sites
around that may be of interest to the mathematics-community :
• Mathematics somewhat less high-brow than MathO.
• Physics still in the beta-phase (see below)
• TeX for TeX and LaTeX-lovers
• iPad 4 edu for those who want to use their iPad in the classroom
• etc. etc.
“Opening a StackExchange site is damn hard. First you have to find at least 60 people interested in the site. Then, when this limit is reached, a large amount of people (in the hundreds, but it
really depends on the reputation of each participant) must commit and promise to create momentum for the site, adding questions and answers. When this amount is reached, the site is open and stays in
closed beta for seven days. During this time, the committers have to enrich the site so that the public beta (which starts after the first seven days) gets enough hits and participants to show a
self-sustained community.” (quote from ForTheScience’s StackExchange sites proliferation, this post also contains a list of StackExchange-projects in almost every corner of Life)
The site keeping you up to date with StackExchange proposals and their progress is area51. Perhaps, you want to commit to some of these proposals
or simply browse around area51 until you find the ideal community for you to belong to… | {"url":"http://www.neverendingbooks.org/lists-2010-stackexchange-sites/","timestamp":"2024-11-01T20:59:39Z","content_type":"text/html","content_length":"31606","record_id":"<urn:uuid:3f890c37-1a6c-41bb-b8ae-601afb73104d>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00655.warc.gz"} |
RE: st: Binomial regression
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
RE: st: Binomial regression
From "Newson, Roger B" <[email protected]>
To <[email protected]>
Subject RE: st: Binomial regression
Date Sun, 5 Aug 2007 17:15:53 +0100
Two suggestions re binomial regression:
Suggestion 1: For what it's worth, confidence intervals for risk differences (in some cases) can be reported using the -somersd- package, downloadable from SSC using the -ssc- command. Given 2 binary (0,1) variables x and y, the user can type
somersd x y, transf(z) tdist
and get a confidence interval for the risk difference
Pr(y==1|x==1) - Pr(y==1|x==0)
This method has the advantage (compared to -binreg-, -glm- etc) of using the Normalizing and variance-stabilizing hyperbolic arctangent or z-transformation, recommended by Edwardes (1995) for the general Somers' D for binary X-variates (including the special case where the y-variate is also binary).
If there is a categorical confounding variable w, then the user can type
somersd x y, transf(z) tdist wstrata(w)
and get a confidence interval for a within-strata risk difference for pairs of observations with the same value of w. The user can alternatively specify multiple categorical confounding w-variables, and/or w-variables which specify propensity groups based on a propensity score for x==1 calculated from multiple confounding variables.
Suggestion 2: To output confidence intervals for baseline odds with confidence intervals for odds ratios, the user can specify a baseline variate of ones, and then enter it into the model with the -noconst- option. For instance, the user can type:
gene byte baseline=1
logit y baseline x, noconst or
This trick can also be used with geometric means and their ratios. See Newson (2003).
I hope this helps.
Edwardes, M. D. d. B. 1995. A confidence interval for Pr(X < Y) − Pr(X > Y) estimated from simple cluster samples. Biometrics 51: 571–578.
Newson R. 2003. Stata tip 1: The eform() option of regress. The Stata Journal 3(4): 445. Download post-publication update from
Roger Newson
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: [email protected]
Web page: www.imperial.ac.uk/nhli/r.newson/
Departmental Web page:
Opinions expressed are those of the author, not of the institution.
-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Maarten buis
Sent: 04 August 2007 07:47
To: [email protected]
Subject: Re: st: Binomial regression
--- Marcello Pagano <[email protected]> wrote:
> I agree wholeheartedly that the risk difference is sometimes
> preferable to the odds ratio. Witness what is currently going on with
> the attack on Avandia. Rather than report a risk difference of 0.2%
> in the MI rate, we are faced with a risk INCREASE of 40% -- the
> effect of going from 0.5% to 0.7%. If reported as a risk difference
> it would probably not have made the headlines it has nor created the
> furor it has.
At this point I think that there is room for improvement in Stata
output. When reporting odds ratios after -logit-, Stata will not report
the baseline odds (-exp(_cons)-), So Stata reports that the odds
increased with 40%, but not that the baseline odds is .005 (at these
low probabilities risks and odds are almost the same). I would like to
see the baseline odds and the odds ratios, because both give very
useful information about the size of the effect, as Marcello's
example illustrates.
-- Maarten
Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands
visiting address:
Buitenveldertselaan 3 (Metropolitan), room Z434
+31 20 5986715
Yahoo! Answers - Got a question? Someone out there knows the answer. Try it
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2007-08/msg00186.html","timestamp":"2024-11-13T14:52:49Z","content_type":"text/html","content_length":"13287","record_id":"<urn:uuid:bbd80f0b-5528-4726-9107-9da82bc8814b>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00825.warc.gz"} |
Magic Teller Machine Solution - Actuaries Digital
Puzzles (The Critical Line)
Magic Teller Machine Solution
Oliver Chambers outlines the solutions to his Magic Teller puzzle and announces the winner of the first instalment of The Critical Line series.
We had three correct entries to the Magic Teller Machine puzzle. Congratulations to: Tim Hillman, Ting Chen, and Paul Swinhoe. This month’s winner was Paul Swinhoe who will receive a $50 book
voucher. Unfortunately, we didn’t receive any correct proofs of the Riemann Hypothesis.
The Magic Teller Machine:
You have several coins. Each of these coins has a front and a back, and on each side of the coin is a non-negative integer. We can represent a coin as the ordered paid $$\langle x, y \rangle$$ and
the value of each coin is the sum of the integers on the front and back, $$x+y$$. In front of you is a magic teller machine, it knows the contents of your coin purse. If you have a coin $$\langle x,
y \rangle$$, but you do not have either the coin $$\langle x+1, y \rangle$$ or $$\langle x, y +1\rangle$$, then the MTM will allow you to exchange your coin $$\langle x, y \rangle$$ for two new
coins $$\langle x+1, y \rangle$$ and $$\langle x, y+1 \rangle$$. You are allowed to make any (finite) number of transactions with the MTM. The aim of the game is to make a sequence of transactions
with the MTM so that all of your coins have a value greater than $2$. Is this possible in either of the following scenarios:
1. You start with the $$6$$ unique coins with a value at most $$2$$:
$$\langle 0, 0 \rangle, \langle 1, 0 \rangle, \langle 0, 1 \rangle, \langle 2, 0 \rangle, \langle 1,1 \rangle, \langle 0, 2 \rangle$$
2. You start with a single coin of value zero: $$\langle 0, 0 \rangle$$
Demonstrate that you can exchange your coins until they all have a value greater than $$2$$, or prove that it’s impossible.
Solution 1:
We will show that it is impossible to convert the coin $$\langle 0, 0 \rangle$$ into coins with value greater than $$2$$ in a finite number of moves. This will also show that scenario (a) is
Suppose, for the sake of contradiction, that we have a finite sequence moves that will exchange $$\langle 0, 0 \rangle$$ into coins of values greater than $$2$$. For each coin $$\langle x, y \
rangle$$ we will consider the minimum number of times that this coin will appear in our collection of coins throughout the strategy (i.e. not at the same time). Clearly $$\langle 0, 0 \rangle$$
appears once and it must be exchanged with the MTM. This will produce two coins $$\langle 1, 0 \rangle$$ and $$\langle 0, 1 \rangle$$. These must also be exchanged with the MTM (as they have value
less than $$2$$) and so on. This is illustrated in the diagram below:
The bottom row of the diagram shows coins with value $$3$$. The coin $$\langle 2, 1 \rangle$$ will appear in the collection of coins $$3$$ times. We must exchange that coin with the MTM at least
twice (because we can only have one coin $$\langle 2, 1 \rangle$$ at the end).
For a positive integer $$k$$, consider the wallet of coins $$W(k) = \{\langle k+2, k-1\rangle, \langle k+1, k\rangle, \langle k, k+1\rangle, \langle k-1, k+2\rangle \}$$.
Let $$P(k)$$ be the proposition that the minimum number of times that each of these coins appear in our possession is at least $$1 \times \langle k+2, k-1\rangle, 3 \times \langle k+1, k\rangle, 3 \
times \langle k, k+1\rangle, 1 \times \langle k-1, k+2\rangle$$. The bottom row of the above diagram demonstrates that $$P(1)$$ is true. We will show that $$P(k)$$ implies $$P(k+1)$$.
Notice that if $$P(k)$$ is true then at a minimum we must exchange both $$\langle k+1, k\rangle$$, and $$\langle k, k+1\rangle$$ with the MTM twice. This produces $$2\times\langle k+2, k\rangle, 4\
times \langle k+1, k+1\rangle,2\times \langle k, k+2\rangle$$.
In turn this implies we must exchange $$\langle k+2, k\rangle, \langle k+1, k+1\rangle$$, and $$\langle k, k+2\rangle)$$ leaving at least $$1\times \langle k+3, k\rangle, 3\times \langle k+2, k+1\
rangle,3\times \langle k+1, k+2\rangle,1\times \langle k, k+3\rangle$$, which is $$P(k+1)$$.
By induction this implies that every coin in $$W(k)$$ for every $$k$$ must appear in our collection of coins at some point in our strategy, but this requires an infinite number of moves – a
contradiction! So we have demonstrated that both cases are impossible.
Solution 2:
With a little bit more work and ingenuity we can also show that both cases are impossible, even with an infinite number of moves.
First, let us apply a weight to each coin $$\omega( \langle x, y\rangle ) = 2^{-(x+y)}$$. We have chosen this weight because $$\omega( \langle x, y\rangle ) = \omega( \langle x+1, y\rangle ) + \omega
( \langle x, y+1\rangle )$$. That is, one transaction with the MTM preserves the total weight of the coins.
Summing a geometric series we can calculate the weight of all the coins as $$\sum_{x \in \mathbb{N}_0} \sum_{y \in \mathbb{N}_0} \omega( \langle x, y\rangle ) = 4$$.
• The weight of the $$6$$ coins with value at most $$2$$ is $$(1 + \frac{1}2 +\frac{1}2 +\frac{1}4+\frac{1}4+\frac{1}4) = 2\frac{3}4$$. The weight of all coins with a value greater than $$2$$ is
$$4 – 2\frac{3}4 = 1\frac{1}4$$. Because each transaction preserves total weight of the coins we cannot convert all of our original coins into coins with value greater than $$2$$, so case-a is
• We will extend the logic for case-b. Starting with a single coin, after any number of transactions, we can have at most $$1$$ coin of the form $$\langle x, 0\rangle$$ and $$1$$ coin with of the
form $$\langle 0, y\rangle$$ for $$x,y \in \mathbb{N}$$. The weight of all coins $$\langle x, 0\rangle$$ with value greater than $$2$$ is $$\frac{1}4$$, and the largest weight a single coin of
this form can take is $$\omega(\langle 3, 0\rangle) = \frac{1}8$$. This means that the potential weight of coins with value greater than $$2$$ reduces to $$1\frac{1}4 – (\frac{1}8 + \frac{1}8) =
1$$. Further note that the number of coins we can have of the form $$\langle x, 1 \rangle$$ is equal to the number of transactions we make of a coin with the form $$\langle x, 0\rangle$$. This
means we could never have both the coin $$\langle 3, 0\rangle$$ and more than $$3$$ coins of the form $$\langle x, 1 \rangle$$. This reduces our bound to strictly less than $$1$$. So again we
cannot convert the coin $$\langle 0, 0 \rangle$$ into coins with value greater than $$2$$, even with infinitely many moves.
CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital. | {"url":"https://www.actuaries.digital/2016/05/25/magic-teller-machine-solution/","timestamp":"2024-11-06T04:05:51Z","content_type":"text/html","content_length":"118911","record_id":"<urn:uuid:f3a322a6-ea86-4cdf-8d2f-ed706b7df5c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00878.warc.gz"} |
GMTRCPLY - GeoMeTRiCPLaY
Large Builds
16 GeoBender "Nautilus"
This structure cannot be built with ShaShiBo, because of the different magnetic layout.
The structure rests on three tips, like the one you see at the bottom front between the two "Fire" designs. However, it can still be built "Free Solo", which means that a single person without
temporary support structures can build it.
The base structure is a large UFO made from UFO + Ring in the center, and three UFOs around. The roof strucutre is a Solid Hive, i.e. a Hive combined with a UFO.
16 GeoBender Nautilus, only two different shapes used. The silhouette is defined by eight UFO shapes. It's sort of an exploded view of a Double-Size Sphere.
This will also work with ShaShiBo.
16 GeoBender "Primary", playing with the color schemes of the "Primary-I" and "Primary-II" designs.
The overall shape is a classic tetrahedral build of a 4-units combo. The 4-units combo is also a classic arrangement: Three same shapes held together by a different shape.
This is a Rhombic Torus made from 16 Units.
Top Half: 8 GeoBender Primary-II Bottom Half: 8 GeoBender Primary-I
I discovered the Rhombic Torus while building the Large Rhombic Pyramid. But actually, the existance should be obvious.
Where there's an Octahedron, there's a Torus. The GeoBender Octahedron is not a regular octahedron. The dissection into one torus and two small octahedrons can be done between any opposite vertices.
The Square Torus lies between the two obtuse vertices, the Rhombic Torus lies between the two pairs of acute vertices.
The Rhombic Torus can also be built with ShaShiBo.
24 GeoBender Primary; 12 Primary-I + 12 Primary-II.
The structure can either be seen as a Rhombic Dodecahedron with some extra pieces on top of each face, or a Stellated Rhombic Dodecahedron with incomplete pyramids on the faces.
The calculation: A solid double-size Rhombic Dodecahedron requires 16 Units. With 24 Units used, that's 8 elements on top of each face. Compared to a Solid Stellated Rhombic Dodecahedron of 32 Units,
that's 8 elements missing on each face.
Just like the Rhombic Dodecahedron itself, the structure has three different main views. Because of the asymetrical add-ons, it offers more different detail views.
Important Note: "Twirlated" is not a Geometrical term.
18 GeoBender Primary-I.
This is an Enneahedron or Nonahedron, a solid with nine faces. What's so exceptional about this one is the fact that it has only two different faces: Six identical Scalene Trapezoids and three
identical Parallelograms. There are two opposing triplets of Trapezoids, surrounded and connected by three Parallelograms.
The solid shown here can be derived from a Rhombic Dodecahedron by simply adding three face pyramids to appropriate faces. If look at a Rombic Dodacahedron where three rhombic faces meet, there are
six rhombic faces around, and three opposite. The face pyramids have to be added at every other of the six faces around, then you get the Enneahedron.
This Enneahedron is Chrial, there is also a mirrored version. The mirror version can be created by adding the face pyramids to the other three faces around.
The total volume would be equal to 20 units. This build has some empty space equal to 2 units hidden inside.
24 GeoBender Primary-II.
This Six-Pointed Star can be seen as a Fusion of a "Left" and a "Right" version of the Enneahedron above.
If you look at a Rhombic Dodecahedron at a vertex where three faces meet, there is a belt of six rhombic faces around. If you put a pyramid on each of these faces, you get this Six-Ponted Star. | {"url":"http://hjreggel.net/gmtrcply/geobender/large.html","timestamp":"2024-11-07T13:04:53Z","content_type":"text/html","content_length":"12913","record_id":"<urn:uuid:19c14a80-3dad-4a5b-a1f1-d98572959811>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00240.warc.gz"} |
You may need to use the appropriate
You may need to use the appropriate technology to answer this question. The following is part of the results of a regression analysis involving sales (y in millions of dollars), advertising
expenditures (x, in thousands of dollars), and number of salespeople (x₂) for a corporation. The regression was performed on a sample of 10 observations. ý= Constant x1 Ho: x2 H₂: Coefficients (a)
Write the regression equation. -11.320 0.788 0.151 P₁!=0 Standard Error -11.320 +0.788x₁ +0.151x2 20.452 (b) Interpret the coefficients of the estimated regression equation found in part (a). (Give
your answers in dollars.) As advertising increases by 1 unit ($1,000), sales are expected to increase by $788 to increase by $ 151 X when advertising is held constant. 0.302 (c) At a = 0.05, test for
the significance of the coefficient of advertising. State the null and alternative hypotheses. (Enter != for as needed.) B₁=0 0.218 Find the value of the test statistic. (Round your answer to two
decimal places.) 2.61 Find the p-value. (Round your answer to four decimal places.) p-value = 0349 State your conclusion. x when the number of salespeople is held constant. As the number of
salespeople increases by 1, sales are expected 6:16 PM
You may need to use the appropriate technology to answer this question. The following is part of the results of a regression analysis involving sales (y in millions of dollars), advertising
expenditures (x, in thousands of dollars), and number of salespeople (x₂) for a corporation. The regression was performed on a sample of 10 observations. ý= Constant x1 Ho: x2 H₂: Coefficients (a)
Write the regression equation. -11.320 0.788 0.151 P₁!=0 Standard Error -11.320 +0.788x₁ +0.151x2 20.452 (b) Interpret the coefficients of the estimated regression equation found in part (a). (Give
your answers in dollars.) As advertising increases by 1 unit ($1,000), sales are expected to increase by $788 to increase by $ 151 X when advertising is held constant. 0.302 (c) At a = 0.05, test for
the significance of the coefficient of advertising. State the null and alternative hypotheses. (Enter != for as needed.) B₁=0 0.218 Find the value of the test statistic. (Round your answer to two
decimal places.) 2.61 Find the p-value. (Round your answer to four decimal places.) p-value = 0349 State your conclusion. x when the number of salespeople is held constant. As the number of
salespeople increases by 1, sales are expected 6:16 PM
Chapter1: Starting With Matlab
Section: Chapter Questions
Transcribed Image Text:You may need to use the appropriate technology to answer this question. The following is part of the results of a regression analysis involving sales (y in millions of
dollars), advertising expenditures (x, in thousands of dollars), and number of salespeople (x₂) for a corporation. The regression was performed on a sample of 10 observations. Constant x₁ x2 Ho: H₂:
Coefficients -11.320 (a) Write the regression equation. 0.788 0.151 ý = -11.320 +0.788x1 +0.151x2 B₁!= 0 Standard Error 20.452 0.302 (b) Interpret the coefficients of the estimated regression
equation found in part (a). (Give your answers in dollars.) As advertising increases by 1 unit ($1,000), sales are expected to increase by $ 788 to increase by $.151 X when advertising is held
constant. 0.218 (c) At a = 0.05, test for the significance of the coefficient of advertising. State the null and alternative hypotheses. (Enter != for # as needed.) B₁ = 0 J Find the value of the
test statistic. (Round your answer to two decimal places.) 2.61 Find the p-value. (Round your answer to four decimal places.) p-value = .0349 State your conclusion. 0 X when the number of salespeople
is held constant. As the number of salespeople increases by 1, sales are expected hp 4x O 6:16 PM 11/29/2022
Transcribed Image Text:(d) At a = 0.05, test for the significance of the coefficient of number of salespeople. State the null and alternative hypotheses. (Enter != for # as needed.) Ho: Ha JERT
VERMELİndi B₂=0 B₂!= 0 Find the value of the test statistic. (Round your answer to two decimal places.) 2.61 X Find the p-value. (Round your answer to four decimal places.) p-value = .0349 X ✔ State
your conclusion. O Reject Ho. There is sufficient evidence to conclude that B₂ is significant. Do not reject Ho. There is insufficient evidence to conclude that B₂ is significant. B2 $2 O Reject Ho.
There is insufficient evidence to conclude that B₂ is significant. O Do not reject Ho. There is sufficient evidence to conclude that B₂ is significant. Need Help? (e) If the company uses $48,000 in
advertisement and has 790 salespeople, what are the expected sales? Give your answer in dollars. $ Submit Answer Read It ✓ Viewing Saved Work Revert to Last Response hp □ W S D
This question has been solved!
Explore an expertly crafted, step-by-step solution for a thorough understanding of key concepts.
This is a popular solution!
Trending now
This is a popular solution!
Step by step
Solved in 2 steps with 2 images | {"url":"https://www.bartleby.com/questions-and-answers/you-may-need-to-use-the-appropriate-technology-to-answer-this-question.-the-following-is-part-of-the/c0e760f1-62e3-478a-bc28-af1306894351","timestamp":"2024-11-06T09:14:15Z","content_type":"text/html","content_length":"256837","record_id":"<urn:uuid:2c75ad17-67c5-4005-b2c6-012d57160a5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00788.warc.gz"} |
a) 2cos(2nθ)
b) 2cos(2π−1θ)
c) 2cos(2n+1θ)
16. If tanθ=x−4x1... | Filo
Question asked by Filo student
a) b) c) 16. If , then is equal to
Not the question you're searching for?
+ Ask your question
Video solutions (2)
Learn from their 1-to-1 discussion with Filo tutors.
3 mins
Uploaded on: 10/23/2022
Was this solution helpful?
Found 2 tutors discussing this question
Discuss this question LIVE for FREE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Trigonometry
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text a) b) c) 16. If , then is equal to
Updated On Oct 26, 2022
Topic Trigonometry
Subject Mathematics
Class Class 11
Answer Type Video solution: 2
Upvotes 180
Avg. Video Duration 3 min | {"url":"https://askfilo.com/user-question-answers-mathematics/a-b-c-16-if-then-is-equal-to-32343934373833","timestamp":"2024-11-13T21:46:40Z","content_type":"text/html","content_length":"447386","record_id":"<urn:uuid:031b2ee8-210d-4dd5-94e6-98f14296b253>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00350.warc.gz"} |
Sign/Square Week to Radian/Square Hour
Sign/Square Week [sign/week2] Output
1 sign/square week in degree/square second is equal to 8.2015831023768e-11
1 sign/square week in degree/square millisecond is equal to 8.2015831023768e-17
1 sign/square week in degree/square microsecond is equal to 8.2015831023768e-23
1 sign/square week in degree/square nanosecond is equal to 8.2015831023768e-29
1 sign/square week in degree/square minute is equal to 2.9525699168556e-7
1 sign/square week in degree/square hour is equal to 0.001062925170068
1 sign/square week in degree/square day is equal to 0.61224489795918
1 sign/square week in degree/square week is equal to 30
1 sign/square week in degree/square month is equal to 567.21
1 sign/square week in degree/square year is equal to 81678.1
1 sign/square week in radian/square second is equal to 1.4314462901241e-12
1 sign/square week in radian/square millisecond is equal to 1.4314462901241e-18
1 sign/square week in radian/square microsecond is equal to 1.4314462901241e-24
1 sign/square week in radian/square nanosecond is equal to 1.4314462901241e-30
1 sign/square week in radian/square minute is equal to 5.1532066444466e-9
1 sign/square week in radian/square hour is equal to 0.000018551543920008
1 sign/square week in radian/square day is equal to 0.010685689297924
1 sign/square week in radian/square week is equal to 0.5235987755983
1 sign/square week in radian/square month is equal to 9.9
1 sign/square week in radian/square year is equal to 1425.55
1 sign/square week in gradian/square second is equal to 9.1128701137519e-11
1 sign/square week in gradian/square millisecond is equal to 9.1128701137519e-17
1 sign/square week in gradian/square microsecond is equal to 9.1128701137519e-23
1 sign/square week in gradian/square nanosecond is equal to 9.1128701137519e-29
1 sign/square week in gradian/square minute is equal to 3.2806332409507e-7
1 sign/square week in gradian/square hour is equal to 0.0011810279667423
1 sign/square week in gradian/square day is equal to 0.68027210884354
1 sign/square week in gradian/square week is equal to 33.33
1 sign/square week in gradian/square month is equal to 630.23
1 sign/square week in gradian/square year is equal to 90753.44
1 sign/square week in arcmin/square second is equal to 4.9209498614261e-9
1 sign/square week in arcmin/square millisecond is equal to 4.9209498614261e-15
1 sign/square week in arcmin/square microsecond is equal to 4.9209498614261e-21
1 sign/square week in arcmin/square nanosecond is equal to 4.9209498614261e-27
1 sign/square week in arcmin/square minute is equal to 0.000017715419501134
1 sign/square week in arcmin/square hour is equal to 0.063775510204082
1 sign/square week in arcmin/square day is equal to 36.73
1 sign/square week in arcmin/square week is equal to 1800
1 sign/square week in arcmin/square month is equal to 34032.54
1 sign/square week in arcmin/square year is equal to 4900685.97
1 sign/square week in arcsec/square second is equal to 2.9525699168556e-7
1 sign/square week in arcsec/square millisecond is equal to 2.9525699168556e-13
1 sign/square week in arcsec/square microsecond is equal to 2.9525699168556e-19
1 sign/square week in arcsec/square nanosecond is equal to 2.9525699168556e-25
1 sign/square week in arcsec/square minute is equal to 0.001062925170068
1 sign/square week in arcsec/square hour is equal to 3.83
1 sign/square week in arcsec/square day is equal to 2204.08
1 sign/square week in arcsec/square week is equal to 108000
1 sign/square week in arcsec/square month is equal to 2041952.49
1 sign/square week in arcsec/square year is equal to 294041158.16
1 sign/square week in sign/square second is equal to 2.7338610341256e-12
1 sign/square week in sign/square millisecond is equal to 2.7338610341256e-18
1 sign/square week in sign/square microsecond is equal to 2.7338610341256e-24
1 sign/square week in sign/square nanosecond is equal to 2.7338610341256e-30
1 sign/square week in sign/square minute is equal to 9.8418997228521e-9
1 sign/square week in sign/square hour is equal to 0.000035430839002268
1 sign/square week in sign/square day is equal to 0.020408163265306
1 sign/square week in sign/square month is equal to 18.91
1 sign/square week in sign/square year is equal to 2722.6
1 sign/square week in turn/square second is equal to 2.278217528438e-13
1 sign/square week in turn/square millisecond is equal to 2.278217528438e-19
1 sign/square week in turn/square microsecond is equal to 2.278217528438e-25
1 sign/square week in turn/square nanosecond is equal to 2.278217528438e-31
1 sign/square week in turn/square minute is equal to 8.2015831023768e-10
1 sign/square week in turn/square hour is equal to 0.0000029525699168556
1 sign/square week in turn/square day is equal to 0.0017006802721088
1 sign/square week in turn/square week is equal to 0.083333333333333
1 sign/square week in turn/square month is equal to 1.58
1 sign/square week in turn/square year is equal to 226.88
1 sign/square week in circle/square second is equal to 2.278217528438e-13
1 sign/square week in circle/square millisecond is equal to 2.278217528438e-19
1 sign/square week in circle/square microsecond is equal to 2.278217528438e-25
1 sign/square week in circle/square nanosecond is equal to 2.278217528438e-31
1 sign/square week in circle/square minute is equal to 8.2015831023768e-10
1 sign/square week in circle/square hour is equal to 0.0000029525699168556
1 sign/square week in circle/square day is equal to 0.0017006802721088
1 sign/square week in circle/square week is equal to 0.083333333333333
1 sign/square week in circle/square month is equal to 1.58
1 sign/square week in circle/square year is equal to 226.88
1 sign/square week in mil/square second is equal to 1.4580592182003e-9
1 sign/square week in mil/square millisecond is equal to 1.4580592182003e-15
1 sign/square week in mil/square microsecond is equal to 1.4580592182003e-21
1 sign/square week in mil/square nanosecond is equal to 1.4580592182003e-27
1 sign/square week in mil/square minute is equal to 0.0000052490131855211
1 sign/square week in mil/square hour is equal to 0.018896447467876
1 sign/square week in mil/square day is equal to 10.88
1 sign/square week in mil/square week is equal to 533.33
1 sign/square week in mil/square month is equal to 10083.72
1 sign/square week in mil/square year is equal to 1452055.1
1 sign/square week in revolution/square second is equal to 2.278217528438e-13
1 sign/square week in revolution/square millisecond is equal to 2.278217528438e-19
1 sign/square week in revolution/square microsecond is equal to 2.278217528438e-25
1 sign/square week in revolution/square nanosecond is equal to 2.278217528438e-31
1 sign/square week in revolution/square minute is equal to 8.2015831023768e-10
1 sign/square week in revolution/square hour is equal to 0.0000029525699168556
1 sign/square week in revolution/square day is equal to 0.0017006802721088
1 sign/square week in revolution/square week is equal to 0.083333333333333
1 sign/square week in revolution/square month is equal to 1.58
1 sign/square week in revolution/square year is equal to 226.88 | {"url":"https://hextobinary.com/unit/angularacc/from/signpw2/to/radph2","timestamp":"2024-11-14T10:29:11Z","content_type":"text/html","content_length":"113195","record_id":"<urn:uuid:971b99e6-1989-471f-9e1c-bd6d27d79f25>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00889.warc.gz"} |
Derive EqDec failing silently
Is there a way to get info about failures of Equations' Derive EqDec? The example below fails silently — derivation completes but produces no instance. While I found how to fix this example (by
accident), but that doesn't help in a bigger example using mutual inductive, some of which are indexed (where decide equality also fails). Equations will tell me that I should use Derive Signature,
but not that EqDec instances are missing for some contained type.
Require Import Coq.Numbers.BinNums.
From Equations Require Import Equations.
Derive EqDec for positive.
Instance positive_eqdec : EqDec positive.
Fail apply _.
My _actual_ failure is on https://github.com/Blaisorblade/dot-iris/blob/3e991e814b42acc2448ccde0790d18534c521e7a/theories/Dot/syn/syn.v#L330 (and it is about a 6-way mutual inductive with 32
https://github.com/Blaisorblade/dot-iris/blob/3e991e814b42acc2448ccde0790d18534c521e7a/theories/Dot/syn/syn.v#L39-L81). If needed, I could easily minimize it to exclude iris and stdpp — whose Unset
Transparent Obligations. breaks Derive Eqdec even in simpler cases.
FWIW, derivation takes 5-7 seconds to fail, irrespective of how many reasons it has to fail — say, even if EqDec positive is missing.
Ah. As usual, Next Obligation. shows what's missing (but the equation count is hidden from vscoq, causing the confusion). However, in the actual example, decidable equality opens an obligation for...
decidable equality on one of the sorts :-(.
Hmm, the tactic should certainly be hardened to ensure the prerequesites are available. There is now support for that in the Derive framework.
It currently does not support mutual inductive types though, as it tries to derive EqDec independently for all the inductives in the mutual block. Should be doable though.
Yeah, for now I resorted to writing the equations by hand with noind — at least the equation count is still linear (unlike the term size and equation count):
Last updated: Oct 13 2024 at 01:02 UTC | {"url":"https://coq.gitlab.io/zulip-archive/stream/237659-Equations-devs-.26-users/topic/Derive.20EqDec.20failing.20silently.html","timestamp":"2024-11-10T18:56:05Z","content_type":"text/html","content_length":"6761","record_id":"<urn:uuid:eb4a1d85-91d9-4811-adc0-4ac84412ca77>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00556.warc.gz"} |
Cube Minus Cube Formula Explained - Agrculture Lawyer
When it comes to algebra, one of the fundamental principles is understanding various formulas to simplify expressions and solve equations efficiently. One such important formula is the Cube Minus
Cube formula, which is used to factorize or simplify algebraic expressions involving two cubes.
Understanding the Cube Minus Cube Formula
The Cube Minus Cube formula is a special case of the algebraic identity known as the difference of cubes. It is expressed as:
a^3 – b^3 = (a – b)(a^2 + ab + b^2)
Where a and b are real numbers or algebraic expressions.
Applying the Cube Minus Cube Formula
When presented with an expression in the form of a^3 – b^3, you can simplify it using the Cube Minus Cube formula. The process involves factoring the expression into the product of two binomials.
Let’s consider an example to understand this better:
Example: Simplify 27x^3 – 8y^3
1. Identify a and b in the given expression:
2. In this case, a = 3x and b = 2y
3. Apply the Cube Minus Cube formula:
4. 27x^3 – 8y^3 = (3x – 2y)(9x^2 + 6xy + 4y^2)
5. The expression is now in factored form, which can help in further simplification or solving equations.
Key Points to Remember
• The Cube Minus Cube formula is a useful tool for simplifying expressions involving the difference of cubes.
• It follows a specific pattern of factorization: a^3 – b^3 = (a – b)(a^2 + ab + b^2).
• Understanding this formula can significantly speed up your algebraic calculations and problem-solving skills.
Working with more Complex Examples
Let’s explore a slightly more complex example to illustrate the application of the Cube Minus Cube formula:
Example: Factorize 64x^3 – 125
1. Identify a and b in the given expression:
2. In this case, a = 4x and b = 5
3. Apply the Cube Minus Cube formula:
4. 64x^3 – 125 = (4x – 5)(16x^2 + 20x + 25)
5. The expression is now in its factored form, showcasing the difference of cubes.
Mastering algebraic formulas such as the Cube Minus Cube formula is crucial for tackling advanced mathematical problems efficiently. By recognizing the patterns and principles behind these formulas,
you can simplify complex expressions, factorize efficiently, and solve equations with ease. Practice applying the Cube Minus Cube formula in various scenarios to enhance your algebraic skills and
confidence in handling mathematical challenges.
Frequently Asked Questions (FAQs)
1. What is the formula for the sum of cubes?
The formula for the sum of cubes is: a^3 + b^3 = (a + b)(a^2 – ab + b^2).
2. Can the Cube Minus Cube formula be applied to variables other than numbers?
Yes, the Cube Minus Cube formula can be applied to algebraic expressions involving variables as well.
3. How does the Cube Minus Cube formula relate to polynomial factorization?
The Cube Minus Cube formula is a special case of polynomial factorization, specifically for the difference of cubes pattern.
4. Are there any mnemonic devices to remember the Cube Minus Cube formula?
One mnemonic device is to remember the pattern of the formula as (First term – Second term)(First term^2 + First term*Second term + Second term^2).
5. In what types of algebraic problems is the Cube Minus Cube formula particularly useful?
This formula is particularly useful in simplifying expressions involving the difference of cubes, aiding in factorization and solving equations efficiently. | {"url":"https://agriculture-lawyer.com/cube-minus-cube-formula-explained/","timestamp":"2024-11-13T08:37:20Z","content_type":"text/html","content_length":"58673","record_id":"<urn:uuid:0fe9d9b7-5c9e-4c4c-acc2-07cb14db3d4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00315.warc.gz"} |
Hypothesis testing - Introductory Statistics with Randomization and Simulation First Edition
In the last two sections, we utilized a hypothesis test, which is a formal technique for evaluating two competing possibilities. In each scenario, we described anull hypothesis, which represented
either a skeptical perspective or a perspective of no difference. We also laid out an alternative hypothesis, which represented a new perspective such as the possibility that there has been a change
or that there is a treatment effect in an experiment.
Null and alternative hypotheses
The null hypothesis (H0) often represents either a skeptical perspective or a claim to be tested. Thealternative hypothesis (HA)represents an alternative
claim under consideration and is often represented by a range of possible values for the value of interest.
The hypothesis testing framework is a very general tool, and we often use it without a second thought. If a person makes a somewhat unbelievable claim, we are initially skeptical. However, if there
is sufficient evidence that supports the claim, we set aside our skepticism. The hallmarks of hypothesis testing are also found in the US court system.
Simulated difference in proportions of students who do not buy the DVD
−0.2 −0.1 0.0 0.1 0.2
observed difference
Figure 2.7: A stacked dot plot of 1,000 chance differences produced under the null hypothesis,H0. Six of the 1,000 simulations had a difference of at least 20%, which was the difference observed in
the study.
Propor tion of sim ulated scenar ios −0.2 −0.1 0.0 0.1 0.2 0.000 0.025 0.050 observed difference
Simulated difference in proportions of students who do not buy the DVD
Figure 2.8: A histogram of 1,000 chance differences produced under the null hypothesis,H0. Histograms like this one are a more convenient repre- sentation of data or results when there are a large
number of observations.
Hypothesis testing in the US court system
Example 2.8 A US court considers two possible claims about a defendant: she is either innocent or guilty. If we set these claims up in a hypothesis framework, which would be the null hypothesis and
which the alternative?
The jury considers whether the evidence is so convincing (strong) that there is no reasonable doubt regarding the person’s guilt. That is, the skeptical perspective (null hypothesis) is that the
person is innocent until evidence is presented that convinces the jury that the person is guilty (alternative hypothesis).
Jurors examine the evidence to see whether it convincingly shows a defendant is guilty. Notice that if a jury finds a defendantnot guilty, this does not necessarily mean the jury is confident in the
person’s innocence. They are simply not convinced of the alternative that the person is guilty.
This is also the case with hypothesis testing: even if we fail to reject the null hypothesis, we typically do not accept the null hypothesis as truth. Failing to find strong evidence for the
alternative hypothesis is not equivalent to providing evidence that the null hypothesis is true.
p-value and statistical significance
In Section 2.1 we encountered a study from the 1970’s that explored whether there was strong evidence that women were less likely to be promoted than men. The research ques- tion – are females
discriminated against in promotion decisions made by male managers? – was framed in the context of hypotheses:
H0: Gender has no effect on promotion decisions.
HA: Women are discriminated against in promotion decisions.
The null hypothesis (H0) was a perspective of no difference. The data, summarized on page62, provided a point estimate of a 29.2% difference in recommended promotion rates between men and women. We
determined that such a difference from chance alone would be rare: it would only happen about 2 in 100 times. When results like these are inconsistent withH0, we rejectH0in favor ofHA. Here, we
concluded there was discrimination against
The 2-in-100 chance is what we call ap-value, which is a probability quantifying the strength of the evidence against the null hypothesis and in favor of the alternative.
Thep-valueis the probability of observing data at least as favorable to the alter- native hypothesis as our current data set, if the null hypothesis were true. We typ- ically use a summary statistic
of the data, such as a difference in proportions, to help compute the p-value and evaluate the hypotheses. This summary value that is used to compute the p-value is often called thetest statistic.
Example 2.9 In the gender discrimination study, the difference in discrimination rates was our test statistic. What was the test statistic in the opportunity cost study covered in Section2.2?
The test statistic in the opportunity cost study was the difference in the proportion of students who decided against the DVD purchase in the treatment and control groups. In each of these examples,
the point estimate of the difference in proportions was used as the test statistic.
When the p-value is small, i.e. less than a previously set threshold, we say the results are statistically significant. This means the data provide such strong evidence against H0 that we reject the
null hypothesis in favor of the alternative hypothesis. The thresh-
old, called the significance leveland often represented byα(the Greek letteralpha), is α[significance] level of a hypothesis test typically set to α= 0.05, but can vary depending on the field or the
application. Using a
significance level ofα= 0.05 in the discrimination study, we can say that the data provided statistically significant evidence against the null hypothesis.
Statistical significance
We say that the data providestatistically significantevidence against the null hypothesis if the p-value is less than some reference value, usuallyα= 0.05.
Example 2.10 In the opportunity cost study in Section2.2, we analyzed an experi- ment where study participants were 20% less likely to continue with a DVD purchase if they were reminded that the
money, if not spent on the DVD, could be used for other purchases in the future. We determined that such a large difference would only occur about 1-in-150 times if the reminder actually had no
influence on student decision-making. What is the p-value in this study? Was the result statistically significant?
The p-value was 0.006 (about 1/150). Since the p-value is less than 0.05, the data provide statistically significant evidence that US college students were actually influ- enced by the reminder.
What’s so special about 0.05?
We often use a threshold of 0.05 to determine whether a result is statistically significant. But why 0.05? Maybe we should use a bigger number, or maybe a smaller number. If you’re a little puzzled,
that probably means you’re reading with a critical eye – good job! We’ve made a video to help clarifywhy 0.05:
Sometimes it’s also a good idea to deviate from the standard. We’ll discuss when to choose a threshold different than 0.05 in Section2.3.4.
Decision errors
Hypothesis tests are not flawless. Just think of the court system: innocent people are sometimes wrongly convicted and the guilty sometimes walk free. Similarly, data can point to the wrong
conclusion. However, what distinguishes statistical hypothesis tests from a court system is that our framework allows us to quantify and control how often the data lead us to the incorrect
There are two competing hypotheses: the null and the alternative. In a hypothesis test, we make a statement about which one might be true, but we might choose incorrectly. There are four possible
scenarios in a hypothesis test, which are summarized in Table2.9.
Test conclusion
do not rejectH0 rejectH0 in favor ofHA
H0true okay Type 1 Error Truth
HAtrue Type 2 Error okay
Table 2.9: Four different scenarios for hypothesis tests.
AType 1 Erroris rejecting the null hypothesis whenH0 is actually true. Since we rejected the null hypothesis in the gender discrimination and opportunity cost studies, it is possible that we made a
Type 1 Error in one or both of those studies. AType 2 Error is failing to reject the null hypothesis when the alternative is actually true.
Example 2.11 In a US court, the defendant is either innocent (H0) or guilty (HA).
What does a Type 1 Error represent in this context? What does a Type 2 Error represent? Table2.9may be useful.
If the court makes a Type 1 Error, this means the defendant is innocent (H0true) but wrongly convicted. A Type 2 Error means the court failed to rejectH0(i.e. failed to convict the person) when she
was in fact guilty (HAtrue).
J [Guided Practice 2.12] [Consider the opportunity cost study where we concluded] students were less likely to make a DVD purchase if they were reminded that money not spent now could be spent later.
What would a Type 1 Error represent in this context?9
Example 2.13 How could we reduce the Type 1 Error rate in US courts? What influence would this have on the Type 2 Error rate?
To lower the Type 1 Error rate, we might raise our standard for conviction from “beyond a reasonable doubt” to “beyond a conceivable doubt” so fewer people would be wrongly convicted. However, this
would also make it more difficult to convict the people who are actually guilty, so we would make more Type 2 Errors.
9[Making a Type 1 Error in this context would mean that reminding students that money not spent]
now can be spent later does not affect their buying habits, despite the strong evidence (the data suggesting otherwise) found in the experiment. Notice that this does not necessarily mean something
was wrong with the data or that we made a computational mistake. Sometimes data simply point us to the wrong conclusion, which is why scientific studies are often repeated to check initial findings.
J [Guided Practice 2.14] [How could we reduce the Type 2 Error rate in US courts?] What influence would this have on the Type 1 Error rate?10
The example and guided practice above provide an important lesson: if we reduce how often we make one type of error, we generally make more of the other type.
Choosing a significance level
Choosing a significance level for a test is important in many contexts, and the traditional level is 0.05. However, it is sometimes helpful to adjust the significance level based on the application.
We may select a level that is smaller or larger than 0.05 depending on the consequences of any conclusions reached from the test.
If making a Type 1 Error is dangerous or especially costly, we should choose a small significance level (e.g. 0.01 or 0.001). Under this scenario, we want to be very cautious about rejecting the null
hypothesis, so we demand very strong evidence favoring the alter- nativeHAbefore we would rejectH0.
If a Type 2 Error is relatively more dangerous or much more costly than a Type 1 Error, then we should choose a higher significance level (e.g. 0.10). Here we want to be cautious about failing to
reject H0 when the null is actually false.
Significance levels should reflect consequences of errors
The significance level selected for a test should reflect the real-world consequences associated with making a Type 1 or Type 2 Error.
Introducing two-sided hypotheses
So far we have explored whether women were discriminated against and whether a simple trick could make students a little thriftier. In these two case studies, we’ve actually ignored some
• What ifmen are actually discriminated against?
• What if the money trick actually makes studentsspend more?
These possibilities weren’t considered in our hypotheses or analyses. This may have seemed natural since the data pointed in the directions in which we framed the problems. However, there are two
dangers if we ignore possibilities that disagree with our data or that conflict with our worldview:
1. Framing an alternative hypothesis simply to match the direction that the data point will generally inflate the Type 1 Error rate. After all the work we’ve done (and will continue to do) to
rigorously control the error rates in hypothesis tests, careless construction of the alternative hypotheses can disrupt that hard work. We’ll explore this topic further in Section2.3.6.
2. If we only use alternative hypotheses that agree with our worldview, then we’re going to be subjecting ourselves to confirmation bias, which means we are looking for data that supports our ideas.
That’s not very scientific, and we can do better!
10[To lower the Type 2 Error rate, we want to convict more guilty people. We could lower the standards]
for conviction from “beyond a reasonable doubt” to “beyond a little doubt”. Lowering the bar for guilt will also result in more wrongful convictions, raising the Type 1 Error rate.
The previous hypotheses we’ve seen are calledone-sided hypothesis testsbecause they only explored one direction of possibilities. Such hypotheses are appropriate when we are exclusively interested in
the single direction, but usually we want to consider all possibilities. To do so, let’s learn about two-sided hypothesis tests in the context of a new study that examines the impact of using blood
thinners on patients who have undergone CPR.
Cardiopulmonary resuscitation (CPR) is a procedure used on individuals suffering a heart attack when other emergency resources are unavailable. This procedure is helpful in providing some blood
circulation to keep a person alive, but CPR chest compressions can also cause internal injuries. Internal bleeding and other injuries that can result from CPR complicate additional treatment efforts.
For instance, blood thinners may be used to help release a clot that is causing the heart attack once a patient arrives in the hospital. However, blood thinners negatively affect internal injuries.
Here we consider an experiment with patients who underwent CPR for a heart attack and were subsequently admitted to a hospital.11 [Each patient was randomly assigned to] either receive a blood
thinner (treatment group) or not receive a blood thinner (control group). The outcome variable of interest was whether the patient survived for at least 24 hours.
Example 2.15 Form hypotheses for this study in plain and statistical language. Letpc represent the true survival rate of people who do not receive a blood thinner
(corresponding to the control group) and pt represent the survival rate for people
receiving a blood thinner (corresponding to the treatment group).
We want to understand whether blood thinners are helpful or harmful. We’ll consider both of these possibilities using a two-sided hypothesis test.
H0: Blood thinners do not have an overall survival effect, i.e. the survival proportions are the same in each group. pt−pc= 0.
HA: Blood thinners have an impact on survival, either positive or negative, but not
zero. pt−pc6= 0.
There were 50 patients in the experiment who did not receive a blood thinner and 40 patients who did. The study results are shown in Table 2.10.
Survived Died Total
Control 11 39 50
Treatment 14 26 40
Total 25 65 90
Table 2.10: Results for the CPR study. Patients in the treatment group were given a blood thinner, and patients in the control group were not.
Guided Practice 2.16 What is the observed survival rate in the control group? And in the treatment group? Also, provide a point estimate of the difference in survival proportions of the two groups:
11[B¨][ottiger et al. “Efficacy and safety of thrombolytic therapy after initially unsuccessful cardiopul-]
monary resuscitation: a prospective clinical trial.” The Lancet, 2001.
12[Observed control survival rate:] [p]
c= 11[50] = 0.22. Treatment survival rate: pt= 14[40] = 0.35. Observed
According to the point estimate, for patients who have undergone CPR outside of the hospital, an additional 13% of these patients survive when they are treated with blood thinners. However, we wonder
if this difference could be easily explainable by chance.
As we did in our past two studies this chapter, we will simulate what type of differ- ences we might see from chance alone under the null hypothesis. By randomly assigning “simulated treatment” and
“simulated control” stickers to the patients’ files, we get a new grouping. If we repeat this simulation 10,000 times, we can build a null distributionof the differences shown in Figure 2.11.
−0.4 −0.2 0.0 0.2 0.4
0 0.15
Figure 2.11: Null distribution of the point estimate, ˆpt−pˆc. The shaded
right tail shows observations that are at least as large as the observed difference, 0.13.
The right tail area is about 0.13. (Note: it is only a coincidence that we also have ˆ
pt−pˆc = 0.13.) However, contrary to how we calculated the p-value in previous studies,
the p-value of this test is not 0.13!
The p-value is defined as the chance we observe a result at least as favorable to the alternative hypothesis as the result (i.e. the difference) we observe. In this case, any differences less than or
equal to -0.13 would also provide equally strong evidence favoring the alternative hypothesis as a difference of 0.13. A difference of -0.13 would correspond to 13% higher survival rate in the
control group than the treatment group. In Figure2.12
we’ve also shaded these differences in the left tail of the distribution. These two shaded tails provide a visual representation of the p-value for a two-sided test.
For a two-sided test, take the single tail (in this case, 0.13) and double it to get the p-value: 0.26. Since this p-value is larger than 0.05, we do not reject the null hypothesis. That is, we do
not find statistically significant evidence that the blood thinner has any influence on survival of patients who undergo CPR prior to arriving at the hospital.
Default to a two-sided test
We want to be rigorous and keep an open mind when we analyze data and evidence. Use a one-sided hypothesis test only if you truly have interest in only one direction.
−0.4 −0.2 0.0 0.2 0.4 0
Figure 2.12: Null distribution of the point estimate, ˆpt−pˆc. All values
that are at least as extreme as +0.13 but in either direction away from 0 are shaded.
Computing a p-value for a two-sided test
First compute the p-value for one tail of the distribution, then double that value to get the two-sided p-value. That’s it!
Controlling the Type 1 Error rate
It is never okay to change two-sided tests to one-sided tests after observing the data. We explore the consequences of ignoring this advice in the next example.
Example 2.17 Usingα= 0.05, we show that freely switching from two-sided tests | {"url":"https://1library.net/article/hypothesis-testing-introductory-statistics-randomization-simulation-edition.g6qmn8z8","timestamp":"2024-11-06T16:59:21Z","content_type":"text/html","content_length":"80720","record_id":"<urn:uuid:9419e41f-a2a1-4af8-8dc7-9b3e87140efb>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00013.warc.gz"} |
What do you want to work on?
About Jer
Accounting, Algebra, Algebra 2, Economics, Midlevel (7-8) Math, Midlevel (7-8) Science
Social Studies - US History
This was very helpful!
Math - Algebra II
good work
Social Studies - World History
Very intelligent, helpful, and knowledgeable.
Math - Midlevel (7-8) Math
This was a great session | {"url":"https://ws.princetonreview.com/academic-tutoring/tutor/jer--7762707","timestamp":"2024-11-05T09:59:39Z","content_type":"application/xhtml+xml","content_length":"249437","record_id":"<urn:uuid:9d581389-5492-45d4-931f-70a64cea145d>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00709.warc.gz"} |
b) Rotor-angle stability
The analysis of electromechanical dynamics relies significantly on the generator rotor swing following a fault in the grid. To analyze electromechanical dynamics, it can be assumed that the shaft of
the generating unit is rigid, and the rotor swing can be described as:
where the time derivative of the rotor angle dδ/dt = Δω = ω − ω[s] is the rotor speed deviation in electrical radians per second (rad/s), D is the damping coefficient, E’ is transient internal emf,
V[s] is infinite busbar voltage, x[d]’ is d-axis transient reactance between generator and the infinite busbar, is the power (or rotor) angle with respect to the infinite busbar, and P[m] and P[e]
are the mechanical and electrical power, respectively. The coefficient M is defined as:
Where H is the inertia constant, S[n] is the generator nominal power.
The response of a system to a significant disturbance, such as a short circuit or line tripping, is very dramatic from a stability standpoint. When such a fault happens, substantial currents and
torques are generated, and swift action is often necessary to preserve system stability. This challenge is commonly referred to as the issue of large-disturbance stability.
Four distinct types of short circuits, namely single-phase short circuit, phase-to-phase short circuit, phase-to-phase-to-earth short circuit, and three-phase short circuit, are examined on the
single-machine-infinite-busbar (SMIB) system depicted in Figure 1. The short circuit occurs at the beginning of the line.
Figure 1. Schematic diagram of the SMIB system
The initial step involves determining the power-angle curve P[e_pre] for the normal grid. Assuming E’ and V[s] remain constant, the focus is on finding the equivalent system reactance as shown in
Figure 2.
Figure 2. Equivalent circuit for the pre-fault state
The second step involves determining the power-angle curve P[e_fault] for the fault state. Assuming E’ and V[s] remain constant, the focus is on finding the equivalent system reactance during fault
() according to Figure 3.
Figure 3. Equivalent circuit for the fault state
Utilizing symmetrical components enables the representation of any type of fault in the positive-sequence network by introducing a fault shunt reactance (Δx[F]) connected between the point of the
fault and the neutral, as illustrated in Figure 3. The value of Δx[F] is contingent on the type of fault and is provided in Table 1, where x[i] and x[0] are the negative and zero-sequence Thévenin
equivalent reactances observed from the fault terminals.
The power-angle curve P[e_fault] for the fault state is:
Finally, the last step is to determine the power-angle curve P[e_post] post-fault, which, in the case of this grid, is the same as the power-angle curve for the normal grid (pre-fault) i.e.:
To assess rotor-angle stability during the fault, it’s essential to analyze the yellow (P_acc) and blue (P_dcc) areas in the interactive graph below. For stable operation, the deceleration (blue)
area must be larger than the acceleration (yellow) area. The sizes of both areas primarily depend on the time it takes to clear the fault, i.e., the angle delta_cl when the fault is cleared. | {"url":"https://transitproject.eu/2023/11/10/rotor-angle-stability/","timestamp":"2024-11-06T16:58:50Z","content_type":"text/html","content_length":"64815","record_id":"<urn:uuid:995b796f-460c-42d0-91b8-ac0bc8abc78c>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00609.warc.gz"} |
Measure qubits | IBM Quantum Documentation
Measure qubits
To get information about a qubit's state, you can measure it onto a classical bit. In Qiskit, measurements are performed in the computational basis, that is, the single-qubit Pauli-$Z$ basis.
Therefore, a measurement yields 0 or 1, depending on the overlap with the Pauli-$Z$ eigenstates $|0\rangle$ and $|1\rangle$:
$|q\rangle \xrightarrow{measure}\begin{cases} 0 (\text{outcome}+1), \text{with probability } p_0=|\langle q|0\rangle|^{2}\text{,} \\ 1 (\text{outcome}-1), \text{with probability } p_1=|\langle q|1\
rangle|^{2}\text{.} \end{cases}$
Apply a measurement to a circuit
There are several ways to apply measurements to a circuit:
QuantumCircuit.measure method
Use the measure method to measure a QuantumCircuit.
from qiskit import QuantumCircuit
qc = QuantumCircuit(5, 5)
qc.measure(range(5), range(5)) # Measures all qubits into the corresponding clbit.
from qiskit import QuantumCircuit
qc = QuantumCircuit(3, 1)
qc.x([0, 2])
qc.measure(1, 0) # Measure qubit 1 into the classical bit 0.
Measure class
The Qiskit Measure class measures the specified qubits.
from qiskit.circuit import Measure
qc.append(Measure(), [0], [0]) # measure qubit 0 into clbit 0
QuantumCircuit.measure_all method
To measure all qubits into the corresponding classical bits, use the measure_all method. By default, this method adds new classical bits in a ClassicalRegister to store these measurements.
from qiskit import QuantumCircuit
qc = QuantumCircuit(3, 1)
qc.x([0, 2])
qc.measure_all() # Measure all qubits.
QuantumCircuit.measure_active method
To measure all qubits that are not idle, use the measure_active method. This method creates a new ClassicalRegister with a size equal to the number of non-idle qubits being measured.
from qiskit import QuantumCircuit
qc = QuantumCircuit(3, 1)
qc.x([0, 2])
qc.measure_active() # Measure qubits that are not idle, i.e., qubits 0 and 2.
Important notes
• Circuits that contain operations after a measurement are called dynamic circuits. Not all QPUs or simulators support these.
• There must be at least one classical register in order to use measurements.
• The Sampler primitive requires circuit measurements. You can add circuit measurements with the Estimator primitive, but they are ignored.
Next steps | {"url":"https://docs.quantum.ibm.com/guides/measure-qubits","timestamp":"2024-11-15T04:29:33Z","content_type":"text/html","content_length":"177454","record_id":"<urn:uuid:643848d5-5e7d-454f-be86-e0e21a2ad957>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00839.warc.gz"} |
The Stacks project
Lemma 76.19.8. Let $S$ be a scheme. Let $f : X \to Y$ be a formally smooth morphism of algebraic spaces over $S$. Then $\Omega _{X/Y}$ is locally projective on $X$.
\[ \xymatrix{ U \ar[d] \ar[r]_\psi & V \ar[d] \\ X \ar[r]^ f & Y } \]
where $U$ and $V$ are affine(!) schemes and the vertical arrows are étale. By Lemma 76.19.5 we see $\psi : U \to V$ is formally smooth. Hence $\Gamma (V, \mathcal{O}_ V) \to \Gamma (U, \mathcal{O}_
U)$ is a formally smooth ring map, see More on Morphisms, Lemma 37.11.6. Hence by Algebra, Lemma 10.138.7 the $\Gamma (U, \mathcal{O}_ U)$-module $\Omega _{\Gamma (U, \mathcal{O}_ U)/\Gamma (V, \
mathcal{O}_ V)}$ is projective. Hence $\Omega _{U/V}$ is locally projective, see Properties, Section 28.21. Since $\Omega _{X/Y}|_ U = \Omega _{U/V}$ we see that $\Omega _{X/Y}$ is locally projective
too. (Because we can find an étale covering of $X$ by the affine $U$'s fitting into diagrams as above – details omitted.) $\square$
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 061I. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 061I, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/061I","timestamp":"2024-11-11T17:28:28Z","content_type":"text/html","content_length":"15016","record_id":"<urn:uuid:20d01374-9598-4181-a3f9-3bcb19477c06>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00409.warc.gz"} |
The concept AABBTraits provides the geometric primitive types and methods for the class CGAL::AABB_tree<AABBTraits>.
Has models
See also
typedef unspecified_type FT
Value type of the Squared_distance functor.
typedef unspecified_type Point
Type of a point.
typedef unspecified_type Primitive
Type of primitive.
typedef unspecified_type Bounding_box
Bounding box type.
typedef std::pair< Point, Primitive::Id
> Point_and_primitive_id
Point and Primitive Id type.
typedef std::pair< Object, Primitive::Id
> Object_and_primitive_id
template<typename Query >
using Intersection_and_primitive_id = unspecified_type
A nested class template providing as a pair the intersection result of a Query object and a Primitive::Datum, together with the Primitive::Id of the primitive
During the construction of the AABB tree, the primitives are split according to some comparison functions related to the longest axis:
typedef unspecified_type Split_primitives
A functor object to split a range of primitives into two sub-ranges along the longest axis.
typedef unspecified_type Compute_bbox
A functor object to compute the bounding box of a set of primitives.
The following predicates are required for each type Query for which the class CGAL::AABB_tree<AABBTraits> may receive an intersection detection or computation query.
typedef unspecified_type Do_intersect
A functor object to compute intersection predicates between the query and the nodes of the tree.
typedef unspecified_type Intersection
A functor object to compute the intersection of a query and a primitive.
The following predicates are required for each type Query for which the class CGAL::AABB_tree<AABBTraits> may receive a distance query.
typedef unspecified_type Compare_distance
A functor object to compute distance comparisons between the query and the nodes of the tree.
typedef unspecified_type Closest_point
A functor object to compute the closest point from the query on a primitive.
typedef unspecified_type Squared_distance
A functor object to compute the squared distance between two points.
typedef unspecified_type Equal
A functor object to compare two points.
Split_primitives split_primitives_object ()
returns the primitive splitting functor.
Compute_bbox compute_bbox_object ()
returns the bounding box constructor.
Do_intersect do_intersect_object ()
returns the intersection detection functor.
Intersection intersection_object ()
returns the intersection constructor.
Compare_distance compare_distance_object ()
returns the distance comparison functor.
Closest_point closest_point_object ()
returns the closest point constructor.
Squared_distance squared_distance_object ()
returns the squared distance functor.
Equal equal_object ()
returns the equal functor.
In addition, if Primitive is a model of the concept AABBPrimitiveWithSharedData, the following functions are part of the concept:
template<class ... T>
void set_shared_data (T ... t)
the signature of that function must be the same as the static function Primitive::construct_shared_data.
◆ Closest_point
A functor object to compute the closest point from the query on a primitive.
Provides the operator: Point operator()(const Query& query, const Primitive& primitive, const Point & closest); which returns the closest point to query, among closest and all points of the
◆ Compare_distance
A functor object to compute distance comparisons between the query and the nodes of the tree.
Provides the operators:
• bool operator()(const Query & query, const Bounding_box& box, const Point & closest); which returns true iff the bounding box is closer to query than closest is
• bool operator()(const Query & query, const Primitive & primitive, const Point & closest); which returns true iff primitive is closer to the query than closest is
◆ Compute_bbox
A functor object to compute the bounding box of a set of primitives.
Provides the operator: Bounding_box operator()(Input_iterator begin, Input_iterator beyond); Iterator type InputIterator must have Primitive as value type.
◆ Do_intersect
A functor object to compute intersection predicates between the query and the nodes of the tree.
Provides the operators:
• bool operator()(const Query & q, const Bounding_box & box); which returns true iff the query intersects the bounding box
• bool operator()(const Query & q, const Primitive & primitive); which returns true iff the query intersects the primitive
◆ Equal
A functor object to compare two points.
Provides the operator: bool operator()(const Point& p, const Point& q);} which returns true if p is equal to q.
◆ Intersection
A functor object to compute the intersection of a query and a primitive.
Provides the operator: std::optional<Intersection_and_primitive_id<Query>::Type > operator()(const Query & q, const Primitive& primitive); which returns the intersection as a pair composed of an
object and a primitive id, iff the query intersects the primitive.
Note on Backward Compatibility
Before the release 4.3 of CGAL, the return type of this function used to be std::optional<Object_and_primitive_id>.
◆ Intersection_and_primitive_id
template<typename Query >
A nested class template providing as a pair the intersection result of a Query object and a Primitive::Datum, together with the Primitive::Id of the primitive intersected.
The type of the pair is Intersection_and_primitive_id<Query>::Type.
◆ Object_and_primitive_id
This requirement is deprecated and is no longer needed.
◆ Primitive
◆ Split_primitives
A functor object to split a range of primitives into two sub-ranges along the longest axis.
Provides the operator: void operator()(InputIterator first, InputIterator beyond); Iterator type InputIterator must be a model of RandomAccessIterator and have Primitive as value type. The operator
is used for determining the primitives assigned to the two children nodes of a given node, assuming that the goal is to split the chosen axis dimension of the bounding box of the node. The primitives
assigned to this node are passed as argument to the operator. It should modify the iterator range in such a way that its first half and its second half correspond to the two children nodes.
◆ Squared_distance
A functor object to compute the squared distance between two points.
Provides the operator: FT operator()(const Point& query, const Point & p); which returns the squared distance between p and q.
◆ set_shared_data()
template<class ... T>
void AABBTraits::set_shared_data ( T ... t )
the signature of that function must be the same as the static function Primitive::construct_shared_data.
The type Primitive expects that the data constructed by a call to Primitive::construct_shared_data(t...) is the one given back when accessing the reference point and the datum of a primitive. | {"url":"https://doc.cgal.org:443/latest/AABB_tree/classAABBTraits.html","timestamp":"2024-11-06T08:40:20Z","content_type":"application/xhtml+xml","content_length":"41445","record_id":"<urn:uuid:14147ba5-5171-47c3-8219-d813fd1788d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00879.warc.gz"} |
Can the fuzzball conjecture be applied to microscopically explain the entropy of a region beyond the gravitational observer horizon?
In this article discussing this and related papers, it is explained among other things, how the neighborhood of an observer's worldline can be approximated by a region of Minkowsky spacetime.
If I understand this right (corrections of confused fluff and misunderstandings are highly welcome), a coordinate transformation which depends on the observer's current location $p_0$ in the
classical backround spacetime, to a free falling local Lorentz frame is applied. In this reference frame, local coordinates ($\tau$, $\theta$, $\phi$) together with a parameter $\lambda$ (which
describes the location on the observer's worldline?) can be used. As $\lambda$ deviates too mach from $\lambda(p_0)$, the local proper acceleration $\sqrt{a_{\mu}a^{\mu}}$ becames large and
approaches the string scale (is this because flat Minkowsky space is only locally valid?) and stringy effects kick in.
The authors postulate that at these points (called the gravitational observer horizon) some microscopic degrees of freedom have to exist that give rise to the Beckenstein-Hawking entropy describing
the entropy contained in spacetime beyond the gravitational observer horizon (?).
This is quite a long text to introduce my question, which simply is: Can these microstates be described by the fuzzball conjecture or what are they assumed to "look" like?
Can these microstates be described by the fuzzball conjecture or what are they assumed to "look" like?
We don't know. The gravitational observer horizon is supposed to be a place where low-energy physics becomes invalid (i.e. one shouldn't trust GR and quantum field theory of a spacetime background).
For an observer far from a black hole, this horizon roughly agrees with the usual black hole horizon, and something like the fuzzball scenario may be appropriate. However, the paper remains agnostic
about the details of the high-energy physics (it can hopefully be described well in string theory). For now, the only thing we can say with (a reasonable level of) certainty is the number of degrees
of freedom in an observer horizon.
together with a parameter λ (which describes the location on the observer's worldline?)
I think you've misunderstood the meaning of $\lambda$. Take a look at the figure in the paper. It is an affine parameter the goes down the past light cone of the observer. (The observer is at $\
lambda=0$.) The observer horizon occurs when a trajectory of constant $\lambda$ but changing $\tau$ accelerates too much to be described safely by GR.
This post imported from StackExchange Physics at 2014-03-09 16:25 (UCT), posted by SE-user sjasonw | {"url":"https://www.physicsoverflow.org/6941/fuzzball-conjecture-microscopically-gravitational-observer","timestamp":"2024-11-04T11:11:22Z","content_type":"text/html","content_length":"116478","record_id":"<urn:uuid:d89a378c-0247-4606-ac90-94256f5248ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00212.warc.gz"} |
Tameeka is in charge of designing a pennant for spirit week She wants the base to be 3 and one half feet and the height to be 6 and one half She has 20 square f
Tameeka is in charge of designing a pennant for spirit week. She wants the base to be 3 and one half feet and the height to be 6 and one half. She has 20 square feet of paper available. Does she have
enough paper? explain.
Answer :
You need to calculate the surface of the pennant.
It is a rectangle, so you can use this formula :
[tex]A = w\times h[/tex]
(w : width ; h : height).
[tex]A = 3.5 \times 6.5 = 22.75 \text{ square feet}[/tex]
22.75 > 20
Tameeka doesn't have enough paper.
Answer Link
22.75 > 20
Tameeka doesn't have enough paper.
Step-by-step explanation:
Answer Link
Other Questions | {"url":"https://mis.kyeop.go.ke/shelf/183","timestamp":"2024-11-14T08:54:58Z","content_type":"text/html","content_length":"154935","record_id":"<urn:uuid:cd2dc660-f7a8-4fcb-966c-40293d6666b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00107.warc.gz"} |
The covariant derivative in terms of the connection
\({\nabla_{v}w}\) can be written in terms of \({\check{\Gamma}}\) by using the Leibniz rule for the covariant derivative with \({w^{\mu}}\) as frame-dependent functions:
\begin{aligned}\nabla_{v}w & =\nabla_{v}\left(w^{\mu}e_{\mu}\right)\\ & =v\left(w^{\mu}\right)e_{\mu}+w^{\mu}\nabla_{v}\left(e_{\mu}\right)\\ & =\mathrm{d}w^{\mu}\left(v\right)e_{\mu}+\check{\Gamma}\
left(v\right)\vec{w}\\ & \equiv\mathrm{d}\vec{w}\left(v\right)+\check{\Gamma}\left(v\right)\vec{w} \end{aligned}
Here we again view \({\vec{w}}\) as a \({\mathbb{R}^{n}}\)-valued 0-form, so that \({\mathrm{d}\vec{w}\left(v\right)\equiv\mathrm{d}w^{\mu}\left(v\right)e_{\mu}}\). Thus \({\mathrm{d}\vec{w}\left(v\
right)}\) is the change in the components of \({w}\) in the direction \({v}\), making it frame-dependent even though \({w}\) is not. Note that although \({\nabla_{v}w}\) is a frame-independent
quantity, both terms on the right hand side are frame-dependent. This is depicted in the following figure.
The above depicts relationships between the frame, parallel transport, covariant derivative, and connection for a vector \({w}\) parallel to \({e_{1}}\) at a point \({p}\).
◊ The relation \({\nabla_{v}w=\check{\Gamma}\left(v\right)\vec{w}+\mathrm{d}\vec{w}\left(v\right)}\) can be viewed as roughly saying that the change in \({w}\) under parallel transport is equal to
the change in the frame relative to its parallel transport plus the change in the components of \({w}\) in that frame.
If the 1-form \({\Gamma^{\lambda}{}_{\mu}\left(v\right)}\) itself is written using component notation, we arrive at the connection coefficients
\(\displaystyle \Gamma^{\lambda}{}_{\mu\sigma}\equiv\Gamma^{\lambda}{}_{\mu}\left(e_{\sigma}\right)=\beta^{\lambda}\left(\nabla_{e_{\sigma}}e_{\mu}\right). \)
\({\Gamma^{\lambda}{}_{\mu\sigma}}\) thus measures the \({\lambda^{\mathrm{th}}}\) component of the difference between \({e_{\mu}}\) and its parallel transport in the direction \({e_{\sigma}}\).
Δ This notation is potentially confusing, as it makes \({\Gamma^{\lambda}{}_{\mu\sigma}}\) look like the components of a tensor, which it is not: it is a derivative of the component of the frame
indexed by \({\mu}\), and therefore is not only locally frame-dependent but also depends upon values of the frame at other points, so that it is not a multilinear mapping on its local arguments.
Similarly, \({\mathrm{d}\vec{w}}\) looks like a frame-independent exterior derivative, but it is not: it is the exterior derivative of the frame-dependent components of \({w}\).
Δ The ordering of the lower indices of \({\Gamma^{\lambda}{}_{\mu\sigma}}\) is not consistent across the literature (e.g. [17] vs [15]). This is sometimes not remarked upon, possibly due to the fact
that in typical circumstances in general relativity (a coordinate frame and zero torsion, to be defined here), the connection coefficients are symmetric in their lower indices.
It is common to extend abstract index notation to be able to express the covariant derivative in terms of the connection coefficients as follows:
\begin{aligned}\nabla_{e_{\mu}}w & =\mathrm{d}w^{\lambda}\left(e_{\mu}\right)e_{\lambda}+\Gamma^{\lambda}{}_{\sigma}\left(e_{\mu}\right)w^{\sigma}e_{\lambda}\\
\Rightarrow\nabla_{a}w^{b}\equiv\left(\nabla_{e_{a}}w\right)^{b} & =e_{a}\left(w^{b}\right)+\Gamma^{b}{}_{ca}w^{c}\\
\Rightarrow\nabla_{a}w^{b} & =\partial_{a}w^{b}+\Gamma^{b}{}_{ca}w^{c}
Here we have also defined \({\partial_{a}f\equiv\partial_{e_{a}}f=\mathrm{d}f(e_{a})=e_{a}(f)}\), which is then extended to \({\partial_{v}f\equiv v^{a}\partial_{a}f}\). This notation is also
sometimes supplemented to use a comma to indicate partial differentiation and a semicolon to indicate covariant differentiation, so that the above becomes
\(\displaystyle w^{b}{}_{;a}=w^{b}{}_{,a}+\Gamma^{b}{}_{ca}w^{c}. \)
The extension of index notation to derivatives has a number of potentially confusing aspects:
• \({\partial_{a}}\) written alone is not a 1-form, but \({\partial_{a}f}\) is, since the derivative is linear
• \({\partial^{a}\equiv g^{ab}\partial_{b}}\) is not the frame dual to \({\partial_{a}}\), which is \({\mathrm{d}x^{a}}\)
• Greek indices indicate only that a specific basis (frame) has been chosen ([16] pp. 23-26), but do not distinguish between a general frame, where \({\partial_{\mu}f\equiv\mathrm{d}f(e_{\mu})}\),
and a coordinate frame, where \({\partial_{\mu}f\equiv\partial f/\partial x^{\mu}}\)
• \({\nabla_{a}}\) alone is not a 1-form, but since \({\nabla_{a}w^{b}\equiv(\nabla_{e_{a}}w)^{b}}\) and \({\nabla_{v}w}\) is linear in \({v}\), \({\nabla_{a}w^{b}}\) is in fact a tensor of type \
({\left(1,1\right)}\); a more accurate notation might be \({(\nabla w)^{b}{}_{a}}\)
• \({w^{b}}\) in the expression \({\partial_{a}w^{b}\equiv\mathrm{d}w^{b}(e_{a})}\) is not a vector, it is a set of frame-dependent component functions labeled by \({b}\) whose change in the
direction \({e_{a}}\) is being measured
• The above means that, consistent with the definition of the connection coefficients, we have \({\nabla_{a}e_{b}=0+e_{c}\Gamma^{c}{}_{ba}}\), since the components of the frame itself by definition
do not change
• When using a coordinate frame based on curvilinear coordinates in Euclidean space, parallel transport is implicit in taking partial derivatives of vectors, resulting in the above being expressed
as \({\partial_{\mu}e_{\lambda}=e_{\sigma}\Gamma^{\sigma}{}_{\lambda\mu}}\)
• As previously noted, neither \({\Gamma^{b}{}_{ca}}\) nor \({\Gamma^{b}{}_{ca}w^{c}}\) are tensors
We will nevertheless use this notation for many expressions going forward, as it is frequently used in general relativity.
Δ It is important to remember that expressions involving \({\nabla_{a}}\), \({\partial_{a}}\), and \({\Gamma^{c}{}_{ba}}\) must be handled carefully, as none of these are consistent with the original
concept of indices denoting tensor components.
Δ Some texts will distinguish between the labels of basis vectors and abstract index notation by using expressions such as \({(e_{i})^{a}}\). We will not follow this practice, as it makes difficult
the convenient method of matching indexes in expressions such as \({\partial_{a}w^{b}\equiv\mathrm{d}w^{b}(e_{a})}\).
Δ If we choose coordinates \({x^{\mu}}\) and use a coordinate frame so that \({\partial_{\mu}\equiv\partial/\partial x^{\mu}}\), we have the usual relation \({\partial_{\mu}\partial_{\nu}f=\partial_
{\nu}\partial_{\mu}f}\). However, this is not necessarily implied by the Greek indices alone, which only indicate that a particular frame has been chosen. For index notation in general, mixed
partials do not commute, since \({\partial_{a}\partial_{b}f-\partial_{b}\partial_{a}f=e_{a}(e_{b}(f))-e_{b}(e_{a}(f))=[e_{a},e_{b}](f)=[e_{a},e_{b}]^{c}\partial_{c}f}\), which only vanishes in a
holonomic frame. | {"url":"https://www.mathphysicsbook.com/mathematics/riemannian-manifolds/introducing-parallel-transport-of-vectors/the-covariant-derivative-in-terms-of-the-connection/","timestamp":"2024-11-09T17:08:19Z","content_type":"text/html","content_length":"80359","record_id":"<urn:uuid:21e244e5-fd4d-4afe-bc67-f7caa2b02951>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00500.warc.gz"} |
Maximum velocity and leg-specific ground reaction force production change with radius during flat curve sprinting
Humans attain slower maximum velocity (v[max]) on curves versus straight paths, potentially due to centripetal ground reaction force (GRF) production, and this depends on curve radius. Previous
studies found GRF production differences between an athlete's inside versus outside leg relative to the center of the curve. Further, sprinting clockwise (CW) versus counterclockwise (CCW) slows v
[max]. We determined v[max], step kinematics and individual leg GRF on a straight path and on curves with 17.2 and 36.5m radii for nine (8 male, 1 female) competitive sprinters running CW and CCW
and compared v[max] with three predictive models. We combined CW and CCW directions and found that v[max] slowed by 10.0±2.4% and 4.1±1.6% (P<0.001) for the 17.2 and 36.5m radius curves versus the
straight path, respectively. v[max] values from the predictive models were up to 3.5% faster than the experimental data. Contact length was 0.02m shorter and stance average resultant GRF was 0.10
body weights (BW) greater for the 36.5 versus 17.2m radius curves (P<0.001). Stance average centripetal GRF was 0.10 BW greater for the inside versus outside leg (P<0.001) on the 36.5m radius
curve. Stance average vertical GRF was 0.21 BW (P<0.001) and 0.10 BW (P=0.001) lower for the inside versus outside leg for the 17.2 and 36.5m radius curves, respectively. For a given curve radius, v
[max] was 1.6% faster in the CCW compared with CW direction (P=0.003). Overall, we found that sprinters change contact length and modulate GRFs produced by their inside and outside legs as curve
radius decreases, potentially limiting v[max].
Attaining maximum sprinting velocity (v[max]) while running is particularly important in a variety of circumstances such as predator–prey relationships as well as athletic competitions. As running is
rarely along a straight path, being able to maintain v[max] while on curves in addition to on a straight path is advantageous. While some animals such as greyhounds and cheetahs retain their v[max]
on curves compared with straight paths (Usherwood and Wilson, 2005a; Wilson et al., 2013), other animals such as horses (Tan and Wilson, 2011) and humans have slower v[max] on a curve relative to a
straight path (Jain, 1980; Greene, 1985; Chang and Kram, 2007; Churchill et al., 2015, 2016). In humans the underlying biomechanics that affect v[max] on a curve have not been fully explored.
Additionally, studying curved sprinting in track and field athletes is specifically relevant to outdoor athletics events such as the 200m and 400m sprint, where more than half the race is run on a
flat (unbanked) curve (Meinel, 2008).
Track and field athletes must be able to run on a range of different curve radii due to lane assignment or track design. Regulation athletic track curve radii can range from 17.2m (innermost lane of
a regulation 200m indoor track) to 45.0m (outermost lane of a regulation 400m outdoor track) (Meinel, 2008). Additionally, the attenuation of v[max] in humans depends on the curve radius, such
that running on smaller curve radii results in a slower v[max] than that on larger curve radii (Jain, 1980; Greene, 1985; Chang and Kram, 2007). Previous studies have measured v[max] of athletes who
performed maximum effort sprints on straight and on counterclockwise (CCW) track curves with a radius equivalent to lane 2 of a 400m track (37.72m) and lane 1 of a 200m track (17.2m) and found
that v[max] was 2.3–4.7% slower on a 37.72m radius curve compared with a straight path (Churchill et al., 2015, 2016) and 8.9% slower on a 17.2m curve compared with a straight path (Taboga et al.,
To predict performance in athletics, previous studies have proposed mathematical models to predict curve-running
in humans (
Jain, 1980
Usherwood and Wilson, 2005b
McMahon (1984)
developed a mathematical model to predict curve-running
on flat curves for a range of radii (
) that uses straight-running
), and kinematic variables such as aerial time (
), contact length (
) and step time (
) (Eqn 1):This model assumed that
and step frequency are constant and independent of curve radius.
is the distance an athlete moves forward during ground contact, and step frequency is the inverse of time from heel-strike to contralateral heel-strike. The model predictions were compared with
experimental data from one subject who ran on a turf surface at five different curve radii (approximately 3–30m) and well-predicted
for larger curve radii, but over-predicted
for radii <15m.
Greene (1985)
simplified the mathematical model proposed by
McMahon (1984)
to include only
and local gravitational acceleration (
) (Eqn 2):This model assumed a constant resultant ground reaction force (GRF) produced by each leg at
that was independent of curve radius. The model predictions from
Greene (1985)
were compared with experimental data from 10 and 13 runners who ran on flat grass and concrete surfaces, respectively, at five different curve radii (approximately 3–30m) and over-predicted
for a given curve radius regardless of whether runners were sprinting on a grass or concrete surface. Additionally,
Usherwood and Wilson (2005b)
developed a model to predict curve-running
on a banked 200m indoor track (Eqn 3):This model also assumed that
and the maximum resultant GRF produced by each leg are constant and independent of curve radius. They also assumed that leg swing time (
) is constant, and that step frequency decreases with curve radius. The model developed by
Usherwood and Wilson (2005b)
predicts curve-running
from straight-running
. They calculated straight-running
for men and women from the average times published for all (heats, quarterfinals, semi-finals and finals) 200m outdoor sprint races at the 2004 Olympic Games to predict the 2004 World Indoor
Championship race times. The model well-predicted the men's 200m indoor race times, but underpredicted the 200m indoor women's race times.
The assumptions that underly each model that predict v[max] attenuation on curves versus a straight path may affect the model results compared with experimental data. Previous experimental studies
have found that step length is independent of curve radius for 3–30m radii (McMahon and Greene, 1979; Greene, 1985). However, more recent experimental studies have found shorter step lengths during
curve running at v[max] on radii of 1–6m, 17.2m and 37.5m compared with running on a straight path (Chang and Kram, 2007; Taboga et al., 2016; Churchill et al., 2015). Moreover, previous
experimental studies have found an increase in contact time with decreasing curve radii for 1–6m radius curves, but no difference in step frequency (Chang and Kram, 2007) and a 2.4% reduction in
step frequency during curve running at v[max] on a 17.2m radius curve compared with a straight path (Taboga et al., 2016). Finally, curve-running v[max] may not be limited by the magnitude of the
resultant GRF. Though Usherwood and Wilson (2005b) found that the maximum resultant GRF did not change and accounted for the slower v[max] for an indoor 200m race on a banked curve, Chang and Kram
(2007) found that the maximum resultant GRF produced during v[max] on flat curves with 1–6m radii was lower than that produced during v[max] on a straight path. Thus, there may be other
physiological limitations that result in the v[max] attenuation on curves compared with a straight path. Lastly, an implied assumption in all three models that predict v[max] on curves is that the
two legs exhibit the same biomechanics. However, previous studies have found that while running CCW at v[max] on a 37.5m radius curve, step length was shorter for the outside leg but not for the
inside leg compared with a straight path, and step frequency was slower for the outside leg but not for the inside leg compared with a straight path (Churchill et al., 2015). Thus, to better
understand potential performance implications during athletics sprints, we measured v[max] on a straight path and two flat curves with radii representative of lane 1 of a regulation 200m and 400m
track and compared these with the curve-running v[max] predicted from the three mathematical models (Eqns 1, 2 and 3). We also measured kinematic variables and GRFs from the inside and outside legs
relative to the center of the curve when sprinters ran at v[max] on these curve radii to determine the leg-specific biomechanical changes during curve-running v[max].
Sprinting on a curve requires an athlete to produce centripetal ground reaction forces (cGRFs) that accelerate their body towards the inside of the curve, where maintaining a given velocity for a
smaller curve radius requires greater cGRF. In the case of flat curves, cGRF equals the product of a sprinter's mass and forward velocity squared divided by curve radius. Experimental data indicate
that the leg on the inside and outside of a curve relative to the center of the curve may have unique roles in producing the cGRF needed to navigate a curve at a particular velocity (Chang and Kram,
2007; Churchill et al., 2016; Judson et al., 2019). Leg-specific cGRF production may change with curve radius, as prior studies suggest that the inside leg produces greater cGRF than the outside leg
on a 37.72m radius curve (Churchill et al., 2016) but produces lower cGRF than the outside leg on 1–6m radius curves (Smith et al., 2006; Chang and Kram, 2007). Additionally, vertical GRF (vGRF)
production is similar for the inside and outside leg on a 37.72m radius curve (Churchill et al., 2016), but the inside leg produces lower vGRF than the outside leg on 1–6m radius curves (Chang and
Kram, 2007). Thus, we measured and compared leg-specific cGRF and vGRF production across intermediate curve radii (17.2m and 36.5m) to potentially identify the underlying mechanisms limiting v[max]
on flat curves.
Modern athletics events that include curves (≥200m) are completed in the CCW direction and sprinters train to run along curves in the CCW direction. Experimental data support the existence of a
potential biomechanical training effect of curve-running direction, as v[max] slows by 1.9% when sprinting on a curve with a 17.2m radius in the clockwise (CW) compared with the CCW direction (
Taboga et al., 2016). Therefore, sprinting direction may also affect v[max], kinematic variables and GRFs, and depend on curve radius. Thus, we quantified the effect of sprinting direction on v[max],
kinematic variables and GRFs to inform future work aimed at determining leg-specific biomechanics in populations with apparent biomechanical asymmetries (e.g. athletes with a unilateral lower-leg
We analyzed maximum effort sprinting and the corresponding changes in v[max], kinematic variables and GRF production of athletes on a straight path and on CCW and CW curves representative of the
innermost lane of a flat (unbanked) 200m and 400m regulation athletics track (17.2m and 36.5m radii). In line with previous studies (Churchill et al., 2015, 2016; Taboga et al., 2016) and
mathematical models (Greene, 1985; McMahon, 1984; Usherwood and Wilson, 2005a,b), we hypothesized that during maximum effort sprinting: (1) v[max] would be slower on the 17.2m and 36.5m radius
curves relative to a straight path regardless of curve sprinting direction, (2) mathematical models (Eqns 1, 2 and 3; McMahon, 1984; Greene, 1985; Usherwood and Wilson, 2005a,b) would overpredict
curve-running v[max], (3) L[c], step frequency and swing time would differ between curve radii and between the inside and outside legs, (4) stance-average resultant GRF (rGRF[avg]) would not change
between curve radii or between the inside and outside leg, (5) the inside leg would produce greater stance-average cGRF (cGRF[avg]) than the outside leg, but the outside leg would produce greater
stance-average vGRF (vGRF[avg]) than the inside leg on both curve radii, and (6) v[max] would be slower on curves in the CW versus CCW direction.
Study population
A convenience sample of 9 National Collegiate Athletic Association (NCAA) track and field athletes (8 male, 1 female; 200m personal best: 22.60±2.39s; 400m personal best: 47.76±1.49s; body mass:
74.6±9.5kg; height: 1.83±0.10m; age: 21±1years, means±s.d.) with curve sprinting experience participated. We used data from Chang and Kram (2007) to estimate an appropriate sample size for peak
resultant GRF, step length and step frequency between inside and outside legs, and maximum velocity for running on a straight path compared with a 6m radius curve. We set P=0.05, used a paired t
-test design, and found significant effect sizes in peak resultant GRF (0.75), step length (0.98), step frequency (0.95) and maximum velocity (1.00) with 10 participants; we therefore anticipated
that the 9 participants we recruited would be a sufficient sample size to detect significant differences based on the power analysis. Athletes reported no musculoskeletal injuries at the time of data
collection and provided written informed consent prior to participating in the study. The experimental protocol was approved by the University of Colorado Boulder Institutional Review Board (#
Experimental design
Athletes used their own spiked sprinting footwear and completed a randomized series of maximum effort sprints on a flat indoor Mondo-covered track (see below) over 1–2days. Following a self-directed
warm-up, athletes were instructed to perform maximum effort sprints on a 40m straight section (‘straightaway’) and 40m curves with radii of 17.2m and 36.5m in the CW and CCW directions. The order
of trials was randomized for each subject. Each 40m lane length and width were indicated with cones, and curve radii represented the innermost lane of a regulation flat 200m and 400m track (
Meinel, 2008), respectively (Fig. 1). Athletes initiated each sprinting trial from a standing or crouched position, but no starting blocks were provided. Athletes practiced sprinting on the curves
and adjusted their starting position to allow them to reach their perceived v[max] halfway (∼20m) along the straightaway or curve where we positioned two force plates flush with the track surface.
Sprints were repeated for each condition until athletes successfully landed on a force plate with each leg at least once. We considered a trial to be unsuccessful if an athlete's foot was not
entirely on the force plate during stance phase or they failed to stay within the lane of the curve (approximately 1.2m width) for the entire 40m. Data from all successful trials (231 steps) were
used for analysis. Athletes were allowed ≥8min of rest between trials to minimize any potential effects of fatigue.
Two force plates (1000Hz; 1.2×0.6m; AMTI, Watertown, MA, USA) covered with an adhered track surface (Mondo S.p.A., Alba, Italy) were embedded in the ground so that the top surface was flush with
the surrounding track surface and located halfway along the straightaway or curve. Ten motion capture cameras (200Hz; 3×5m capture volume; Vicon, Centennial, CO, USA) surrounded the force plates (
Fig. 1). Prior to data collection, we adhered retroreflective markers onto each subject's pelvis and feet. Retroreflective markers on each foot were used to identify which leg was in contact with the
force plate during a given trial and retroreflective markers on the pelvis were used to calculate sprinting velocity within the capture volume (Luo and Stefanyshyn, 2012a). We measured motion and
GRFs simultaneously for each trial.
For all trials, we measured v[max] from the capture volume using the average pelvis marker velocity, which was calculated using the retroreflective markers located bilaterally on the iliac crests,
anterior superior iliac spines and posterior superior iliac spines. v[max] was averaged over the length of the capture volume (∼5m). Because of the location of the force plates in the indoor track
facility, athletes were unable to adjust their starting position backwards on the straightaway to ensure that they attained v[max] within the capture volume. Thus, we used a radar gun (47Hz; Stalker
ATS II, Applied Concepts Inc, Plano, TX, USA) on a tripod at a height of ∼1m to measure velocity along the entire 40m straightaway. To determine and verify v[max] on the straightaway, we used the
maximum value from a moving average of the radar gun velocity data (0.32s window) and used this straight-running v[max] to predict curve-running v[max] for each mathematical model.
Data processing
We processed data using MATLAB (R2020a; MathWorks, Natick, MA, USA) with custom scripts and packages (Alcantara, 2019). 3D motion and GRF data were collected synchronously and filtered with a 4th
order zero-lag low-pass Butterworth filter with a 50Hz cutoff. We used a 5 N vGRF threshold to detect stance phase. Stance-average centripetal and vertical ground reaction force (cGRF[avg] and vGRF
[avg]) were calculated as the mean GRF during the stance phase for a given curve-running direction. We calculated stance average resultant GRF (rGRF[avg]) for each trial as the vector sum of cGRF
[avg] and vGRF[avg]. To measure cGRF during the stance phase, we transformed the local coordinate system of the force plate so that the centripetal (mediolateral) horizontal axis was perpendicular to
the tangential (anteroposterior) horizontal axis relative to the position of the athlete on the curve. For the curve conditions, this was accomplished by projecting the anterior–posterior and
mediolateral horizontal GRFs relative to the force plate onto new coordinate system vectors rotated by the angle formed by the 3rd metatarsal head marker at the time of peak vGRF, the center of the
curve, and the origin of the global coordinate system (Fig. 2). Across all trials, the transformed horizontal axes were rotated <3 deg from the force plate's original coordinate system. Because of
the location of the force plates in the indoor track facility, athletes ran along a straightaway rotated 14 deg relative to the force plates (Fig. 1). Thus, we projected the anterior–posterior and
mediolateral horizontal GRFs relative to the force plate on the straightaway onto a new coordinate system rotated by 14 deg.
To evaluate the curve-running v[max] predictive models and assumptions, we calculated aerial time (t[a]), step time (t[step]), swing time (t[swing]), step frequency and contact length (L[c]). All
variables were calculated separately for the left and right legs. Most trials only included one step on a force plate, so we used the markers on each foot (right metatarsal head, left metatarsal
head, and heel) in addition to the GRF to calculate these variables if they occurred immediately before or after an athlete contacted the force plate. For each subject, we used a 5 N vGRF threshold
to calculate the average position of the foot markers and used the average position across trials to determine toe-off and heel-strike when athletes were not in contact with the force plate. We
calculated t[a] as the time from toe-off of one leg to heel-strike of the contralateral leg. We calculated t[step] as the sum of contact time (t[c]) and the subsequent t[a]. We calculated t[swing]
as the sum of t[a] and the subsequent t[step.] Step frequency equaled the inverse of t[step]. Finally, we determined L[c] as the total curved distance that the center of mass moved in the
transverse plane from heel-strike to toe-off of the same foot using the average pelvis marker position. We created a model for each subject using their individual straight-running v[max].
Data analysis
We analyzed data using R (version 3.6.3) with custom scripts and packages (Wickham, 2009; https://CRAN.R-project.org/package=emmeans; https://CRAN.R-project.org/package=nlme; http://www.R-project.org
/; https://CRAN.R-project.org/package=tidyr; https://CRAN.R-project.org/package=dplyr). We used a paired t-test (α=0.05) to compare our experimental data and the mathematical model predictions
(equation 8.4 in McMahon, 1984; equation 11 in Greene, 1985; equation 2.9 in Usherwood and Wilson, 2005a,b) for how much v[max] slows on a curve with a given radius relative to a straight path. In
agreement with previous methods (Greene, 1985), we averaged data across trials for each condition and combined data from both sprinting directions when quantifying the changes in v[max] on a curve
relative to the straightaway and when comparing leg-specific kinematic variables (L[c], step frequency and t[swing]) and GRF production across curve radii. We constructed linear mixed-effects models
(LMEM) to quantify changes in v[max], kinematic variables and GRFs across conditions. We considered condition (straight, 17.2m radius curve, 36.5m radius curve), leg relative to the center of the
curve (inside, outside), and curve sprinting direction (CCW, CW) as categorical fixed effects and athlete as a random effect. Models were first constructed with interaction terms, but
non-statistically significant model coefficients were removed from the model on the basis that the coefficient was not significantly different from zero. When statistically significant (P<0.05)
interactions were present, we performed post hoc pairwise comparisons to analyze simple effects, applied the Bonferroni correction method to each family of comparisons, and reported the corrected
α-value alongside the P-value. We also reported the numerical difference between each level of a fixed effect (e.g. inside versus outside leg) or the unstandardized model coefficients (B) alongside
the P-value.
We found that the velocity measured from the radar gun during the straight-running trials captured v[max] 2–10m after athletes ran through the capture volume. Thus, we used the v[max] measured from
the radar gun for the straight-running trials and did not compare kinematic variables or GRFs between the straight- and curve-running conditions. We found that mean (±s.d.) v[max] was 9.12±0.60ms^
−1 for the straightaway and 8.21±0.44ms^−1 and 8.75±0.62ms^−1 for the 17.2m and 36.5m radius curves, respectively (Fig. 3A), when combining data from both sprinting directions. We found no
interaction effect of sprinting direction and curve radius on v[max] (P=0.122), indicating that the effects of curve radii and sprinting direction on v[max] did not significantly depend on each
other. The LMEM revealed that v[max] decreased 10.0±2.4% (P<0.001) from the straightaway to the 17.2m radius curve and 4.1±1.6% (P<0.001) from the straightaway to the 36.5m radius curve (B=0.50ms
^−1; P<0.001; Fig. 3) when combining data from both sprinting directions.
Using the mathematical models developed by McMahon (1984), Greene (1985) and Usherwood and Wilson (2005b), we calculated that regardless of sprinting direction, the v[max] of athletes in the present
study would slow by 9.3±1.2%, 6.8±1.4% and 9.2±1.3% from the straightaway to the 17.2m radius curve, respectively, and slow by 1.9±0.4%, 1.0±0.5% and 2.0±0.3% from the straightaway to the 36.5m
radius curve, respectively. Thus, we found that the v[max] prediction from McMahon (1984) was not significantly different from measured v[max] for the 17.2m radius curve (P=0.3) but overestimated v
[max] on the 36.5m radius curve by 2.2% (P<0.01). We also found that the v[max] prediction from Greene (1985) consistently overestimated v[max] on the 17.2m radius curve (P<0.005) and 36.5m radius
curve (P<0.005) by 3.0–3.5%. Finally, we found that the v[max] prediction from Usherwood and Wilson (2005a,b) was not significantly different from measured v[max] for the 17.2m radius curve (P=0.2)
but overestimated v[max] on the 36.5m radius curve (P<0.01) by 2.1%. Moreover, for a given curve radius, v[max] in the CCW direction was 1.6% faster than that in the CW direction (B=0.14ms^−1; P=
0.003; Fig. 3A).
Kinematic variables
We found that L[c] was 0.06m shorter when sprinting at v[max] on the 17.2m radius curve compared with the 36.5m radius curve (P<0.001; Fig. 4A) when we combined data from both curve-running
directions. We found no statistical difference in L[c] between the inside and outside leg or between running in the CW versus CCW direction. We found a significant interaction effect of curve radius
and inside or outside leg on step frequency (P<0.05; Fig. 4B). Step frequency was 0.20Hz lower for the inside leg compared with the outside leg on the 17.2m radius curve (P<0.05; Fig. 4B) and was
0.03Hz greater for the inside leg compared with the outside leg on the 36.5m radius curve (P<0.05; Fig. 4B) for both curve-running directions. Additionally, we found no statistical difference in t
[swing] between curve radii, inside or outside legs, or sprinting direction (P>0.05). We did not compare kinematic variables between the straight and curved conditions because of athletes not
reaching v[max] within the capture volume for the straight conditions.
We averaged the rGRF[avg] for both sprinting directions for the inside leg and the outside leg at each curve radius to compare inside versus outside leg rGRF[avg] production. We found a significant
interaction effect of curve radius and inside or outside leg on rGRF[avg] (P=0.01; Fig. 5A). On the 17.2m radius curve, we found that the rGRF[avg] of the inside leg was 1.83 BW, which was 0.01 BW
lower than that on the 36.5m radius curve (P<0.001, α=0.0125; Fig. 5A). However, we found that the rGRF[avg] of the outside leg was not significantly different (P=0.8, α=0.0125) between the 17.2m
(2.1 BW) and 36.5m (2.0 BW) curve radii (Fig. 5A). Additionally, we found that the rGRF[avg] of the inside leg was 0.18 BW (P<0.001, α=0.0125) and 0.11 BW (P=0.001, α=0.0125) lower than that of the
outside leg on the 17.2m and 36.5m radius curves, respectively (Fig. 5A). We did not compare GRFs between the straight and curved conditions because of athletes not reaching v[max] within the
boundary of the capture volume for the straight conditions.
We averaged the cGRF[avg] for both sprinting directions for the inside leg and the outside leg at each curve radius to compare inside versus outside leg cGRF[avg] production. We found a significant
interaction effect of curve radius and inside or outside leg on cGRF[avg] (P=0.029; Fig. 5B). On the 17.2m radius curve, we found no statistically significant difference (P=0.089, α=0.0125; Fig. 5B)
in cGRF[avg] between the inside (0.68 BW) and outside legs (0.65 BW). On the 36.5m radius curve, we found that the cGRF[avg] of the inside leg was 0.48 BW, which was 0.10 BW greater than that for
the outside leg (P<0.001, α=0.0125; Fig. 5B). On the 17.2m versus 36.5m radius curve, we found that cGRF[avg] was 0.21 BW greater for the inside leg (P<0.001, α=0.0125) and 0.27 BW greater for the
outside leg (P<0.001, α=0.0125; Fig. 5B).
We averaged the vGRF[avg] for both sprinting directions for the inside leg and the outside leg at each curve radius to compare inside versus outside leg vGRF[avg] production. We found a significant
interaction effect of curve radius and inside or outside leg on vGRF[avg] (P=0.009; Fig. 5C). On the 17.2m radius curve, we found that the vGRF[avg] of the inside leg was 1.70 BW, which was 0.17 BW
lower than that on the 36.5m radius curve (P<0.001, α=0.0125; Fig. 5C). However, we found that the vGRF[avg] of the outside leg was not significantly different (P=0.025, α=0.0125) between the 17.2m
(1.90 BW) and 36.5m (1.97 BW) curve radii (Fig. 5C). Additionally, vGRF[avg] of the inside leg was 0.21 BW (P<0.001, α=0.0125) and 0.10 BW (P=0.001, α=0.0125) lower than that of the outside leg on
the 17.2m and 36.5m radius curve, respectively (Fig. 5C).
In agreement with previous studies (Jain, 1980; Greene, 1985; Chang and Kram, 2007; Churchill et al., 2015, 2016; Taboga et al., 2016) and in support of our first hypothesis, we found that athletes
had a 10.0% slower v[max] on the 17.2m radius curve and 4.1% slower v[max] on the 36.5m radius curve compared with that on the straightaway (Fig. 3B). We partially accept our second hypothesis that
mathematical models would overpredict curve-running v[max], as the v[max] predictions from the mathematical models of McMahon (1984) and Usherwood and Wilson (2005b) were not statistically different
from the measured v[max] on the 17.2m radius curve. The agreement between these predictions and the measured v[max] may be due to the inclusion of kinematic variables in the equations. We found that
the v[max] predictions from the mathematical model of Greene (1985) were significantly different and consistently overestimated v[max] on a given curve radius by 3.5% and 3.0% compared with measured
v[max] (Eqn 2) for the 17.2m and 36.5m radius curves, respectively. For the average v[max] of the athletes in this study, a 3.5% overestimation is equivalent to a 0.3ms^−1 faster v[max]. Thus, if
Greene's (1985) model was used to predict times in a 200m race it would overpredict race time by 0.42s, assuming an athlete is running at v[max] on the straightaway and curve, and that half the
race is on the curve. This may confirm that there is not a physiological limit to maximum rGRF (Chang and Kram, 2007) and supports the suggestion that values predicted through this mathematical model
act as an upper bound to running performance (Greene, 1985). The mathematical models from McMahon (1984) and Usherwood and Wilson (2005b) (Eqns 1 and 3) overestimate v[max] by 2.2% and 2.1% (0.19s)
for the 36.5m radius curve, respectively.
We partially accept our third hypothesis that L[c], step frequency and t[swing] would differ between curve radii and between the inside and outside legs. In agreement with previous studies, we found
that L[c] was 4.7% shorter at v[max] on the 17.2m compared with the 36.5m radius curve (Fig. 4A) (Chang and Kram, 2007; Taboga et al., 2016; Churchill et al., 2015). All three of the mathematical
models that predict curve-running v[max] assumed that L[c] was independent of curve radius (McMahon, 1984; Greene, 1985; Usherwood and Wilson, 2005b). Thus, accounting for the changes in L[c] across
curve radii may be needed to improve curve-running v[max] predictions. We found that step frequency was independent of curve radius, which supports the assumptions of the models proposed by McMahon
(1984) and Greene (1985). However, we also found that step frequency was 4.7% faster for the outside leg compared with the inside leg on the 17.2m radius curve, and 0.7% slower for the outside leg
compared with the inside leg on the 36.5m radius curve (Fig. 4B). Thus, it may be necessary to account for leg-specific differences in step frequency on different curve radii to improve
curve-running v[max] predictions. We found that t[swing] did not change between the 17.2m and 36.5m curve radii (Fig. 4C) or between legs. Our results thus support the assumption of Usherwood and
Wilson (2005b) that t[swing] does not depend on curve radius or on the inside versus outside leg.
We reject our fourth hypothesis that rGRF[avg] would not change between curve radii or between the inside and outside leg because we found a 5% decrease in rGRF[avg] between the 36.5m and 17.2m
radius curves (Fig. 5A) and that the inside leg produced lower rGRF[avg] than the outside leg at v[max] on both curve radii. Our findings are in line with those of Chang and Kram (2007), who found
that maximum rGRF at v[max] on small radii (1–6m) was lower than that on a straight track, and maximum rGRF decreased with a decreasing curve radius. These findings suggest that curve sprinting v
[max] may not be limited by a physiological limit to maximum rGRF. We suggest that the decrease in rGRF[avg] may be due to other physiological constraints such as the kinematic configuration of the
lower limb segments while sprinting around the curve. For both curve radii at v[max], we found that the inside leg consistently produced lower rGRF[avg] than the outside leg. These findings refute
the underlying assumption that there is no difference between the inside and outside legs that is used in all three mathematical models that predict curve-running v[max] (McMahon, 1984; Greene, 1985;
Usherwood and Wilson, 2005b). Additionally, our findings agree with previous studies that found that on small curve radii (1–6m), the inside leg produces lower maximum cGRF and vGRF than the outside
leg (Chang and Kram, 2007), but on larger curve radii (37.72m), the inside leg produces greater maximum cGRF than the outside leg but similar maximum vGRF (Churchill et al., 2016).
We partially accept our fifth hypothesis that the inside leg would produce greater cGRF[avg], but lower vGRF[avg], than the outside leg while sprinting at v[max] on both curve radii. The greater cGRF
[avg] produced by the inside versus outside leg on the 36.5m radius curve is consistent with previous studies that investigated leg-specific cGRF on a curve radius of 37.72m during maximum effort
sprinting (Churchill et al., 2016; Judson et al., 2019). However, previous studies have shown that the outside leg produced greater cGRF than the inside leg during maximum effort sprinting on curve
radii ≤6m, similar to performing a lateral cutting maneuver (Rand and Ohtsuki, 2000; Chang and Kram, 2007). We found that there was no significant difference in cGRF[avg] between the inside and
outside legs on the 17.2m radius curve (Fig. 5B). These findings suggest a potential transition where the cGRF[avg] produced by the outside leg exceeds that produced by the inside leg to navigate
curves with smaller radii and may partially explain differences in results for the inside and outside legs from studies that collected cGRF on smaller (1–6m) and larger (37.72m) curve radii.
In support of our fifth hypothesis, we found that vGRF[avg] was 0.10–0.21 BW greater for the outside compared with the inside leg for both curve radii (Fig. 5C). vGRF and cGRF production and thus
sprinting performance are due in part to the force produced by the ankle plantarflexor muscles (Dorn et al., 2012; Luo and Stefanyshyn, 2012a; Nagahara et al., 2018), and leg-specific frontal plane
ankle inversion and eversion may limit the ability of ankle plantarflexor muscles to generate cGRF and vGRF during v[max] curve sprinting. The production of cGRF and vGRF may differ between the
inside and outside legs as a result of differences in the peak ankle plantarflexor moment (Judson et al., 2020a) and peak ankle eversion angle (Alt et al., 2015) during maximum effort curve
sprinting, but further research is needed to investigate the effect of curve radii on leg-specific joint kinetics, joint kinematics and GRF production. Athletes seeking to improve curve sprinting
performance may benefit from strengthening ankle plantarflexor muscles under a range of frontal plane ankle orientations, as maximum ankle inversion and eversion angles significantly differ during
curve versus straight sprinting (Alt et al., 2015; Judson et al., 2020b).
Our findings support our sixth hypothesis that v[max] would be slower in the CW versus CCW direction. We found that athletes had 1.6% slower v[max] in the CW compared with the CCW direction
regardless of the curve radius. This effect of sprinting direction on v[max] is similar to the results of Taboga et al. (2016), who found that athletes had 1.9% slower v[max] in the CW compared with
the CCW direction on a curve with a 17.2m radius (Taboga et al., 2016). We suspect these results are due to athletes' familiarity of sprinting in the CCW direction for competitions and the potential
differences in strength between the inside and outside legs, but we did not investigate these potential effects. Future studies are warranted to determine whether there are muscle strength
differences between the inside and outside legs of competitive 200m and 400m sprinters.
One of the potential limitations of our study to consider alongside our findings is that we used a radar gun to measure v[max] on the straightaway and 3D motion capture data to measure v[max] on the
curves. This approach was necessary because the mathematical models depend on straight-running v[max] and athletes were unable to adjust their starting position on the straightaway to ensure they
reached v[max] within the capture volume because of the constraints of our indoor track facility. Despite measuring v[max] on the straightaway and curves using different methods, both provide
accurate and consistent measures of running velocity (Chelly and Denis, 2001; di Prampero et al., 2005; Morin et al., 2006; Luo and Stefanyshyn, 2012b; Zrenner et al., 2018). Further analysis of the
radar gun data revealed that v[max] was achieved 2–10m beyond the boundary of the capture volume (Fig. 1). v[max] measured from the radar gun exceeded the velocity measured in the capture volume by
0.44±0.2m s^−1. We also found that the velocity measured by the radar gun within the capture volume was not significantly different from the velocity calculated from the 3D motion capture data
(paired t-test, P=0.582). Additionally, to determine whether athletes were at a constant velocity on the force plate, we compared the anterior–posterior horizontal propulsive impulse with the braking
impulse, where the impulse was calculated as the integral of force with respect to time, propulsive impulse was the positive impulse and braking impulse was the negative impulse. We found that the
horizontal propulsive impulse was on average 0.04 N s greater than the braking impulse when athletes ran on the straightaway (paired t-test, P<0.05) over the force plates, indicating that athletes
were accelerating on the straight path. However, there was no difference between propulsive and braking impulses during running on the curves (paired t-test, P=0.3). Therefore, we assume that
athletes were neither accelerating nor decelerating and likely running at their v[max] for each curve-running trial. Because athletes did not achieve v[max] on the force plate during the
straight-running trials, we did not statistically compare GRFs or kinematic variables between the straight-running and curve-running trials (Table S1). Although we did not compare curve-sprinting
vGRF production with straight-running vGRF production, previous work investigating submaximal straight and curve (36.5m radius) sprinting found that there was no significant difference in peak vGRF
between straight and curve sprinting for either leg (Viellehner et al., 2016). Moreover, we used the position of the metatarsal foot markers to determine the coordinate system for the cGRF and found
that it differed with the orientation of the force plates by <3 deg. If we did not correct for this angle change, the difference in cGRF[avg] would have been small. For example, if the horizontal
forces were offset by 3 deg, this would change cGRF[avg] by 0.003 N on a 36.5m curve radius. Lastly, we combined data from the CW and CCW sprinting directions when investigating the effect of curve
radius and the inside or outside leg on GRF production, which assumes that there are no anatomical asymmetries between the legs such as differences in muscle strength. We suspect that the slower v
[max] for the CW versus CCW direction may be due to a trained sprinter's unfamiliarity with CW sprinting, but future work should investigate the potential biomechanical mechanisms such as strength
differences between legs that could be responsible for the differences we observed in CW and CCW v[max].
Chang and Kram (2007) found that on curves with small radii (1–6m), v[max] was not constrained by maximum limb force generation of the inside or outside leg and suggested that a combination of
different biomechanical constraints led to the inside leg limiting the generation of forces necessary to achieve similar v[max] on curves compared with a straight track. Additionally, when exploring
the attenuation of v[max] on curves in horses, Tan and Wilson (2011) showed that at small curve radii, v[max] is likely limited by friction, whereas at larger curve radii, the slowing of v[max] is
likely due to a limit in maximum limb force generation. Coupling these previous studies with our findings suggests that limitations in curve-sprinting v[max] change with curve radii and may be due to
different biomechanical mechanisms such as cGRF and vGRF production differences between legs for different curve radii. Additionally, previous research suggests that the v[max] of greyhounds does not
slow on curves compared with a straight path because of the mechanical separation of the muscles that provide power from the structures that support body weight (Usherwood and Wilson, 2005a).
However, in humans, the lower limb muscles generate power and support body weight. Moreover, the position of the lower limb may affect force production. Future studies are needed to determine how
lower limb joint moments and power affect maximum effort sprinting performance on a curve.
We determined how v[max] changes on two flat regulation track curves compared with a straight track and the kinematics and GRFs produced by the inside and outside legs. We found that, compared with
that for a straight track, v[max] slowed by 10.0% and 4.1% on a 17.2m and 36.5m radius curve, respectively. We compared these results with predictions from mathematical models and found that the v
[max] predicted by models proposed by McMahon (1984) and Usherwood and Wilson (2005b) were not different compared with the measured v[max] on the 17.2m radius curve; however, both mathematical
models overpredicted v[max] by 2.1–2.2% on the 36.5m radius curve. The v[max] predicted by the model proposed by Greene (1985) overestimated v[max] on the 17.2m and 36.5m radius curves by
3.0–3.5%. We tested the four main assumptions used when developing the mathematical models and found that L[c] was not independent of curve radius and decreased with a smaller curve radius.
Additionally, we found that step frequency and t[swing] did not change between 17.2m and 36.5m radius curves. Moreover, we found that rGRF[avg] decreased between the 36.5m and 17.2m radius
curves. Thus, future predictive models of maximum curve-running velocity should account for differences in L[c] and rGRF[avg] with changes in curve radius. We also found that sprinters modulate the
rGRF, cGRF and vGRF produced by their inside and outside legs as curve radius decreases. Limitations to leg-specific cGRF and vGRF production may be due to frontal plane ankle kinematics of the
inside and outside legs during maximum effort curve sprinting. Future studies are needed to better understand leg-specific joint kinematics and kinetics during maximum effort curved sprinting and
their influence on performance.
Results in this paper are reproduced from the PhD thesis of R.S.A. (Alcantara, 2021). We would like to thank CU Boulder Athletics Department for allowing us to use the Balch Fieldhouse for all data
Author contributions
Conceptualization: A.M.G.; Formal analysis: G.B.D., R.S.A.; Investigation: G.B.D., R.S.A.; Data curation: G.B.D., R.S.A.; Writing - original draft: G.B.D.; Writing - review & editing: G.B.D., R.S.A.,
A.M.G.; Visualization: G.B.D.; Supervision: A.M.G.; Project administration: A.M.G.; Funding acquisition: A.M.G.
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Open access funding provided by University of Colorado. Deposited in PMC for
immediate release.
R. S.
Dryft: a python and MATLAB package to correct drifting ground reaction force signals during treadmill running
J. Open Source Softw.
R. S.
Improving running performance and monitoring injury risk with wearable devices
PhD thesis
Lower extremity kinematics of athletics curve sprinting
J. Sports Sci.
Limitations to maximum running speed on flat curves
J. Exp. Biol.
S. M.
Leg power and hopping stiffness: relationship with sprint running performance
Med. Sci. Sports Exerc.
S. M.
A. I. T.
The effect of the bend on technique and performance during maximal effort sprinting
Sports Biomech.
S. M.
I. N.
A. I. T.
Force production during maximal effort bend sprinting: theory vs reality
Scand. J. Med. Sci. Sports
di Prampero
P. E.
J. B.
Sprint running: a new energetic approach
J. Exp. Biol.
T. W.
A. G.
M. G.
Muscular strategy shift in human running: dependence of running speed on hip and ankle muscle performance
J. Exp. Biol.
P. R.
Running on flat turns: experiments, theory, and applications
J. Biomech. Eng.
P. C.
On a discrepancy in track races
Res. Q Exerc. Sport
L. J.
S. M.
J. A.
I. G. A.
Horizontal force production and multi-segment foot kinematics during the acceleration phase of bend sprinting
Scand. J. Med. Sci. Sports
L. J.
S. M.
J. A.
Joint moments and power in the acceleration phase of bend sprinting
J. Biomech.
L. J.
S. M.
J. A.
I. G. A.
, and
Kinematic modifications of the lower limb during the acceleration phase of bend sprinting
J. Sports Sci.
Ankle moment generation and maximum-effort curved sprinting performance
J. Biomech.
Limb force and non-sagittal plane joint moments during maximum-effort curve sprint running in humans
J. Exp. Biol.
T. A.
Muscles, Reflexes, and Locomotion
Princeton University Press
T. A.
P. R.
Fast Running Tracks
Sci. Am.
Article 6.
IAAF Track and Field Facilities Manual IAAF Requirements for Planning, Constructing, Equipping and Maintaining
2008th edn
Edited by D. Wilson et al.
Editions EGC
Spring-mass model characteristics during sprint running: correlation with performance and fatigue-induced changes
Int. J. Sports Med.
Association of sprint performance with ground reaction forces during acceleration and maximal speed phases in a single sprint
J. Appl. Biomech.
M. K.
EMG analysis of lower limb muscles in humans during quick change in running directions
Gait Posture
Contributions of the inside and outside leg to maintenance of curvilinear motion on a natural turf surface
Gait Posture
A. M.
Maximum-speed curve-running biomechanics of sprinters with and without unilateral leg amputations
J. Exp. Biol.
A. M.
Grip and limb force limits to turning performance in competition horses
Proc. R. Soc. B
J. R.
A. M.
No force limit on greyhound sprint speed
J. R.
A. M.
Accounting for elite indoor 200m sprint results
Biol. Lett.
Lower extremity joint moments in athletics curve sprinting
34th International Conference on Biomechanics in Sports
, pp.
ggplot2: Elegant Graphics for Data Analysis
Use R!
New York
A. M.
J. C.
P. E.
K. A.
J. W.
Locomotion dynamics of hunting in wild cheetahs
Comparison of different algorithms for calculating velocity and stride length in running using inertial measurement units
Competing interests
The authors declare no competing or financial interests.
© 2024. Published by The Company of Biologists Ltd
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (
), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.
Supplementary information | {"url":"https://journals.biologists.com/jeb/article/227/4/jeb246649/344025/Maximum-velocity-and-leg-specific-ground-reaction","timestamp":"2024-11-14T04:45:24Z","content_type":"text/html","content_length":"330461","record_id":"<urn:uuid:7a5e5634-c306-4f84-a332-9c163b480478>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00351.warc.gz"} |
Fun Division Worksheets 3rd Grade - Divisonworksheets.com
Fun Division Worksheets 3rd Grade
Fun Division Worksheets 3rd Grade – Your child can to practice and improve their division skills with the help of division worksheets. You can create your own worksheets. There are a variety of
options for worksheets. They are great because they are available for download for free and make the exact layout you desire you want them to be. They’re perfect for second-graders, kindergarteners
and first-graders.
Two people can create enormous numbers
It is essential for children to practice division on worksheets. Sometimes, worksheets will only accommodate two, three, four, or more divisors. The child will not have worry about forgetting to
divide the big number, or making mistakes when using their times tables due to this method. If you want to help your child improve their mathematical skills You can download worksheets on the
internet or print them from your computer.
Multi-digit division worksheets can be a fantastic method for kids to practice and reinforce their knowledge. This is a vital maths skill needed to tackle complex math concepts as well as everyday
calculations. These worksheets help reinforce the concept by offering interactive activities and questions that are based on the divisions of multi-digit numbers.
The task of dividing huge numbers can be quite difficult for students. The worksheets typically employ a common algorithm and step-by-step instructions. Students might not receive the understanding
they need from this. Learning long division can be done by using the bases of ten blocks. Learning the steps should simplify long division for students.
Use a variety of worksheets and practice questions to practice division of large amounts. The worksheets also include the results of fractions in decimals. If you need to divide large amounts of
money, worksheets for centimeters can be found.
Sort the numbers into smaller groups.
The process of dividing a group into smaller groups might be challenging. It may seem good on paper, but the majority of participants of small groups dislike this process. It truly reflects the way
that human bodies develop and it could aid in the Kingdom’s endless development. It encourages others to help the less fortunate, as well as the new leadership team to take over the reins.
It is also useful for brainstorming. It’s possible to make groups of people who have similar characteristics and skills. This could lead to some very creative ideas. Once you’ve created your groups,
present everyone to you. It’s a great exercise that encourages creativity and ingenuity.
It can be used to split huge numbers into smaller pieces. It is useful for when you want to make the same amount of things for various groups. For example, a huge class can be split into five
classes. Add these groups together and you’ll get the initial 30 students.
Keep in mind that there exist two different types of numbers you can choose from when you divide numbers: divisors and the quote. Divide one by five yields the result, while two times two yields the
similar result.
It is an excellent idea to utilize the power of ten to calculate big numbers.
We can split huge numbers into powers of 10, to allow comparison between them. Decimals are an extremely frequent element of shopping. They can be found on receipts as well as price tags, food labels
and even receipts. To display the price per Gallon and the amount gas that was dispensed through a nozzle, petrol pumps use decimals.
There are two ways to split a large number into powers of ten. One is by moving the decimal point to the left and using a multiplier of 10-1. This second method utilizes the associative feature of
powers of 10. It is possible to divide a massive number of numbers into smaller powers of 10 after you understand the associative feature of powers of 10.
The first method relies on the mental process of computation. A pattern can be observed if 2.5 is divided by 10. As a power of ten is increased the decimal points will shift towards the left. This
principle is simple to understand and can be applied to any situation, no matter how difficult.
The other option is mentally splitting large numbers into powers ten. Next, you need to write large numbers in a scientific note. Large numbers must be expressed with positive exponents if written in
scientific notation. To illustrate, if you move the decimal points five spaces to the left, you could turn 450,000 into 4.5. To divide a huge amount into smaller powers 10, you could make use of
exponent 5, or divide it in smaller powers 10 until it becomes 4.5.
Gallery of Fun Division Worksheets 3rd Grade
3rd Grade Division Worksheets Best Coloring Pages For Kids
3rd Grade Division Worksheets Best Coloring Pages For Kids
April FUN Filled Learning Math Division 3rd Grade Math Learning
Leave a Comment | {"url":"https://www.divisonworksheets.com/fun-division-worksheets-3rd-grade/","timestamp":"2024-11-07T06:09:48Z","content_type":"text/html","content_length":"63063","record_id":"<urn:uuid:01bf9b75-c899-46b9-8cc2-db403857ad69>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00610.warc.gz"} |
Math. for ML 2020
Mathematics for machine learning - Fall 2020
(Special Mathematics Lecture)
If you are
Japanese student,
please also register at the
NU-EMI project
for this course.
This course will be done completely online (Zoom).
• We will use a Discord server for communication. Please join this server if you plan to attend this course.
• There is also a NUCT course page for this lecture, but all information will be available here and in the discord server.
Machine learning became a popular and really broad field in recent years. Machine learning algorithms are used in a wide variety of applications, such as email filtering, computer vision, medicine,
language translation, computer games, economic, etc.. The goal of this course is to give a brief introduction into machine learning with a focus on the mathematical tools used.
We will probably cover following topics:
• Overview of machine learning
• (Linear) Regression
• Review Linear Algebra
• Programming & doing mathematics in Python
• Introduction to Probability
• Support vector machines
• k-means clustering
• Neural networks
• Deep learning
• Lectures slides: Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, Lecture 7, Lecture 8, Lecture 9, Lecture 10, Lecture 11, Lecture 12, Lecture 13
• Zoom lectures notes : Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, Lecture 7, Lecture 8, Lecture 9, Lecture 10, Lecture 11, Lecture 12
(Lecture slides with handwriting from the Zoom Lecture)
• Colab Notebook: Lecture 2, Lecture 4, Lecture 9, Lecture 11, Lecture 13
• Zoom lecture video: Lecture 1, Lecture 2, Lecture 3, Lecture 4, Lecture 5, Lecture 6, Lecture 7, Lecture 8, Lecture 9, Lecture 10, Lecture 11, Lecture 12, Lecture 13
• Homework Assignments: Test assignment, Homework 1, Homework 2, Homework 3
Homework AssignmentsWe will use GitHub classrooms and Google Colab for the homework assignments. If you want to do the assignments, please send me an email or write me a private message in discord
with the following information:
FAMILYNAME Firstname, GitHub Account name, Email.
You will then be added to our GitHub classroom.
Course Prerequisites
Basic knowledge in Linear Algebra and Calculus is helpful. We will also do some programming in Python. Programming knowledge are useful but not necessary since a rough introduction to programming in
Python will be part of the course. Motivated 1st-year students can also attend without these prerequisites if they contact the lecturer beforehand.
Due to the programming part of the lecture, students should have (access to) a computer/laptop.
The final grade will be based on active participation during the lectures and on some written and programming tasks. This course is an optional subject which does not count towards the number of
credits required for graduation in any program at Nagoya University.
Studdy sessions
There will be study sessions organized by students of the course. These are each week
Thursday from 18:30
Tuesday from 18:30
. The Zoom meeting information are available in the Discord server.
Lecture schedule:
We will meet
each Wednesday
in Zoom. The Zoom Meeting ID & Password are available on the NUCT page or in the Discord server.
The following gives a tentative overview of the topics we will cover each week. Week 01 (10/05-10/11):
Introduction to the course & Overview of machine learning & Linear Regression I
Week 02 (10/12-10/18): Linear Regression II & Python examples
Week 03 (10/19-10/25): Linear Regression III, Logistic Regression, Maximum Likelihood
Week 04 (10/26-11/01): - G30 Welcome party -
Week 05 (11/02-11/08): Logistic Regression & Maximum Likelihood II
Week 06 (11/09-11/15):
Generative Learning algorithms & Naive Bayes
Week 07 (11/16-11/22): - Break -
Week 08 (11/23-11/29): Naive Bayes II & Support vector machines
Week 09 (11/30-12/06):
Support vector machines
Week 10 (12/07-12/13):
Support vector machines III: Primal & Dual Problem & Kernels
Week 11 (12/14-12/20): Reinforcement Learning: Q-Learning I
Week 12 (12/21-12/27):
Q-Learning II, Unsupervised learning: k-means clusteringWinter Vacation (12/27-01/07)
Week 13 (01/11-01/17): Neural Networks I
Week 14 (01/18-01/24): - Break -
Week 15 (01/25-01/31): Neural Networks II
Week 16 (02/01-02/07): Neural Networks III & TensorFlowReferences
(a more detailed list of references will follow)
Last update: 3rd February 2021. | {"url":"https://www.henrikbachmann.com/mml_2020.html","timestamp":"2024-11-07T13:07:49Z","content_type":"text/html","content_length":"38501","record_id":"<urn:uuid:0b24cca5-1339-48d2-9269-2a46148399b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00892.warc.gz"} |
PySDR: A Guide to SDR and DSP using Python
3. IQ Sampling¶
In this chapter we introduce a concept called IQ sampling, a.k.a. complex sampling or quadrature sampling. We also cover Nyquist sampling, complex numbers, RF carriers, downconversion, and power
spectral density. IQ sampling is the form of sampling that an SDR performs, as well as many digital receivers (and transmitters). It’s a slightly more complex version of regular digital sampling
(pun intended), so we will take it slow and with some practice the concept is sure to click!
Sampling Basics¶
Before jumping into IQ sampling, let’s discuss what sampling actually means. You may have encountered sampling without realizing it by recording audio with a microphone. The microphone is a
transducer that converts sound waves into an electric signal (a voltage level). That electric signal is transformed by an analog-to-digital converter (ADC), producing a digital representation of the
sound wave. To simplify, the microphone captures sound waves that are converted into electricity, and that electricity in turn is converted into numbers. The ADC acts as the bridge between the analog
and digital domains. SDRs are surprisingly similar. Instead of a microphone, however, they utilize an antenna, although they also use ADCs. In both cases, the voltage level is sampled with an ADC.
For SDRs, think radio waves in then numbers out.
Whether we are dealing with audio or radio frequencies, we must sample if we want to capture, process, or save a signal digitally. Sampling might seem straightforward, but there is a lot to it. A
more technical way to think of sampling a signal is grabbing values at moments in time and saving them digitally. Let’s say we have some random function,
We record the value of sample period. The frequency at which we sample, i.e., the number of samples taken per second, is simply sample rate, and its the inverse of the sample period. For example, if
we have a sample rate of 10 Hz, then the sample period is 0.1 seconds; there will be 0.1 seconds between each sample. In practice our sample rates will be on the order of hundreds of kHz to tens of
MHz or even higher. When we sample signals, we need to be mindful of the sample rate, it’s a very important parameter.
For those who prefer to see the math; let
Nyquist Sampling¶
For a given signal, the big question often is how fast must we sample? Let’s examine a signal that is just a sine wave, of frequency f, shown in green below. Let’s say we sample at a rate Fs
(samples shown in blue). If we sample that signal at a rate equal to f (i.e., Fs = f), we will get something that looks like:
The red dashed line in the above image reconstructs a different (incorrect) function that could have lead to the same samples being recorded. It indicates that our sample rate was too low because the
same samples could have come from two different functions, leading to ambiguity. If we want to accurately reconstruct the original signal, we can’t have this ambiguity.
Let’s try sampling a little faster, at Fs = 1.2f:
Once again, there is a different signal that could fit these samples. This ambiguity means that if someone gave us this list of samples, we could not distinguish which signal was the original one
based on our sampling.
How about sampling at Fs = 1.5f:
Still not fast enough! According to a piece of DSP theory we won’t dive into, you have to sample at twice the frequency of the signal in order to remove the ambiguity we are experiencing:
There’s no incorrect signal this time because we sampled fast enough that no signal exists that fits these samples other than the one you see (unless you go higher in frequency, but we will discuss
that later).
In the above example our signal was just a simple sine wave, most actual signals will have many frequency components to them. To accurately sample any given signal, the sample rate must be “at
least twice the frequency of the maximum frequency component”. Here’s a visualization using an example frequency domain plot, note that there will always be a noise floor so the highest frequency
is usually an approximation:
We must identify the highest frequency component, then double it, and make sure we sample at that rate or faster. The minimum rate in which we can sample is known as the Nyquist Rate. In other words,
the Nyquist Rate is the minimum rate at which a (finite bandwidth) signal needs to be sampled to retain all of its information. It is an extremely important piece of theory within DSP and SDR that
serves as a bridge between continuous and discrete signals.
If we don’t sample fast enough we get something called aliasing, which we will learn about later, but we try to avoid it at all costs. What our SDRs do (and most receivers in general) is filter out
everything above Fs/2 right before the sampling is performed. If we attempt to receive a signal with too low a sample rate, that filter will chop off part of the signal. Our SDRs go to great lengths
to provide us with samples free of aliasing and other imperfections.
Quadrature Sampling¶
The term “quadrature” has many meanings, but in the context of DSP and SDR it refers to two waves that are 90 degrees out of phase. Why 90 degrees out of phase? Consider how two waves that are
180 degrees out of phase are essentially the same wave with one multiplied by -1. By being 90 degrees out of phase they become orthogonal, and there’s a lot of cool stuff you can do with orthogonal
functions. For the sake of simplicity, we use sine and cosine as our two sine waves that are 90 degrees out of phase.
Next let’s assign variables to represent the amplitude of the sine and cosine. We will use
We can see this visually by plotting I and Q equal to 1:
We call the cos() the “in phase” component, hence the name I, and the sin() is the 90 degrees out of phase or “quadrature” component, hence Q. Although if you accidentally mix it up and
assign Q to the cos() and I to the sin(), it won’t make a difference for most situations.
IQ sampling is more easily understood by using the transmitter’s point of view, i.e., considering the task of transmitting an RF signal through the air. We want to send a single sine wave at a
certain phase, which can be done by sending the sum of a sin() and cos() with a phase of 0, because of the trig identity:
What happens when we add a sine and cosine? Or rather, what happens when we add two sinusoids that are 90 degrees out of phase? In the video below, there is a slider for adjusting I and another for
adjusting Q. What is plotted are the cosine, sine, and then the sum of the two.
(The code used for this pyqtgraph-based Python app can be found here)
The important takeaways are that when we add the cos() and sin(), we get another pure sine wave with a different phase and amplitude. Also, the phase shifts as we slowly remove or add one of the two
parts. The amplitude also changes. This is all a result of the trig identity:
We only need to generate one sine wave and shift it by 90 degrees to get the Q portion.
Complex Numbers¶
Ultimately, the IQ convention is an alternative way to represent magnitude and phase, which leads us to complex numbers and the ability to represent them on a complex plane. You may have seen complex
numbers before in other classes. Take the complex number 0.7-0.4j as an example:
A complex number is really just two numbers together, a real and an imaginary portion. A complex number also has a magnitude and phase, which makes more sense if you think about it as a vector
instead of a point. Magnitude is the length of the line between the origin and the point (i.e., length of the vector), while phase is the angle between the vector and 0 degrees, which we define as
the positive real axis:
This representation of a sinusoid is known as a “phasor diagram”. It’s simply plotting complex numbers and treating them as vectors. Now what is the magnitude and phase of our example complex
number 0.7-0.4j? For a given complex number where
In Python you can use np.abs(x) and np.angle(x) for the magnitude and phase. The input can be a complex number or an array of complex numbers, and the output will be a real number(s) (of the data
type float).
You may have figured out by now how this vector or phasor diagram relates to IQ convention: I is real and Q is imaginary. From this point on, when we draw the complex plane, we will label it with I
and Q instead of real and imaginary. They are still complex numbers!
Now let’s say we want to transmit our example point 0.7-0.4j. We will be transmitting:
We can use trig identity
Even though we started with a complex number, what we are transmitting is a real signal with a certain magnitude and phase; you can’t actually transmit something imaginary with electromagnetic
waves. We just use imaginary/complex numbers to represent what we are transmitting. We will talk about the
Complex Numbers in FFTs¶
The above complex numbers were assumed to be time domain samples, but you will also run into complex numbers when you take an FFT. When we covered Fourier series and FFTs last chapter, we had not
dived into complex numbers yet. When you take the FFT of a series of samples, it finds the frequency domain representation. We talked about how the FFT figures out which frequencies exist in that set
of samples (the magnitude of the FFT indicates the strength of each frequency). But what the FFT also does is figure out the delay (time shift) needed to apply to each of those frequencies, so that
the set of sinusoids can be added up to reconstruct the time-domain signal. That delay is simply the phase of the FFT. The output of an FFT is an array of complex numbers, and each complex number
gives you the magnitude and phase, and the index of that number gives you the frequency. If you generate sinusoids at those frequencies/magnitudes/phases and sum them together, you’ll get your
original time domain signal (or something very close to it, and that’s where the Nyquist sampling theorem comes into play).
Receiver Side¶
Now let’s take the perspective of a radio receiver that is trying to receive a signal (e.g., an FM radio signal). Using IQ sampling, the diagram now looks like:
What comes in is a real signal received by our antenna, and those are transformed into IQ values. What we do is sample the I and Q branches individually, using two ADCs, and then we combine the pairs
and store them as complex numbers. In other words, at each time step, you will sample one I value and one Q value and combine them in the form
If someone gives you a bunch of IQ samples, it will look like a 1D array/vector of complex numbers. This point, complex or not, is what this entire chapter has been building to, and we finally made
Throughout this textbook you will become very familiar with how IQ samples work, how to receive and transmit them with an SDR, how to process them in Python, and how to save them to a file for later
One last important note: the figure above shows what’s happening inside of the SDR. We don’t actually have to generate a sine wave, shift by 90, multiply or add–the SDR does that for us. We
tell the SDR what frequency we want to sample at, or what frequency we want to transmit our samples at. On the receiver side, the SDR will provide us the IQ samples. For the transmitting side, we
have to provide the SDR the IQ samples. In terms of data type, they will either be complex ints or floats.
Carrier and Downconversion¶
Until this point we have not discussed frequency, but we saw there was an
For reference, radio signals such as FM radio, WiFi, Bluetooth, LTE, GPS, etc., usually use a frequency (i.e., a carrier) between 100 MHz and 6 GHz. These frequencies travel really well through the
air, but they don’t require super long antennas or a ton of power to transmit or receive. Your microwave cooks food with electromagnetic waves at 2.4 GHz. If there is a leak in the door then your
microwave will jam WiFi signals and possibly also burn your skin. Another form of electromagnetic waves is light. Visible light has a frequency of around 500 THz. It’s so high that we don’t use
traditional antennas to transmit light. We use methods like LEDs that are semiconductor devices. They create light when electrons jump in between the atomic orbits of the semiconductor material, and
the color depends on how far they jump. Technically, radio frequency (RF) is defined as the range from roughly 20 kHz to 300 GHz. These are the frequencies at which energy from an oscillating
electric current can radiate off a conductor (an antenna) and travel through space. The 100 MHz to 6 GHz range are the more useful frequencies, at least for most modern applications. Frequencies
above 6 GHz have been used for radar and satellite communications for decades, and are now being used in 5G “mmWave” (24 - 29 GHz) to supplement the lower bands and increase speeds.
When we change our IQ values quickly and transmit our carrier, it’s called “modulating” the carrier (with data or whatever we want). When we change I and Q, we change the phase and amplitude of
the carrier. Another option is to change the frequency of the carrier, i.e., shift it slightly up or down, which is what FM radio does.
As a simple example, let’s say we transmit the IQ sample 1+0j, and then we switch to transmitting 0+1j. We go from sending
It is easy to get confused between the signal we want to transmit (which typically contains many frequency components), and the frequency we transmit it on (our carrier frequency). This will
hopefully get cleared up when we cover baseband vs. bandpass signals.
Now back to sampling for a second. Instead of receiving samples by multiplying what comes off the antenna by a cos() and sin() then recording I and Q, what if we fed the signal from the antenna into
a single ADC, like in the direct sampling architecture we just discussed? Say the carrier frequency is 2.4 GHz, like WiFi or Bluetooth. That means we would have to sample at 4.8 GHz, as we learned.
That’s extremely fast! An ADC that samples that fast costs thousands of dollars. Instead, we “downconvert” the signal so that the signal we want to sample is centered around DC or 0 Hz. This
downconversion happens before we sample. We go from:
to just I and Q.
Let’s visualize downconversion in the frequency domain:
When we are centered around 0 Hz, the maximum frequency is no longer 2.4 GHz but is based on the signal’s characteristics since we removed the carrier. Most signals are around 100 kHz to 40 MHz
wide in bandwidth, so through downconversion we can sample at a much lower rate. Both the B2X0 USRPs and PlutoSDR contain an RF integrated circuit (RFIC) that can sample up to 56 MHz, which is high
enough for most signals we will encounter.
Just to reiterate, the downconversion process is performed by our SDR; as a user of the SDR we don’t have to do anything other than tell it which frequency to tune to. Downconversion (and
upconversion) is done by a component called a mixer, usually represented in diagrams as a multiplication symbol inside a circle. The mixer takes in a signal, outputs the down/up-converted signal, and
has a third port which is used to feed in an oscillator. The frequency of the oscillator determines the frequency shift applied to the signal, and the mixer is essentially just a multiplication
function (recall that multiplying by a sinusoid causes a frequency shift).
Lastly, you may be curious how fast signals travel through the air. Recall from high school physics class that radio waves are just electromagnetic waves at low frequencies (between roughly 3 kHz to
80 GHz). Visible light is also electromagnetic waves, at much higher frequencies (400 THz to 700 THz). All electromagnetic waves travel at the speed of light, which is about 3e8 m/s, at least when
traveling through air or a vacuum. Now because they always travel at the same speed, the distance the wave travels in one full oscillation (one full cycle of the sine wave) depends on its frequency.
We call this distance the wavelength, denoted as
Receiver Architectures¶
The figure in the “Receiver Side” section demonstrates how the input signal is downconverted and split into I and Q. This arrangement is called “direct conversion”, or “zero IF”, because
the RF frequencies are being directly converted down to baseband. Another option is to not downconvert at all and sample so fast to capture everything from 0 Hz to 1/2 the sample rate. This strategy
is called “direct sampling” or “direct RF”, and it requires an extremely expensive ADC chip. A third architecture, one that is popular because it’s how old radios worked, is known as
“superheterodyne”. It involves downconversion but not all the way to 0 Hz. It places the signal of interest at an intermediate frequency, known as “IF”. A low-noise amplifier (LNA) is simply
an amplifier designed for extremely low power signals at the input. Here are the block diagrams of these three architectures, note that variations and hybrids of these architectures also exist:
Baseband and Bandpass Signals¶
We refer to a signal centered around 0 Hz as being at “baseband”. Conversely, “bandpass” refers to when a signal exists at some RF frequency nowhere near 0 Hz, that has been shifted up for
the purpose of wireless transmission. There is no notion of a “baseband transmission”, because you can’t transmit something imaginary. A signal at baseband may be perfectly centered at 0 Hz
like the right-hand portion of the figure in the previous section. It might be near 0 Hz, like the two signals shown below. Those two signals are still considered baseband. Also shown is an example
bandpass signal, centered at a very high frequency denoted
You may also hear the term intermediate frequency (abbreviated as IF); for now, think of IF as an intermediate conversion step within a radio between baseband and bandpass/RF.
We tend to create, record, or analyze signals at baseband because we can work at a lower sample rate (for reasons discussed in the previous subsection). It is important to note that baseband signals
are often complex signals, while signals at bandpass (e.g., signals we actually transmit over RF) are real. Think about it: because the signal fed through an antenna must be real, you cannot directly
transmit a complex/imaginary signal. You will know a signal is definitely a complex signal if the negative frequency and positive frequency portions of the signal are not exactly the same. Complex
numbers are how we represent negative frequencies after all. In reality there are no negative frequencies; it’s just the portion of the signal below the carrier frequency.
In the earlier section where we played around with the complex point 0.7 - 0.4j, that was essentially one sample in a baseband signal. Most of the time you see complex samples (IQ samples), you are
at baseband. Signals are rarely represented or stored digitally at RF, because of the amount of data it would take, and the fact we are usually only interested in a small portion of the RF spectrum.
DC Spike and Offset Tuning¶
Once you start working with SDRs, you will often find a large spike in the center of the FFT. It is called a “DC offset” or “DC spike” or sometimes “LO leakage”, where LO stands for local
Here’s an example of a DC spike:
Because the SDR tunes to a center frequency, the 0 Hz portion of the FFT corresponds to the center frequency. That being said, a DC spike doesn’t necessarily mean there is energy at the center
frequency. If there is only a DC spike, and the rest of the FFT looks like noise, there is most likely not actually a signal present where it is showing you one.
A DC offset is a common artifact in direct conversion receivers, which is the architecture used for SDRs like the PlutoSDR, RTL-SDR, LimeSDR, and many Ettus USRPs. In direct conversion receivers, an
oscillator, the LO, downconverts the signal from its actual frequency to baseband. As a result, leakage from this LO appears in the center of the observed bandwidth. LO leakage is additional energy
created through the combination of frequencies. Removing this extra noise is difficult because it is close to the desired output signal. Many RF integrated circuits (RFICs) have built-in automatic DC
offset removal, but it typically requires a signal to be present to work. That is why the DC spike will be very apparent when no signals are present.
A quick way to handle the DC offset is to oversample the signal and off-tune it. As an example, let’s say we want to view 5 MHz of spectrum at 100 MHz. Instead what we can do is sample at 20 MHz at
a center frequency of 95 MHz.
The blue box above shows what is actually sampled by the SDR, and the green box displays the portion of the spectrum we want. Our LO will be set to 95 MHz because that is the frequency to which we
ask the SDR to tune. Since 95 MHz is outside of the green box, we won’t get any DC spike.
There is one problem: if we want our signal to be centered at 100 MHz and only contain 5 MHz, we will have to perform a frequency shift, filter, and downsample the signal ourselves (something we will
learn how to do later). Fortunately, this process of offtuning, a.k.a applying an LO offset, is often built into the SDRs, where they will automatically perform offtuning and then shift the frequency
to your desired center frequency. We benefit when the SDR can do it internally: we don’t have to send a higher sample rate over our USB or Ethernet connection, which bottleneck how high a sample
rate we can use.
This subsection regarding DC offsets is a good example of where this textbook differs from others. Your average DSP textbook will discuss sampling, but it tends not to include implementation hurdles
such as DC offsets despite their prevalence in practice.
Sampling Using our SDR¶
For SDR-specific information about performing sampling, see one of the following chapters:
Calculating Average Power¶
In RF DSP, we often like to calculate the power of a signal, such as detecting the presence of the signal before attempting to do further DSP. For a discrete complex signal, i.e., one we have
sampled, we can find the average power by taking the magnitude of each sample, squaring it, and then finding the mean:
Remember that the absolute value of a complex number is just the magnitude, i.e.,
In Python, calculating the average power will look like:
avg_pwr = np.mean(np.abs(x)**2)
Here is a very useful trick for calculating the average power of a sampled signal. If your signal has roughly zero mean–which is usually the case in SDR (we will see why later)–then the signal
power can be found by taking the variance of the samples. In these circumstances, you can calculate the power this way in Python:
avg_pwr = np.var(x) # (signal should have roughly zero mean)
The reason why the variance of the samples calculates average power is quite simple: the equation for variance is
Calculating Power Spectral Density¶
Last chapter we learned that we can convert a signal to the frequency domain using an FFT, and the result is called the Power Spectral Density (PSD). The PSD is an extremely useful tool for
visualizing signals in the frequency domain, and many DSP algorithms are performed in the frequency domain. But to actually find the PSD of a batch of samples and plot it, we do more than just take
an FFT. We must do the following six operations to calculate PSD:
1. Take the FFT of our samples. If we have x samples, the FFT size will be the length of x by default. Let’s use the first 1024 samples as an example to create a 1024-size FFT. The output will be
1024 complex floats.
2. Take the magnitude of the FFT output, which provides us 1024 real floats.
3. Square the resulting magnitude to get power.
4. Normalize: divide by the FFT size (
5. Convert to dB using
6. Perform an FFT shift, covered in the previous chapter, to move “0 Hz” in the center and negative frequencies to the left of center.
Those six steps in Python are:
Fs = 1e6 # lets say we sampled at 1 MHz
# assume x contains your array of IQ samples
N = 1024
x = x[0:N] # we will only take the FFT of the first 1024 samples, see text below
PSD = np.abs(np.fft.fft(x))**2 / (N*Fs)
PSD_log = 10.0*np.log10(PSD)
PSD_shifted = np.fft.fftshift(PSD_log)
Optionally we can apply a window, like we learned about in the Frequency Domain chapter. Windowing would occur right before the line of code with fft().
# add the following line after doing x = x[0:1024]
x = x * np.hamming(len(x)) # apply a Hamming window
To plot this PSD we need to know the values of the x-axis. As we learned last chapter, when we sample a signal, we only “see” the spectrum between -Fs/2 and Fs/2 where Fs is our sample rate. The
resolution we achieve in the frequency domain depends on the size of our FFT, which by default is equal to the number of samples on which we perform the FFT operation. In this case our x-axis is 1024
equally spaced points between -0.5 MHz and 0.5 MHz. If we had tuned our SDR to 2.4 GHz, our observation window would be between 2.3995 GHz and 2.4005 GHz. In Python, shifting the observation window
will look like:
center_freq = 2.4e9 # frequency we tuned our SDR to
f = np.arange(Fs/-2.0, Fs/2.0, Fs/N) # start, stop, step. centered around 0 Hz
f += center_freq # now add center frequency
plt.plot(f, PSD_shifted)
We should be left with a beautiful PSD!
If you want to find the PSD of millions of samples, don’t do a million-point FFT because it will probably take forever. It will give you an output of a million “frequency bins”, after all,
which is too much to show in a plot. Instead I suggest doing multiple smaller PSDs and averaging them together or displaying them using a spectrogram plot. Alternatively, if you know your signal is
not changing fast, it’s adequate to use a few thousand samples and find the PSD of those; within that time-frame of a few thousand samples you will likely capture enough of the signal to get a nice
Here is a full code example that includes generating a signal (complex exponential at 50 Hz) and noise. Note that N, the number of samples to simulate, becomes the FFT length because we take the FFT
of the entire simulated signal.
import numpy as np
import matplotlib.pyplot as plt
Fs = 300 # sample rate
Ts = 1/Fs # sample period
N = 2048 # number of samples to simulate
t = Ts*np.arange(N)
x = np.exp(1j*2*np.pi*50*t) # simulates sinusoid at 50 Hz
n = (np.random.randn(N) + 1j*np.random.randn(N))/np.sqrt(2) # complex noise with unity power
noise_power = 2
r = x + n * np.sqrt(noise_power)
PSD = np.abs(np.fft.fft(r))**2 / (N*Fs)
PSD_log = 10.0*np.log10(PSD)
PSD_shifted = np.fft.fftshift(PSD_log)
f = np.arange(Fs/-2.0, Fs/2.0, Fs/N) # start, stop, step
plt.plot(f, PSD_shifted)
plt.xlabel("Frequency [Hz]")
plt.ylabel("Magnitude [dB]") | {"url":"https://pysdr.org/content/sampling.html","timestamp":"2024-11-01T22:18:39Z","content_type":"text/html","content_length":"473173","record_id":"<urn:uuid:b0165b15-2e63-4f92-ae6e-e963fe905a5a>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00775.warc.gz"} |
Lesson 4
Money and Debts
Let's apply what we know about signed numbers to money.
Problem 1
The table shows five transactions and the resulting account balance in a bank account, except some numbers are missing. Fill in the missing numbers.
│ │transaction amount │account balance │
│transaction 1│200 │200 │
│transaction 2│-147 │53 │
│transaction 3│90 │ │
│transaction 4│-229 │ │
│transaction 5│ │0 │
Problem 2
1. Clare has $54 in her bank account. A store credits her account with a $10 refund. How much does she now have in the bank?
2. Mai's bank account is overdrawn by $60, which means her balance is -$60. She gets $85 for her birthday and deposits it into her account. How much does she now have in the bank?
3. Tyler is overdrawn at the bank by $180. He gets $70 for his birthday and deposits it. What is his account balance now?
4. Andre has $37 in his bank account and writes a check for $87. After the check has been cashed, what will the bank balance show?
Problem 3
Last week, it rained \(g\) inches. This week, the amount of rain decreased by 5%. Which expressions represent the amount of rain that fell this week? Select all that apply.
(From Unit 4, Lesson 8.)
Problem 4
Decide whether or not each equation represents a proportional relationship.
1. Volume measured in cups (\(c\)) vs. the same volume measured in ounces (\(z\)): \(c = \frac18 z\)
2. Area of a square (\(A\)) vs. the side length of the square (\(s\)): \(A = s^2\)
3. Perimeter of an equilateral triangle (\(P\)) vs. the side length of the triangle (\(s\)): \(3s = P\)
4. Length (\(L\)) vs. width (\(w\)) for a rectangle whose area is 60 square units: \(L = \frac{60}{w}\)
(From Unit 2, Lesson 8.)
Problem 5
1. \(5\frac34 + (\text{-}\frac {1}{4})\)
2. \(\text {-}\frac {2}{3} + \frac16\)
3. \(\text{-}\frac {8}{5} + (\text{-}\frac {3}{4})\)
(From Unit 5, Lesson 3.)
Problem 6
In each diagram, \(x\) represents a different value.
For each diagram,
1. What is something that is definitely true about the value of \(x\)?
2. What is something that could be true about the value of \(x\)?
(From Unit 5, Lesson 1.) | {"url":"https://curriculum.illustrativemathematics.org/MS/students/2/5/4/practice.html","timestamp":"2024-11-03T07:43:59Z","content_type":"text/html","content_length":"73436","record_id":"<urn:uuid:8d6ab2b1-6352-41d9-b0aa-d4d1bce2c35f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00108.warc.gz"} |
SOLID command
Draws solid-filled polygons.
Alias: PLANE, SO
Draws 3 and 4-sided solid-filled regular and irregular 2D polygons.
Note: Does not draw 3D solids.
There are four methods to draw solid-filled polygons:
• By picking points.
Note: After picking the first point, BricsCAD draws a ghost line to the next point. After specifying the second corner of the polygon, its shape and size can be previewed. If a fourth point is
specified, an adjacent polygon is created (the third and fourth points of the first polygon become the first and second points of the second polygon).
• Rectangular.
• Square.
• Triangle.
Options within the command
Draws one or more connected solid-filled rectangles.
Draws one or more connected solid-filled squares.
Draws one or more connected equilateral solid-filled triangles.
Note: 2D solids can be edited directly through grips. | {"url":"https://help.bricsys.com/en-us/document/command-reference/s/solid-command?version=V24","timestamp":"2024-11-01T20:39:20Z","content_type":"text/html","content_length":"66228","record_id":"<urn:uuid:d29ded8e-67c9-4959-a796-54502455d028>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00734.warc.gz"} |
Show that if \(X\) is a finite-dimensional Banach space, then every linear functional \(f\) on \(X\) is continuous on \(X\).
Short Answer
Expert verified
Every linear functional on a finite-dimensional Banach space is continuous because linear functionals on \( \text{\bf R}^n \) are continuous and every finite-dimensional Banach space is isomorphic to
\( \text{\bf R}^n \).
Step by step solution
- Understand the Given Problem
We need to show that any linear functional on a finite-dimensional Banach space is continuous. Let's begin by understanding the terms: A Banach space is a complete normed vector space, and a linear
functional is a linear map from the space to the real or complex numbers.
- Finite-dimensional Spaces are Isomorphic to \(\text{\bf R}^n\)
Recall that any finite-dimensional Banach space is isomorphic to \(\text{\bf R}^n\) (or \(\text{\bf C}^n\) if working with complex numbers). An isomorphism here means there exists a linear bijection
between the Banach space and \(\text{\bf R}^n\).
- Linear Maps on \( \text{\bf R}^n \) are Continuous
In finite-dimensional spaces such as \( \text{\bf R}^n \), any linear map is continuous. This is a known result from linear algebra and is based on the fact that all norms on finite-dimensional
vector spaces are equivalent.
- Use the Isomorphism to Finalize the Proof
Given the isomorphism between our Banach space \(X\) and \( \text{\bf R}^n \), any linear functional on \(X\) can be translated into a linear functional on \(\text{\bf R}^n\). Since linear
functionals on \( \text{\bf R}^n \) are continuous, so are the linear functionals on \(X\) due to the isomorphism.
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
linear functional
A linear functional is a very special kind of function in mathematics. It is a linear map that takes a vector from a vector space and returns a real or complex number. Importantly, it preserves the
operations of addition and scalar multiplication. This means that for a linear functional \(f \) and any two vectors \(x, y \) in the vector space, and any scalar \(a, b \) from the respective field:
\ f(ax + by) = af(x) + bf(y) \. This property makes linear functionals particularly useful in various fields including functional analysis, quantum mechanics, and optimization.
In simpler terms, a linear functional stretches or scales vectors and sums their results in a linear way, turning them into simple numbers. Understanding this will make further steps, such as
studying continuity and isomorphism, clearer and more logical.
Continuity, in the context of functions, means that small changes in the input lead to small changes in the output. For linear functionals on finite-dimensional Banach spaces, continuity is
To see why, let's start by recalling that a Banach space is a vector space with a norm, which is complete (meaning it contains all its limit points). Now, in any real-world application, we love
Banach spaces as they allow us to handle infinite sequences nicely under the operation of taking limits.
We can show that a linear functional \(f \) is continuous by noting that in a finite-dimensional Banach space, any linear map is continuous. This comes from the equivalence of all norms in
finite-dimensional spaces. In other words, no matter which norm we use, distances and sizes of vectors remain proportional. Therefore, any small change in the vector results in a proportionally small
change in the output number given by the linear functional, ensuring continuity.
Isomorphism is a concept that gives a really powerful idea: two mathematical structures can be considered the same if there is a way to map one structure onto the other perfectly, respecting their
When we say a finite-dimensional Banach space is isomorphic to \( \text{\bf R}^n \), we mean there's a bijective (one-to-one and onto) linear map that preserves vector addition and scalar
multiplication. Essentially, it means these spaces are structurally identical for practical purposes.
This is a critical concept because it tells us that what happens in the relatively simple and intuitive \( \text{\bf R}^n \) also happens in any finite-dimensional Banach space. Thus, properties like
continuity of linear maps that are well-known in \( \text{\bf R}^n \) transfer directly. Therefore, if every linear functional is continuous on \( \text{\bf R}^n \), they are also continuous on any
finite-dimensional Banach space through the isomorphism. Understanding isomorphisms helps in transferring and applying results from simpler spaces to more complex ones seamlessly. | {"url":"https://www.vaia.com/en-us/textbooks/math/functional-analysis-and-infinite-dimensional-geometry-0-edition/chapter-2/problem-2-show-that-if-x-is-a-finite-dimensional-banach-spac/","timestamp":"2024-11-03T15:26:09Z","content_type":"text/html","content_length":"252715","record_id":"<urn:uuid:aaa61707-8726-4653-b483-2abede5b57be>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00407.warc.gz"} |
RESCALE Function for Excel
Returns an array with values rescaled to [lower,upper]
=L_RESCALE(array, lower, upper)
Argument Description Example
array A 1D or 2D range of values B1:D5
lower The lower bound of the scaled array -5
upper The upper bound of the scaled array 5
In the template file, navigate to the Sequences worksheet to see the L_RESCALE function in action.
Rescaling an array using a process called "min-max normalization" is a common task when you want to compare data sets that are at very different scales. Normalization typically involves scaling to
[0,1], but there are situations where you may want to rescale to different bounds. Examples may include:
• Sensitivity Analysis: Plotting the effect of multiple variables on a single output in a single chart by scaling all the inputs to a range of [-1,1].
• Finance: Rescaling stock prices from different companies to compare price movement and volatility.
• Data Visualization: Scaling data to the same [lower,upper] bounds may aid in comparing plots such as heat maps.
• Image Processing: Scaling pixel intensities to [0,1] to help in image enhancement techniques and training learning models.
• Signal Processing: Compare different signals on a common scale.
• Grade Curve: A quick-and-dirty method for curving grades on an exam ( eg: change values from a range of [30,85] to a range of [70,95] )
• Machine Learning & Optimization: Algorithms often perform better when scaled to a standard range, especially those relying on distance calculations (gradient descent, k-nearest neighbors, etc.)
• User Interface Controls: For slider bars, you may want to convert an input between [0,100] to some other range such as [1%,10%].
Rescaled Value = Lower + (Upper - Lower) * (Value - MIN(array)) / (MAX(array) - MIN(array))
When Lower = 0 and Upper = 1, this simplies to:
Rescaled Value = (Value - MIN(array)) / (MAX(array) - MIN(array))
This calculation is performed for each value of the array.
Lambda Function Code
This code for using L_RESCALE in Excel is provided under the License as part of the LAMBDA Library, but to use just this function, you may copy the following code directly into your spreadsheet.
Code to Create Function via the Name Manager
Name: L_RESCALE
Comment: Returns an array with values rescaled to [lower,upper]
Refers To:
Code for AFE Workbook Module (Excel Labs Add-in)
* Returns an array with values rescaled to [lower,upper]
* L_RESCALE({1,2,3},0,1) = {0, 0.5, 1}
L_RESCALE = LAMBDA(array,lower,upper,
Named Function for Google Sheets
Name: L_RESCALE
Description: Returns an array with values rescaled to [lower,upper]
Arguments: array, lower, upper (see above for descriptions and example values)
min, MIN(array), max, MAX(array),
These L_RESCALE functions are not compatible between Excel and Google Sheets. The GS version requires ARRAYFORMULA to return multiple results.
L_RESCALE Examples
Curved Test Scores
In this example, a teacher has a set of 5 exam scores that range from 45 to 88. The teacher decides to grade this exam on a curve, and wants to scale the values to [70,95].
A1:A5 = {74; 88; 67; 45; 81}
B1 = L_RESCALE(A1:A5, 70, 95)
Result: {86.86; 95; 82.79; 70; 90.93}
The lowest original score of 45 became 70, and the highest original score of 88 became 95.
Test: Copy and Paste this LET function into a cell
array, {74; 88; 67; 45; 81},
L_RESCALE(array, 70, 95)
See Also
: This article is meant for educational purposes only. See the
regarding the LAMBDA code, and the site
Terms of Use
for the documentation. | {"url":"https://www.vertex42.com/lambda/rescale.html","timestamp":"2024-11-13T06:18:11Z","content_type":"text/html","content_length":"69776","record_id":"<urn:uuid:7b0fd242-4967-4a08-a351-68d559649227>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00583.warc.gz"} |
WIN+SHIFT+L/R ARROW Doesn't work, any alternate?
I run multiple monitors with one of those monitors being an HDTV in another room for media. One of the keyboard shortcuts I have loved since Windows 7 is the WIN+SHIFT+L/R ARROW being able to
switch my current window between monitors. This keyboard shortcut does not work in my version of Windows 10 (Build 9841). Anyone else run into this problem or know of a possible alternate in the
PS: This is my first post... | {"url":"https://www.windowsphoneinfo.com/threads/win-shift-l-r-arrow-doesnt-work-any-alternate.138/","timestamp":"2024-11-09T01:24:10Z","content_type":"text/html","content_length":"57130","record_id":"<urn:uuid:41e0cc8d-a7a3-4c22-bc32-10a9ec1e47a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00574.warc.gz"} |
Reinforced Concrete
– A reinforced concrete girder is subjected to torsional moment from the loads on the cantilever
frame. The following factored forces are computed from
this beam:
Factored moment, Mu = 290kN-m
Factored shear, Vu = 220 kN
Factored torque, Tu = 180 kN-m
The girder has a width of 400 mm and an overall depth of 500 mm. Concrete cover is 40 mm. The centroid of longitudinal bars of the girder are placed 65 mm from the extreme concrete fibers. Concrete
strength fc’ = 20.7 MPa and steel yield strength for longitudinal bars is fy =415 MPa. Use 12 mm U-stirrups with fyt = 275 MPa. Allowable shear stress in concrete is 0.76 MPa. Use ρb = 0.021.
Pls help me with this guys :)
Ano ang gagawin dito? Wala naman tanong 'to? | {"url":"https://mathalino.com/comment/9689","timestamp":"2024-11-08T15:22:45Z","content_type":"text/html","content_length":"50751","record_id":"<urn:uuid:ab1a1c10-be17-4861-968f-ff579290eaea>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00083.warc.gz"} |
National Math Panel Formed
The New York Times is reporting that President Bush has chosen Larry Faulkner, a chemist and a former President of the University of Texas at Austin to head the National Math Panel:
The former president, Larry R. Faulkner, who led the university from 1998 until early this year, will be chairman of the National Math Panel, which President Bush created by executive order in
The panel is modeled on the National Reading Panel, which has been highly influential in promoting phonics and a back-to-basics approach to reading in classrooms around the nation. Though that
panel has been criticized by English teachers and other educators, its report has become the guide by which $5 billion in federal grants to promote reading proficiency are being awarded.
The new panel reflects a growing concern by the Bush administration that the United States risks losing its competitive edge as other nations outpace its performance in math and science. Citing
figures from a report by the National Academies in his State of the Union address in January, President Bush unveiled an American Competitiveness Initiative to pump hundreds of millions of
dollars into research in the physical sciences, and some $250 million into improving math instruction in elementary and secondary schools.
The article goes on to mention some difficult issues in mathematics education:
The conflict over how to teach reading -- whether by teaching children to recognize words in the context of stories or through more explicit instruction in letters and sounds -- has its parallels
in the fight over how to teach math, and the conflicts share many of the same political and philosophical disputes.
In traditional math, children learn multiplication tables and specific techniques for calculating 25 x 25, for example. In so-called constructivist math, the process by which students explore the
question can be more important than getting the right answer, and the early use of calculators is welcomed.
I'm much closer to the traditionalists on this one.
I haven't always felt this way, but the specatcle of college students having to think about 8x7, or being unable to add 1/2 and 1/3 in their head, is pretty persuasive evidence that the
constructivists have the wrong emphasis. To have any hope of being successful in higher levels of mathematics, it is essential first to be comfortable with arithmetic. Multiplication tables have to
be automatic, as does an ability to find common denominators for fractions with small denominators. Likewise for basic skills such as converting between fractions, decimals and percentages.
The way you develop that comfort is by doing it. Over and over again. Lot's of drill. It's boring and tedious and stressful for many kids. Sorry, but it's the only way. And, yes, getting the right
answer is important.
On top of that, there is a limit to how much abstraction young children can handle. A typical bright eight-year old can learn his multiplication tables without too much difficulty, but abstract set
theory is probably beyond him. That is one of the lessons we learned from the “New Math” fiasco in the 1960's (in which children were introduced to abstract mathematics at a very young age.)
As for calculators, I believe there are probably innovative ways to integrate them into the early math curriculum. For example, one teacher suggested to me that calculators can facilitate things like
“counting by fives.” The idea is that a child types 5+5 and then the equal sign over and over again, watching the sums go 5, 10, 15, 20 and so forth. This can help develop a certain number sense in
small children. That's fine, but it is essential that the calculator not become a crutch for replacing pencil and paper algorithms.
None of this is to say that the reasoning behind the algorithms, or the general approach to solving a given problem is unimportant. Clearly both items are important: Getting the right answer, and
understanding the means by which the answer is obtained. The issue is finding the appropriate balance between these concerns at each grade level. The way I see it, as a student gets older there
should be a gradual shift away from algorithms and towards nore abstract approaches.
In other words, first master your basic skills then worry about proofs and abstract reasoning.
More like this
Via Mark Chu-Carroll I just read this article, from the USA Today, about a mathematician at the University of Pennsylvania who believes that fractions have no place in the elementary and middle
school mathematics curriculum: A few years ago, Dennis DeTurck, an award-winning professor of…
Via Mark Chu-Carroll, I just finished reading this article by mathematician Keith Devlin. He writes: Let's start with the underlying fact. Multiplication simply is not repeated addition, and telling
young pupils it is inevitably leads to problems when they subsequently learn that it is not.…
By way of Majikthise, I found this excellent post by Abbas Raza about the problem of mathematical illiteracy. But to step back a bit, this trail of links began with the release of new teaching
guidelines by the National Council of Mathematics Teachers: The report urges teachers to focus on three…
A bunch of people have been sending me links to a USA Today article about a math professor who wants to change math education. Specifically, he wants to stop teaching fractions, and de-emphasize
manual computation like multiplication and long division. Frankly, reading about it, I'm pissed off by…
Interesting stuff. I'm but a humble MSc in pure math, but I agree with just about everything you say.
My recent horrible experience was getting $5 more in change than was warranted because the register was wrong and the cashier couldn't grapple with the ugly reality and recognize that "the machine
was wrong."
If anything, I'd be a bit more puritanical about calculators. They're great for saving time on operations which you already understand ... but you should still be able to recognize when something
like a simple multiplication result just doesn't feel right. (After all, you still have to push the right keys.) Used as a crutch, they're the devil's plaything.
But I wouldn't want you to think I felt strongly about it.
And I'll end with an illustrious name in "new math" - Tom Lehrer. ("Don't worry. Base 8 is just like base 10 ... if you're missing two fingers.")
I read a lot of these ScienceBlogs and I sometimes ask naive questions as a non-scientist. But in this case I feel that I must respond from the position of (semi)expert. I too have an MS in Math and
before that a BA in Math Ed. I taught Math for 15 years in everything from 5th grade through Calculus. I supervised the elementary school curriciulum and teachers for a while. And then I spent 25
years in Computer Programming/Analysis/Management. My experience revealed that very few people - including Math teachers, college graduate programmers, people in business, have a real understanding
of Arithmetic. As I see it the problem is -still - too early an introduction to using algorithms to get the right answer. The important concept that "New Math" introduced, which became perverted, was
that concrete experience and understanding of how numbers work must precede introduction of any algorithm. Students are not taught the most fundamental concept - the difference between a number and a
numeral. They need to understand how the numbers work before they are taught the shortcuts that the symbols allow. They are taught to memorize (indoctrination - just like in church) rather than to
understand WHY 7 X 9 = 9 X 7, WHY the long division algorithem works, etc. The same thing is true at every level. Once Arithmetic has been learned and understood it is the concrete level that must be
used to introduce Algebra, and so on. The problem has never been with that theory, it is with the fact that the people teaching arithmetic never understood those concepts themselves.
This got me curious because i'm taking an auxiliary math course at university and i'm scheduled to take many other math courses during the next few years and so far, we practiced a lot of thing but
the logic behind the rules failed to sink in (ok, i may not explain it well but Karl is), is there a (or a few) good book(s) explaining the logic and rules you can recommend ?
Thanks in advance
I'm a computer science major (junior in college) so forgive my analogy here but I think of it like a computer program. There are two ways to go, either you can have a lookup table (rote memorization)
or you can have a function (theory/algorithm). They each provide advantages and disadvantages and based on the circumstances, either one could prove more useful. When I was in middle school and high
school I definitely had the multiplication table stored in my brain as a lookup table. I just had to think of two numbers and the answer was there. It was very VERY fast but it used lots of memory.
Now, as I've moved on to more advance subjects and space in my brain has become more and more limited, I'm forced to store the multiplication table as an algorithm. When I need to know 8x7, my brain
doesn't always simply throw out the answer, sometimes I have to run the numbers in my head. Ive traded a small processing delay for increased capacity. I think at some point, depending on the
frequency of use, thats a given with most people and math. In the average persons life you just dont do that much math that requires split second response time.
I am probably betraying my bias here, but is there a good argument for not having more mathematicians on the panel? Of the sixteen members listed here, only three are mathematicians (as far as I can
tell from their instutitional affiliations). Of those three, one is from Harvard and another from UC-Berkley.
Frankly, I think they would have been better served replacing one (or two) of the 'education' or psychology specialists with mathematics professors from mid-sized state universities. Those folks are
more likely to see the so-called fruits of the elementary/high school labors in teaching mathematics and are in a better position to point out just how little the average student actually does know.
Even students who take calculus as their first math course in college are often woefully underprepared in algebra.
With respect to Karl's point about algorithm's being introduced too early, I'm not sure that I agree. Rather, I think the problem is not reinforcing those algorithms with two things:
1) tying the results of an algorithm back to the original problem or context so that students can see that the algorithm is not the answer. I think students can develop a concrete understanding of
arithmetic (and algebra) at the same time that they are learning to use algorithms; the two are complementary, not contradictory.
2) practicing those algorithms over and over and over and over. The denigration of practice as "drill and kill" has allowed students to get by with very weak skills. I have seen more and more
students writing the equivalent of "sqrt(a+b) = sqrt(a) + sqrt(b)" and I think such mistakes are due in large part to a lack of practice. Students should repeat doing the right thing so many times
that it becomes an automatic response.
When employing most algorithms in arithmetic and algebra, thinking should be done before the algorithm (what should be done?) and after the algorithm (what do I do with the result?), not during the
I agree that rote learning is probably necessary for the absolute basics, but I think a lot of the problem people have graduating from arithmetic to higher maths is that conceptual understanding
becomes much more important but it often isn't taught that way. I was fortunate enough to have an excellent maths education (I was raised in Britain if you're wondering about the spelling), and I was
always perplexed as to why people from other schools had such problems with algebra - it seems it's not just a British phenomenon if Richard Cohen's infamous column is anything to go by. Algebra has
always seemed perfectly simple to me, while other relatively simple mathematical concepts were much harder for me to grasp. When I talked to people who admitted to such difficulty, it turned out they
were taught algebra without any mention of variables or why it works. It was just a set of seemingly arbitrary rules for manipulating equations. That can't be the right way to teach it.
Alain wrote: "is there a (or a few) good book(s) explaining the logic and rules you can recommend ?"
See if you can find a 3rd, 4th, 5th, 6th grade book that talks about commutative associative, transitive. Commutative: 3+4 = 4+3. Addition and multiplication are commutative, subtraction and division
are not.
Associative: 3+(4+5) = (3+4)+5. Again, add & mult are, subt & div are not.
Transitive: 3X(4+5) = 3X4 + 3X5.
Knowing these principles, and the place value concept of the decimal numeration system makes mental arithmetic and therefore all following math much more understandable.
DFX wrote: "In the average persons life you just don?t do that much math that requires split second response time."
It's not a question of split-second. It is a question of having a basic understanding. Two examples: I once talked to a woman about her job function. She said that at a certain point in the process
she moved the decimal over two places. I said - Oh, you're dividing by 100. She said, no, I just move the decimal point. No understanding.
Another time, ordering custom sized Venetian blinds, I asked how precise the width measurement needs to be. The reply: to the nearest fraction. No understanding.
Kipli wrote "sqrt(a+b) = sqrt(a) + sqrt(b)" and I think such mistakes are due in large part to a lack of practice"
I disagree. I think that such mistakes are due entirely to a lack of understanding of the concept. I think that this epitomizes the problem - teaching algorithms too soon.
Ginger yellow wrote: "they were taught algebra without any mention of variables or why it works. It was just a set of seemingly arbitrary rules for manipulating equations. That can't be the right way
to teach it."
I agree 100%.
I think this a clear opportunity for the Bush administration to show how their particular brand of respect for technology and science can affect this situation.
First, get Michael Crichton into the White House to explain about the liberal numbers conspiracy.
Second, we need to change away from our current ARABIC number system. When you use ARABIC numbers, the terrorists win. We're an empire, we should be using Roman numerals.
Just imagine how the elimination of negative numbers and 0 could improve our current budget deficit situation.
Long ago, in the fifth grade, we learned "invert and multiply" for dividing fractions. I think the pineapple upside-down cake that was served had something to do with our success.
In our modern world, the cashier at McGrease King takes your order by pressing buttons bearing pictures of the food item, hits the big green button, and the machine spits out your change. But taking
this example as a model of how most people don't need arthmetical abillities is very misleading--I see frequent demonstrations of people geeting frustrated because they don't know how to figure how
much fertilizer to apply, or how to measure 100mm with a ruler graduated in 16ths of an inch, &c.
One cannot be blamed for being skeptical about Bush's support of math & science. The president that called for teaching intelligent design creationism in schools may be just as likely to think there
is some Bible-based alternative to the Pythagorean theorem. Jim's right--have you ever seen a person with -10 fingers?
Thanks you very much Karl for mentioning these words, I found a useful ressource (the math forum http://mathforum.org/ at Drexel University) by googling on them, they also did a number of books http:
//mathforum.org/pubs/dr.mathbooks.html which look very interresting.
I was trained in the traditionalist mode, like most here, although I was certainly hit with "New Math" along the way. My sons' elementary school uses a more constructivist approach (the
"Investigations" system.) The emphasis is on the kids developing solutions rather than learning algorithms. Calculators aren't used. I wouldn't say that there is too much abstraction in this method,
so perhaps it isn't the same as other constructivist systems that you've seen.
My friend teaches in the middle school that is fed by this elementary and she has noticed both good and bad things about the kids taught through "Investigations" when compared to kids taught more
traditionally. On the good side, the constructivist kids are much better at problem solving -- they can approach a new problem with confidence, while the other kids have trouble applying their
algorithms in new situations. On the other hand, the non-traditional kids aren't as facile with calculations as their peers.
That last issue is being addressed by some "bridging" work before the kids leave elementary school. They are being taught the traditional algorithms at that point.
While I'm not a complete convert, I think that algorithms should be balanced by much more of this kind of constructivist teaching -- there is no dichotomy here. Just like language arts, where you
need a combination of whole language and phonics. Not every student learns well in the same mode and teaching, especially in the elementary grades, needs to accomodate that. | {"url":"https://www.scienceblogs.com/evolutionblog/2006/05/15/national-math-panel-formed","timestamp":"2024-11-12T07:29:42Z","content_type":"text/html","content_length":"66443","record_id":"<urn:uuid:911cc4b4-ebc3-458a-9ae9-05b4f76e06cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00620.warc.gz"} |
PPT - Model Checking Lecture 1 PowerPoint Presentation, free download - ID:181865
1. Model Checking Lecture 1
2. Outline • 1 Specifications: logic vs. automata, linear vs. branching, safety vs. liveness • 2 Graph algorithms for model checking • Symbolic algorithms for model checking • Pushdown systems
3. Model checking, narrowly interpreted: Decision procedures for checking if a given Kripke structure is a model for a given formula of a modal logic.
4. Why is this of interest to us? Because the dynamics of a discrete system can be captured by a Kripke structure. Because some dynamic properties of a discrete system can be stated in modal logics.
Model checking = System verification
5. Model checking, generously interpreted: Algorithms, rather than proof calculi, for system verification which operate on a system model (semantics), rather than a system description (syntax).
6. There are many different model-checking problems: for different (classes of) system models for different (classes of) system properties
7. A specific model-checking problem is defined by I|=S “implementation” (system model) “specification” (system property) “satisfies”, “implements”, “refines” (satisfaction relation)
8. A specific model-checking problem is defined by I|=S more detailed more abstract “implementation” (system model) “specification” (system property) “satisfies”, “implements”, “refines”
(satisfaction relation)
9. Characteristics of system models which favor model checking over other verification techniques: ongoing input/output behavior (not: single input, single result) concurrency (not: single control
flow) control intensive (not: lots of data manipulation)
10. Examples -control logic of hardware designs -communication protocols -device drivers
11. Paradigmatic example: mutual-exclusion protocol || loop out: x1 := 1; last := 1 req: await x2 = 0 or last = 2 in: x1 := 0 end loop. loop out: x2 := 1; last := 2 req: await x1 = 0 or last = 1 in:
x2 := 0 end loop. P2 P1
12. Model-checking problem I|=S system model system property satisfaction relation
13. Model-checking problem I|=S system model system property satisfaction relation
14. Important decisions when choosing a system model -state-based vs. event-based -interleaving vs. true concurrency -synchronous vs. asynchronous interaction -etc.
15. Particular combinations of choices yield CSP Petri nets I/O automata Reactive modules etc.
16. While the choice of system model is important for ease of modeling in a given situation, the only thing that is important for model checking is that the system model can be translated into some
form of state-transition graph.
17. q1 a a,b b q2 q3
18. State-transition graph • Q set of states {q1,q2,q3} • A set of atomic observations {a,b} • Q Q transition relation q1 q2 [ ]: Q 2A observation function [q1] = {a} set of observations
19. Mutual-exclusion protocol || loop out: x1 := 1; last := 1 req: await x2 = 0 or last = 2 in: x1 := 0 end loop. loop out: x2 := 1; last := 2 req: await x1 = 0 or last = 1 in: x2 := 0 end loop. P2
20. oo001 or012 ro101 io101 rr112 pc1: {o,r,i} pc2: {o,r,i} x1: {0,1} x2: {0,1} last: {1,2} ir112 33222 = 72 states
21. The translation from a system description to a state-transition graph usually involves an exponential blow-up !!! e.g., n boolean variables 2n states This is called the “state-explosion
22. Finite state-transition graphs don’t handle: - recursion (need pushdown models) - process creation State-transition graphs are not necessarily finite-state We will talk about some of these issues
in a later lecture.
23. Model-checking problem I|=S system model system property satisfaction relation
24. Three important decisions when choosing system properties: • automata vs. logic • branching vs. linear time • safety vs. liveness
25. Three important decisions when choosing system properties: • automata vs. logic • branching vs. linear time • safety vs. liveness The three decisions are orthogonal, and they lead to
substantially different model-checking problems.
26. Three important decisions when choosing system properties: • automata vs. logic • branching vs. linear time • safety vs. liveness The three decisions are orthogonal, and they lead to
substantially different model-checking problems.
27. Safety vs. liveness Safety: something “bad” will never happen Liveness: something “good” will happen (but we don’t know when)
28. Safety vs. liveness for sequential programs Safety: the program will never produce a wrong result (“partial correctness”) Liveness: the program will produce a result (“termination”)
29. Safety vs. liveness for sequential programs Safety: the program will never produce a wrong result (“partial correctness”) Liveness: the program will produce a result (“termination”)
30. Safety vs. liveness for state-transition graphs Safety:those properties whose violation always has a finite witness (“if something bad happens on an infinite run, then it happens already on some
finite prefix”) Liveness:those properties whose violation never has a finite witness (“no matter what happens along a finite run, something good could still happen later”)
31. q1 a a,b b q2 q3 Run: q1 q3 q1 q3 q1 q2 q2 Trace:a b a b a a,b a,b
32. State-transition graph S = ( Q, A, , [] ) Finite runs: finRuns(S) Q* Infinite runs: infRuns(S) Q Finite traces: finTraces(S) (2A)* Infinite traces: infTraces(S) (2A)
33. Safety: the properties that can be checked on finRuns Liveness: the properties that cannot be checked on finRuns
34. This is much easier. Safety: the properties that can be checked on finRuns Liveness: the properties that cannot be checked on finRuns (they need to be checked on infRuns)
35. Example: Mutual exclusion It cannot happen that both processes are in their critical sections simultaneously.
36. Example: Mutual exclusion It cannot happen that both processes are in their critical sections simultaneously. Safety
37. Example: Bounded overtaking Whenever process P1 wants to enter the critical section, then process P2 gets to enter at most once before process P1 gets to enter.
38. Example: Bounded overtaking Whenever process P1 wants to enter the critical section, then process P2 gets to enter at most once before process P1 gets to enter. Safety
39. Example: Starvation freedom Whenever process P1 wants to enter the critical section, provided process P2 never stays in the critical section forever, P1 gets to enter eventually.
40. Example: Starvation freedom Whenever process P1 wants to enter the critical section, provided process P2 never stays in the critical section forever, P1 gets to enter eventually. Liveness
41. q1 a a,b b q2 q3 infRuns finRuns
42. q1 a a,b b q2 q3 infRuns finRuns * closure *finite branching
43. For state-transition graphs, all properties are safety properties !
44. Example: Starvation freedom Whenever process P1 wants to enter the critical section, provided process P2 never stays in the critical section forever, P1 gets to enter eventually. Liveness
45. q1 a a,b b q2 q3 Fairness constraint: the green transition cannot be ignored forever
46. q1 a a,b b q2 q3 Without fairness: infRuns = q1 (q3 q1)* q2 (q1 q3) With fairness: infRuns = q1 (q3 q1)* q2
47. Two important types of fairness 1 Weak (Buchi) fairness: a specified set of transitions cannot be enabled forever without being taken 2 Strong (Streett) fairness: a specified set of transitions
cannot be enabled infinitely often without being taken
48. q1 a a,b b q2 q3 Strong fairness
49. a q1 a,b q2 Weak fairness
50. Fair state-transition graph S = ( Q, A, , [], WF, SF) WF set of weakly fair actions SF set of strongly fair actions where each action is a subset of | {"url":"https://www.slideserve.com/lotus/model-checking-lecture-1","timestamp":"2024-11-09T10:34:35Z","content_type":"text/html","content_length":"94721","record_id":"<urn:uuid:43a0eaca-7d4b-4a91-997f-329d27070f52>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00642.warc.gz"} |
Integer Multiplication And Division Worksheet - Divisonworksheets.com
Integer Multiplication And Division Worksheet
Integer Multiplication And Division Worksheet – Help your child master division by providing worksheets for division. There are numerous kinds of worksheets, and you can create your own. These
worksheets are amazing because they are available for download at no cost and modify them as you like they should be. They are perfect for kindergarteners and first-graders.
Two are able to produce enormous numbers
Practice on worksheets with large numbers. The worksheets typically only accommodate two, three, or four different divisors. This will not create stress for the child as they won’t be stressed about
the need to divide large numbers or making mistakes with their tables of times. If you want to help your child improve their math skills it is possible to download worksheets online , or print them
on your computer.
Use multidigit division worksheets to assist children with their practice and improve their understanding of the subject. It’s an essential mathematical skill which is needed for a variety of
calculations in daily life as well as complex mathematical topics. These worksheets build on the idea by giving interactive exercises and questions that are based on the divisions of multi-digit
It can be difficult to divide huge numbers for students. These worksheets typically are based on a common algorithm that provides step-by-step instructions. The students may not gain the level of
understanding they want from these worksheets. For teaching long division, one method is to utilize the base 10 blocks. Students should feel at ease with long division after they’ve mastered the
Use a variety of worksheets or practice questions to practice division of large numbers. These worksheets incorporate fractional calculations in decimals. Additionally, you can find worksheets on
hundredsths that are particularly useful for learning to divide large sums of money.
Sort the numbers to form smaller groups.
It can be difficult to split a group into small groups. It may seem appealing on paper, but many participants of small groups dislike this method. It’s a true reflection of the development of the
human body and could help the Kingdom’s never-ending growth. It also inspires others to help those who have lost their leadership and to seek out fresh ideas to take the helm.
It also helps in brainstorming. You can create groups of people with similar experiences and characteristics. This will let you think of new ideas. After you have created your groups, present each
person to you. It’s a useful activity to encourage innovation and fresh thinking.
Divide huge numbers into smaller ones is the primary function of division. It is helpful in situations where you need to create the same amount of things for multiple groups. You could break up the
class into groups of five students. The groups are then added to provide the original 30 pupils.
Be aware that when you divide numbers, there’s a divisor as well as the quotient. Dividing one by five gives the results, whereas two by two produces the identical result.
Powers of ten should only be employed to solve huge numbers.
Dividing large numbers into powers could aid in comparing the numbers. Decimals are an integral part of shopping. They are usually found on receipts and price tags. They are used to indicate the
price per gallon or the quantity of gasoline that comes through the nozzles of petrol pumps.
There are two methods to divide large numbers into powers of ten. The first is by shifting the decimal to the left and multiplying by 10-1. The second option is to use the power of ten’s associations
feature. After you understand the associative property of powers 10, you will be able to split a large number in smaller powers.
The first method is based on mental computation. The pattern is visible if 2.5 is divided by the power 10. The decimal points will shift left as the power of ten grows. This principle can be applied
to solve any issue even the most difficult.
The second option involves mentally dividing massive numbers into powers of 10. Next, you need to write large numbers in a scientific note. If you are using scientific notation to express huge
numbers, it is best to use positive exponents. It is possible to change the decimal point by five spaces to one side and then convert 450,000 to 4.5. To divide a large amount into smaller power 10,
you could apply exponent 5 or divide it in smaller powers 10 so that it is 4.5.
Gallery of Integer Multiplication And Division Worksheet
Multiplying And Dividing Integers Worksheet
Multiplying And Dividing Integers Worksheet
Multiplying And Dividing Integers Worksheets
Leave a Comment | {"url":"https://www.divisonworksheets.com/integer-multiplication-and-division-worksheet/","timestamp":"2024-11-04T04:58:10Z","content_type":"text/html","content_length":"65859","record_id":"<urn:uuid:4117541b-7cf6-4d03-9e68-bfeba4aa1c9b>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00251.warc.gz"} |
Programmatically creating text output in R – Exercises | R-bloggersProgrammatically creating text output in R – Exercises
Programmatically creating text output in R – Exercises
[This article was first published on
, and kindly contributed to
]. (You can report issue about the content on this page
) Want to share your content on R-bloggers?
if you have a blog, or
if you don't.
In the age of Rmarkdown and Shiny, or when making any custom output from your data you want your output to look consistent and neat. Also, when writing your output you often want it to obtain a
specific (decorative) format defined by the html or LaTeX engine. These exercises are an opportunity to refresh our memory on functions such as paste, sprintf, formatC and others that are convenient
tools to achieve these ends. All of the solutions rely partly on the ultra flexible sprintf() but there are no-doubt many ways to solve the exercises with other functions, feel free to share your
solutions in the comment section.
Example solutions are available here.
Exercise 1
Print out the following vector as prices in dollars (to the nearest cent):
c(14.3409087337707, 13.0648270623048, 3.58504267621646, 18.5077076398145, 16.8279241011882). Example: $14.34
Exercise 2
Using these numbers c(25, 7, 90, 16) make a vector of filenames in the following format: file_025.txt. That is, left pad the numbers so they are all three digits.
Exercise 3
Actually, if we are only dealing numbers less than hundred file_25.txt would have been enough. Change the code from last exercise so that the padding is progammatically decided by the biggest number
in the vector.
Exercise 4
Print out the following haiku on three lines, right aligned, with the help of cat. c("Stay the patient course.", "Of little worth is your ire.", "The network is down.").
Exercise 5
Write a function that converts a number to its hexadecimal representation. This is a useful skill when converting bmp colors from one representation to another. Example output:
[1] "12 is c in hexadecimal"
Exercise 6
Take a string and programmatically surround it with the html header tag h1
Exercise 7
Back to the poem from exercise 4, let R convert to html unordered list. So that it would appear like the following in a browser:
• Stay the patient course.
• Of little worth is your ire.
• The network is down.
Exercise 8
Here is a list of the currently top 5 movies on imdb.com in terms of rating c("The Shawshank Redemption", "The Godfather", "The Godfather: Part II", "The Dark Knight", "12 Angry Men", "Schindler's
List") convert them into a list compatible with written text.
Example output:
[1] "The top ranked films on imdb.com are The Shawshank Redemption, The Godfather, The Godfather: Part II, The Dark Knight, 12 Angry Men and Schindler's List"
Exercise 9
Now you should be able to solve this quickly: write a function that converts a proportion to a percentage that takes as input the number of decimal places. Input of 0.921313 and 2 decimal places
should return "92.13%"
Exercise 10
Improve the function from last exercise so the percentage take consistently 10 characters by doing some left padding. Raise an error if percentage already happens to be longer than 10.
(Image by Daniel Friedman). | {"url":"https://www.r-bloggers.com/2018/05/programmatically-creating-text-output-in-r-exercises/","timestamp":"2024-11-06T20:35:49Z","content_type":"text/html","content_length":"95987","record_id":"<urn:uuid:235fbfd9-4edb-446f-9e31-cc653f0d0235>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00385.warc.gz"} |
><Methods to study complex systems
related to PIK research on Time Series Analysis and Complex Networks
Recurrence plots
Recurrence plots (RPs) provide an alternative way to study various aspects of complex systems, such regime transitions, classification, detection of time-scales, synchronisation, and coupling
detection (RP bibliography). Main contributions have been in bivariate extensions (cross RPs) and coupling analysis, new measures of complexity, significance assessments of the RP based results,
spatial extensions, parameter selection, RPs for irregularly sampled data and for extreme events data, or complex network based quantification.
Complex networks
Complex networks provide a powerful approach to investigate extended and spatio-temporal systems, such as the climate by climate networks. Moreover, they offer an alternative way for a recurrence
based time-series analysis by recurrence networks.
Special time-series analysis methods for special problems
Special problems require especially adopted methods of time-series analysis. For example, proxy records in Earth sciences are often irregularly sampled and come with uncertainties in the dating
points. Approaches for considering such dating uncertainties in the subsequent analysis and methods for correlation analysis of irregularly sampled time series have been developed. Such approaches
can be helpful for the reconstruction of palaeoclimate complex networks.
><Complexity in applications
Climate and palaeoclimate
The study of palaeoclimate from proxy records is helpful for a better understanding of the climate system. Information based on lake sediments or speleothemes can be used to study complex
interrelationships or past climate transitions. We are also participating in the coordinated scientific research in the Blessberg Cave, Thuringia.
Cardiovascular systems
Besides the main focus on climate related problems, recurrence properties of the cardiovascular system are studied, e.g., to early detect ventricular tachycardia or preeclampsia, or to investigate
the coupling mechanisms in the cardio-respiratory system.
Further interest in life science is related to EEG analysis, aiming at the detection of event related potentials or early signatures of epileptic seizures, or identifying pathological changes in
brains connectivity due to diseases.
3D image analysis
Methods to investigate complexity in 3D have been applied to study structural changes in trabecular bone, such as occurring during osteoporosis or space flights.
><Cave research
Scientific research in caves is performed to explore and survey newly discovered cave parts, but also to collect data for the palaeoclimate studies (samples, monitoring). Cave research is focused on
caves in Switzerland (research with isaak), but also in India, Caucasus, Kosovo, and Germany.
• N. Marwan: Kalzit-Sinter in Sandsteinhöhlen des Elbsandsteingebirges, Die Höhle, 51(1), 19–20 (2000).
• N. Marwan: Cave Blisters in der Oberländerhöhle (M3)/ Découverte de blisters dans la Oberländerhöhle (M3), Stalactite, 50(2), 103-105 (2000).
• N. Marwan: Das Karstgebiet des Bol'soj Thac, Abhandlungen und Berichte des Naturkundemuseums Görlitz, 79(1), 55-84 (2007).
• S. Breitenbach, N. Marwan, G. Wibbelt: Weißnasensyndrom in Nordamerika – Pilzbesiedlung in Europa, Nyctalus, 16(3), 172-179 (2011).
• N. Marwan: Der digitale Sägistal-Kataster, Stalactite, 73, 24–33 (2023).
• S. Breitenbach, N. Marwan: Using Low-Cost Software to Obtain and Study Stalagmite Greyscale Data, CREG Journal, 125, 7–10 (2024).
• one of the first web presentations of speleology was the speleo server east | {"url":"https://tocsy.pik-potsdam.de/~marwan/special.php","timestamp":"2024-11-08T01:01:13Z","content_type":"text/html","content_length":"30693","record_id":"<urn:uuid:f7249a2d-2b1a-440a-9c12-511bab7402b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00865.warc.gz"} |
103.3.2 Descriptive Statistics | Statinfer103.3.2 Descriptive Statistics | Statinfer
In previous section, we studied about Basic Statistics, Graphs and Reports, now we will be studying about Descriptive Statistics.
As soon as we get some data, we can carry out descriptive statistics on it. Basic descriptive statistics give an idea on the variable and their distribution, we get an overall picture of dataset and
it also helps us to create a report on the data. There are 2 types of basic descriptive statistics:
Central tendencies and Dispersion.
Central tendencies deal with the mean, median and mode, whereas the measures of Dispersion are range, variance and standard deviation.
Central tendencies: mean, median
Mean is nothing but the arithmetic mean or the average, i.e, the sum of the values divided by the count of values. It helps us to understand the data, evaluate the data. The mean is a good measure to
calculate the average of the variables, but it is not recommended when there are outliers in the data. Outliers are fewer data elements in the dataset which are very much different from the rest of
the data elements.
For Example : Let us consider this data
Nowhere 90% of the values are below 2, but when we calculate the mean, we get the value as 2. This is because there is a value (i.e.,9), which is very much different from the rest of the values.
This is called an outlier. So in such cases, where there are outliers, we need a better approach which gives a more accurate or true middle value. Hence median can be considered in such cases.
For calculating the median, the give data is sorted in either ascending or descending order, and then take the middle value which becomes the median, which can be a true average value in such
For example, consider the same data;
Ascending order:
Here the middle value is 1.4, which becomes the median.
Therefore, we can say that even if there are outliers present in the data, we can get a true middle value using median, as the sorting shifts the outliers to the extreme ends.
Let us see how to calculate mean and median in R. We consider the Income data.
Income<-read.csv("C:\\Amrita\\Datavedi\\Census Income Data\\Income_data.csv")
From this dataset we calculate the mean and median of the variable “capital.gain”.
## [1] 1077.649
## [1] 0
We get mean as 1077.649 and median as 0. As there is a vast difference between the two, we can say that there are outliers in the data. If there are no outliers, there will not be much difference in
the mean and median values. So if there are outliers we must always consider the median.
Lab: Mean & Median
Now let us consider the dataset, Online Retail Sales Data.
Online_Retail<-read.csv("C:\\Amrita\\Datavedi\\Online Retail Sales Data\\Online Retail.csv")
Calculate the mean and median of the variable “UnitPrice” and let us see if there are any outliers in the data.
## [1] 4.611114
## [1] 2.08
So here the mean is 4.611114 and the median is 2.08, which means mean and meadian are very close. However, we still cannot conclude on the absence of an outlier because if there are balancing
outliers on the either side of median, then also the mean and median can be close. Now also find the mean and median of the variable “Quantity”.
## [1] 9.55225
## [1] 3
Here we can see that the mean is 9.55225 and the median is 3. In this case, as there is some difference in the mean and the median value, there can be outliers in the data but we cannot be sure.
Outliers can be detected using box plot which will be covered in further sessions.
In next section, we will be studying about Percentile and Quartile. | {"url":"https://statinfer.com/103-3-2-descriptive-statistics/","timestamp":"2024-11-08T01:54:20Z","content_type":"text/html","content_length":"206507","record_id":"<urn:uuid:fb68cc2e-276a-405b-80e9-ad8d82a953b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00695.warc.gz"} |
How to use the ROWS function
What is the ROWS function?
The ROWS function allows you to calculate the number of rows in a cell range.
The example above shows that cell range B3:B10 contains 8 rows.
1. Introduction
What is a cell reference?
A cell reference in Excel is a way to identify and refer to a specific cell or range of cells within a spreadsheet. Cell references are fundamental to Excel's functionality, allowing users to create
dynamic, interconnected spreadsheets.
What is the cell reference structure?
It depends if the cell reference points to a single cell or a cell range containing multiple cells. A single cell reference typically consists of a column letter followed by a row number (e.g., A1,
B2, C3).
A cell range reference points to multiple cells, for example: A1:B10. Note that the colon separates the first cell reference and the second cell reference. The first cell reference indicates the
top-left cell in the specified cell range, while the second cell reference denotes the bottom-right cell in that same range. These two cell references determines the height in rows and width in
columns, of the cell range. Cell range references are used in functions that perform calculations or operations on a group of cells, rather than just a single cell.
Both single cell and multi-cell references can also include sheet names for referencing cells in other worksheets. For example, Sheet!A1 or Sheet2!B2:B10. Note that Excel requires an exclamation mark
between the sheet name and the cell reference. You can find the Sheet names your workbook contains, at the very bottom to the left. Single quotation marks are used if the worksheet name contains a
space character, example: 'Budget 2027'!B3
What are the different cell reference types?
• Relative: The cell reference changes when the cell is copied or moved. For example: A1
• Absolute: The cell reference is fixed and doesn't change when copied. The dollar signs lets you specify which part of the cell reference you want to be abolute. For example: $A$1
• Mixed: One part fixed, one part relative. $A1 or A$1
When are cell references used?
In formulas to perform calculations using values from other cells. For data validation, conditional formatting, and other Excel features. To link data between different sheets or workbooks.
2. Syntax
array Required. A cell range for which you want to calculate the number of rows.
3. Example
Formula in cell D3:
4. Count rows in an array
The ROWS function also calculates the number of rows in an array. This example demonstrates how to count rows in a hard coded array. An array is a collection of values that can be used in formulas
and functions. It's a way to store and manipulate multiple values as a single unit.
A hard-coded array in Excel is an array that is explicitly defined within a formula using curly brackets {}. For example: {1, 2, 3, 4, 5}. This type of array is also known as a "constant array" or
"literal array".
Formula in cell B3:
=ROWS({20,95,67; 13,14,58; 96,74,28; 7,64,22})
The array has four rows. The ; (semicolon) character is a row delimiting character in an array.
The delimiting characters in an array in Excel are:
• Commas (,) to separate values horizontally into columns. Using only column separated delimiters creates a one-dimensional array (e.g. {1, 2, 3, 4, 5}) The same thing applies if you only use row
• Semicolons (;) to separate values into rows in a two-dimensional array (e.g. {1, 2; 3, 4; 5, 6})
• Curly brackets {} to enclose the entire array
5. Count rows based on a condition
This formula is used to count the number of rows in a range that meet a specific condition. In other words, this formula is counting the number of cells in the range B3:B10 that have the same value
as cell D3.
Formula in cell E3
Here is a breakdown of how it works:
• B3:B10 is the range of cells that we want to filter.
• D3 is the value that we want to match in the range.
• FILTER(B3:B10,B3:B10=D3) filters the range to only include cells that are equal to the value in cell D3.
• ROWS then counts the number of rows in the filtered range.
Explaining formula
Step 1 - Logical expression
The equal sign lets you compare value to value, it is also possible to compare a value to an array of values. The result is either TRUE or FALSE.
{"A"; "B"; "B"; "A"; "B"; "B"; "A"; "A"}="A"
and returns
{TRUE; FALSE; FALSE; TRUE; FALSE; FALSE; TRUE; TRUE}.
Step 2 - Filter values based on a condition
The FILTER function gets values/rows based on a condition or criteria.
FILTER(array, include, [if_empty])
FILTER({"A"; "B"; "B"; "A"; "B"; "B"; "A"; "A"}, {TRUE; FALSE; FALSE; TRUE; FALSE; FALSE; TRUE; TRUE})
and returns
{"A"; "A"; "A"; "A"}.
Step 3 - Count rows
ROWS({"A"; "A"; "A"; "A"})
and returns 4.
6. Count rows based on a list
Formula in cell F3:
Explaining formula
Step 1 - Which values equal any item in the list
The COUNTIF function counts the number of cells that meet a given condition.
COUNTIF(range, criteria)
COUNTIF(E3:E4, C3:C11)
COUNTIF({"Clip"; "Pen"},{"Pen"; "Pencil"; "Clip"; "Pen"; "Clip"; "Pencil"; "Pen"; "Clip"; "Clip"})
and returns {1; 0; 1; 1; 1; 0; 1; 1; 1}.
Step 2 - Filter values based on array
The FILTER function gets values/rows based on a condition or criteria.
FILTER(array, include, [if_empty])
FILTER({"Pen"; "Pencil"; "Clip"; "Pen"; "Clip"; "Pencil"; "Pen"; "Clip"; "Clip"}, {1; 0; 1; 1; 1; 0; 1; 1; 1})
and returns
{"Pen"; "Clip"; "Pen"; "Clip"; "Pen"; "Clip"; "Clip"}.
Step 3 - Count rows
ROWS({"Pen"; "Clip"; "Pen"; "Clip"; "Pen"; "Clip"; "Clip"})
and returns 7.
7. Count rows in a delimited string
The formula in cell D3 counts delimited values in a string located in cell B3, you can use any character or string a s a delimiting character.
Excel 365 dynamic array formula in cell C3:
Explaining formula
Step 1 - Split string using a given delimiting character
The TEXTSPLIT function lets you split a string into an array across columns and rows based on delimiting characters.
and returns
{""; "7"; "45"; "31"; ""; "37"; "98"; ""; "6"}.
The semicolon is a delimiting character in arrays, however, they are determined by your regional settings. In other words, you may be using other delimtiing characters.
Step 2 - Count rows
ROWS({""; "7"; "45"; "31"; ""; "37"; "98"; ""; "6"})
and returns 9. The values in the array are arranged vertically. An horizontal array would be using commas, like this: {"", "7", "45", "31", "", "37", "98", "", "6"}.
8. Count rows in multiple cell ranges
This example demonstrate how to count rows in three different sized cell ranges simultaneously and return total rows.
Formula in cell B12:
Explaining formula
Step 1 - Join arrays
The VSTACK function combines cell ranges or arrays, it joins data to the first blank cell at the bottom of a cell range or array.
VSTACK(B3:B9, D3:D7, F3:F5)
VSTACK({7; 25; 82; 43; 25; 10; 21},{73; 13; 93; 25; 10; 65; 91},{43; 11; 97; 61; 4; 45; 91})
and returns
{7; 25; 82; 43; 25; 10; 21; 73; 13; 93; 25; 10; 65; 91; 43; 11; 97; 61; 4; 45; 91}.
Step 2 - Calculate rows
ROWS({7; 25; 82; 43; 25; 10; 21; 73; 13; 93; 25; 10; 65; 91; 43; 11; 97; 61; 4; 45; 91})
and returns 15.
Useful links
ROWS function - Microsoft
ROWS Formula in Excel: Explained
'ROWS' function examples
Functions in 'Lookup and reference' category
The ROWS function function is one of 25 functions in the 'Lookup and reference' category.
Excel function categories
Excel categories
How to comment
How to add a formula to your comment
<code>Insert your formula here.</code>
Convert less than and larger than signs
Use html character entities instead of less than and larger than signs.
< becomes < and > becomes >
How to add VBA code to your comment
[vb 1="vbnet" language=","]
Put your VBA code here.
How to add a picture to your comment:
Upload picture to postimage.org or imgur
Paste image link to your comment. | {"url":"https://www.get-digital-help.com/how-to-use-the-rows-function/","timestamp":"2024-11-06T23:47:50Z","content_type":"application/xhtml+xml","content_length":"191290","record_id":"<urn:uuid:c31e96a7-e10f-44ce-8115-de9e5c7ef58b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00118.warc.gz"} |
Standard error of measurement
The intraclass correlation coefficient provides an estimate of the relative error of the measurement; that is, it is unitless and is sensitive to the between-subjects variability. Because the general
form of the intraclass correlation coefficient is a ratio of variabilities (see equation 13.04), it is reflective of the ability of a test to differentiate between subjects. It is useful for
assessing sample size and statistical power and for estimating the degree of correlation attenuation. As such, the intraclass correlation coefficient is helpful to researchers when assessing the
utility of a test for use in a study involving multiple subjects. However, it is not particularly informative for practitioners such as clinicians, coaches, and educators who wish to make inferences
about individuals from a test result.
For practitioners, a more useful tool is the standard error of measurement (SEM; not to be confused with the standard error of the mean). The standard error of measurement is an absolute estimate of
the reliability of a test, meaning it has the units of the test being evaluated and is not sensitive to the between-subjects variability of the data. Further, the standard error of measurement is an
index of the precision of the test, or the trial-to-trial noise of the test. Standard error of measurement can be estimated with two common formulas. The first formula is the most common and
estimates the standard error of measurement as
where ICC is the intraclass correlation coefficient as described previously and SD is the standard deviation of all the scores about the grand mean. The standard deviation can be calculated quickly
from the repeated measures ANOVA as
where N is the total number of scores.
Because the intraclass correlation coefficient can be calculated in multiple ways and is sensitive to between-subjects variability, the standard error of measurement calculated using equation 13.06
will vary with these factors. To illustrate, we use the example data presented in table 13.5 and ANOVA summary from table 13.6. First, the standard deviation is calculated from equation 13.07 as
Recall that we calculated ICC (1,1) = .30, ICC (2,1) = .40, and ICC (3,1) = .73. The respective standard error of measurement values calculated using equation 13.07 are
Notice that standard error of measurement value can vary markedly depending on the magnitude of the intraclass correlation coefficient used. Also, note that the higher the intraclass correlation
coefficient, the smaller the standard error of measurement. This should be expected because a reliable test should have a high reliability coefficient, and we would further expect that a reliable
test would have little trial-to-trial noise and therefore the standard error should be small. However, the large differences between standard error of measurement estimates depending on which
intraclass correlation coefficient value is used are a bit unsatisfactory.
Instead, we recommend using an alternative approach to estimating the standard error of measurement:
where MS[E] is the mean square error term from the repeated measures ANOVA. From table 13.6, MS[E] = 1,044.54. The resulting standard error of measurement is calculated as
This standard error of measurement value does not vary depending on the intraclass correlation coefficient model used because the mean square error is constant for a given set of data. Further, the
standard error of measurement from equation 13.08 is not sensitive to the between-subjects variability. To illustrate, recall that the data in table 13.7 were created by modifying the data in table
13.1 such that the between-subjects variability (larger standard deviations) was increased but the means were unchanged. The mean square error term for the data in table 13.1 (see table 13.2, MS[E]=
1,070.28) was unchanged with the addition of between-subjects variability (see table 13.8). Therefore, the standard error of measurement values for both data sets are identical when using equation
Interpreting the Standard Error of Measurement
As noted previously, the standard error of measurement differs from the intraclass correlation coefficient in that the standard error of measurement is an absolute index of reliability and indicates
the precision of a test. The standard error of measurement reflects the consistency of scores within individual subjects. Further, unlike the intraclass correlation coefficient, it is largely
independent of the population from which the results are calculated. That is, it is argued to reflect an inherent characteristic of the test, irrespective of the subjects from which the data were
The standard error of measurement also has some uses that are especially helpful to practitioners such as clinicians and coaches. First, it can be used to construct a confidence interval about the
test score of an individual. This confidence interval allows the practitioner to estimate the boundaries of an individual's true score. The general form of this confidence interval calculation is
T = S ± Z[crit] (SEM), (13.09)
where T is the subject's true score, S is the subject's score on the test, and Z[crit] is the critical Z score for a desired level of confidence (e.g., Z = 1.96 for a 95% CI). Suppose that a
subject's observed score (S) on the Wingate test is 850 watts. Because all observed scores include some error, we know that 850 watts is not likely the subject's true score. Assume that the data in
table 13.7 and the associated ANOVA summary in table 13.8 are applicable, so that the standard error of measurement for the Wingate test is 32.72 watts as shown previously. Using equation 13.09 and
desiring a 95% CI, the resulting confidence interval is
T = 850 watts ± 1.96 (32.72 watts) = 850 ± 64.13 watts = 785.87 to 914.13 watts.
Therefore, we would infer that the subject's true score is somewhere between approximately 785.9 and 914.1 watts (with a 95% LOC). This process can be repeated for any subsequent individual who
performs the test.
It should be noted that the process described using equation 13.09 is not strictly correct, and a more complicated procedure can give a more accurate confidence interval. For more information, see
Weir (2005). However, for most applications the improved accuracy is not worth the added computational complexity.
A second use of the standard error of measurement that is particularly helpful to practitioners who need to make inferences about individual athletes or patients is the ability to estimate the change
in performance or minimal difference needed to be considered real (sometimes called the minimal detectable change or the minimal detectable difference). This is typical in situations in which the
practitioner measures the performance of an individual and then performs some intervention (e.g., exercise program or therapeutic treatment). The test is then given after the intervention, and the
practitioner wishes to know whether the person really improved. Suppose that an athlete improved performance on the Wingate test by 100 watts after an 8-week training program. The savvy coach should
ask whether an improvement of 100 watts is a real increase in anaerobic fitness or whether a change of 100 watts is within what one might expect simply due to the measurement error of the Wingate
test. The minimal difference can be estimated as
Again, using the previous value of SEM = 32.72 watts and a 95% CI, the minimal difference value is estimated to be
We would then infer that a change in individual performance would need to be at least 90.7 watts for the practitioner to be confident, at the 95% LOC, that the change in individual performance was a
real improvement. In our example, we would be 95% confident that a 100-watt improvement is real because it is more than we would expect just due to the measurement error of the Wingate test. Hopkins
(2000) has argued that the 95% LOC is too strict for these types of situations and a less severe level of confidence should be used. This is easily done by choosing a critical Z score appropriate for
the desired level of confidence.
It is not intuitively obvious why the use of the
We use the SD[d]) provides such an index, and when there are only two measurements like we have here,
MD = SD[d] × Z[crit].
As with equation 13.09, the approach outlined in equation 13.10 is not strictly correct, and a modestly more complicated procedure can give a slightly more accurate confidence interval. However, for
most applications the procedures described are sufficient.
An additional way to interpret the size of the standard error of measurement is to convert it to a type of coefficient of variation (CoV). Recall from chapter 5 that we interpreted the size of a
standard deviation by dividing it by the mean and then multiplying by 100 to convert the value to a percentage (see equation 5.05). We can perform a similar operation with the standard error of
measurement as follows:
where CoV = the coefficient of variation, SEM = the standard error of measurement, and M[G] = the grand mean from the data. The resulting value expresses the typical variation as a percentage (Lexell
and Downham, 2005). For the example data in table 13.7 and the associated ANOVA summary in table 13.8, SEM = 32.72 watts (as shown previously) and M[G] = 774.0 (calculations not shown). The resulting
CoV = 32.72/774.0 × 100 = 4.23%. This normalized standard error of measurement allows researchers to compare standard error of measurement values between different tests that have different
units, as well as to judge how big the standard error of measurement is for a given test being evaluated.
More Excerpts From Statistics in Kinesiology 5th Edition With Web Resource
Get the latest insights with regular newsletters, plus periodic product information and special insider offers. JOIN NOW | {"url":"https://us.humankinetics.com/blogs/excerpt/standard-error-of-measurement","timestamp":"2024-11-10T08:08:47Z","content_type":"text/html","content_length":"244469","record_id":"<urn:uuid:1c1ac2fa-afcd-4305-9a69-d93eb31dcdf5>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00090.warc.gz"} |
Mathematical issues for chemists
Typically, mathematics is regarded as a useful tool by chemists, and all undergraduate chemists will need to attend some sort of mathematics course in order to access and make the most of their
science. There are various levels of mathematics used in chemistry degrees, ranging from combinatorics and proportional reasoning to heavy-weight differential equations and Fourier analysis.
However, study of any of the underlying mathematics out of context tends to reduce mathematical activity to a series of clean, dry routines and procedures. Many students then struggle with applying
the quantitative knowledge in the complicated chemical contexts they encounter.
For example, we have
│ Mathematics │ Chemistry context │
│Ratios │Mixing solutions with certain molarities, making dilutions │
│Proportional reasoning│Analysis of molecular structure; moles │
│Algebra and graphs │Analysis of experimental plots of reaction rates; gas laws │
│Calculus │Predicting and measuring rates of reaction in measurable experiments │
│Units of measurements │Making sense of real, complicated measurements │
│Vectors │Understanding crystal structure │
│Logarithms │Understanding pH │
│Probability │Drawing general conclusions from trials │
Suppose that a chemist achieved a good grade in GCSE mathematics or AS mathematics. Why would such students struggle with the mathematical aspects of chemistry? There are several possible reasons:
• Procedural thinking
□ Mathematics exams can often be passed by learning the content procedurally. This means that students can answer certain types of question by following a recipe. The problems in chemistry
arise because even minor deviations from the precise recipe cause the student to fail to know what to do.
• Inability to translate mathematical meaning to chemical meaning
□ Students who are very skilled at mathematics might have trouble seeing how to relate the mathematical process to a real-world context; this hampers the use of common sense, so valuable in
quantitative science.
• Inability to make estimates or approximations
□ Mathematical contexts in chemistry are rarely simple. In order to apply mathematics predictively, approximations will need to be made. To make approximations requires the student to really
understand the meaning and structure of the mathematics.
• Poor problem solving skills
□ Mathematical issues in chemistry problems are not usually clearly 'signposted' from a mathematical point of view. The chemist must assess the situation, decide how to represent it
mathematically, decide what needs to be solved and then solve the problem. Students who are not well versed in solving 'multi-step' problems in mathematics are very likely to struggle with
the application of their mathematical knowledge.
• Lack of practice
□ There are two ways in which lack of practice can impact mathematical activity in the sciences.
☆ First is a lack of skill at basic numerical manipulation. This leads to errors and hold-ups regardless of whether the student understands what they are trying to do.
☆ Second is a lack of practice at thinking mathematically in a chemical context.
• Lack of confidence
□ Lack of confidence builds with uncertainty and failure, leading to more problems. Students who freeze at the sight of numbers or equations will most certainly under perform.
• Lack of mathematical interest
□ Students are hopefully strongly driven by their interest in science. If mathematics is studied in an environment independent of this then mathematics often never finds meaning and remains
abstract, dull and difficult. | {"url":"https://nrich.maths.org/articles/mathematical-issues-chemists","timestamp":"2024-11-12T00:27:42Z","content_type":"text/html","content_length":"38232","record_id":"<urn:uuid:84e786c3-5deb-4d75-9ef5-439eb2b3c5b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00120.warc.gz"} |
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
My husband has been using the software since he went back to school a few months ago. Hes been out of college for over 10 years so he was very rusty with his math skills. A teacher friend of ours
suggested the program since she uses it to teach her students fractions. Mike has been doing well in his two math classes. Thank you!
Adam Botts, FL
As a private tutor, I have found this program to be invaluable in helping students understand all levels of algebra equations and fractions.
Camila Denton, NJ
I recommend this program to every student that comes in my class. Since I started this, I have noticed a dramatic improvement.
Ed Carly, IN
I am a parent of an 8th grader:The software itself works amazingly well - just enter an algebraic equation and it will show you step by step how to solve and offer clear, brief explanations,
invaluable for checking homework or reviewing a poorly understood concept. The practice test with printable answer key is a great self check with a seemingly endless supply of non-repeating
questions. Just keep taking the tests until you get them all right = an A+ in math.
Gina Simpson, DE
Search phrases used on 2014-09-08:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• logarithmic expressions on TI83+ calculator
• kumon answer keys
• Math Trivias
• aptitude question and answers
• precalculus coburn "Equations and Inequalities"
• Grade 7 (Multiples and Factors)
• examples algerbia equation
• in what year do you learn simultaneous equations
• free advanced excel book tutorial download
• online answers from quadratic formulas
• Refresher college algebra
• 8th +TAKS +pdf
• converting mixed fractions to decimals
• Uni Care Insurance
• Pennsylvania Dating
• GCF and LCM variable calculator
• what is 4^ square root 10,000 times 4 ^square root 100,000
• Rational Expressions Online Calculator
• Monogrammed Gifts
• Student Activities
• New Hampshire Mortgage Quote
• hard math problems for a sixth grader
• multiplication with percentage
• 1st grade english homework worksheets
• MATH TEST PRINT OUT
• mathematical induction calculator
• root solver
• math printouts for 8th grade
• difference two square
• dividing radicals calclator
• printable algebra tiles
• simplify, multiply square root, online calculator
• hyperbola equation
• decimal to fraction worksheet
• aptitude question and answer
• Brain teasers for KS3 and KS4 science
• nonlinear equations worksheet free
• free math works sheets on percentages
• what is factoring the polynomial
• Free Equation Solver
• where is root x on the calculator ti 83 plus
• "college algebra"digital video tutor "fourth edition"
• Square Root Chart
• integer of fraction algebra equations
• free ti-83 plus emulator
• JC Whitney Catalog
• Reverse Annuity Mortgages
• How to graph system of equation
• algebra problems.com
• Senario Scientific Calculator
• preparing for the ks2 sat tests
• solving hard trinomials
• a table for converting a mixed fraction into a decimal
• free printable integer quiz
• adding and subtracting worksheets word
• algebric formula
• adding squared numbers
• beginning algebra free texas
• ti-84 emulator
• grade 6 math worksheet ontario
• linear equations calculate b
• algebra 2 answer
• polynomials for dummies
• diamond and box square roots
• "look for the GCF first "
• adding decimals worksheets
• lesson plans first grade
• "Code to convert hex to decimal"
• solving linear equations online calculator
• solving cube root questions online help
• finding the inverse & determinant on a TI-83 calculator
• adding subtracting and multiplying radical expressions help
• formulas for solving algebra equations
• New Jobs
• 8th grade math questions and answers
• 5th grade math problem 7 cats
• free math tutorial ged
• rules for graphing inequalities
• solving differential equations +TI 89
• 6th grade math test
• math geometry trivia with answers
• dividing a bigger number into a smaller number examples
• fun activities completing the square
• prealgebra worksheet
• free NC EOG 7th grade math
• 1st grade printables
• Quadratic Equation Problems
• Thank Sayings Graduation
• Down load aptitude test
• "fractions" + "online worksheets"
• free algebra online problem solver
• ratio KS2 worksheets
• evaluating multiplying factorial equations fractions
• GRADE 2 MATH SHEETS
• algebra websites for beginners
• algebra variables restrictions
• square root to the third
• 3rd grade equations samples
• how to pass an algebra test | {"url":"https://softmath.com/math-book-answers/perfect-square-trinomial/factoring-binomials.html","timestamp":"2024-11-09T13:54:24Z","content_type":"text/html","content_length":"35916","record_id":"<urn:uuid:87a08c96-a24f-449e-bdf0-eb2cef31a070>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00520.warc.gz"} |
A plane left Kennedy airport on Tuesday morning for an 630mile 5 hour trip for the first part of the trip the average speed was 120 mph for - DocumenTVA plane left Kennedy airport on Tuesday morning for an 630mile 5 hour trip for the first part of the trip the average speed was 120 mph for
A plane left Kennedy airport on Tuesday morning for an 630mile 5 hour trip for the first part of the trip the average speed was 120 mph for
A plane left Kennedy airport on Tuesday morning for an 630mile 5 hour trip for the first part of the trip the average speed was 120 mph for the remainder of the trip the average speed was 130 mph how
long did the plane fly at each speed
in progress 0
Mathematics 3 years 2021-08-30T12:18:32+00:00 2021-08-30T12:18:32+00:00 1 Answers 16 views 0 | {"url":"https://documen.tv/question/a-plane-left-kennedy-airport-on-tuesday-morning-for-an-630mile-5-hour-trip-for-the-first-part-of-24096162-71/","timestamp":"2024-11-13T09:51:42Z","content_type":"text/html","content_length":"87926","record_id":"<urn:uuid:512649a6-3104-40dd-87b6-086932a1bc61>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00431.warc.gz"} |
Section 1: Sequences
Shodor > Interactivate > Textbooks > Math Thematics 1st Ed. Book 3 > Section 1: Sequences
Math Thematics 1st Ed. Book 3
Module 4 - Patterns and Discoveries
Section 1: Sequences
Lesson • Activity • Discussion • Worksheet • Show All
Lesson (...)
Lesson: Introduces students to arithmetic and geometric sequences. Students explore further through producing sequences by varying the starting number, multiplier, and add-on.
Activity (...)
Activity: Students work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in
sequences and geometric properties of fractals.
Hilbert Curve Generator
Activity: Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals.
Koch's Snowflake
Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.
Sierpinski's Carpet
Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences
and geometric properties of fractals.
Sierpinski's Triangle
Activity: Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in
sequences and geometric properties of fractals.
Activity: Enter two complex numbers (z and c) as ordered pairs of real numbers, then click a button to iterate step by step. The iterates are graphed in the x-y plane and printed out in table form.
This is an introduction to the idea of prisoners/escapees in iterated functions and the calculation of fractal Julia sets.
Discussion (...)
Worksheet (...)
No Results Found
©1994-2024 Shodor Website Feedback
Math Thematics 1st Ed. Book 3
Module 4 - Patterns and Discoveries
Section 1: Sequences
Lesson • Activity • Discussion • Worksheet • Show All
Lesson (...)
Lesson: Introduces students to arithmetic and geometric sequences. Students explore further through producing sequences by varying the starting number, multiplier, and add-on.
Activity (...)
Activity: Students work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in
sequences and geometric properties of fractals.
Hilbert Curve Generator
Activity: Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals.
Koch's Snowflake
Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.
Sierpinski's Carpet
Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences
and geometric properties of fractals.
Sierpinski's Triangle
Activity: Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in
sequences and geometric properties of fractals.
Activity: Enter two complex numbers (z and c) as ordered pairs of real numbers, then click a button to iterate step by step. The iterates are graphed in the x-y plane and printed out in table form.
This is an introduction to the idea of prisoners/escapees in iterated functions and the calculation of fractal Julia sets.
Discussion (...)
Worksheet (...)
No Results Found | {"url":"http://www.shodor.org/interactivate/textbooks/section/190/","timestamp":"2024-11-12T12:07:40Z","content_type":"application/xhtml+xml","content_length":"16270","record_id":"<urn:uuid:d3f23af0-bb95-4f0b-95e8-84fc7030f58c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00387.warc.gz"} |
Steel Plate & Sheet Weight Calculator
Steel Plate Weight Calculator In Mm, Steel Plate Weight Calculator Online, Steel Plate Weight Calculator in Kg, stainless steel plate weight calculator, steel plate weight calculator formula
Free online weight calculator to calculate the weight of Steel Plates in High Carbon, Low Carbon, Stainless Steel, copper, aluminium, brass, hardox 400,
and gi plate material.
How important is knowing the weight of steel plates?
Steel is a common material that is typically priced per unit weight, just like gravel, sand, and concrete. The weight of steel is calculated to ensure consistent pricing in the market, considering
that steel is available in various types, shapes, and sizes. When purchasing steel plates, calculating the total weight of the products helps us plan the transport from the supplier's warehouse to
the project or fabrication site efficiently. Additionally, knowing the weight of the steel plates used in a project can help us determine if we can lift the final product safely. As demonstrated in
the sample computation of steel weight, understanding the weight of steel plates is vital for project planning and execution.
How to measure steel plate weight?
To calculate the weight of a steel plate, it is important to know the density of the steel alloy that the plate is made of. This information is readily available in our steel plate weight calculator,
which provides the densities of the most commonly used steel alloys. Our calculator also includes a table that lists the density of each steel type or alloy for reference purposes. By inputting the
dimensions of the steel plate into our calculator and selecting the appropriate steel type, one can quickly and easily determine the weight of the plate.
│ Steel type │Density (kg/m³) │
│ Tool steel │ 7715 │
│ Wrought iron │ 7750 │
│Carbon tool steel │ 7820 │
│ Cold-drawn steel │ 7830 │
│ Carbon steel │ 7840 │
│ C1020 HR steel │ 7850 │
│ Pure iron │ 7860 │
│ Mild steel │ 7870 │
│ Stainless steel │ 8030 │
Weight = Volume x Density
In the case of steel plates, the formula for calculating their weight is:
Weight (in kilograms) = Volume (in cubic meters) x Density (in g/cm³) x 1000
The resulting weight will be in kilograms since we used the density in grams per cubic centimeter and multiplied it by the volume in cubic meters. Knowing the weight of our steel plate is crucial,
especially when we need to transport or lift it, and it helps us in planning and estimating the cost of our projects accurately.
To better understand how to calculate the weight of a steel plate, let's consider an example. If we have multiple steel plates of the same dimensions, we can input the number of plates into our steel
plate weight calculator to obtain the total weight of all plates. This process involves determining the density of the steel alloy, obtaining the total volume of the plate, and multiplying this
volume by the steel plate's density. By following these steps, we can accurately calculate the weight of our steel plates and properly plan for their transportation and use in our projects.
How does the steel weight plate calculator work?
Let's consider an example to understand how to calculate the weight of steel plates. Say we want to build a mold for cube-shaped concrete blocks and we need to cut 5 squares, each with 20 cm sides,
from a 1-cm thick mild steel plate with a density of 7,870 kg/m³. The illustration below shows the dimensions of the squares we need to cut from the steel plate:
To determine the weight of the cut steel plates, we can start by calculating the volume of the steel plate. Since we know that the density of mild steel is in kilograms per cubic meter, we can
calculate the steel plate volume in cubic meters by multiplying its dimensions together. In this example, the side length of the square plate is 20 cm or 0.2 meters, and the thickness is 1 cm or 0.01
meters. Therefore, the volume of a single piece of square plate can be calculated as follows:
steel plate volume = 0.2 m × 0.2 m × 0.01 m
steel plate volume = 0.0004 m³
Solving for weight, we have:
steel plate weight = steel plate volume × density
steel plate weight = 0.0004 m³ × 7,870 kg/m³
steel plate weight = 3.148 kilograms
However, since we need five pieces of this square steel plate, the steel plates' total weight would then be 3.148 kg × 5 = 15.74 kilograms. That's a lot of weight for just a small piece of cube mold! | {"url":"https://www.amardeepsteel.com/steel-plate-weight-calculator.html","timestamp":"2024-11-03T20:19:06Z","content_type":"text/html","content_length":"38839","record_id":"<urn:uuid:5c282b70-9a94-45ba-ad0e-aedcab81390e>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00580.warc.gz"} |
Previous parts: part I, part II, part III, part IV, part V, and part VI.
This was supposed to be the last blog post on distance estimated 3D fractals, but then I stumbled upon the dual number formulation, and decided it would blend in nicely with the previous post. So
this blog post will be about dual numbers, and the next (and probably final) post will be about hybrid systems, heightmap rendering, interior rendering, and links to other resources.
Dual Numbers
Many of the distance estimators covered in the previous posts used a running derivative. This concept can be traced back to the original formula for the distance estimator for the Mandelbrot set,
where the derivative is described iteratively in terms of the previous values:
\(f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1\)
In the previous post, we saw how the Mandelbox could be described a running Jacobian matrix, and how this matrix could be replaced by a single running scalar derivative, since the Jacobians for the
conformal transformations all have a particular simple form (and thanks to Knighty the argument was extended to non-Julia Mandelboxes).
Now, some month ago I stumbled upon automatic differentation and dual numbers, and after having done some tests, I think this a very nice framework to complete the discussion of distance estimators.
So what are these dual numbers? The name might sound intimidating, but the concept is very simple: we extend the real numbers with another component – much like the complex numbers:
\(x = (x_r, x_d) = x_r + x_d \epsilon\)
where \(\epsilon\) is the dual unit, similar to the imaginary unit i for the complex numbers. The square of a dual unit is defined as: \(\epsilon * \epsilon = 0\).
Now for any function which has a Taylor series, we have:
\(f(x+dx) = f(x) + f'(x)dx + (f”(x)/2)dx^2 + …\)
If we let \(dx = \epsilon\), it follows:
\(f(x+\epsilon) = f(x) + f'(x)\epsilon \)
because the higher order terms vanish. This means, that if we evaluate our function with a dual number \(d = x + \epsilon = (x,1)\), we get a dual number back, (f(x), f'(x)), where the dual component
contains the derivative of the function.
Compare this with the finite difference scheme for obtaining a derivative. Take a quadratic function as an example and evaluate its derivative, using a step size ‘h’:
\(f(x) = x*x\)
This gives us the approximate derivative:
\(f'(x) \approx \frac {f(x+h)-f(x)}{h} = \frac { x^2 + 2*x*h + h^2 – x^2 } {h} = 2*x+h\)
The finite difference scheme introduces an error, here equal to h. The error always gets smaller as h gets smaller (as it converges towards to the true derivative), but numerical differentiation
introduces inaccuracies.
Compare this with the dual number approach. For dual numbers, we have:
\(x*x = (x_r+x_d\epsilon)*(x_r+x_d\epsilon) = x_r^2 + (2 * x_r * x_d )\epsilon\).
\(f(x_r + \epsilon) = x_r^2 + (2 * x_r)*\epsilon\)
Since the dual component is the derivative, we have f'(x) = 2*x, which is the exact answer.
But the real beauty of dual numbers is, that they make it possible to keep track of the derivative during the actual calculation, using forward accumulation. Simply by replacing all numbers in our
calculations with dual numbers, we will end up with the answer together with the derivative. Wikipedia has a very nice article, that explains this in more details: Automatic Differentation. The
article also list several arithmetric rules for dual numbers.
For the Mandelbox, we have a defining function R(p), which returns the length of p, after having been through a fixed number of iterations of the Mandelbox formula: scale*spherefold(boxfold(z))+p.
The DE is then DE = R/DR, where DR is the length of the gradient of R.
R is a scalar-valued vector function. To find the gradient we need to find the derivative along the x,y, and z direction. We can do this using dual vectors and evaluate the three directions, e.g. for
the x-direction, evaluate \(R(p_r + \epsilon (1,0,0))\). In practice, it is more convenient to keep track of all three dual vectors during the calculation, since we can reuse part of the
calculations. So we have to use a 3×3 matrix to track our derivatives during the calculation.
Here is some example code for the Mandelbox:
// simply scale the dual vectors
void sphereFold(inout vec3 z, inout mat3 dz) {
float r2 = dot(z,z);
if (r2 < minRadius2) {
float temp = (fixedRadius2/minRadius2);
z*= temp; dz*=temp;
} else if (r2 < fixedRadius2) {
float temp =(fixedRadius2/r2);
dz[0] =temp*(dz[0]-z*2.0*dot(z,dz[0])/r2);
dz[1] =temp*(dz[1]-z*2.0*dot(z,dz[1])/r2);
dz[2] =temp*(dz[2]-z*2.0*dot(z,dz[2])/r2);
z*=temp; dz*=temp;
// reverse signs for dual vectors when folding
void boxFold(inout vec3 z, inout mat3 dz) {
if (abs(z.x)>foldingLimit) { dz[0].x*=-1; dz[1].x*=-1; dz[2].x*=-1; }
if (abs(z.y)>foldingLimit) { dz[0].y*=-1; dz[1].y*=-1; dz[2].y*=-1; }
if (abs(z.z)>foldingLimit) { dz[0].z*=-1; dz[1].z*=-1; dz[2].z*=-1; }
z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
float DE(vec3 z)
// dz contains our three dual vectors,
// initialized to x,y,z directions.
mat3 dz = mat3(1.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0);
vec3 c = z;
mat3 dc = dz;
for (int n = 0; n < Iterations; n++) {
z += c*Offset;
dz +=matrixCompMult(mat3(Offset,Offset,Offset),dc);
if (length(z)>1000.0) break;
return dot(z,z)/length(z*dz);
The 3×3 matrix dz contains our three dual vectors (they are stored as columns in the matrix, dz[0], dz[1], dz[2]).
In order to calculate the dual numbers, we need to know how to calculate the length of z, and how to divide by the length squared (for sphere folds).
Using the definition of the product for dual numbers, we have:
\(|z|^2 = z \cdot z = z_r^2 + (2 z_r \cdot z_d)*\epsilon\)
For the length, we can use the power rule, as defined on Wikipedia:
\(|z_r + z_d \epsilon| = \sqrt{z_r^2 + (2 z_r \cdot z_d)*\epsilon}
= |z_r| + \frac{(z_r \cdot z_d)}{|z_r|}*\epsilon\)
Using the rule for division, we can derive:
\(z/|z|^2=(z_r+z_d \epsilon)/( z_r^2 + 2 z_r \cdot z_d \epsilon)\)
\( = z_r/z_r^2 + \epsilon (z_d*z_r^2-2z_r*z_r \cdot z_d)/z_r^4\)
Given these rules, it is relatively simple to update the dual vectors: For the sphereFold, we either multiply by a real number or use the division rule above. For the boxFold, there is both
multiplication (sign change), and a translation by a real number, which is ignored for the dual numbers. The (real) scaling factor is also trivially applied to both real and dual vectors. Then there
is the addition of the original vector, where we must remember to also add the original dual vector.
Finally, using the length as derived above, we find the length of the full gradient as:
\(DR = \sqrt{ (z_r \cdot z_x)^2 + (z_r \cdot z_y)^2 + (z_r \cdot z_z)^2 } / |z_r|\)
In the code example, the vectors are stored in a matrix, which makes a more compact notation possible: DR = length(z*dz)/length(z), leading to the final DE = R/DR = dot(z,z)/length(z*dz)
There are some advantages to using the dual numbers approach:
• Compared to the four-point Makin/Buddhi finite difference approach the arbitrary epsilon (step distance) is avoided – which should give better numerical accuracy. It is also somewhat slightly
faster computationally.
• Very general – e.g. works for non-conformal cases, where running scalar derivatives fail. The images here are from a Mandelbox where a different scaling factor was applied to each direction
(making them non-conformal). This is not possible to capture in a running scalar derivative.
On the other hand, the method is slower than using running scalar estimators. And it does require code changes. It should be mentioned that libraries exists for languages supporting operator
overloading, such as C++.
Since we find the gradient directly in this method, we can also use it as a surface normal – this is also an advantage compared to the scalar derivates, which normally use a finite difference scheme
for the normals. Using the code example the normal is:
// (Unnormalized) normal
vec3 normal = vec3(dot(z,dz[0]),dot(z,dz[1]),dot(z,dz[2]));
It should be noted that in my experiments, I found the finite difference method produced better normals than the above definition. Perhaps because it smothens them? The problem was somehow solved by
backstepping a little before calculating the normal, but this again introduces an arbitrary distance step.
Now, I said the scalar method was faster – and for a fixed number of ray steps it is – but let us take a closer look at the distance estimator function:
The above image shows a sliced Mandelbox.
The graph in the lower right conter shows a plot of the DE function along a line (two dimensions held fixed): The blue curve is the DE function, and the red line shows the derivative of the DE
function. The function is plotted for the dual number derived DE function. We can see that our DE is well-behaved here: for a consistent DE the slope can never be higher than 1, and when we move away
from the side of the Mandelbox in a perpendicular direction the derivative of the DE should be plus or minus one.
Now compare this to the scalar estimated DE:
Here we see that the DE is less optimal – the slope is ~0.5 for this particular line graph. Actually, the slope would be close to one if we omitted the ‘+1’ term for the scalar estimator, but then it
overshoots slightly some places inside the Mandelbox.
We can also see that there are holes in our Mandelbox – this is because for this fixed number of ray steps, we do not get close enough to the fractal surface to hit it. So even though the scalar
estimator is faster, we need to crank up the number of ray steps to achieve the same quality.
Final Remarks
The whole idea of introducing dual derivatives of the three unit vectors seems to be very similar to having a running Jacobian matrix estimator – and I believe the methods are essentially idential.
After all we try to achieve the same: keeping a running record of how the R(p) function changes, when we vary the input along the axis.
But I think the dual numbers offer a nice theoretical framework for calculating the DE, and I believe they could be more accurate and faster then finite difference four point gradient methods.
However, more experiments are needed before this can be asserted.
Scalar estimators will always be the fastest, but they are probably only optimal for conformal systems – for non-conformal system, it seems necessary to introduce terms that make them too
conservative, as demonstrated by the Mandelbox example.
The final part contains all the stuff that didn’t fit in the previous posts, including references and links.
Distance Estimated 3D Fractals (VI): The Mandelbox
Previous parts: part I, part II, part III, part IV and part V.
After the Mandelbulb, several new types of 3D fractals appeared at Fractal Forums. Perhaps one of the most impressive and unique is the Mandelbox. It was first described in this thread, where it was
introduced by Tom Lowe (Tglad). Similar to the original Mandelbrot set, an iterative function is applied to points in 3D space, and points which do not diverge are considered part of the set.
Tom Lowe has a great site, where he discusses the history of the Mandelbox, and highlights several of its properties, so in this post I’ll focus on the distance estimator, and try to make some more
or less convincing arguments about why a scalar derivative works in this case.
The Mandelbulb and Mandelbrot systems use a simple polynomial formula to generate the escape-time sequence:
\(z_{n+1} = z_n^\alpha + c\)
The Mandelbox uses a slightly more complex transformation:
\(z_{n+1} = scale*spherefold(boxfold(z_n)) + c\)
I have mentioned folds before. These are simply conditional reflections.
A box fold is a similar construction: if the point, p, is outside a box with a given side length, reflect the point in the box side. Or as code:
if (p.x>L) { p.x = 2.0*L-p.x; } else if (p.x<-L) { p.x = -2.0*L-p.x; }
(this must be done for each dimension. Notice, that in GLSL this can be expressed elegantly in one single operation for all dimensions: p = clamp(p,-L,L)*2.0-p)
The sphere fold is a conditional sphere inversion. If a point, p, is inside a sphere with a fixed radius, R, we will reflect the point in the sphere, e.g:
float r = length(p);
if (r<R) p=p*R*R/(r*r);
(Actually, the sphere fold used in most Mandelbox implementations is slightly more complex and adds an inner radius, where the length of the point is scaled linearly).
Now, how can we create a DE for the Mandelbox?
Again, it turns out that it is possible to create a scalar running derivative based distance estimator. I think the first scalar formula was suggested by Buddhi in this thread at fractal forums. Here
is the code:
float DE(vec3 z)
vec3 offset = z;
float dr = 1.0;
for (int n = 0; n < Iterations; n++) {
boxFold(z,dr); // Reflect
sphereFold(z,dr); // Sphere Inversion
z=Scale*z + offset; // Scale & Translate
dr = dr*abs(Scale)+1.0;
float r = length(z);
return r/abs(dr);
where the sphereFold and boxFold may be defined as:
void sphereFold(inout vec3 z, inout float dz) {
float r2 = dot(z,z);
if (r<minRadius2) {
// linear inner scaling
float temp = (fixedRadius2/minRadius2);
z *= temp;
dz*= temp;
} else if (r2<fixedRadius2) {
// this is the actual sphere inversion
float temp =(fixedRadius2/r2);
z *= temp;
dz*= temp;
void boxFold(inout vec3 z, inout float dz) {
z = clamp(z, -foldingLimit, foldingLimit) * 2.0 - z;
It is possible to simplify this even further by storing the scalar derivative as the fourth component of a 4-vector. See Rrrola’s post for an example.
However, one thing that is missing, is an explanation of why this distance estimator works. And even though I do not completely understand the mechanism, I’ll try to justify this formula. It is not a
strict derivation, but I think it offers some understanding of why the scalar distance estimator works.
A Running Scalar Derivative
Let us say, that for a given starting point, p, we obtain an length, R, after having applied a fixed number of iterations. If the length is less then \(R_{min}\), we consider the orbit to be bounded
and is thus part of the fractal, otherwise it is outside the fractal set. We want to obtain a distance estimate for this point p. Now, the distance estimate must tell us how far we can go in any
direction, before the final radius falls below the minimum radius, \(R_{min}\), and we hit the fractal surface. One distance estimate approximation would be to find the direction, where R decreases
fastest, and do a linear extrapolation to estimate when R becomes less than \(R_{min}\):
where DR is the magnitude of the derivative along this steepest descent (this is essentially Newton root finding).
In the previous post, we argued that the linear approximation to a vector-function is best described using the Jacobian matrix:
F(p+dp) \approx F(p) + J(p)dp
The fastest decrease is thus given by the induced matrix norm of J, since the matrix norm is the maximum of \(|J v|\) for all unit vectors v.
So, if we could calculate the (induced) matrix norm of the Jacobian, we would arrive at a linear distance estimate:
Calculating the Jacobian matrix norm sounds tricky, but let us take a look at the different transformations involved in the iteration loop: Reflections (R), Sphere Inversions (SI), Scalings (S), and
Translations (T). It is also common to add a rotation (ROT) inside the iteration loop.
Now, for a given point, we will end applying an iterated sequence of operations to see if the point escapes:
Mandelbox(p) = (T\circ S\circ SI\circ R\circ \ldots\circ T\circ S\circ SI\circ R)(p)
In the previous part, we argued that the most obvious derivative for a R^3 to R^3 function is a Jacobian. According to the chain rule for Jacobians, the Jacobian for a function such as this Mandelbox
(z) will be of the form:
J_{Mandelbox} = J_T * J_S * J_{SI} * J_R …. J_T * J_S * J_{SI} * J_R
In general, all of these matrices will be functions of R^3, which should be evaluated at different positions. Now, let us take a look of the individual Jacobian matrices for the Mandelbox
A translation by a constant will simply have an identity matrix as Jacobian matrix, as can be seen from the definitions.
Consider a simple reflection in one of the coordinate system planes. The transformation matrix for this is:
T_{R} = \begin{bmatrix}
1 & & \\
& 1 & \\
& & -1
Now, the Jacobian of a transformation defined by multiplying with a constant matrix is simply the constant matrix itself. So the Jacobian is also simply a reflection matrix.
A rotation (for a fixed angle and rotation vector) is also constant matrix, so the Jacobian is also simply a rotation matrix.
The Jacobian for a uniform scaling operation is:
J_S = scale*\begin{bmatrix}
1 & & \\
& 1 & \\
& & 1
Sphere Inversions
Below can be seen how the sphere fold (the conditional sphere inversion) transforms a uniform 2D grid. As can be seen, the sphere inversion is an anti-conformal transformation – the angles are still
90 degrees at the intersections, except for the boundary where the sphere inversion stops.
The Jacobian for sphere inversions is the most tricky. But a derivation leads to:
J_{R} = (r^2/R^2) \begin{bmatrix}
1-2x^2/R^2 & -2xy/R^2 & -2xz/R^2 \\
-2yx/R^2 & 1-2y^2/R^2 & -2yz/R^2 \\
-2zx/R^2 & -zy/R^2 & 1-2z^2/R^2
Here R is the length of p, and r is radius of the inversion sphere. I have extracted the scalar front factor, so that the remaining part is an orthogonal matrix (as is also demonstrated in the
derivation link).
Notice that all reflection, translation, and rotation Jacobian-matrices will not change the length of a vector when multiplied with it. The Jacobian for the Scaling matrix, will multiply the length
with the scale factor, and the Jacobian for the Sphere Inversion will multiply the length by a factor of (r^2/R^2) (notice that the length of the point must evaluated at the correct point in the
Now, if we calculated the matrix norm of the Jacobian:
|| J_{Mandelbox} || = || J_T * J_S * J_{SI} * J_R …. J_T * J_S * J_{SI} * J_R ||
we can easily do it, since we do only need to keep track of the scalar factor, whenever we encounter a Scaling Jacobian or a Sphere Inversion Jacobian. All the other matrix stuff will simply not
change the length of a given vector and may be ignored. Also notice, that only the sphere inversion depends on the point where the Jacobian is evaluated – if this operation was not present, we could
simply count the number of scalings performed and multiply the escape length with \(2^{-scale}\).
This means that the matrix norm of the Jacobian can be calculated using only a simply scalar variable, which is scaled, whenever we apply the scaling or sphere inversion operation.
This seems to hold for all conformal transformations (strictly speaking sphere inversions and reflections are not conformal, but anti-conformal, since orientations are reversed). Wikipedia also
mentions, that any function, with a Jacobian equal to a scalar times a rotational matrix, must be conformal, and it seems the converse is also true: any conformal or anti-conformal transformation in
3D has a Jacobian equal to a scalar times a orthogonal matrix.
Final Remarks
There are some reasons, why I’m not completely satisfied why the above derivation: first, the translational part of the Mandelbox transformation is not really a constant. It would be, if we were
considering a Julia-type Mandelbox, where you add a fixed vector at each iteration, but here we add the starting point, and I’m not sure how to express the Jacobian of this transformation. Still, it
is possible to do Julia-type Mandelbox fractals (they are quite similar), and here the derivation should be more sound. The transformations used in the Mandelbox are also conditional, and not simple
reflection and sphere inversions, but I don’t think that matters with regard to the Jacobian, as long as the same conditions are used when calculating it.
Update: As Knighty pointed out in the comments below, it is possible to see why the scalar approximation works in the Mandelbrot case too:
Let us go back to the original formula:
\(f(z) = scale*spherefold(boxfold(z)) + c\)
and take a look at its Jacobian:
\(J_f = J_{scale}*J_{spherefold}*J_{boxfold} + I\)
Now by using the triangle inequality for matrix norms, we get:
\(||J_f|| = ||J_{scale}*J_{spherefold}*J_{boxfold} + I|| \)
\(\leq ||J_{scale}*J_{spherefold}*J_{boxfold}|| + ||I|| \)
\(= S_{scale}*S_{spherefold}*S_{boxfold} + 1 \)
where the S’s are the scalars for the given transformation. This argument can also be applied to repeated applications of the Mandelbox transformation. This means, that if we add one to the running
derivative at each iteration (like in the Mandelbulb case), we get an upper bound of the true derivative. And since our distance estimate is calculated by dividing with the running derivate, this
approximation yields a smaller distance estimate than the true one (which is good).
Another point is, that it is striking that we end up with the same scalar estimator as for the tetrahedron in part 3 (except that is has no sphere inversion). But for the tetrahedron, the scalar
estimator was based on straight-forward arguments, so perhaps it is possible to come up with a much simpler argument for the running scalar derivative for the Mandelbox as well.
There must also be some kind of link between the gradient and the Jacobian norm. It seems, that the norm of the Jacobian should be equal to the absolute value of the length of the Mandelbox(p)
function: ||J|| = |grad|MB(p)||, since they both describe how fast the length varies along the steepest descent path. This would also make the link to the gradient based numerical methods (discussed
in part 5) more clear.
And finally, if we reused our argumentation for using a linear zero-point approximation of the escape length to the Mandelbulb, it just doesn’t work. Here it is necessary to introduce a log-term (\
(DE= 0.5*r*log(r)/dr\)). Of course, the Mandelbulb is not composed of conformal transformations, so the “Jacobian to Scalar running derivative” argument is not valid anymore, but we already have an
expression for the scalar running derivative for the Mandelbulb, and this expression does not seem to work well with the \(DE=(r-r_{min})/dr\) approximation. So it is not clear under what conditions
this approximation is valid. Update: Again, Knighty makes some good arguments below in the comments for why the linear approximations holds here.
The next part is about dual numbers and distance estimation.
Distance Estimated 3D Fractals (V): The Mandelbulb & Different DE Approximations
Previous posts: part I, part II, part III and part IV.
The last post discussed the distance estimator for the complex 2D Mandelbrot:
(1) \(DE=0.5*ln(r)*r/dr\),
with ‘dr’ being the length of the running (complex) derivative:
(2) \(f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1\)
In John Hart’s paper, he used the exact same form to render a Quaternion system (using four-component Quaternions to keep track of the running derivative). In the paper, Hart never justified why the
complex Mandelbrot formula also should be valid for Quaternions. A proof of this was later given by Dang, Kaufmann, and Sandin in the book Hypercomplex Iterations: Distance Estimation and Higher
Dimensional Fractals (2002).
I used the same distance estimator formula, when drawing the 3D hypercomplex images in the last post – it seems to be quite generic and applicable to most polynomial escape time fractal. In this post
we will take a closer look at how this formula arise.
The Mandelbulb
But first, let us briefly return to the 2D Mandelbrot equation: \(z_{n+1} = z_{n}^2+c\). Now, squaring complex numbers has a simple geometric interpretation: if the complex number is represented in
polar coordinates, squaring the number corresponds to squaring the length, and doubling the angle (to the real axis).
This is probably what motivated Daniel Nylander (Twinbee) to investigate what happens when turning to spherical 3D coordinates and squaring the length and doubling the two angles here. This makes it
possible to get something like the following object:
On the image above, I made made some cuts to emphasize the embedded 2D Mandelbrot.
Now, this object is not much more interesting than the triplex and Quaternion Mandelbrot from the last post. But Paul Nylander suggested that the same approach should be used for a power-8 formula
instead: \(z_{n+1} = z_{n}^8+c\), something which resulted in what is now known as the Mandelbulb fractal:
The power of eight is somewhat arbitrary here. A power seven or nine object does not look much different, but unexpectedly these higher power objects display a much more interesting structure than
their power two counterpart.
Here is some example Mandelbulb code:
float DE(vec3 pos) {
vec3 z = pos;
float dr = 1.0;
float r = 0.0;
for (int i = 0; i < Iterations ; i++) {
r = length(z);
if (r>Bailout) break;
// convert to polar coordinates
float theta = acos(z.z/r);
float phi = atan(z.y,z.x);
dr = pow( r, Power-1.0)*Power*dr + 1.0;
// scale and rotate the point
float zr = pow( r,Power);
theta = theta*Power;
phi = phi*Power;
// convert back to cartesian coordinates
z = zr*vec3(sin(theta)*cos(phi), sin(phi)*sin(theta), cos(theta));
return 0.5*log(r)*r/dr;
It should be noted, that several versions of the geometric formulas exists. The one above is based on doubling angles for spherical coordinates as they are defined on Wikipedia and is the same
version as Quilez has on his site. However, several places this form appears:
float theta = asin( z.z/r );
float phi = atan( z.y,z.x );
z = zr*vec3( cos(theta)*cos(phi), cos(theta)*sin(phi), sin(theta) );
which results in a Mandelbulb object where the poles are similar, and where the power-2 version has the nice 2D Mandelbrot look depicted above.
I’ll not say more about the Mandelbulb and its history, because all this is very well documented on Daniel White’s site, but instead continue to discuss various distance estimators for it.
So, how did we arrive at the distance estimator in the code example above?
Following the same approach as for the 4D Quaternion Julia set, we start with our iterative function:
\(f_n(c) = f_{n-1}^8(c) + c, f_0(c) = 0\)
Deriving this function (formally) with respect to c, gives
(3) \(f’_n(c) = 8f_{n-1}^7(c)f’_{n-1}(c)+1\)
where the functions above are ‘triplex’ (3 component) valued. But we haven’t defined how to multiply two spherical triplex numbers. We only know how to square them! And how do we even derive a vector
valued function with respect to a vector?
The Jacobian Distance Estimator
Since we have three different function components, which we can derive with three different number components, we end up with nine possible scalar derivatives. These may be arranged in a Jacobian
J = \begin{bmatrix}
\frac {\partial f_x}{\partial x} & \frac {\partial f_x}{\partial y} & \frac {\partial f_x}{\partial z} \\
\frac {\partial f_y}{\partial x} & \frac {\partial f_y}{\partial y} & \frac {\partial f_y}{\partial z} \\
\frac {\partial f_z}{\partial x} & \frac {\partial f_z}{\partial y} & \frac {\partial f_z}{\partial z}
The Jacobian behaves like similar to the lower-dimensional derivatives, in the sense that it provides the best linear approximation to F in a neighborhood of p:
(4) \(
F(p+dp) \approx F(p) + J(p)dp
In formula (3) above this means, we have to keep track of a running matrix derivative, and use some kind of norm for this matrix in the final distance estimate (formula 1).
But calculating the Jacobian matrix above analytically is tricky (read the comments below from Knighty and check out his running matrix derivative example in the Quadray thread). Luckily, other
solutions exist.
Let us start by considering the complex case once again. Here we also have a two-component function derived by a two-component number. So why isn’t the derivative of a complex number a 2×2 Jacobian
It turns out that for a complex function to be complex differentiable in every point (holomorphic), it must satisfy the Cauchy Riemann equations. And these equations reduce the four quantities in the
2×2 Jacobian to just two numbers! Notice, that the Cauchy Riemann equations are a consequence of the definition of the complex derivative in a point p: we require that the derivative (the limit of
the difference) is the same, no matter from which direction we approach p (see here). Very interestingly, the holomorphic functions are exactly the functions that are conformal (angle preserving) –
something which I briefly mentioned (see last part of part III) is considered a key property of fractal transformations.
What if we only considered conformal 3D transformation? This would probably imply that the Jacobian matrix of the transformation would be a scalar times a rotation matrix (see here, but notice they
only claim the reverse is true). But since the rotational part of the matrix will not influence the matrix norm, this means we would only need to keep track of the scalar part – a single component
running derivative. Now, the Mandelbulb power operation is not a conformal transformation. But even though I cannot explain why, it is still possible to define a scalar derivative.
The Scalar Distance Estimator
It turns out the following running scalar derivative actually works:
(5) \(dr_n = 8|f_{n-1}(c)|^7dr_{n-1}+1\)
where ‘dr’ is a scalar function. I’m not sure who first came up with the idea of using a scalar derivative (but it might be Enforcer, in this thread.) – but it is interesting, that it works so well
(it also works in many other cases, including for Quaternion Julia system). Even though I don’t understand why the scalar approach work, there is something comforting about it: remember that the
original Mandelbulb was completely defined in terms of the square and addition operators. But in order to use the 3-component running derivative, we need to able to multiply two arbitrary ‘triplex’
numbers! This bothered me, since it is possible to draw the Mandelbulb using e.g. a 3D voxel approach without knowing how to multiply arbitrary numbers, so I believe it should be possible to
formulate a DE-approach, that doesn’t use this extra information. And the scalar approach does exactly this.
The escape length gradient approximation
Let us return to formula (1) above:
(1) \(DE=0.5*ln(r)*r/dr\),
The most interesting part is the running derivative ‘dr’. For the fractals encountered so far, we have been able to find analytical running derivatives (both vector and scalar valued), but as we
shall see (when we get to the more complex fractals, such as the hybrid systems) it is not always possible to find an analytical formula.
Remember that ‘dr’ is the length of the f'(z) (for complex and Quaternion numbers). In analogy with the complex and quaternion case, the function must be derived with regard to the 3-component number
c. Deriving a vector-valued function with regard to a vector quantity suggests the use of a Jacobian matrix. Another approach is to take the gradient of the escape length: \(dr=|\nabla |z_{n}||\) –
while it is not clear to me why this is valid, it work in many cases as we will see:
David Makin and Buddhi suggested (in this thread) that instead of trying to calculate a running, analytical derivative, we could use an numerical approximation, and calculate the above mentioned
gradient using the finite forwarding method we also used when calculating a surface normal in post II.
The only slightly tricky point is, that the escape length must be evaluated for the same iteration count, otherwise you get artifacts. Here is some example code:
int last = 0;
float escapeLength(in vec3 pos)
vec3 z = pos;
for( int i=1; i Bailout && last==0) || (i==last))
last = i;
return length(z);
return length(z);
float DE(vec3 p) {
last = 0;
float r = escapeLength(p);
if (r*r<Bailout) return 0.0;
gradient = (vec3(escapeLength(p+xDir*EPS), escapeLength(p+yDir*EPS), escapeLength(p+zDir*EPS))-r)/EPS;
return 0.5*r*log(r)/length(gradient);
Notice the use of the ‘last’ variable to ensure that all escapeLength’s are evaluated at the same iteration count. Also notice that ‘gradient’ is a global varible – this is because we can reuse the
normalized gradient as an approximation for our surface normal and save some calculations.
The approach above is used in both Mandelbulber and Mandelbulb 3D for the cases where no analytical solution is known. On Fractal Forums it is usually refered to as the Makin/Buddhi 4-point Delta-DE
The potential gradient approximation
Now we need to step back and take a closer look at the origin of the Mandelbrot distance estimation formula. There is a lot of confusion about this formula, and unfortunately I cannot claim to
completely understand all of this myself. But I’m slowly getting to understand bits of it, and want to share what I found out so far:
Let us start by the original Hart paper, which introduced the distance estimation technique for 3D fractals. Hart does not derive the distance estimation formula himself, but notes that:
Now, I haven’t talked about this potential function, G(z), that Hart mentions above, but it is possible to define a potential with the properties that G(Z)=0 inside the Mandelbrot set, and positive
outside. This is the first thing that puzzled me: since G(Z) tends toward zero near the border, the “log G(Z)” term, and hence the entire term will become negative! As it turns out, the “log” term in
the Hart paper is wrong. (And also notice that his formula (8) is wrong too – he must take the norm of complex function f(z) inside the log function – otherwise the distance will end up being complex
In The Science of Fractal Images (which Hart refers to above) the authors arrive at the following formula, which I believe is correct:
Similar, in Hypercomplex Iterations the authors arrive at the same formula:
But notice that formula (3.17) is wrong here! I strongly believe it misses a factor two (in their derivation they have \(sinh(z) \approx \frac{z}{2}\) for small z – but this is not correct: \(sinh(z)
\approx z\) for small z).
The approximation going from (3.16) to (3.17) is only valid for points close to the boundary (where G(z) approaches zero). This is no big problem, since for points far away we can restrict the
maximum DE step, or put the object inside a bounding box, which we intersect before ray marching.
It can be shown that \(|Z_n|^{1/2^n} \to 1\) for \(n \to \infty\). By using this we end up with our well-known formula for the lower bound from above (in a slightly different notation):
(1) \(DE=0.5*ln(r)*r/dr\),
Instead of using the above formula, we can work directly with the potential G(z). For \(n \to \infty\), G(z) may be approximated as \(G(z)=log(|z_n|)/power^n\), where ‘power’ is the polynomial power
(8 for Mandelbulb). (This result can be found in e.g. Hypercomplex Iterations p. 37 for quadratic functions)
We will approximate the length of G'(z) as a numerical gradient again. This can be done using the following code:
float potential(in vec3 pos)
vec3 z = pos;
for(int i=1; i Bailout) return log(length(z))/pow(Power,float(i));
return 0.0;
float DE(vec3 p) {
float pot = potential(p);
if (pot==0.0) return 0.0;
gradient = (vec3(potential(p+xDir*EPS), potential(p+yDir*EPS), potential(p+zDir*EPS))-pot)/EPS;
return (0.5/exp(pot))*sinh(pot)/length(gradient);
Notice, that this time we do not have to evaluate the potential for the same number of iterations. And again we can store the gradient and reuse it as a surface normal (when normalized).
A variant using Subblue’s radiolari tweak
Quilez’ Approximation
I arrived at the formula above after reading Iñigo Quilez’ post about the Mandelbulb. There are many good tips in this post, including a fast trigonometric version, but for me the most interesting
part was his DE approach: Quilez used a potential based DE, defined as:
\(DE(z) = \frac{G(z)}{|G'(z)|} \)
This puzzled me, since I couldn’t understand its origin. Quilez offers an explanation in this blog post, where he arrives at the formula by using a linear approximation of G(z) to calculate the
distance to its zero-region. I’m not quite sure, why this approximation should be justified, but it seems a bit like an example of Newtons method for root finding. Also, as Quilez himself notes, he
is missing a factor 1/2.
But if we start out by formula (3.17) above, and notes that \(sinh(G(z)) \approx G(z)\) for small G(z) (near the fractal boundary) , and that \(|Z_n|^{1/2^n} \to 1\) for \(n \to \infty\) we arrive
\(DE(z) = 0.5*\frac{G(z)}{|G'(z)|} \)
(And notice that the same two approximations are used when arriving at our well-known formula (1) at the top of the page).
Quilez’ method can be implemented using the previous code example and replacing the DE return value simply by:
return 0.5*pot/length(gradient);
If you wonder how these different methods compare, here are some informal timings of the various approaches (parameters were adjusted to give roughly identical appearances):
Sinh Potential Gradient (my approach) 1.0x
Potential Gradient (Quilez) 1.1x
Escape Length Gradient (Makin/Buddhi) 1.1x
Analytical 4.1x
The three first methods all use a four-point numerical approximation of the gradient. Since this requires four calls to the iterative function (which is where most of the computational time is
spent), they are around four times slower than the analytical solution, that only uses one evaluation.
My approach is slightly slower than the other numerical approaches, but is also less approximated than the others. The numerical approximations do not behave in the same way: the Makin/Buddhi
approach seems more sensible to choosing the right EPS size in the numerical approximation of the gradient.
As to which function is best, this requires some more testing on various systems. My guess is, that they will provide somewhat similar results, but this must be investigated further.
The Mandelbulb can also be drawn as a Julia fractal.
Some final notes about Distance Estimators
Mathematical justification: first note, that the formulas above were derived for complex mathematics and quadratic systems (and extended to Quaternions and some higher-dimensional structures in
Hypercomplex Iterations). These formulas were never proved for exotic stuff like the Mandelbulb triplex algebra or similar constructs. The derivations above were included to give a hint to the origin
and construction of these DE approximations. To truly understand these formula, I think the original papers by Hubbard and Douady, and the works by John Willard Milnor should be consulted –
unfortunately I couldn’t find these online. Anyway, I believe a rigorous approach would require the attention of someone with a mathematical background.
Using a lower bound as a distance estimator. The formula (3.17) above defines lower and upper bounds for a given point to the boundary of the Mandelbrot set. Throughout this entire discussion, we
have simply used the lower bound as a distance estimate. But a lower bound is not good enough as a distance estimate. This can easily be realized since 0 is always a lower bound of the true distance.
In order for our sphere tracing / ray marching approach to work, the lower bound must converge towards to the true distance, as it approaches zero! In our case, we are safe, because we also have an
upper bound which is four times the lower bound (in the limit where the exp(G(z)) term disappears). Since the true distance must be between the lower and upper bound, the true distance converges
towards the lower bound, as the lower bound get smaller.
DE’s are approximations. All our DE formulas above are only approximations – valid in the limit \(n \to \infty\), and some also only for point close to the fractal boundary. This becomes very
apparent when you start rendering these structures – you will often encounter noise and artifacts. Multiplying the DE estimates by a number smaller than 1, may be used to reduce noise (this is the
Fudge Factor in Fragmentarium). Another common approach is to oversample – or render images at large sizes and downscale.
Future directions. There is much more to explore and understand about Distance Estimators. For instance, the methods above use four-point numerical gradient estimation, but perhaps the primary camera
ray marching could be done using directional derivatives (two point delta estimation), and thus save the four-point sampling for the non-directional stuff (AO, soft shadows, normal estimation).
Automatic differentation with dual numbers (as noted in post II) may also be used to avoid the finite difference gradient estimation. It would be nice to have a better understanding of why the scalar
gradients work.
The next blog post discusses the Mandelbox fractal.
Distance Estimated 3D Fractals (IV): The Holy Grail
Previous posts: part I, part II, and part III.
Despite its young age, the Mandelbulb is probably the most famous 3D fractal in existence. This post will examine how we can create a Distance Estimator for it. But before we get to the Mandelbulb,
we will have to step back and review a bit of the history behind it.
The Search for the Holy Grail
The original Mandelbrot fractal is a two dimensional fractal based on the convergence properties of a series of complex numbers. The formula is very simple: for any complex number z, check whether
the sequence iteratively defined by \(z_{n+1} = z_{n}^2+c\) diverges or not. The Mandelbrot set is defined as the set of points which do not diverge, that is, the points with a series that stays
bounded within a given radius. The results can be depicted in the complex plane.
The question is how to extend this to three dimensions. The Mandelbrot set fits two dimensions, because complex numbers have two components. Can we find a similar number system for three dimensions?
The Mandelbrot formula involves two operations: adding numbers, and squaring them. Creating a n-component number where addition is possible is easy. This is what mathematicians refer to as a vector
space. Component-wise addition will do the trick, and seems like the logical choice.
But the Mandelbrot formula also involves squaring a number, which requires a multiplication operator (a vector product) on the vector space. A vector space with a (bilinear) vector product is called
an algebra over a field. The numbers in these kind of vector spaces are often called hypercomplex numbers.
To see why a three dimensional number system might be problematic, let us try creating one. We could do this by starting out with the complex numbers and introduce a third component, j. We will try
to keep as many as possible of the characteristic properties of the complex and real numbers, such as distributivity, \(a*(b+c)=(a*b)+(a*c)\), and commutativity, \(a*b=b*a\). If we assume
distributivity, we only need to specify how the units of the three components multiply. This can be illustrated in a multiplication table. Since we also assumed commutativity, such a table must be
& \boldsymbol{1} & \boldsymbol{i} & \boldsymbol{j} \\
\boldsymbol{1} & 1 & i & j \\
\boldsymbol{i} & i & -1 & ?\\
\boldsymbol{j} & j & ? & ?
For a well-behaved number system, anything multiplied by 1 is still one, and if we now require the real and imaginary components to behave as for the complex numbers, we only have three components
left – the question marks in the matrix. I’ve rendered out a few of the systems, I encountered while trying arbitrary choices of the missing numbers in the matrix:
(Many people have explored various 3D component multiplication tables – see for instance Paul Nylander’s Hypercomplex systems for more examples).
Unfortunately, our toy system above fails to be associative (i.e. it is not always true, that \(a*(b*c) = (a*b)*c\)), as can be seen by looking at the equation \(i*(i*j) = (i*i)*j \Rightarrow i*x =
-j\), which can not be satisfied no matter how we choose x.
It turns out that it is difficult to create a consistent number system in three dimensions. There simply is no natural choice. In fact, if we required that our number system allowed for a division
operator, there is a theorem stating that only four such mathematical spaces are possible: the real numbers (1D), the complex numbers (2D), the quaternions (4D) and the octonions (8D). But no 3D
But what about the 4D Quaternions? Back in 1982, Alan Norton published a paper showing a Quaternion Julia set made by displaying a 3D “slice” of the 4D space. Here is an example of a Quaternion Julia
Of course, in order to visualize a 4D object, you have to make some kind of dimensional reduction. The most common approach is to make a 3D cross-section, by simply keeping one of the four components
at a fixed value.
If you wonder why you never see a Quaternion Mandelbrot image, the reason is simple. It is not very interesting because of its axial symmetry:
If you, however, make a rotation inside the iteration loop, you can get something more like a 3D Mandelbrot.
The Quaternion system (and the 3D hypercomplex systems above) are defined exactly as the 2D system – by checking if \(z_{n+1} = z_{n}^2+c\) converges or not.
But how do we draw a 3D image of these fractals? In contrast to the 2D case, where it is possible to build a 2D grid, and check inside each cell, building a 3D grid and checking each cell would be
far too memory and time consuming for images in any decent resolution.
A distance estimator for quadratic systems.
While Alan Norton used a different rendering approach, a very elegant solution to this was found by John Hart et al in a 1989 paper: distance estimated rendering. As discussed in the previous posts,
distance estimated rendering requires that we are able to calculate a lower bound to the distance from every point in space to our fractal surface! A first, this might seem impossible. But it turns
out such a formula already was known for the 2D Mandelbrot set. A distance estimate can be found as:
(1) \(DE=0.5*ln(r)*r/dr\)
Where ‘r’ is the escape time length, and ‘dr’ is the length of the running derivative. (The approximation is only exact in the limit where the number of iterations goes to infinity)
In order to define what we mean by the running derivative, we need a few extra definitions. For Mandelbrot sets, we study the sequence \(z_{n+1} = z_{n}^2+c\) for each point c. Let us introduce the
function \(f_n(c)\), defined as the n’th entry for the sequence for the point c. By this definition, we have the following defining formula for the Mandelbrot set:
\(f_n(c) = f_{n-1}^2(c) + c, f_0(c) = 0\)
Deriving this function with respect to c, gives
(2) \(f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)+1\) (for Mandelbrot formula)
Similar, the Julia set is defined by choosing a fixed constant, d, in the quadratic formula, using c only as the first entry in our sequence:
\(f_n(c) = f_{n-1}^2(c) + d, f_0(c) = c\)
Deriving this function with respect to c, gives
(3) \(f’_n(c) = 2f_{n-1}(c)f’_{n-1}(c)\) (for Julia set formula)
which is almost the same result as for the Mandelbrot set, except for unit term. And now we can define the length of \(f_n\), and the running derivative \(f’_n\):
\(r = |f_n(c)|\) and \(dr = |f’_n(c)|\)
used in the formula (1) above. This formula was found by Douady and Hubbard in a 1982 paper (more info).
2D Julia set rendered using a distance estimator approach. This makes it possible to emphasize details, without having to use extensive oversampling.
Due to a constraint in WordPress, this post has reached its maximum length. The next post continues the discussion, and shows how the formula above can be used for other types of fractals than the 2D
Distance Estimated 3D Fractals (III): Folding Space
The previous posts (part I, part II) introduced the basics of rendering DE (Distance Estimated) systems, but left out one important question: how do we create the distance estimator function?
Drawing spheres
Remember that a distance estimator is nothing more than a function, that for all points in space returns a length smaller than (or equal to) the distance to the closest object. This means we are safe
to march at least this step length without hitting anything – and we use this information to speed up the ray marching.
It is fairly easy to come up with distance estimators for most simple geometric shapes. For instance, let us start by a sphere. Here are three different ways to calculate the distance from a point in
space, p, to a sphere with radius R:
(1) DE(p) = max(0.0, length(p)-R) // solid sphere, zero interior
(2) DE(p) = length(p)-R // solid sphere, negative interior
(3) DE(p) = abs(length(p)-R) // hollow sphere shell
From the outside all of these look similar. But (3) is hollow – we would be able to position the camera inside it, and it would look different if intersected with other objects.
What about the first two? There is actually a subtle difference: the common way to find the surface normal, is to sample the DE function close to the camera ray/surface intersection. But if the
intersection point is located very close to the surface (for instance exactly on it), we might sample the DE inside the sphere. And this will lead to artifacts in the normal vector calculation for
(1) and (3). So, if possible use signed distance functions. Another way to avoid this, is to backstep along the camera ray a bit before calculating the surface normal (or to add a ray step multiplier
less than 1.0).
From left to right: Sphere (1), with normal artifacts because the normal was not backstepped. Sphere (2) with perfect normals. Sphere (3) drawn with normal backstepping, and thus perfect normals. The
last row shows how the spheres look when cut open.
Notice that distance estimation only tells the distance from a point to an object. This is in contrast to classic ray tracing, which always is about finding the distance from a point to a given
object along a line. The formulas for ray-object intersection in classic ray tracing are thus more complex, for instance the ray-sphere intersection involves solving a quadratic equation. The
drawback of distance estimators is that multiple ray steps are needed, even for simple objects like spheres.
Combining objects
Distance fields have some nice properties. For instance, it is possible to combine two distance fields using a simple minimum(a,b) operator. As an example we could draw the union of two spheres the
following way:
DE(p) = min( length(p)-1.0 , length(p-vec3(2.0,0.0,0.0))-1.0 );
This would give us two spheres with unit radius, one centered at origo, and another at (2,0,0). The same way it is possible to calculate the intersection of two objects, by taking the maximum value
of the fields. Finally, if you are using signed distance functions, it is possible to subtract one shape from another by inverting one of the fields, and calculating the intersection (i.e. taking max
(A, -B)).
So now we have a way to combine objects. And it is also possible to apply local transformations, to get interesting effects:
This image was created by combining the DE’s of a ground plane and two tori while applying a twisting deformation to the tori.
Rendering of (non-fractal) distance fields is described in depth in this paper by Hart: Sphere Tracing: A Geometric Method for the Antialiased Ray Tracing of Implicit Surfaces. This paper also
describes distance estimators for various geometric objects, such as tori and cones, and discuss deformations in detail. Distance field techniques have also been adopted by the demoscene, and Iñigo
Quilez’s introduction contains a lot of information. (Update: Quilez has created a visual reference page for distance field primitives and transformations)
Building Complexity
This is all nice, but even if you can create interesting structures, there are some limitations. The above method works fine, but scales very badly when the number of distance fields to be combined
increases. Creating a scene with 1000 spheres by finding the minimum of the 1000 fields would already become too slow for real-time purposes. In fact ordinary ray tracing scales much better – the use
of spatial acceleration structures makes it possible for ordinary ray tracers to draw scenes with millions of objects, something that is far from possible using the “find minimum of all object
fields” distance field approach sketched above.
But fractals are all about detail, and endless complexity, so how do we proceed?
It turns out that there are some tricks, that makes it possible to add complexity in ways that scales much better.
First, it is possible to reuse (or instance) objects using e.g. the modulo-operator. Take a look at the following DE:
float DE(vec3 z)
z.xy = mod((z.xy),1.0)-vec3(0.5); // instance on xy-plane
return length(z)-0.3; // sphere DE
Which generates this image:
Now we are getting somewhere. Tons of detail, at almost no computational cost. Now we only need to make it more interesting!
A Real Fractal
Let us continue with the first example of a real fractal: the recursive tetrahedron.
A tetrahedron may be described as a polyhedron with vertices (1,1,1),(-1,-1,1),(1,-1,-1),(-1,1,-1). Now, for each point in space, lets us take the vertex closest to it, and scale the system by a
factor of 2.0 using this vertex as center, and then finally return the distance to the point where we end, after having repeated this operation. Here is the code:
float DE(vec3 z)
vec3 a1 = vec3(1,1,1);
vec3 a2 = vec3(-1,-1,1);
vec3 a3 = vec3(1,-1,-1);
vec3 a4 = vec3(-1,1,-1);
vec3 c;
int n = 0;
float dist, d;
while (n < Iterations) {
c = a1; dist = length(z-a1);
d = length(z-a2); if (d < dist) { c = a2; dist=d; }
d = length(z-a3); if (d < dist) { c = a3; dist=d; }
d = length(z-a4); if (d < dist) { c = a4; dist=d; }
z = Scale*z-c*(Scale-1.0);
return length(z) * pow(Scale, float(-n));
Which results in the following image:
Our first fractal! Even though we do not have the infinite number of objects, like the mod-example above, the number of objects grow exponentially as we crank up the number of iterations. In fact,
the number of objects is equal to 4^Iterations. Just ten iterations will result in more than a million objects - something that is easily doable on a GPU in realtime! Now we are getting ahead of the
standard ray tracers.
Folding Space
But it turns out that we can do even better, using a clever trick by utilizing the symmetries of the tetrahedron.
Now, instead of scaling about the nearest vertex, we could use the mirror points in the symmetry planes of the tetrahedron, to make sure that we arrive at the same "octant" of the tetrahedron - and
then always scale from the vertex it contains.
The following illustration tries to visualize this:
The red point at the top vertex is the scaling center at (1,1,1). Three symmetry planes of the tetrahedron have been drawn in red, green, and blue. By mirroring points if they are on the wrong side
(the non-white points) of plane, we will ensure they get mapped to the white "octant". The operation of mirroring a point, if it is on one side of a plane, is called a 'folding operation' or just a
Here is the code:
float DE(vec3 z)
float r;
int n = 0;
while (n < Iterations) {
if(z.x+z.y<0) z.xy = -z.yx; // fold 1
if(z.x+z.z<0) z.xz = -z.zx; // fold 2
if(z.y+z.z<0) z.zy = -z.yz; // fold 3
z = z*Scale - Offset*(Scale-1.0);
return (length(z) ) * pow(Scale, -float(n));
These folding operations shows up in several fractals. A fold in a general plane with normal n can be expressed as:
float t = dot(z,n1); if (t<0.0) { z-=2.0*t*n1; }
or in a optimized version (due to AndyAlias):
z-=2.0 * min(0.0, dot(z, n1)) * n1;
Also notice that folds in the xy, xz, or yz planes may be expressed using the 'abs' operator.
That was a lot about folding operations, but the really interesting stuff happens when we throw rotations into the system. This was first introduced by Knighty in the Fractal Forum's thread
Kaleidoscopic (escape time) IFS. The thread shows recursive versions of all the Platonic Solids and the Menger Sponge - including the spectacular forms that arise when inserting rotations and
translations into the system.
The Kaleidoscopic IFS fractals are in my opinion some of the most interesting 3D fractals ever discovered (or created if you are not a mathematical platonist). Here are some examples of forms that
may arise from a system with icosahedral symmetry:
Here the icosahedral origin might be evident, but it possible to tweak these structures beyond any recognition of their origin. Here are a few more examples:
Knighty's fractals are composed using a small set of transformations: scalings, translations, plane reflections (the conditional folds), and rotations. The folds are of course not limited to the
symmetry planes of the Platonic Solids, all planes are possible.
The transformations mentioned above all belong to the group of conformal (angle preserving) transformations. It is sometimes said (on Fractal Forums) that for 'true' fractals the transformations must
be conformal, since non-conformal transformations tend to stretch out detail and create a 'whipped cream' look, which does not allow for deep zooms. Interestingly, according to Liouville's theorem
there are not very many possible conformal transformations in 3D. In fact, if I read the theorem correctly, the only possible conformal 3D transformations are the ones above and the sphere inversions
Part IV discusses how to arrive at Distance Estimators for the fractals such as the Mandelbulb, which originates in attempts to generalize the Mandelbrot formula to three dimensions: the so-called
search for the holy grail in 3D fractals.
Distance Estimated 3D Fractals (II): Lighting and Coloring
The first post discussed how to find the intersection between a camera ray and a fractal, but did not talk about how to color the object. There are two steps involved here: setting up a coloring
scheme for the fractal object itself, and the shading (lighting) of the object.
Lights and shading
Since we are raymarching our objects, we can use the standard lighting techniques from ray tracing. The most common form of lightning is to use something like Blinn-Phong, and calculate approximated
ambient, diffuse, and specular light based on the position of the light source and the normal of the fractal object.
Surface Normal
So how do we obtain a normal of a fractal surface?
A common method is to probe the Distance Estimator function in small steps along the coordinate system axis and use the numerical gradient obtained from this as the normal (since the normal must
point in the direction where the distance field increase most rapidly). This is an example of the finite difference method for numerical differentiation. The following snippet shows how the normal
may be calculated:
vec3 n = normalize(vec3(DE(pos+xDir)-DE(pos-xDir),
The original Hart paper also suggested that alternatively, the screen space depth buffer could be used to determine the normal – but this seems to be both more difficult and less accurate.
Finally, as fpsunflower noted in this thread it is possible to use Automatic Differentiation with dual numbers, to obtain a gradient without having to introduce an arbitrary epsilon sampling
Ambient Occlusion
Besides the ambient, diffuse, and specular light from Phong-shading, one thing that really improves the quality and depth illusion of a 3D model is ambient occlusion. In my first post, I gave an
example of how the number of ray steps could be used as a very rough measure of how occluded the geometry is (I first saw this at Subblue’s site – his Quaternion Julia page has some nice
illustrations of this effect). This ‘ray step AO‘ approach has its shortcomings though: for instance, if the camera ray is nearly parallel to a surface (a grazing incidence) a lot of steps will be
used, and the surface will be darkened, even if it is not occluded at all.
Another approach is to sample the Distance Estimator at points along the normal of the surface and use this information to put together a measure for the Ambient Occlusion. This is a more intuitive
method, but comes with some other shortcomings – i.e. new parameters are needed to control the distance between the samplings and their relative weights with no obvious default settings. A
description of this ‘normal sampling AO‘ approach can be found in Iñigo Quilez’s introduction to distance field rendering.
In Fragmentarium, I’ve implemented both methods: The ‘DetailAO’ parameter controls the distance at which the normal is sampled for the ‘normal sampling AO’ method. If ‘DetailAO’ is set to zero, the
‘ray step AO’ method is used.
Other lighting effects
Besides Phong shading and ambient occlusion, all the usual tips and tricks in ray tracing may be applied:
Glow – can be added simply by mixing in a color based on the number of ray steps taken (points close to the fractal will use more ray steps, even if they miss the fractal, so pixels close to the
object will glow).
Fog – is also great for adding to the depth perception. Simply blend in the background color based on the distance from the camera.
Hard shadows are also straight forward – check if the ray from the surface point to the light source is occluded.
Soft shadows: Iñigo Quilez has a good description of doing softened shadows.
Reflections are pretty much the same – reflect the camera ray in the surface normal, and mix in the color of whatever the reflected ray hits.
The effects above are all implemented in Fragmentarium as well. Numerous other extensions could be added to the raytracer: for example, environment mapping using HDRI panoramic maps provides very
natural lighting and is easy to apply for the user, simulated depth-of-field also adds great depth illusion to an image, and can be calculated in reasonable time and quality using screen space
buffers, and more complex materials could also be added.
Fractal objects with a uniform base color and simple colored light sources can produce great images. But algorithmic coloring is a powerful tool for bringing the fractals to life.
Algorithmic color use one or more quantities, determined by looking at the orbit or the escape point or time.
Orbit traps is a popular way to color fractals. This method keeps track of how close the orbit comes to a chosen geometric object. Typical traps include keeping track of the minimum distance to the
coordinate system center, or to simple geometric shapes like planes, lines, or spheres. In Fragmentarium, many of the systems use a 4-component vector to keep track of the minimum distance to the
three x=0, y=0, and z=0 planes and to the distance from origo. These are mapped to color through the X,Y,Z, and R parameters in the ‘Coloring’ tab.
The iteration count is the number of iterations it takes before the orbit diverges (becomes larger than the escape radius). Since this is an integer number it is prone to banding, which is discussed
later in this post. One way to avoid this is by using a smooth fractional iteration count:
float smooth = float(iteration)
+ log(log(EscapeRadiusSquared))/log(Scale)
- log(log(dot(z,z)))/log(Scale);
(For a derivation of this quantity, see for instance here)
Here ‘iteration’ is the number of iterations, and dot(z,z) is the square of the escape time length. There are a couple of things to notice. First, the formula involves a characteristic scale,
referring to the scaling factor in the problem (e.g. 2 for a standard Mandelbrot, 3 for a Menger). It is not always possible to obtain such a number (e.g. for Mandelboxes or hybrid systems).
Secondly, if the smooth iteration count is used to lookup a color in a palette, offset may be ignored, which means the second term can be dropped. Finally, which ‘log’ functions should be used? This
does not matter if only they are used consistently: since all different log functions are proportional, the ratio of two logs does not depend on the base used. For the inner logs (e.g. log(dot(,z))),
changing the log will result in a constant offset to the overall term, so again this will just result in an offset in the palette lookup.
The lower half of this image use a smooth iteration count.
Conditional Path Coloring
(I made this name up – I’m not sure there is an official name, but I’ve seen the technique used several times in Fractal Forums posts.)
Some fractals may have conditional branches inside their iteration loop (sometimes disguised as an ‘abs’ operator). The Mandelbox is a good example: the sphere fold performs different actions
depending on whether the length of the iterated point is smaller or larger than a set threshold. This makes it possible to keep track of a color variable, which is updated depending on the path
Many other types of coloring are also possible, for example based on the normal of the surface, spherical angles of the escape time points, and so on. Many of the 2D fractal coloring types can also
be applied to 3D fractals. UltraFractal has a nice list of 2D coloring types.
Improving Quality
Some visual effects and colorings, are based on integer quantities – for example glow is based on on the number of ray steps. This will result in visible boundaries between the discrete steps, an
artifact called banding.
The smooth iteration count introduced above is one way to get rid of banding, but it is not generally applicable. A more generic approach is to add some kind of noise into the system. For instance,
by scaling the length of the first ray step for each pixel by a random number, the banding will disappear – at the cost of introducing some noise.
Personally, I much prefer noise to banding – in fact I like the noisy, grainy look, but that is a matter of preference.
Another important issue is aliasing: if only one ray is traced per pixel, the image is prone to aliasing and artifacts. Using more than one sample will remove aliasing and reduce noise. There are
many ways to oversample the image – different strategies exist for choosing the samples in a way that optimizes the image quality and there are different ways of weighting (filtering) the samples for
each pixel. Physical Based Rendering has a very good chapter on sampling and filtering for ray tracing, and this particular chapter is freely available here:
In Fragmentarium there is some simple oversampling built-in – by setting the ‘AntiAlias’ variable, a number of samples are chosen (on a uniform grid). They are given the same weight (box filtered). I
usually only use this for 2D fractals – because they render faster, which allows for a high number of samples. For 3D renders, I normally render a high resolution image, and downscale it in a image
editing program – this seems to create better quality images for the same number of samples.
Part III discusses how to derive and work with Distance Estimator functions.
Optimizing GLSL Code
By making selected variables constant at compile time, some 3D fractals render more than four times faster. Support for easily locking variables has been added to Fragmentarium.
Some time ago, I became aware that the raytracer in Fragmentarium was somewhat slower than both Fractal Labs and Boxplorer for similar systems – this was somewhat puzzling since the DE raycasting
technique is pretty much the same. After a bit of investigation, I realized that my standard raytracer had grown slower and slower, as new features had been added (e.g. reflections, hard shadows, and
floor planes) – even if the features were turned off!
One way to speed up GLSL code, is by marking some variables constant at compile-time. This way the compiler may optimize code (e.g. unroll loops) and remove unused code (e.g. if hard shadows are
disabled). The drawback is that changing these constant variables requires that the GLSL code is compiled again.
It turned out that this does have a great impact on some systems. For instance for the ‘Dodecahedron.frag’, take a look at the following render times:
No constants: 1.4 fps (1.0x)
Constant rotation matrices : 3.4 fps (2.4x)
Constant rotation matrices + Anti-alias + DetailAO: 5.6 fps (4.0x)
All 38 parameters (except camera): 6.1 fps (4.4x)
The fractal rotation matrices are the matrices used inside the DE-loop. Without the constant declarations, they must be calculated from scratch for each pixel, even though they are identical for all
pixels. Doing the calculation at compile-time gives a notable speedup of 2.4x (notice that another approach would be to calculate such frame constants in the vertex shader and pass them to the pixel
shader as ‘varying’ variables. But according to this post this is – surprisingly – not very effective).
The next speedup – from the ‘Anti-alias’ and ‘DetailAO’ variables – is more subtle. It is difficult to see from the code why these two variables should have such impact. And in fact, it turns out
that combinations of other variables will amount in the same speedup. But these speedups are not additive! Even if you make all variables constants, the framerate only increases slightly above 5.6
fps. It is not clear why this happens, but I have a guess: it seems that when the complexity is lowered between a certain treshold, the shader code execution speed increases sharply. My guess is that
for complex code, the shader runs out of free registers and needs to perform calculations using a slower kind of memory storage.
Interestingly, the ‘iterations’ variable offers no speedup – even though the compiler must be able to unroll the principal DE loop, there is no measurable improvement by doing it.
Finally, the compile time is also greatly reduced when making variables constant. For the ‘Dodecahedron.frag’ code, the compile time is ~2000ms with no constants. By making most variables constant,
the compile time is lowered to around ~335ms on my system.
Locking in Fragmentarium.
In Fragmentarium variables can be locked (made compile-time constant) by clicking the padlock next to them. Locked variables appear with a yellow padlock next to them. When a variable is locked, any
changes to it will first be executed when the system is compiled (by pressing ‘build’). Locked variables, which have been changes, will appear with a yellow background until the system is compiled,
and the changes are executed.
Notice, that whole parameter groups may be locked, by using the buttons at the bottom.
The ‘AntiAlias’ and ‘DetailAO’ variables are locked. The ‘DetailAO’ has been changed, but the changes are not executed yet (the yellow background). The ‘BoundingSphere’ variable has a grey
background, because it has keyboard focus: its value can be finetuned using the arrow keys (up/down controls step size, left/right changes value).
In a fragment, a user variable can be marked as locked by default, by adding a ‘locked’ keyword to it:
uniform float Scale; slider[-5.00,2.0,4.00] Locked
Some variables can not be locked – e.g. the camera settings. It is possible to mark such variables by the ‘NotLockable’ keyword:
uniform vec3 Eye; slider[(-50,-50,-50),(0,0,-10),(50,50,50)] NotLockable
The same goes for presets. Here the locking mode can be stated, if it is different from the default locking mode:
#preset SomeName
AntiAlias = 1 NotLocked
Detail = -2.81064 Locked
Offset = 1,1,1
Locking will be part of Fragmentarium v0.9, which will be released soon.
Syntopia Blog Update
It has not been possible to post comments at my blog for some months. Apparently, my reCAPTCHA plugin was broken (amazingly, spam comments still made their way into the moderation queue).
This should be fixed now.
I’m also on twitter now: @SyntopiaDK, where I’ll post links and news releated to generative systems, 3D fractals, or whatever pops up.
Finally, if you are near Stockholm, some of my images are on display at a small gallery (from July 9th to September 11th): Kungstensgatan 27.
Plotting High-frequency Functions Using a GPU.
A slight digression from the world of fractals and generative art: This post is about drawing high-quality graphs of high-frequency functions.
Yesterday, I needed to draw a few graphs of some simple functions. I started out by using Iñigo Quilez’s nice little GraphToy, but my functions could not be expressed in a single line. So I decided
to implement a graph plotter example in Fragmentarium instead.
Plotting a graph using a GLSL shader is not an obvious task – you have to frame the problem in a way, such that each pixel can be processed individually. This is in contrast to the standard way of
drawing graphs – where you choose a uniform set of values for the x-axis, and draw the lines connecting the points in the (x,f(x)) set.
So how do you do it for each pixel individually?
The first thing to realize is, that it is easy to determine whether a pixel is above or below the graph – this can be done by checking whether y<f(x) or y>f(x). The tricky part is, that we only want
to draw the boundary – the curve that separates the regions above and below the graph.
So how do we determine the boundary? After having tried a few different approaches, I came up with the following simple edge detection procedure: for each pixel, choose a number of samples, in a
region around the pixel center. Then count how many samples are above, and how many samples are below the curve.
If all samples are above, or all samples are below, the pixel is not on the boundary. However, if there are samples both above and below, the boundary must be passing through the pixel.
The whole idea can be expressed in a few lines of code:
for (float i = 0.0; i < samples; i++) {
for (float j = 0.0;j < samples; j++) {
float f = function(pos.x+ i*step.x)-(pos.y+ j*step.y);
count += (f>0.) ? 1 : -1;
// base color on abs(count)/(samples*samples)
It should be noted, that the sampling can be improved by adding a small amount of jittering (random offsets) to the positions – this reduces the aliasing at the cost of adding a small amount of
Highfrequency functions and aliasing
So why it this better than the common ‘connecting line’ approach?
Because this approach deals with the high-frequency information much better.
Consider the function f(x)=sin(x*x*x)*sin(x).
Here is a plot from GraphToy:
Notice how the graph near the red arrows seem to be slowly varying. This is not the true behavior of function, but an artifact of the way we sample our function. Our limited resolution cannot capture
the high frequency components, which results in aliasing.
Whenever you do anything media-related on a computer, you will at some point run into problems with aliasing: whether you are doing sound synthesis, image manipulation, 3D rendering or even drawing a
straight line.
However, using the pixel shader approach, aliasing is much easier to avoid. Here is a Fragmentarium plot of the same function:
Even though it may seem backwards to evaluate the function for all pixels on the screen, it makes it possible to tame the aliasing, and even on a modest GPU, the procedure is fast enough for realtime
The example is included in GitHub under Examples/2D Systems/GraphPlotter.frag.
Distance Estimated 3D Fractals (Part I)
During the last two years, the 3D fractal field has undergone a small revolution: the Mandelbulb (2009), the Mandelbox (2010), The Kaleidoscopic IFS’s (2010), and a myriad of equally or even more
interesting hybrid systems, such as Spudsville (2010) or the Kleinian systems (2011).
All of these systems were made possible using a technique known as Distance Estimation and they all originate from the Fractal Forums community.
Overview of the posts
Part I briefly introduces the history of distance estimated fractals, and discuss how a distance estimator can be used for ray marching.
Part II discuss how to find surface normals, and how to light and color fractals.
Part III discuss how to actually create a distance estimator, starting with distance fields for simple geometric objects, and talking about instancing, combining fields (union, intersections, and
differences), and finally talks about folding and conformal transformation, ending up with a simple fractal distance estimator.
Part IV discuss the holy grail: the search for generalization of the 2D (complex) Mandelbrot set, including Quaternions and other hypercomplex numbers. A running derivative for quadratic systems is
Part V continues the discussion about the Mandelbulb. Different approaches for constructing a running derivative is discussed: scalar derivatives, Jacobian derivatives, analytical solutions, and the
use of different potentials to estimate the distance.
Part VI is about the Mandelbox fractal. A more detailed discussion about conformal transformations, and how a scalar running derivative is sufficient, when working with these kind of systems.
Part VII discuss how dual numbers and automatic differentation may used to construct a distance estimator.
Part VIII is about hybrid fractals, geometric orbit traps, various other systems, and links to relevant software and resources.
The background
The first paper to introduce Distance Estimated 3D fractals was written by Hart and others in 1989:
Ray tracing deterministic 3-D fractals
In this paper Hart describe how Distance Estimation may be used to render a Quaternion Julia 3D fractal. The paper is very well written and definitely worth spending some hours on (be sure to take a
look at John Hart’s other papers as well). Given the age of Hart’s paper, it is striking that is not until the last couple of years that the field of distance estimated 3D fractals has exploded.
There has been some important milestones, such as Keenan Crane’s GPU implementation (2004), and Iñigo Quilez 4K demoscene implementation (2007), but I’m not aware of other fractal systems being
explored using Distance Estimation, before the advent of the Mandelbulb.
Classic raytracing shoots one (or more) rays per pixel and calculate where the rays intersect the geometry in the scene. Normally the geometry is described by a set of primitives, like triangles or
spheres, and some kind of spatial acceleration structure is used to quickly identify which primitives intersect the rays.
Distance Estimation, on the other hand, is a ray marching technique.
Instead of calculating the exact intersection between the camera ray and the geometry, you proceed in small steps along the ray and check how close you are to the object you are rendering. When you
are closer than a certain threshold, you stop. In order to do this, you must have a function that tells you how close you are to the object: a Distance Estimator. The value of the distance estimator
tells you how large a step you are allowed to march along the ray, since you are guaranteed not to hit anything within this radius.
Schematics of ray marching using a distance estimator.
The code below shows how to raymarch a system with a distance estimator:
float trace(vec3 from, vec3 direction) {
float totalDistance = 0.0;
int steps;
for (steps=0; steps < MaximumRaySteps; steps++) {
vec3 p = from + totalDistance * direction;
float distance = DistanceEstimator(p);
totalDistance += distance;
if (distance < MinimumDistance) break;
return 1.0-float(steps)/float(MaxRaySteps);
Here we simply march the ray according to the distance estimator and return a greyscale value based on the number of steps before hitting something. This will produce images like this one (where I
used a distance estimator for a Mandelbulb):
It is interesting that even though we have not specified any coloring or lighting models, coloring by the number of steps emphasizes the detail of the 3D structure - in fact, this is an simple and
very cheap form of the Ambient Occlusion soft lighting often used in 3D renders.
Another interesting observation is that these raymarchers are trivial to parallelise, since each pixel can be calculated independently and there is no need to access complex shared memory structures
like the acceleration structure used in classic raytracing. This means that these kinds of systems are ideal candidates for implementing on a GPU. In fact the only issue is that most GPU's still only
supports single precision floating points numbers, which leads to numerical inaccuracies faster than for the CPU implementations. However, the newest generation of GPU's support double precision, and
some API's (such as OpenCL and Pixel Bender) are heterogeneous, meaning the same code can be executed on both CPU and GPU - making it possible to create interactive previews on the GPU and render
final images in double precision on the CPU.
Estimating the distance
Now, I still haven't talked about how we obtain these Distance Estimators, and it is by no means obvious that such functions should exist at all. But it is possible to intuitively understand them, by
noting that systems such as the Mandelbulb and Mandelbox are escape-time fractals: we iterate a function for each point in space, and follow the orbit to see whether the sequence of points diverge
for a maximum number of iterations, or whether the sequence stays inside a fixed escape radius. Now, by comparing the escape-time length (r), to its spatial derivative (dr), we might get an estimate
of how far we can move along the ray before the escape-time length is below the escape radius, that is:
\(DE = \frac{r-EscapeRadius }{dr}\)
This is a hand-waving estimate - the derivative might fluctuate wildly and get larger than our initial value, so a more rigid approach is needed to find a proper distance estimator. I'll a lot more
to say about distance estimators inthe later posts, so for now we will just accept that these function exists and can be obtained for quite a diverse class of systems, and that they are often
constructed by comparing the escape-time length with some approximation of its derivative.
It should also be noticed that this ray marching approach can be used for any kinds of systems, where you can find a lower bound for the closest geometry for all points in space. Iñigo Quilez has
used this in his impressive procedural SliseSix demo, and has written an excellent introduction, which covers many topics also relevant for Distance Estimation of 3D fractals.
This concludes the first part of this series of blog entries. Part II discusses lighting and coloring of fractals. | {"url":"http://blog.hvidtfeldts.net/index.php/category/fragmentarium/page/3/","timestamp":"2024-11-05T20:10:32Z","content_type":"text/html","content_length":"139117","record_id":"<urn:uuid:b1875867-048d-4282-8663-4aaa3571936b>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00693.warc.gz"} |
More Question Writing Examples
Purpose of this document
This document contains example IMathAS questions with explanation of the code used. For detailed question language reference, please refer to the help file.
Example of Function type
Common Control
$a,$b = nonzerodiffrands(-8,8,2)
$variables = "x"
$domain = "-5,5"
The first line defines two variables, $a and $b, as different, nonzero random integers. The first two arguments specify that the integers should be chosen between -8 and 8. The third argument
specifies that 2 random integers should be chosen
$variables is used to define the variables in the expression. If more than one variable is used, enter a list of variables, like $variables = "x,y,x". This defaults to "x", so this line is not really
necessary in this problem.
$domain specifies the domain on which the student's answer should be compared to the given answer. Enter as a list "min,max". The same domain will apply to all variables in the expression. This
defaults to -10 to 10
Question Control
$ansprompt = "Ans="
Rather than place the $answerbox in the question text, I'm going to have the system place the default answer box at the end of the question. The $ansprompt variable specifies that the box should have
"Ans=" displayed in front of the answer box
Question Text
Simplify `x^$a/x^$b`
Write your answer with positive exponents only.
$p = abs($a - $b)
$answer = "x^($p)" if ($a>$b)
$answer = "1/x^($p)" if ($a<$b)
$showanswer = "x^($a-$b) = $answer"
$requiretimes = "^,<2,-,=0"
The first three lines define the answer. Note that it would have worked just fine to define $answer = makepretty("x^($a-$b)"), but because I want to use the answer in the $showanswer to show students
later, I instead defined the answer using "if" statements. The "if" allows you define different values to a variable depending on values of other variables
The $showanswer line defines the answer to show to students. There is no default value for Function type questions, so you must specify something if you want an answer to be available to students. In
this case, I showed the first step as well as the answer
$requiretimes places format requirements on the student's answer. The list in quotes is in pairs; the first value is the symbol to look for, and the second value indicates the number of times that
symbol should appear. In this example, the ^ symbol should show up less than two times, and the - symbol should show up zero times. The first rule requires that students cannot simply reenter the
original expression and get credit. The second rule requires that students cannot enter negative exponents
Example of Matching type
Common Control
$qarr = array("`sin x`","`cos x`","`x^2`","`x^3`","`e^x`","`log x`","`2^x`")
$aarr = array("`cos x`","`-sin x`","`2x`","`3x^2`","`e^x`","`1/x`","`2^x ln2`")
$questions,$answers = jointshuffle($qarr,$aarr,4,5)
$questiontitle = "`f(x)`";
$answertitle = "`f'(x)`";
The first two lines define arrays of functions ($qarr) and their derivatives ($aarr)
The third line creates two new arrays, $questions and $answers, by jointly shuffling the arrays (retaining respective pairing), and picking 4 elements of the $qarr, and 5 elements of the $aarr.
The last two lines define the titles (column headers) for the $questions and $answers lists.
Question Text
Match each function with it's derivative.
There is no need to specify anything here
The Matching type requires a $questions array and $answers array. The $questions will display on the left with entry boxes next to each. The $answers will display on the right, lettered. If each
answer is used at most once, then you do not have to do anything else - the first entry of the $answers array will be assumed to be the answer to the first entry of the $questions array. If there are
more entries in $answers than $questions, the left over answers are presumed to never be used. If you want an answer to be used more than once, you will need to define a $matchlist
Load Library Example (Number type)
Example of using loadlibrary to access functions in a macro file (mean from stats library in this case)
Common Control
$a = nonzerodiffrands(1,10,5)
This line defines an array variable $a to be 5 different nonzero integers between 1 and 10. Note that since a single variable was defined, it was created as an array variable
Question Control
$table = showarrays("x",$a)
This defines $table using a standard display macro that creates a tabular display of the array $a with title (header) "x". If you want to display two lists side-by-side, you can do so, for example:
Question Text
Find `bar x`
Recall that items in backticks are rendered as math. The math command "bar" will place a bar over the item that follows it
$answer = mean($a)
The first line loads the stats macro library. Admins can install new Macro libraries to extend the functionality of IMathAS. The Macro Library Help link will show what libraries are currently
installed and the functions they provide.
Here we are using the mean function from the stats library to determine the answer.
Another Example of Matching Type
Common Control
$a,$b,$c = rands(-3,3,3)
This selects three random numbers between -3 and 3
$cols = singleshuffle("red,green,blue")
shuffles the list of colors, placing it in the array $cols
$graphs = array("$a*x^2+$b*x+$c,$cols[0]","2*$a*x+$b,$cols[1]","$a*x^3/3+$b*x^2/2+$c*x,$cols[2]")
We're going to be using the showplot macro. The first argument is a single function or an array of functions. In this case, we're giving an array of functions, though we're only specifying the
function and the color. There are other options available.
$plot = showplot($graphs,-3,3,-5,5,off,off)
this actually calls the showplot macro. After the function, the window is specified, then we're setting the labels to off, and grid is set to off
$questions = array("`f(x)`","`f'(x)`","`int f(x)dx`")
$answers = $cols
this defines the questions and answers. Note that they are matched - the first entry in $answers is the answer the first entry in $questions. Notice that the primary randomization in this question is
the shuffling of the color array.
Question Control
$questiontitle = "Function"
$answertitle = "Graph Color"
these set titles for the list of questions and answers
Question Text
Match each function with its graph
Nothing is needed here. The answers are automatically associated with the questions based on array order
Example of Multipart Type
Common Control
$anstypes = array("calculated","calculated")
$a,$b = nonzerodiffrands(-8,8,2)
$c = nonzerorand(-30,30)
The first line defines that there will be two parts, both of type calculated. Refer the help for valid anstypes.
The next two lines define our random variables
Question Control
$question = makeprettydisp("{$a}x+{$b}y=$c")
Set up the equation
$hidepreview[1] = true
in some multipart questions, it might be useful to hide the preview button usually provided with calculated and function answer types. You can set $hidepreview to hide the preview button. Note that
it is suffixed with a [1]. This specifies to apply the option to the second calculated type. All options should be suffixed like this in a multipart problem unless the option applies to all parts of
the problem.
Note that this is a silly example; there is no good reason to hide the preview on one part of this question but not the other
Question Text
Find the x and y intercepts of $question
x-int: `x=`$answerbox[0]<br/>
y-int: `y=`$answerbox[1]
Note the use of the $answerbox above. This places the answerboxes in the problem text. Make sure you put the boxes in numerical order; entry tips are given assuming this.
$answer[0] = $c/$a
$answer[1] = $c/$b
like with other options, the $answer also needs to be suffixed with the question part.
Example of Number Type
Common Control
$a = nonzerorand(-5,5)
Set $a to be a nonzero random number between -5 and 5
$b = rrand(.1,5,.1) if ($a < 0)
$b = rrand(-5,-.1,.1) if ($a > 0)
a decimal number between -5 and 5, with one decimal place. We're going to ensure that $a and $b are different signs using the "if" conditional
$c,$d = nonzerodiffrands(-5,5,2)
two different, nonzero integers
Question Control
$prob = "`$a + $b + $c + $d`"
this could show up as: -4 + -2.3 + 3 + -1 the backquotes tell it to display as math
$prob2 = makeprettydisp("$a + $b + $c + $d")
if we want to simplify it like: -4 - 2.3 + 3 - 1
Question Text
Find: $prob
or equivalently: $prob2
$answer = $a + $b + $c + $d
for number, we just need to specify the answer. No quotes here because we're calculating, not creating a display string
by default, numbers are allowed a .001 relative error.
$reltolerance = .0001 would require a higher accuracy
$abstolerance = .01 would require an absolute error under .01
$answer = "[-10,8)" would accept any answer where `-10 <= givenanswer < 8`
Example of Calculated Type
Common Control
$a,$b = randsfrom("2,3,5,7,11",2)
choose two numbers from a list. Can also choose from an array
$c = rand(1,10) where ($c % $a != 0)
$d = rand(1,10) where ($d % $b != 0)
the "where" statement is used with randomizers. It allows you to avoid a specific case. In this case, we're requiring that $a not divide evenly into $c. The modulus operator, %, gives the remainder
upon division
$answerformat = "reducedfraction"
note that the student could enter 2/5*6/7 and get the correct answer. We can prevent this by adding this line. $answerformat = "fraction" is also an option, if you don't care if the answer is
Question Control
Question Text
Multiply: `$c/$a * $d/$b`
Enter your answer as a single, reduced fraction
$answer = $c/$a * $d/$b
like with the Number type, we supply a number as the answer. The only difference is that the student can enter a calculation instead of a number
Example of Multiple-Choice Type
Common Control
$a,$b = nonzerodiffrands(-5,5,2)
pick two different nonzero numbers. The numbers are important here to ensure that all the choices will be different.
$questions[0] = $a+$b
$questions[1] = $a-$b
$questions[2] = $a*$b
we can either define the entire $questions array at once, or define each piece separately. The former would look like: $questions = array($a+$b,$a-$b,...
Question Control
$displayformat = "horiz"
$text = makeprettydisp("$a+$b")
The first line above will lay out the choices horizontally. To do a standard vertical layout, just omit this line
Question Text
Find $text
$answer = 0
Here the answer is the INDEX into the questions array that holds the correct answer. Arrays are zero-indexed, so the first entry is at index 0.
In multiple-choice questions, the question order is automatically randomized unless you specify otherwise, so it's fine for $answer to always be 0; the location of the correct answer will be shuffled
Example of Multiple Answer Type
Common Control
$questions = listtoarray("`sin(x)`,`sin^-1(x)`,`tan(x)`,`csc(x)`,`x^2`")
the $questions array is a list of the options. The listtoarray macro converts a list of numbers or strings to an array. Use calclisttoarray to convert a list of calculations to an array of numbers
Question Control
Question Text
Select all the functions that are periodic
$answers = "0,2,3"
the answer here is a list of indexes into the $questions array that contain correct answers. Remember that arrays are 0-indexed. Like with multiple-choice, the question order is randomized
Normally, each part is given equal weight (each checkbox is worth 1/5 point). If you wish to divide the point score only by the number of correct answers, use this line: $scoremethod = "answers"
A Graphing Example (Multipart)
Common Control
$anstypes = listtoarray("number,number,number,number")
Specify the answer types. In this case, four number answers
$graphs[0] = "-x-5,black,-5,-1,,closed"
$graphs[1] = "-2x+3,black,-1,2,open"
$graphs[2] = "-2x+3,black,2,5,open"
Define the graphs. For each graph, it's: function,color,xmin,xmax,startmark,endmark
$graphs[3] = "2,black,2,2,closed"
last one is really just a dot, but we define it as a function
$plot = showplot($graphs,-5,5,-5,5,1,1)
The inputs here are: graphs,xmin,xmax,ymin,ymax,label spacing,grid spacing
Question Control
this question is not randomized; it's just meant for illustration of graphing options.
Question Text
The graph below is the function `f(x)`
Find `lim_(x->-1^+) \ f(x)` $answerbox[0]
Find `lim_(x->-1^-) \ f(x)` $answerbox[1]
Find `lim_(x->-1) \ f(x)` $answerbox[2]
Find `lim_(x->2) \ f(x)` $answerbox[3]
the backslashes above add extra spacing between the limit and the f(x)
$answer[0] = 5
$answer[1] = -4
$answer[2] = "DNE"
$answer[3] = -1
Define the part answers. "DNE" and "oo" (for infinity) are allowed string answers to number questions
© 2006 David Lippman
This guide was written with development grant support from the WA State Distance Learning Council | {"url":"https://aitmath.com/docs/morequestions.html","timestamp":"2024-11-14T03:31:01Z","content_type":"text/html","content_length":"15353","record_id":"<urn:uuid:d381141a-ef34-4739-be32-eeeb15f85cb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00473.warc.gz"} |
The Millikan L'Eggs Experiment
Coleman, Roy Morgan Park High School
To use discrete masses to simulate the Millikan experiment (the
discrete charge on the electron).
one balance per group and EITHER
a large number of L'Eggs eggs (from panty-hose) individually numbered
and filled with ball bearings or clay such that the filler is
divided into 'unit masses' i.e. for a ball bearing filler, use
multiples of 7 bearings (or some multiple larger than the weight of
the 'shell'). It is nice to have several regular intervals and
then skip one (put in two additional unit masses)
10 numbered plastic Easter eggs filled with clay or bearings (as with
the L'Eggs eggs) for each group
Present the problem of how to find the mass of a unit 'yoke' where
there is a shell and at least one 'yoke' in each egg. If the students
cannot come up with the idea to mass them on a balance, suggest it.
After they have massed their eggs, some may see a pattern but suggest
that they draw a histogram of their data (mass vs. number of eggs with
that mass (NOT egg number)). It should be obvious from the graph that
the masses fall into several groups. From the average mass of each
group, students should be able to see that the groups fall at regular
intervals and that these intervals correspond to each additional unit
mass (one more 'yoke').
A discussion should be held to talk about the number of digits of
accuracy needed in the measurements since the shells and unit masses
will each vary by some small amount. It is possible for the students
to become so involved with making accurate measurements that they miss
the pattern or waste too much time on the weighing.
After the experiment is done, a comparison should be made between this
experiment and Millikan's oil drop experiment where he found the unit
charge of the electron by looking for regular intervals (or discrete
units of charge).
Return to Physics Index | {"url":"https://smileprogram.info/ph8819.html","timestamp":"2024-11-13T19:46:24Z","content_type":"text/html","content_length":"2643","record_id":"<urn:uuid:959722a4-ca18-4622-8da3-a24c93093454>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00682.warc.gz"} |
Calculate Spring Constant - Physics 1 Work and Energy
Calculate Spring Constant
In an experiment to determine the spring constant of an elastic cord of length 0.60 m, a student hangs the cord from a rod and then attaches a variety of weights to the cord. For each weight, the
student allows the weight to hang in equilibrium and then measures the entire length of the cord. The data are recorded in the table below:
iii. Calculate the maximum speed of the object.
SOLUTION MISSING: Unfortunately the author of this youtube video removed their content. You may be able to find a similar problem by checking the other problems in this subject. If you want to
contribute, leave a comment with the link to your solution.
Posted by Rick Weaver a year ago
Related Problems
A 20 kg student is about to go down a slide. There is a 3 N frictional force opposing his movement. Assume his velocity at the top of the slide is 0 m/s. Find his velocity at the bottom of the slide.
A student is about to be launched from a spring loaded canon. The student weighs 60 kg and the spring constant is 200 N/kg. Find the students velocity the moment he leaves the canon if the spring is
compressed, x = 3 m. Also find the students velocity after the spring has decompressed to x = 2 m.
A rubber ball of mass $m$ is dropped from a cliff. As the ball falls, it is subject to air drag (a resistive force caused by the air). The drag force on the ball has a magnitude $bv^2$ , where $b$ is
a constant drag coefficient and $v$ is the instantaneous speed of the ball. The drag coefficient $b$ is directly proportional to the cross-sectional area of the ball and the density of the air and
does not depend on the mass of the ball. As the ball falls, its speed approaches a constant value called the terminal speed.
A. Draw and label all the forces on the ball at some instant before it reaches terminal speed.
B. State whether the magnitude of the acceleration of the ball of mass $m$ increases, decreases, or remains the same as the ball approaches terminal speed. Explain.
C. Write, but do NOT solve, a differential equation for the instantaneous speed $v$ of the ball in terms of time $t$ , the given quantities, and fundamental constants.
D. Determine the terminal speed $v_t$ in terms of the given quantities and fundamental constants.
E. Determine the energy dissipated by the drag force during the fall if the ball is released at height $h$ and reaches its terminal speed before hitting the ground, in terms of the given quantities
and fundamental constants.
A small sphere is moving at a constant speed in a vertical circle. Which of the following quantities is changing?
i. kinetic energy ii. potential energy iii. momentum
i and ii only
i and iii only
i and ii only
ii only
iii only
ii and iii only | {"url":"https://www.practiceproblems.org/problem/Physics_1/Work_and_Energy/Calculate_Spring_Constant","timestamp":"2024-11-11T09:47:39Z","content_type":"text/html","content_length":"42090","record_id":"<urn:uuid:1b8e8058-aafb-45af-9f0d-078740c899eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00243.warc.gz"} |
Euler Project—Double-base Palindromes
Euler Project — Double-base Palindromes
Photo by Mika Baumeister on Unsplash
In this blog, I go though how I solved the Double-base palindromes problem from Project Euler.
The problem first describes what is a double-base palindrome — it a number where both its base 10 form and binary (base 2) form are palindromes. It then asks for the sum of all numbers below one
million that are double-base palindromes.
In my approach, this problem can be broken down into 2 steps, where the first step produces a list of base10 palindromes, and the second step checks if the binary form of each individual element in
the previous list is also a palindrome. The output of the second step would then allow me to meet the final goal.
For the first step, since a palindrome is a number that reads the same backwards as forwards. I can validate a number as a palindrome by inverting the sequence of its string form, as shown using the
following code snippet:
The process is repeated on every single number from 1 to 1000000, where all identified palindromes are stored in a list that will be further validated in the second step.
In the second step, I have a variable that keeps track of the total sum. The sum gets updated whenever a binary form of a number from the previous list is found to be palindromic using the same
logic. This idea is illustrated in the code snippet below:
In the code above, sum_palindromes is used to keep track of the sum. Within the for loop, each number is first converted to its binary form before getting inversed.
The complete implementation can be found in the notebook below. | {"url":"https://michaeltang101.medium.com/euler-project-double-base-palindromes-f3f44956276f?source=user_profile_page---------6-------------f1d2d8b95c81---------------","timestamp":"2024-11-05T23:18:04Z","content_type":"text/html","content_length":"95316","record_id":"<urn:uuid:c581fd86-a772-40a5-ae4b-a6bb8c780432>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00077.warc.gz"} |
Impact of a recent tobacco tax reform in Argentina
The literature on policies for the control of the tobacco epidemic suggests that increasing excise taxes on the consumption of tobacco products is the most cost-effective policy. Cigarette tax
structure in Argentina is very complex. All the tax bases for cigarette consumption taxes are related and, therefore, any modification of a tax affects the collection of the rest of the taxes. This
is important given that funds raised by one of the taxes, the Special Tobacco Fund (FET), are allocated among the tobacco provinces according to the value of tobacco production. These provinces
oppose in the congress to any reform that increase taxes on cigarette consumption that negatively affects these funds. In May 2016, the government decided to increase the rate of one of the taxes,
the internal tax, from 60% to 75%. We study the impact on cigarettes’ demand price elasticity, consumption and tax revenues of this tobacco tax reform. Using an Error Correction Model, we estimate
short-run and long-run demand price and income elasticities. We find that the tax reform of May 2016 induced an increase in the magnitude, in absolute value, of the short-run demand price elasticity
and at the same time increased the funds collected by the FET. We simulate the effects of the tax reform over the government revenues and per-capita consumption of cigarettes showing that additional
increments in taxes would increase revenues and diminish consumption of cigarettes.
• taxation
• public policy
• price
• economics
Statistics from Altmetric.com
Request Permissions
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant
permission to reuse the content in many different ways.
The literature on policies for tobacco control suggests that increasing excise taxes on the consumption of tobacco products is the most cost-effective policy. The reason is that increasing taxes
causes the prices of tobacco products to increase. This makes the different tobacco products less accessible, thus reducing initiation, prevalence and consumption of tobacco. In addition, because the
demand for tobacco is inelastic, higher taxes generate increases in tax revenues. See Gajalakshmi et al,1 Jha and Chaloupka2, Ranson et al 3 among others for international evidence. See
González-Rozada4 González-Rozada and Rodríguez-Iglesias5, Rodríguez-Iglesias et al 6 for evidence for Argentina.
The tax structure on cigarette consumption in Argentina is very complex including four ad-valorem taxes. One of the taxes, the Special Tobacco Fund (FET), acts as a subsidy to the provinces that
produce tobacco. Therefore, these provinces oppose in the congress to any tax reform that negatively affects these funds. In May 2016, the government decided to increase the rate of one of the taxes,
the internal tax, from 60% to 75%. In this paper, we study the impact on cigarettes’ demand price elasticity, consumption and tax revenues of this tobacco tax reform. Using an Error Correction Model,
we estimate short-run and long-run demand price and income elasticities. Then, using these estimations, we simulate the impact of the tax reform by increasing the rate of internal taxes on
consumption of cigarettes and government revenue. The rest of the work is organised as follows. Section 2 describes the tax structure of cigarettes in Argentina and presents the tax reform and its
impact on the tax share on prices, retail price, FET and government tax revenue. Section 3 describes the data used in the estimation of the demand function of cigarettes and studies the underlying
statistical properties of retail price, real income and consumption of cigarettes. Section 4 introduces the methodology used to estimate demand price and income elasticities. Section 5 shows the main
results of the paper and Section 6 concludes the work.
Tax structure of cigarettes in Argentina
The tax structure on cigarette consumption in Argentina is very complex. Federal taxes affecting cigarettes are four ad-valorem taxes: the additional emergency tax (IAE), the value added tax (VAT),
the FET and the internal tax (II). The tax base of each one is different. Table 1 shows tax rates, tax base and the tax share on the retail price of each ad-valorem tax before the reform.
The tax share on prices before the tax reform of May 2016 was 68.6%. The average retail price in April 2016 was almost AR$26 per pack of 20 cigarettes (AR$3.1 in real terms) and internal taxes
represented 47% of that retail price. This structure implies than changing the tax rate of one of the taxes affects the tax base of the other taxes. This is important because, in practice, one of the
taxes, the FET, acts as a subsidy to tobacco producers. The main objective of the FET is to guide, coordinate and supervise the actions tending to achieve the modernisation, reconversion,
complementation and diversification of the tobacco areas, both in the primary production and in the associated agro-industrial chain. The funds raised by the FET are allocated among the tobacco
provinces according to the value of tobacco production. The Ministry of Agriculture of the Nation is the enforcement authority of the FET. It has in its functions to fix the price of the different
varieties of tobacco and transfer the collection of the FET to the provinces so that they make cash the payment of the surcharge to the producers. That is, the FET acts as a subsidy to tobacco
producers and the tobacco industry in particular.
Usually, those who oppose increasing taxes on tobacco products use the FET as an argument against it by saying that increment in taxes will reduce the FET funds. It is important then, for policy
reasons, to show evidence that this is not the case when increasing internal taxes.
The tax reform
In May 2016, Argentina established an increase in the rate of II on cigarettes from 60% to 75%. After this reform, the tax share on retail price increased and reached almost 80%. II represented
almost 61% of the average retail price of almost AR$50 per pack of 20 cigarettes (AR$4.5 in real terms). FET tax share on retail price decreased slightly from 7.8% to 7.7% but because average real
retail price of a pack of 20 cigarettes increased almost 50%, from AR$3.1 to AR$4.5, FET funds increased.
The response of the tobacco industry to the tax reform was to increase average retail prices of cheapest brands 40% in the month after the reform while for the most expensive brands, they incremented
average retail prices by 50%. This strategy had to do with the cigarette consumption market in Argentina, where the great majority of smokers consume the most expensive brands (for data source of
response of the industry and structure of consumption of cigarettes see next section).
After the second quarter of 2016, there was a clear increase in the collection of internal taxes. Before the tax reform, tax revenues from II were around 4500million of constant AR$, while after the
reform these revenues were almost 6000million.
Figure 1 shows the tax collection, in millions of constant pesos of the fourth quarter of 2017, coming from the FET before and after the implementation of the reform of May 2016 (marked in the figure
by the dotted vertical line). As can be observed, after the tax reform, the tax collection from the FET increased throughout the period analysed. Before the reform FET revenues were around
AR$750million and jumped to more than AR$850million just after the reform. The main reason for this was the tax base increase due to the increment in retail prices.
This evidence shows that it is possible to increase taxes on the consumption of cigarettes without affecting the FET funds. As mentioned above, affecting the FET funds is a political concern when
there is a proposal to increment taxes on cigarettes.
Data and statistical properties
We use monthly data from January 2005 to June 2018 for consumption (approximated by the total sales of packages of 20 cigarettes), average real retail price of cigarettes and real income of the
population, represented by the average remuneration of registered workers of the private sector published by the Ministry of Labour, Employment and Social Security (Data are available online here:
https://www.agroindustria.gob.ar/sitio/areas/tabaco/estadisticas; https://www.trabajo.gob.ar/left/estadisticas/descargas/SIPA/AnexoEstadistico.xlsx). To specify the demand function for cigarettes, we
first needed to find the statistical properties of these variables. Using the Augmented Dickey–Fuller test,7 we show that all three variables, consumption, real price and real income, have
individually a unit root. Then, using the Johansen Trace test,8 we show that the three variables are cointegrated.
Methodology for estimating the demand price elasticity of cigarettes
Cointegration implies that the tobacco demand function can be specified with a model that takes into account not only the relationship between the variables in the short-run but also in the long-run.
Using an error correction model, the long-run relationship among consumption of cigarettes, real retail price and real income is:
Where is the natural log of consumption, is the natural log of real retail price, is the natural log of real income and u[t] is an error term. is the demand price elasticity and is the real income
elasticity. Equation (1) is the long-run equilibrium relationship.
In the short-run, the variables may not be in the steady state; therefore, we specify the dynamics of the short-run relationship using r lags in equation (2).
Where δ, α, β, γ, α[j]*, β[0], β[j]*, γ[0], γ[j] *, θ[0], θ[j] and k[i] ^* are the parameters of the model and ε[t] is a stationary error term. The value of r determines the number of months involved
in the long-run concept of the model. The term in levels between braces represents the solution of long-run equilibrium (1), while all the variables in first differences measure the short-run
dynamics. Some of the parameters in (2) have an interpretation in terms of the short-run elasticities of cigarette consumption. In particular, β[0] is the short-run demand price elasticity and γ[0]
is the short-run real income elasticity. To capture the impact of the tax reform, we introduced a binary variable, D[201] , adopting the unity value since May 2016 when the reform was applied
onwards. This indicator variable interacts with the price variables in the short-run specification (2). Then, the impact of the reforms on the short-run demand price elasticity is measured by: β[0] +
θ[0] . For a detailed description of the model see Annex two in González-Rozada.9
We estimate the ECM using the Engle–Granger methodology.10 This is a two-stage estimate. First, we estimate the long-term equilibrium relationship (1) and then we estimate the ECM (2) to obtain the
short-run effects.
Table 2 shows the estimation of equation (1) including a dummy variable for the Christmas bonus. The long-run demand price elasticity is −0.441, while the long-run real income elasticity is 0.127.
These values imply that, in the long-run, a 10% increase in the real retail price reduces cigarette consumption by 4.41% and a 10% increase in real income increases the consumption of cigarettes by
1.27%. All estimations are statistically significant at usual levels of significance.
Table 3 shows the estimation of the short-term dynamics, including the effect of the tax increase from May 2016. Z(t–1) represents the estimation of term in levels between braces of equation (2). The
variable D[2016] is a binary variable adopting the unity value since the month of May 2016 when the tax reform was implemented. As can be seen in the table, the short-run demand price elasticity
without the effect of the reform is −0.91 while, as a result of the reform of May 2016, this value is −1.38. These results suggest that, in the short-run before the reform, a 10% increase in real
retail price induced a 9% decrease in consumption, while after the tax reform the same increase in real retail price produced a decrease in consumption of cigarettes of around 14%. The tax reform
induced a huge increase in retail price and this, in turn, produced a large fall in consumption for a few months after the reform. These sudden changes are captured by the increment, in absolute
magnitude, of the short-run demand price elasticity.
Simulation of results
To analyse the impact of the reform of May 2016 on cigarette consumption and tax collection, we perform a simulation exercise. In this exercise, we use the long-run price elasticity of −0.44
presented in table 1 and increase the internal tax rate sequentially. In this way, we can see the impact of the fiscal reform. The parameters used for the simulation exercise are:
Consumption of cigarettes: 177056579 packages
Average retail price: AR$25.88 per package
Tax on cigarettes: AR$ 20.65 per package
Government revenue for taxes on cigarettes: AR$3 658 275 567
Exchange rate: 14.25 AR$ per dollar
Population (over 15 years old): 31452302
Consumption per capita: 67.53 packages per year
Figure 2 shows the changes in the government’s tax revenue. The vertical line shows the tax increase of 15 percentage points produced by the fiscal reform, the figure shows that there is enough room
to increase the internal tax rate on cigarette consumption and still increase government's tax revenue. For example, if the government decided to increase the internal tax rate an additional three
percentage points, it would increase tax revenues around US$200million.
Figure 3 shows the effects of the tax reform on the per capita consumption of cigarettes. As in figure 2, the vertical line shows the implemented fiscal reform, an increase in the internal tax rate
from 60% to 75%. The figure shows that this increment in the internal tax rate induced a fall in the average per capita consumption of cigarettes from 68 to around 50 packs per year. The figure also
shows that further increases in the internal tax rate would reduce the average per capita consumption of cigarettes.
We studied the impact on demand price elasticity, the FET, cigarette consumption and tax collection of a recent tax reform in Argentina. This reform increased the rate of internal taxes from 60% to
75% and this, in turn, increased the government revenues collected from II about 40% by the end of 2016. We provided evidence that the increment in the rate of internal taxes produced an increment in
the revenue collected by the FET. We estimate an ECM, obtaining short-run and long-run demand price elasticities. We found a long-run elasticity of −0.441, suggesting that a 10% increase in the real
retail price of cigarettes would decrease consumption by around 4.4%. The estimation of the short-run demand price elasticity was −0.911 without the tax reform, whereas if we consider the reform,
this short-run elasticity increases in absolute value to −1.385. Using the estimation of the demand price elasticity, we simulate the impact of the tax reform by increasing the rate of internal taxes
on consumption and government revenue, finding that it is possible to increase even more this tax rate and increase revenues and decrease consumption of cigarettes.
What this paper adds
• This paper shows how a tobacco tax reform affects cigarette’s demand price elasticity, tobacco consumption and government revenues in the context of a complex cigarettes tax structure.
• Argentina cigarette’s tax structure include four ad-valorem taxes. One of the taxes, the Special Tobacco Fund (FET), acts as a subsidy to the provinces that produce tobacco. Therefore, these
provinces oppose in the congress to any tax reform that negatively affects these funds. We show that the tax reform of May 2016, that increase the rate of one of the taxes, the internal tax, from
60% to 75% induced an increment in the funds raised by the FET.
• Using the estimation of the demand price elasticity, we simulate the impact of the tax reform by increasing the rate of internal taxes on consumption and government revenue, finding that it is
possible to increase even more this tax rate and increase revenues and decrease consumption of cigarettes.
• Twitter @MartinGRozada
• Contributors I wrote the paper and made all estimations in it.
• Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
• Competing interests None declared.
• Patient consent for publication Not required.
• Provenance and peer review Not commissioned; externally peer reviewed.
• Data availability statement Data are available in a public, open access repository. All data relevant to the study are included in the article or uploaded as supplementary information. | {"url":"https://tobaccocontrol.bmj.com/content/29/Suppl_5/s300?ijkey=abb1e05af26618507a53f288573ac8b36d07f46c&keytype2=tf_ipsecsha","timestamp":"2024-11-06T08:06:38Z","content_type":"text/html","content_length":"141389","record_id":"<urn:uuid:2ff9e866-00d9-4ad3-91b3-35d7729ea76b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00790.warc.gz"} |
Inferring Somatic Signatures from Single Nucleotide Variant Calls
1 Motivation: The Concept Behind Mutational Signatures
Recent publications introduced the concept of identifying mutational signatures from cancer sequencing studies and linked them to potential mutation generation processes [11,2,3]. Conceptually, this
relates somatically occurring single nucleotide variants (SNVs) to the surrounding sequence which will be referred to as mutational or somatic motifs in the following. Based on the frequency of the
motifs occurring in multiple samples, these can be decomposed mathematically into so called mutational signatures. In case of the investigation of tumors, the term somatic signatures will be used
here to distinguish them from germline mutations and their generating processes.
The SomaticSignatures package provides an efficient and user-friendly implementation for the extraction of somatic motifs based on a list of somatically mutated genomic sites and the estimation of
somatic signatures with different matrix decomposition algorithms. Methodologically, this is based on the work of Nik-Zainal and colleagues [11]. If you use SomaticSignatures in your research, please
cite it as:
Gehring, Julian S., Bernd Fischer, Michael Lawrence, and Wolfgang Huber.
SomaticSignatures: Inferring Mutational Signatures from Single Nucleotide Variants.
Bioinformatics, 2015, btv408. http://dx.doi.org/10.1093/bioinformatics/btv408
2 Methodology: From Mutations to Somatic Signatures
The basic idea of somatic signatures is composed of two parts:
Firstly, each somatic mutation is described in relation of the sequence context in which it occurs. As an example, consider a SNV, resulting in the alteration from A in the normal to G in the tumor
sample, that is embedded in the sequence context TAC. Thus, the somatic motif can be written as TAC>TGC or T.C A>G. The frequency of these motifs across multiple samples is then represented as a
matrix \(M_{ij}\), where \(i\) counts over the motifs and \(j\) over the samples.
In a second step, the matrix \(M\) is numerically decomposed into two matrices \(W\) and \(H\)
\(M_{ij} \approx \sum_{k=1}^{r} W_{ik} H_{kj}\)
for a fixed number \(r\) of signatures. While \(W\) describes the composition of each signature in term of somatic motifs, \(H\) shows the contribution of the signature to the alterations present in
each sample.
3 Workflow: Analysis with the SomaticSignatures Package
The SomaticSignatures package offers a framework for inferring signatures of SNVs in a user-friendly and efficient manner for large-scale data sets. A tight integration with standard data
representations of the Bioconductor project [8] was a major design goal. Further, it extends the selection of multivariate statistical methods for the matrix decomposition and allows a simple
visualization of the results.
For a typical workflow, a set of variant calls and the reference sequence are needed. Ideally, the SNVs are represented as a VRanges object with the genomic location as well as reference and
alternative allele defined. The reference sequence can be, for example, a FaFile object, representing an indexed FASTA file, a BSgenome object, or a GmapGenome object. Alternatively, we provide
functions to extract the relevant information from other sources of inputs. At the moment, this covers the MuTect [4] variant caller.
Generally, the individual steps of the analysis can be summarized as:
1. The somatic motifs for each variant are retrieved from the reference sequence with the mutationContext function and converted to a matrix representation with the motifMatrix function.
2. Somatic signatures are estimated with a method of choice (the package provides with nmfDecomposition and pcaDecomposition two approaches for the NMF and PCA).
3. The somatic signatures and their representation in the samples are assessed with a set of accessor and plotting functions.
To decompose \(M\), the SomaticSignatures package implements two methods:
Non-negative matrix factorization (NMF)
The NMF decomposes \(M\) with the constraint of positive components in \(W\) and \(H\) [7]. The method was used [11] for the identification of mutational signatures, and can be computationally
expensive for large data sets.
Principal component analysis (PCA)
The PCA employs the eigenvalue decomposition and is therefore suitable for large data sets [13]. While this is related to the NMF, no constraint on the sign of the elements of \(W\) and \(H\)
Other methods can be supplied through the decomposition argument of the identifySignatures function.
4 Use case: Estimating Somatic Signatures from TCGA WES Studies
In the following, the concept of somatic signatures and the steps for inferring these from an actual biological data set are shown. For the example, somatic variant calls from whole exome sequencing
(WES) studies from The Cancer Genome Atlas (TCGA) project will be used, which are part of the SomaticCancerAlterations package.
4.1 Data: Preproccessing of the TCGA WES Studies
The SomaticCancerAlterations package provides the somatic SNV calls for eight WES studies, each investigating a different cancer type. The metadata summarizes the biological and experimental settings
of each study.
sca_metadata = scaMetadata()
Cancer_Type Center NCBI_Build Sequence_Source
gbm_tcga GBM broad.mi.... 37 WXS
hnsc_tcga HNSC broad.mi.... 37 Capture
kirc_tcga KIRC broad.mi.... 37 Capture
luad_tcga LUAD broad.mi.... 37 WXS
lusc_tcga LUSC broad.mi.... 37 WXS
ov_tcga OV broad.mi.... 37 WXS
skcm_tcga SKCM broad.mi.... 37 Capture
thca_tcga THCA broad.mi.... 37 WXS
Sequencing_Phase Sequencer Number_Samples
gbm_tcga Phase_I Illumina.... 291
hnsc_tcga Phase_I Illumina.... 319
kirc_tcga Phase_I Illumina.... 297
luad_tcga Phase_I Illumina.... 538
lusc_tcga Phase_I Illumina.... 178
ov_tcga Phase_I Illumina.... 142
skcm_tcga Phase_I Illumina.... 266
thca_tcga Phase_I Illumina.... 406
Number_Patients Cancer_Name
gbm_tcga 291 Glioblastoma multiforme
hnsc_tcga 319 Head and Neck squamous cell carcinoma
kirc_tcga 293 Kidney Chromophobe
luad_tcga 519 Lung adenocarcinoma
lusc_tcga 178 Lung squamous cell carcinoma
ov_tcga 142 Ovarian serous cystadenocarcinoma
skcm_tcga 264 Skin Cutaneous Melanoma
thca_tcga 403 Thyroid carcinoma
The starting point of the analysis is a VRanges object which describes the somatic variants in terms of their genomic locations as well as reference and alternative alleles. For more details about
this class and how to construct it, please see the documentation of the VariantAnnotation package [12]. In this example, all mutational calls of a study will be pooled together, in order to find
signatures related to a specific cancer type.
sca_data = unlist(scaLoadDatasets())
sca_data$study = factor(gsub("(.*)_(.*)", "\\1", toupper(names(sca_data))))
sca_data = unname(subset(sca_data, Variant_Type %in% "SNP"))
sca_data = keepSeqlevels(sca_data, hsAutosomes(), pruning.mode = "coarse")
sca_vr = VRanges(
seqnames = seqnames(sca_data),
ranges = ranges(sca_data),
ref = sca_data$Reference_Allele,
alt = sca_data$Tumor_Seq_Allele2,
sampleNames = sca_data$Patient_ID,
seqinfo = seqinfo(sca_data),
study = sca_data$study)
VRanges object with 594607 ranges and 1 metadata column:
seqnames ranges strand ref alt
<Rle> <IRanges> <Rle> <character> <characterOrRle>
[1] 1 887446 * G A
[2] 1 909247 * C T
[3] 1 978952 * C T
[4] 1 981607 * G A
[5] 1 985841 * C T
... ... ... ... ... ...
[594603] 22 50961303 * G T
[594604] 22 50967746 * C A
[594605] 22 50967746 * C A
[594606] 22 51044090 * C T
[594607] 22 51044095 * G A
totalDepth refDepth altDepth sampleNames
<integerOrRle> <integerOrRle> <integerOrRle> <factorOrRle>
[1] <NA> <NA> <NA> TCGA-06-5858
[2] <NA> <NA> <NA> TCGA-32-1977
[3] <NA> <NA> <NA> TCGA-06-0237
[4] <NA> <NA> <NA> TCGA-06-0875
[5] <NA> <NA> <NA> TCGA-06-6693
... ... ... ... ...
[594603] <NA> <NA> <NA> TCGA-BJ-A0Z0
[594604] <NA> <NA> <NA> TCGA-BJ-A2NA
[594605] <NA> <NA> <NA> TCGA-BJ-A2NA
[594606] <NA> <NA> <NA> TCGA-EM-A3FK
[594607] <NA> <NA> <NA> TCGA-EL-A3T0
softFilterMatrix | study
<matrix> | <factor>
[1] | GBM
[2] | GBM
[3] | GBM
[4] | GBM
[5] | GBM
... ... . ...
[594603] | THCA
[594604] | THCA
[594605] | THCA
[594606] | THCA
[594607] | THCA
seqinfo: 22 sequences from an unspecified genome
hardFilters: NULL
To get a first impression of the data, we count the number of somatic variants per study.
sort(table(sca_vr$study), decreasing = TRUE)
LUAD SKCM HNSC LUSC KIRC GBM THCA OV
4.2 Motifs: Extracting the Sequence Context of Somatic Variants
In a first step, the sequence motif for each variant is extracted based on the genomic sequence. Here, the BSgenomes object of the human hg19 reference is used for all samples. However, personalized
genomes or other sources for sequences, for example an indexed FASTA file, can be used naturally. Additionally, we transform all motifs to have a pyrimidine base (C or T) as a reference base [2]. The
resulting VRanges object then contains the new columns context and alteration which specify the sequence content and the base substitution.
sca_motifs = mutationContext(sca_vr, BSgenome.Hsapiens.1000genomes.hs37d5)
VRanges object with 6 ranges and 3 metadata columns:
seqnames ranges strand ref alt
<Rle> <IRanges> <Rle> <character> <characterOrRle>
[1] 1 887446 * G A
[2] 1 909247 * C T
[3] 1 978952 * C T
[4] 1 981607 * G A
[5] 1 985841 * C T
[6] 1 1120451 * C T
totalDepth refDepth altDepth sampleNames
<integerOrRle> <integerOrRle> <integerOrRle> <factorOrRle>
[1] <NA> <NA> <NA> TCGA-06-5858
[2] <NA> <NA> <NA> TCGA-32-1977
[3] <NA> <NA> <NA> TCGA-06-0237
[4] <NA> <NA> <NA> TCGA-06-0875
[5] <NA> <NA> <NA> TCGA-06-6693
[6] <NA> <NA> <NA> TCGA-26-1439
softFilterMatrix | study alteration context
<matrix> | <factor> <DNAStringSet> <DNAStringSet>
[1] | GBM CT G.G
[2] | GBM CT A.G
[3] | GBM CT G.G
[4] | GBM CT G.C
[5] | GBM CT A.G
[6] | GBM CT C.G
seqinfo: 22 sequences from an unspecified genome
hardFilters: NULL
To continue with the estimation of the somatic signatures, the matrix \(M\) of the form {motifs × studies} is constructed. The normalize argument specifies that frequencies rather than the actual
counts are returned.
sca_mm = motifMatrix(sca_motifs, group = "study", normalize = TRUE)
head(round(sca_mm, 4))
GBM HNSC KIRC LUAD LUSC OV SKCM THCA
CA A.A 0.0083 0.0098 0.0126 0.0200 0.0165 0.0126 0.0014 0.0077
CA A.C 0.0093 0.0082 0.0121 0.0217 0.0156 0.0192 0.0009 0.0068
CA A.G 0.0026 0.0061 0.0046 0.0144 0.0121 0.0060 0.0004 0.0048
CA A.T 0.0057 0.0051 0.0070 0.0134 0.0100 0.0092 0.0007 0.0067
CA C.A 0.0075 0.0143 0.0215 0.0414 0.0390 0.0128 0.0060 0.0112
CA C.C 0.0075 0.0111 0.0138 0.0415 0.0275 0.0143 0.0018 0.0063
The observed occurrence of the motifs, also termed somatic spectrum, can be visualized across studies, which gives a first impression of the data. The distribution of the motifs clearly varies
between the studies.
plotMutationSpectrum(sca_motifs, "study")
4.3 Decomposition: Inferring Somatic Signatures
The somatic signatures can be estimated with each of the statistical methods implemented in the package. Here, we will use the NMF and PCA, and compare the results. Prior to the estimation, the
number \(r\) of signatures to obtain has to be fixed; in this example, the data will be decomposed into 5 signatures.
n_sigs = 5
sigs_nmf = identifySignatures(sca_mm, n_sigs, nmfDecomposition)
sigs_pca = identifySignatures(sca_mm, n_sigs, pcaDecomposition)
Samples (8): GBM, HNSC, ..., SKCM, THCA
Signatures (5): S1, S2, S3, S4, S5
Motifs (96): CA A.A, CA A.C, ..., TG T.G, TG T.T
Samples (8): GBM, HNSC, ..., SKCM, THCA
Signatures (5): S1, S2, S3, S4, S5
Motifs (96): CA A.A, CA A.C, ..., TG T.G, TG T.T
The individual matrices can be further inspected through the accessors signatures, samples, observed and fitted.
4.4 Assessment: Number of Signatures
Up to now, we have performed the decomposition based on a known number \(r\) of signatures. In many settings, prior biological knowledge or complementing experiments may allow to determine \(r\)
independently. If this is not the case, we can try to infer suitable values for \(r\) from the data.
Using the assessNumberSignatures function, we can compute the residuals sum of squares (RSS) and the explained variance between the observed \(M\) and fitted \(WH\) mutational spectrum for different
numbers of signatures. These measures are generally applicable to all kinds of decomposition methods, and can aid in choosing a likely number of signatures. The usage and arguments are analogous to
the identifySignatures function.
n_sigs = 2:8
gof_nmf = assessNumberSignatures(sca_mm, n_sigs, nReplicates = 5)
gof_pca = assessNumberSignatures(sca_mm, n_sigs, pcaDecomposition)
The obtained statistics can further be visualized with the plotNumberSignatures. For each tested number of signatures, black crosses indicate the results of individual runs, while the red dot
represents the average over all respective runs. Please note that having multiple runs is only relevant for randomly seeded decomposition methods, as the NMF in our example.
Warning: The `fun.y` argument of `stat_summary()` is deprecated as of ggplot2
ℹ Please use the `fun` argument instead.
ℹ The deprecated feature was likely used in the SomaticSignatures
Please report the issue at <https://support.bioconductor.org>.
This warning is displayed once every 8 hours.
Call `lifecycle::last_lifecycle_warnings()` to see where this warning
was generated.
\(r\) can then be chosen such that increasing the number of signatures does not yield a significantly better approximation of the data, i.e. that the RSS and the explained variance do not change
sufficiently for more complex models. The first inflection point of the RSS curve has also been proposed as a measure for the number of features in this context [9]. Judging from both statistics for
our dataset, a total of 5 signatures seems to explain the characteristics of the observed mutational spectrum well. In practice, a combination of a statistical assessment paired with biological
knowledge about the nature of the data will allow for the most reliable interpretation of the results.
4.5 Visualization: Exploration of Signatures and Samples
To explore the results for the TCGA data set, we will use the plotting functions. All figures are generated with the ggplot2 package, and thus, their properties and appearances can directly be
modified, even at a later stage.
Focusing on the results of the NMF first, the five somatic signatures (named S1 to S5) can be visualized either as a heatmap or as a barchart.
plotSignatureMap(sigs_nmf) + ggtitle("Somatic Signatures: NMF - Heatmap")
plotSignatures(sigs_nmf) + ggtitle("Somatic Signatures: NMF - Barchart") | {"url":"http://bioconductor.riken.jp/packages/release/bioc/vignettes/SomaticSignatures/inst/doc/SomaticSignatures-vignette.html","timestamp":"2024-11-07T02:25:48Z","content_type":"application/xhtml+xml","content_length":"1048898","record_id":"<urn:uuid:18e55f3d-5cd3-414f-8cf2-5d3451170a1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00443.warc.gz"} |
ROC and Precision-Recall Curves in Python
Effective classification is essential for many machine learning applications, from spam detection to medical diagnoses. Evaluating the performance of these models is crucial, and ROC and
Precision-Recall curves are two powerful tools for this purpose. This article delves into using these curves in Python, providing insights and practical examples to enhance your classification
Understanding ROC and Precision-Recall Curves
Importance of ROC and AUC
The ROC curve (Receiver Operating Characteristic curve) is a graphical representation of a classifier's performance across various threshold settings. It plots the True Positive Rate (TPR) against
the False Positive Rate (FPR), helping to visualize the trade-offs between sensitivity and specificity.
The AUC ROC curve (Area Under the ROC Curve) is a single metric summarizing the classifier's performance. A higher AUC indicates a better-performing model. This metric is especially useful when
comparing multiple models, as it provides a clear and concise measure of their effectiveness.
In many cases, relying solely on accuracy can be misleading, particularly with imbalanced datasets. The roc function and the auc roc curve help address this issue by focusing on the trade-offs
between different types of errors, offering a more nuanced evaluation of the model's performance.
Comparing ML and Statistical Models: Effectiveness and Performance
Precision-Recall Curves Explained
The Precision-Recall curve is another essential tool for evaluating classification models, especially when dealing with imbalanced data. It plots Precision (the ratio of true positive predictions to
the total positive predictions) against Recall (the ratio of true positives to the total actual positives).
Precision-Recall curves are particularly useful when the positive class is rare or when the cost of false positives and false negatives is significantly different. These curves provide insights into
the balance between Precision and Recall, allowing you to choose the optimal threshold for your specific application.
Comparing roc and auc with Precision-Recall curves highlights their different strengths. While roc auc curve is useful for overall model performance, Precision-Recall curves excel in highlighting
performance for the positive class.
Key Differences and Use Cases
Understanding when to use ROC curves versus Precision-Recall curves is vital. ROC curves are generally preferred when the negative and positive classes are roughly equal in size, as they provide a
comprehensive view of the model's performance.
Maximizing Deep Learning Performance: Optimization Techniques
In contrast, Precision-Recall curves are more informative when dealing with imbalanced datasets. They focus on the performance concerning the positive class, making them ideal for applications like
fraud detection or medical screening, where the positive cases are rare but critical.
Choosing the appropriate curve based on your dataset and application ensures a more accurate evaluation of your classification models. Both curves, when used effectively, can significantly enhance
your model's performance.
Implementing ROC Curves in Python
Loading and Preprocessing Data
To illustrate the use of ROC and AUC, we'll start with loading and preprocessing data. For this example, we'll use the popular Breast Cancer Wisconsin dataset, available in the sklearn.datasets
Here's how to load and preprocess the data:
Error Analysis to Evaluate Machine Learning Models
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Load the dataset
data = load_breast_cancer()
X = data.data
y = data.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
This code snippet demonstrates the process of loading the dataset, splitting it into training and testing sets, and standardizing the features. Standardization ensures that all features contribute
equally to the model, improving its performance.
Training the Model
Next, we'll train a logistic regression model on the training data. Logistic regression is a simple yet powerful classification algorithm that is well-suited for binary classification tasks like this
from sklearn.linear_model import LogisticRegression
# Train a logistic regression model
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
This code snippet trains the logistic regression model on the standardized training data. Logistic regression works by fitting a linear decision boundary between the two classes, making it easy to
interpret and evaluate.
Plotting the ROC Curve
Once the model is trained, we can plot the ROC curve to evaluate its performance. Scikit-learn provides a convenient function for this purpose:
Interpreting Machine Learning Model Results: A Guide
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt
# Predict probabilities for the test set
y_probs = model.predict_proba(X_test)[:, 1]
# Compute the ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_probs)
# Compute the AUC
auc = roc_auc_score(y_test, y_probs)
# Plot the ROC curve
plt.plot(fpr, tpr, label=f'ROC curve (AUC = {auc:.2f})')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc='lower right')
This code snippet demonstrates how to plot the ROC curve and calculate the AUC. The roc function in scikit-learn computes the false positive and true positive rates for different thresholds, allowing
us to visualize the trade-offs between sensitivity and specificity.
Implementing Precision-Recall Curves in Python
Calculating Precision and Recall
To plot the Precision-Recall curve, we first need to calculate Precision and Recall for different thresholds. Scikit-learn provides functions for this as well:
from sklearn.metrics import precision_recall_curve
# Compute precision and recall
precision, recall, thresholds = precision_recall_curve(y_test, y_probs)
This code snippet calculates Precision and Recall values for different thresholds, which are necessary for plotting the Precision-Recall curve.
Plotting the Precision-Recall Curve
With Precision and Recall values computed, we can now plot the Precision-Recall curve:
Can Machine Learning Classification Be Validated for Accuracy?
# Plot the Precision-Recall curve
plt.plot(recall, precision, label='Precision-Recall curve')
plt.title('Precision-Recall Curve')
plt.legend(loc='lower left')
This code snippet plots the Precision-Recall curve, providing insights into the trade-offs between Precision and Recall. This curve is particularly useful for evaluating models on imbalanced
Comparing Models with Precision-Recall Curves
Precision-Recall curves can also be used to compare the performance of multiple models. By plotting the curves for different models on the same graph, you can easily see which model performs better
in terms of Precision and Recall.
# Plot Precision-Recall curves for multiple models
plt.plot(recall, precision, label='Logistic Regression')
# Add more models here for comparison
plt.title('Precision-Recall Curve Comparison')
plt.legend(loc='lower left')
This code snippet provides a template for comparing multiple models using Precision-Recall curves. By evaluating the curves side by side, you can choose the model that best balances Precision and
Recall for your specific application.
Advanced Techniques and Considerations
Handling Imbalanced Datasets
When dealing with imbalanced datasets, standard metrics like accuracy can be misleading. ROC and Precision-Recall curves offer a more nuanced evaluation of model performance. Additionally, techniques
like SMOTE (Synthetic Minority Over-sampling Technique) can be used to balance the dataset.
Key Evaluation Scenarios for Machine Learning Models
Here is an example of using SMOTE with scikit-learn:
from imblearn.over_sampling import SMOTE
# Apply SMOTE to balance the dataset
smote = SMOTE(random_state=42)
X_resampled, y_resampled = smote.fit_resample(X_train, y_train)
This code snippet demonstrates how to apply SMOTE to oversample the minority class, resulting in a more balanced dataset. Balancing the dataset can improve the performance of classification models,
particularly when using metrics like roc auc curve and Precision-Recall curves.
Threshold Selection and Optimization
Selecting the optimal threshold for classification is crucial for maximizing model performance. Both ROC and Precision-Recall curves can help identify the best threshold by highlighting the
trade-offs between different metrics.
Here is an example of threshold selection using the ROC curve:
# Find the optimal threshold
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print(f'Optimal Threshold: {optimal_threshold}')
This code snippet identifies the optimal threshold based on the ROC curve by maximizing the difference between the true positive rate and false positive rate. Selecting the right threshold can
significantly impact the model's performance, making it a critical step in the evaluation process.
Combining Multiple Metrics
Using multiple metrics, such as roc auc curve and Precision-Recall curves, provides a comprehensive evaluation of your model's performance. By considering various aspects of the model, you can make
more informed decisions about its effectiveness and areas for improvement.
Here is an example of combining multiple metrics:
from sklearn.metrics import f1_score
# Compute the F1 score
f1 = f1_score(y_test, y_pred)
print(f'F1 Score: {f1}')
This code snippet calculates the F1 score, a metric that combines Precision and Recall into a single value. By using multiple metrics, you can gain a deeper understanding of your model's strengths
and weaknesses.
Practical Applications of ROC and Precision-Recall Curves
Fraud Detection
In fraud detection, identifying fraudulent transactions is critical. ROC and Precision-Recall curves help evaluate the performance of fraud detection models, ensuring they effectively distinguish
between fraudulent and legitimate transactions.
For example, using Precision-Recall curves can highlight the trade-offs between false positives and false negatives, allowing you to choose a model that minimizes the cost of fraud while maintaining
a high level of accuracy.
Medical Diagnoses
In medical diagnoses, accurate classification models can save lives. ROC and Precision-Recall curves provide essential insights into the performance of diagnostic models, helping healthcare
professionals make informed decisions.
By evaluating models using these curves, you can ensure that the models are sensitive enough to detect true positives while maintaining a low rate of false positives, improving patient outcomes.
Spam Detection
Spam detection is another practical application where ROC and Precision-Recall curves play a crucial role. These curves help evaluate spam filters, ensuring they effectively identify spam emails
while minimizing false positives.
Using roc auc curve and Precision-Recall curves, you can optimize your spam detection models to balance the trade-offs between different types of errors, improving the overall performance of your
spam filter.
ROC and Precision-Recall curves are powerful tools for evaluating classification models. By understanding and applying these curves in Python, you can boost your classification models' performance
and make more informed decisions. Whether you're working on fraud detection, medical diagnoses, or spam detection, these curves provide invaluable insights into your models' strengths and weaknesses.
If you want to read more articles similar to ROC and Precision-Recall Curves in Python, you can visit the Performance category. | {"url":"https://machinelearningmodels.org/roc-and-precision-recall-curves-in-python/","timestamp":"2024-11-12T20:26:29Z","content_type":"text/html","content_length":"121293","record_id":"<urn:uuid:a1dcff97-7b43-4d98-9a1a-aa8c19388420>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00130.warc.gz"} |
Ionworks Blog - Physics-based models (1): what are they?
Featured post ->
July 21, 2023
Physics-based models (1): what are they?
Physics-based models for lithium-ion batteries are quite popular these days, and they have literally seen an exponential growth in the past decades. According to Google Scholar results, the number of
articles published each year (since 2000) matching the search «"physics-based" model "lithium" battery» have been doubling every 3.3 years. But what are physics-based models? The term is used to
describe models which are built upon fundamental laws of physics (as opposed to empirical models), and their uses extend way beyond lithium-ion batteries. Their main advantage with respect to
empirical models is that they provide a lot more information on the processes they describe.
Figure 1: Literally an exponential growth of articles on physics-based models of batteries, which doubles every 3.3 years. Check out this notebook for the data and the fit.
Let’s consider a very simple example, much simpler than batteries: dropping a ball from the top of a tower. We could run a lot of experiments, dropping a ball from different heights and collecting
data on the time it takes to reach the ground. After analysing the data, we could decide to fit a function to the data (maybe a parabola?) and that would give us an empirical model. Of course, this
model would give us quite accurate predictions but presents three main shortcomings:
1. We would have no clue on why the result is a parabola.
2. If we changed some settings of the experiment (e.g. do the experiment on the moon) we would not know if our model would work unless we tested that new situation.
3. If we wanted to “fix” our model for that new situation we would have to start from scratch.
A (very simple) physics-based model would be to combine Newton’s second law and Newton’s law of universal gravitation to write a differential equation for the position of the ball (and in fact we can
use PyBaMM to solve it). Solving the equation we would find that the trajectory vs time is indeed a parabola, hence overcoming shortcoming 1. Our physics-based model would also offer insight on what
to change when testing another situation and also we would be aware of when the modelling assumptions break down. For example, we would be aware that we neglected air friction and considered the
acceleration of gravity to be constant, so we could judge if the model would work in a new situation (overcoming shortcoming 2). Finally, if we wanted to extend our model to this new situation (e.g.
account for air friction) we would not have to start from scratch, but just introduce a new force into the model and compute the solution to the new equation.
Figure 2: Simulation results of the simple model of dropping a ball in the Earth and on the moon. The code is available in this notebook.
The advantages of physics-based models that this simple example demonstrated, also apply to lithium-ion battery models. For example, a Single Particle Model with electrolyte (SPMe) would clearly show
that it breaks down when electrolyte depletion occurs, and therefore we would know in advance which are the maximum discharge rates that we can simulate. Or degradation mechanisms can be incorporated
much more easily into the physics-based model rather than on an equivalent-circuit one. In our next blog post we will discuss how physics-based models for lithium-ion batteries are built and which
are the most popular ones.
Digitize your battery R&D
Run parameterization in minutes, not days. Accelerate your development with virtual prototyping, cell design, performance and lifetime prediction.
Frequently asked questions
What does Ionworks add on top of PyBaMM?
What industries can benefit from using Ionworks?
How secure is Ionworks’ platform?
Who owns the outputs generated on the platform? | {"url":"https://www.ionworks.com/blog/physics-based-models-1-what-are-they","timestamp":"2024-11-04T05:53:21Z","content_type":"text/html","content_length":"41908","record_id":"<urn:uuid:ee8b49e3-d503-4c61-8f15-fd7894e00f1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00881.warc.gz"} |
RWM102: Algebra, Topic: Unit 2: Linear Equations | Saylor Academy
Unit 2: Linear Equations
We use equations everyday without realizing it. Examples include calculating the unit price to compare the price of brands in the grocery store, converting inches into feet (or centimeters into
meters), estimating how much time it will take to drive to a destination at a certain speed. In this unit, we explore formal procedures for solving equations. After reviewing basic math rules, we
apply the skills we learned in Unit 1 to simplify the sides of an equation before attempting to solve it and work with equations that contain more than one variable. Because variables represent
numbers, we use the same rules to find the specific variables we are looking for.
Completing this unit should take you approximately 5 hours.
• Upon successful completion of this unit, you will be able to:
□ determine whether a given real number is a solution of an equation;
□ simplify equations using addition and multiplication properties;
□ find the solution of a given linear equation with one variable;
□ determine the number of solutions of a given linear equation in one variable;
□ solve a literal equation for the given variable; and
□ rearrange formulas to isolate a quantity of interest.
• 2.1: Definition of an Equation and a Solution of an Equation
We define an equation as a statement that contains a variable, which may or may not be true, depending on the value of the variable. Solving an equation means finding the possible values of the
variable that make the equation true.
□ Read the "Define Linear Equations in One Variable" and "Solutions to Linear Equations in One Variable" sections. Then, complete exercises 1 to 5 and check your answers.
• 2.2: Addition/Subtraction Property of Equations
When solving algebraic equations, we need to be aware of the properties of the types of mathematical operations we are doing. The first property we explore is the addition and subtraction
property of equations.
□ Read up to the "Solve Equations that Require Simplification" section. Pay attention to the "Solve Equations Using the Subtraction and Addition Properties of Equality" section, which gives a
good example of how the two sides of an equation must be equal. After you read, complete examples 2.2 through 2.5 and check your answers.
• 2.3: Multiplication/Division Property of Equations
Much like in the previous section we must use the properties of multiplication and division when solving algebraic expressions involving these types of calculations.
□ Read up to the "Sole Equations that Require Simplification" section. Complete examples 2.13 to 2.17.
• 2.4: Equations of the Form x + a = b and x − a = b
Algebraic equations can be categorized based on the form and types of operations in the equation. In the next few sections, we will explore different forms of equations.
The first form is the simplest: x + a = b or x − a = b. An example of this type of equation is: 5 + x = 8.
□ Watch this video for examples of these types of equations.
□ After you watch, complete this assessment to test yourself.
• 2.5: Equations of the Form ax = b and x/a = b
The next general form of equations involve multiplying or dividing the variable by a coefficient. These equations are of the form ax = b or x/a = b. An example of this type of equation is: x/2 =
□ Watch these videos for a few examples of how to solve algebraic equations involving multiplication and division. Pay attention to the problem-solving steps for fractional coefficients in the
third video. Instead of dividing both sides by the fraction, you can multiply both sides of the equation by the reciprocal of the fraction. It is often easier to multiply fractions rather
than dividing them, so this trick can be useful.
□ After you watch, complete this assessment and check your answers.
• 2.6: Equations of the Form ax + b = c
Often types of mathematical operations are combined in an equation. For example, multiplication can be combined with addition in an equation. An example of this type of equation is: 2x + 1 = 11.
This requires a two-step process for solving the equation.
□ Watch this video for examples of how to solve these types of problems in a two-step process.
□ After you watch, complete this assessment and check your answers.
• 2.7: Equations of the Form ax + b = cx + d
This section involves solving more complicated equations where the variable appears on both sides. We can use what we learned about combining like terms to make solving these types of equations
□ Watch these videos to see examples of how we use like terms to solve these types of equations.
□ After you watch, complete this assessment and check your answers.
• 2.8: Equations with Parentheses
The last general type of linear equation we can solve are those involving parentheses. For example, we can have an equation 2(4 + x) = 12. We need to use order of operations and the distributive
property to solve this type of equation.
□ Watch these videos to see examples of how this type of equation can be solved.
□ After you watch, complete this assessment and check your answers.
• 2.9: Solving Literal Equation for One of the Variables
We can use the methods we learned in the previous sections to solve literal equations, or formulas which often have more than one variable. When a literal equation has more than one variable, we
can solve for the variable of interest with respect to the other variable.
For example, consider the equation 2a + b = 10. Here, there are two variables, a and b. If we want to solve for b, we can do so with respect to a. We can subtract 2a from both sides to obtain: 10
− 2a = b.
□ Read the section on linear literal equations. Be sure to go through the examples in detail. After read, complete the exercises for literal equations and check your answers.
□ We can apply these concepts to known formulas, such as formulas for area of a shape or rates.
Watch these videos for real examples of using formulas. In the first video, the formula for perimeter of a rectangle is solved for the width. In the second, a formula is used to convert
between Fahrenheit and Celsius. | {"url":"https://learn.saylor.org/course/view.php?id=39§ion=10","timestamp":"2024-11-04T07:34:01Z","content_type":"text/html","content_length":"123830","record_id":"<urn:uuid:f79157bc-ddf9-4900-903e-389864408cee>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00424.warc.gz"} |
A Course on the Web Graphsearch
Item Successfully Added to Cart
An error was encountered while trying to add the item to the cart. Please try again.
Please make all selections above before adding to cart
A Course on the Web Graph
Hardcover ISBN: 978-0-8218-4467-0
Product Code: GSM/89
List Price: $99.00
MAA Member Price: $89.10
AMS Member Price: $79.20
eBook ISBN: 978-1-4704-2119-9
Product Code: GSM/89.E
List Price: $85.00
MAA Member Price: $76.50
AMS Member Price: $68.00
Hardcover ISBN: 978-0-8218-4467-0
eBook: ISBN: 978-1-4704-2119-9
Product Code: GSM/89.B
List Price: $184.00 $141.50
MAA Member Price: $165.60 $127.35
AMS Member Price: $147.20 $113.20
Click above image for expanded view
A Course on the Web Graph
Hardcover ISBN: 978-0-8218-4467-0
Product Code: GSM/89
List Price: $99.00
MAA Member Price: $89.10
AMS Member Price: $79.20
eBook ISBN: 978-1-4704-2119-9
Product Code: GSM/89.E
List Price: $85.00
MAA Member Price: $76.50
AMS Member Price: $68.00
Hardcover ISBN: 978-0-8218-4467-0
eBook ISBN: 978-1-4704-2119-9
Product Code: GSM/89.B
List Price: $184.00 $141.50
MAA Member Price: $165.60 $127.35
AMS Member Price: $147.20 $113.20
• Graduate Studies in Mathematics
Volume: 89; 2008; 184 pp
MSC: Primary 05; 68; 94
A Course on the Web Graph provides a comprehensive introduction to state-of-the-art research on the applications of graph theory to real-world networks such as the web graph. It is the first
mathematically rigorous textbook discussing both models of the web graph and algorithms for searching the web.
After introducing key tools required for the study of web graph mathematics, an overview is given of the most widely studied models for the web graph. A discussion of popular web search
algorithms, e.g. PageRank, is followed by additional topics, such as applications of infinite graph theory to the web graph, spectral properties of power law graphs, domination in the web graph,
and the spread of viruses in networks.
The book is based on a graduate course taught at the AARMS 2006 Summer School at Dalhousie University. As such it is self-contained and includes over 100 exercises. The reader of the book will
gain a working knowledge of current research in graph theory and its modern applications. In addition, the reader will learn first-hand about models of the web, and the mathematics underlying
modern search engines.
This book is published in cooperation with Atlantic Association for Research in the Mathematical Sciences.
Graduate students and research mathematicians interested in graph theory, applied mathematics, probability, and combinatorics.
□ Chapters
□ Chapter 1. Graphs and probability
□ Chapter 2. The web graph
□ Chapter 3. Random graphs
□ Chapter 4. Models for the web graph
□ Chapter 5. Searching the web
□ Chapter 6. The infinite web
□ Chapter 7. New directions in internet mathematics
• Permission – for use of book, eBook, or Journal content
• Book Details
• Table of Contents
• Additional Material
• Requests
Volume: 89; 2008; 184 pp
MSC: Primary 05; 68; 94
A Course on the Web Graph provides a comprehensive introduction to state-of-the-art research on the applications of graph theory to real-world networks such as the web graph. It is the first
mathematically rigorous textbook discussing both models of the web graph and algorithms for searching the web.
After introducing key tools required for the study of web graph mathematics, an overview is given of the most widely studied models for the web graph. A discussion of popular web search algorithms,
e.g. PageRank, is followed by additional topics, such as applications of infinite graph theory to the web graph, spectral properties of power law graphs, domination in the web graph, and the spread
of viruses in networks.
The book is based on a graduate course taught at the AARMS 2006 Summer School at Dalhousie University. As such it is self-contained and includes over 100 exercises. The reader of the book will gain a
working knowledge of current research in graph theory and its modern applications. In addition, the reader will learn first-hand about models of the web, and the mathematics underlying modern search
This book is published in cooperation with Atlantic Association for Research in the Mathematical Sciences.
Graduate students and research mathematicians interested in graph theory, applied mathematics, probability, and combinatorics.
• Chapters
• Chapter 1. Graphs and probability
• Chapter 2. The web graph
• Chapter 3. Random graphs
• Chapter 4. Models for the web graph
• Chapter 5. Searching the web
• Chapter 6. The infinite web
• Chapter 7. New directions in internet mathematics
Permission – for use of book, eBook, or Journal content
You may be interested in...
Please select which format for which you are requesting permissions. | {"url":"https://bookstore.ams.org/GSM/89","timestamp":"2024-11-05T06:22:44Z","content_type":"text/html","content_length":"102505","record_id":"<urn:uuid:3883d4d7-e098-42fb-bb55-38a9a5d9271a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00427.warc.gz"} |
Complex Power Functions
Complex Power Functions
Recall from The Complex Natural Logarithm Function page that if $z \in \mathbb{C} \setminus \{ 0 \}$ then we defined:
\quad \log (z) = \log \mid z \mid + i \arg (z)
Where $\arg(z)$ is specified in some branch of the complex natural logarithm function.
With this definition, we are able to define complex power functions. Let $a, b \in \mathbb{C}$ with $a \neq 0$. We would like a meaningful definition for $a^b$. If $a \neq 0$ then we know that $a = e
^{\log a}$ for some branch of the logarithm function. Defining $a^b$ follows naturally.
Definition: If $a, b \in \mathbb{C}$ and $a \neq 0$ then the Complex Power Function $a^b$ is defined as $a^b = (e^{\log a})^b = e^{b \log a}$ with some branch for the complex logarithm function.
For example, suppose that we want to compute $i^i$. Then from our definition:
\quad i^i = e^{i \log (i)} = e^{i[\log \mid i \mid + i \arg(i)]} = e^{i \left [0 + \left ( \frac{\pi}{2} + 2k\pi \right )i \right ]} = e^0 \cdot e^{i^2 \left ( 2k + \frac{1}{2} \right )\pi} = e^{-\
left ( 2k + \frac{1}{2} \right )\pi}
In general, since the complex natural logarithm function is a multi-valued function, the power function $a^b$ may take on multiple values. The following theorem tells us when multiple values of $a^b$
Theorem 1: Let $a, b \in \mathbb{C}$, $a \neq 0$.
a) $a^b$ has a unique value if and only if $b \in \mathbb{Z}$.
b) If $b \in \mathbb{Q}$, i.e., $\displaystyle{b = \frac{p}{q}}$ where $p, q \in \mathbb{Z}$ ($q \neq 0$), and $p$ and $q$ are in lowest terms, i.e., $\gcd (p, q) = 1$, then $a^b$ has $q$ distinct
values which are the $q^{\mathrm{th}}$ roots of $a^p$.
c) If $b \in \mathbb{R} \setminus \mathbb{Q}$ ($b$ is irrational) or if $b \in \mathbb{C} \setminus \mathbb{R}$ ($b$ is a nonreal complex number), then $a^b$ takes on infinitely many values.
As we will see in the proof below, the various values of $a^b$ differ only by factors of $e^{2kb\pi i}$ where $k \in \mathbb{Z}$ is determined by the choice of branch for the logarithm function.
• Proof) Consider the complex natural logarithm function $\log z$ with a chosen branch, say $[0, 2\pi)$. Then:
\quad a^b = e^{b \log a} = e^{b [\log \mid a \mid + i \mathrm{Arg} (a)]} = e^{b \log \mid a \mid} \cdot e^{i b \mathrm{Arg} (a)}
• For any other branch of the complex natural logarithm function we would have for some $k \in \mathbb{Z}$:
\quad a^b = e^{b \log a} = e^{b [\log \mid a \mid + i (\mathrm{Arg} (a) + 2k\pi)]} = e^{b \log \mid a \mid} \cdot e^{ib[\mathrm{Arg} (a) + 2k\pi]} = e^{b \log \mid a \mid} \cdot e^{ib\mathrm{Arg}
(a)} \cdot e^{2kb\pi i}
• These values different only by the factor $e^{2kb \pi i}$ where $k \in \mathbb{Z}$ is determined by the choice of branch for the logarithm function. If this value remains the same as $k$ varies
then $a^b$ has a single value. If this value differs as $k$ varies then $a^b$ will have multiple (possibly infinite) values. We cover these cases below:
• Proof of a) Suppose that $b \in \mathbb{Z}$. Then $kb \in \mathbb{Z}$. So $2kb \pi i$ is an integer multiple of $2\pi i$. But from one of the theorems on the Properties of the Complex Exponential
Function page we know that $2kb \pi i = 1$ if and only if $kb \in \mathbb{Z}$ (and $kb \in \mathbb{Z}$ if and only if $b \in \mathbb{Z}$. Therefore $a^b$ has a unique value if and only if $b \in
\mathbb{Z}$. $\blacksquare$
• Proof of b) If $b \in \mathbb{Q}$ and $\displaystyle{b = \frac{p}{q}}$ in lowest terms, then we know by the theorem on the nth Roots of Complex Numbers page that $a^p$ has $q$ many $q^{\mathrm
{th}}$ roots, i.e., there exists $q$ values for $a^b = a^{p/q}$. $\blacksquare$
• Proof of c) Suppose that $b$ is irrational and that:
\quad e^{2kb\pi i} = e^{2jb\pi i} \quad (*)
\quad e^{2kb\pi i - 2jb\pi i} = 1 \\ \quad e^{2b(k - j) \pi i} = 1
• But from the result mentioned in part (a), this happens if and only if $b(k - j) \in \mathbb{Z}$. Since $b$ is irrational, $b(k - j)$ is never an integer - a contradiction. Therefore the equality
at $(*)$ never happens. In other words, $a^b$ takes on infinitely many values.
• Now suppose that $b$ is a nonreal complex number. Say $b = x + yi$ with $y \neq 0$. Then:
\quad e^{2kb \pi i} = e^{2k(x + yi) \pi i} = e^{2kx \pi i + 2ky\pi i^2} = e^{2kx \pi i} e^{-2ky \pi}
• $e^{-2ky \pi}$ takes on multiple values as $k$ varies (by choice of the branch for the logarithm function), so $a^b$ takes on infinitely many values. | {"url":"http://mathonline.wikidot.com/complex-power-functions","timestamp":"2024-11-04T20:03:53Z","content_type":"application/xhtml+xml","content_length":"21297","record_id":"<urn:uuid:f99ee9b3-8626-4225-b24d-7a8dc2beccb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00485.warc.gz"} |
Minimum Description Length Analogies on Character Strings
Minimum Description Length Analogies on Character Strings
Get Complete Project Material File(s) Now! »
Justification of Empirical Risk Minimization
We now present the formalism of learning theory that is employed to give formal justification of the ERM. In order to introduce the hypotheses made by learning theory, we first introduce an example.
Consider a training dataset D = f(xi, yi)gi=1…N of elements in X Y. We consider the case where X = Rd, ie. the case where the input space is a vector space. Consider that the hypothesis class H is
the set of all functions X ! Y. There exists infinitely-many hypotheses h 2 H that describe the data correctly. For instance, the function that outputs yi if the input is equal to xi for i N and y1
otherwise. By construction, this hypothesis is an empirical risk minimizer for the dataset D. It can be used directly for rote learning, in other words when the goal of the learning process is to
remember the correct labeling of the set of training inputs. It is intuitively clear that this hypothesis will have extremely low performances on a task that require generalization, ie. if the
learning is asked to classify new points outside of the training set. The purpose of statistical learning theory is to provide an evidence that ERM can perform well on a task of generalization in
given conditions.
Describing generalization requires that we rigorously define the expected limits of this generalization. A key assumption in this definition is the fact that the data points are chosen independent
and identically distributed (i.i.d.). We consider the existence of a generating distribution P on X Y. Based on this distribution, we define the risk of an hypothesis h 2 H as: R(h) = E(X,Y)P [L(Y, h
(X))] (2.2) where (X,Y) P means that the random variables X and Y are drawn from distribution P. Based on this definition, the main goal of learning would be to minimize R(h) over h 2 H, but this
minimization is impossible when the distribution P is unknown. In practice, the learner has only access to the dataset D, which is a sample of N points drawn i.i.d. from distribution P. The empirical
risk defined above is thus an estimator of R(h) and can be used as a proxy for the risk minimization. A fundamental question, however, is to determine how close a solution of ERM principle is to the
solution of risk minimization.
General Presentation of Analogy and Cognition
The importance of analogy in human cognition has been studied for decades now and is closely related to the fundamental ability to extract the similarities between two concepts or situations and to
transfer the characteristics of the one to the other. This skill appears at the lowest degree of cognition, for instance in the fundamental task of perception. Children learn to discriminate between
thousands of categories in their environment, but are also able to tell the similarities from the differences. For instance, they will find many similarities between a dog and a cat, or even between
a dog and another dog, but will also be able to understand them as different. They are also able to apply a “transformation » from one object to the other. For instance, they will get that the
transformation from a standard dog to a red dog can be applied to any other object that have similar characteristics, for instance a cat. More impressively, perception of an object can depend on the
context: an animal in a picture will not be the same as a real animal, but children are able to map these two radically different entities while being able to tell which one is real. This ability is
characteristic of analogical reasoning: An interpretation, given by Structure Mapping Theory (described below), suggests that the child extracts common descriptive information and builds a mapping
between the real animal and the drawn animal. The case of drawings is even more interesting, in the sense that sensitivity to the “style » is a basic ability. A drawn dog will not be the same
depending on the cartoonist, which does not affect the perception. The transfer from one style to another is possible, and is even considered as a required competence in many artistic domains (for
instance being able to paint in the style of a famous painter, to compose in the style of a composer or to write in the style of a writer). Melanie Mitchell proposes the example of music in
(Mitchell, 2001): Any two pieces by Mozart are superficially very different, but at some level we perceive a common essence. Likewise, the Beatles rendition of Hey Jude and the version you might hear
in the supermarket have little in common in terms of instrumentation, tempo, vocals, and other readily apparent musical features, but we can easily recognize it as the same song.
A domain where analogy is hidden everywhere (but most of the time in an unconscious manner) is natural language. In order to communicate ideas or concepts, we often rely on comparisons and analogies.
For instance, it is not rare to hear expressions like “He is the Mozart of painting. » while this expression is a priori very unclear. Does it mean that this painter is also a musician? Died at the
same age as Mozart did? We intuitively understand what is meant here: Mozart, in the context of music, is often considered as a prodigy; transferring this characterization to the new context
(painting), we understand that the painter of interest has to be considered as a prodigy as well. This example displays two of the most fundamental components of analogical reasoning: on the one
hand, it shows that we are able to transfer unified descriptions from one domain to the other; on the other hand, we are even able to determine the domain when it is not defined. No indication was
given that Mozart had to be considered in the scope of music, but it is natural. Mimicking this ability will be one of the objectives of connectionist models such as ACME or LISA (Section 3.2.4).
Evidences of Syntactic Priming
As a complement to the general introduction on analogy in human cognition, we propose now a brief presentation of the problem of syntactic priming, which is some kind of unconscious analogy observed
in language production and interpretation. Syntactic priming is a well-known phenomenon observed in cognitive sciences which leads to reusing previously encountered structures in sentence generation
or comprehension. Such phenomenon is observed in several grammatical choices, such as active vs active (“Cats eat mice” vs “Mice are eaten by cats”), dative formation (“He bought the girl a book” vs
“He bought a book for the girl”) or relative clause attachment ambiguity (“I like the friend of my sister who plays the violin”: who plays the violin?). In the first two problems, it is observed that
encountering one of the forms favors the reuse of the same form; In the case of relative clause attachment, the disambiguation is primed by a former non-ambiguous relative clause. This transfer is
typically unconscious.
First studies of the phenomenon emerged in the 1980s with pioneering works on the repetition of similar syntactic forms across successive utterances (Bock, 1986). For instance, some studies observe
that, in case a speaker used a passive in a recent sentence, he is more likely to use one in future sentences (Weiner and Labov, 1983). These observations were not sufficient to conclude on the
importance of syntactic priming, since they relied on corpus data and did not reflect a preference between two alternatives. Other phenomenons could be at stake here: facility of repetition in either
lexical, thematical or metrical aspects. (Bock, 1986) provides the first real evidence of syntactic priming in language production. The existence of syntactic priming was shown in multiple languages
(Hartsuiker and Kolk, 1998), and for both written (Branigan, Pickering, and Cleland, 1999) and oral productions (Potter and Lombardi, 1998).
Table of contents :
1 Introduction: Knowledge Transfer in Artificial Intelligence
1.1 Scope
1.2 Position and Contributions
1.3 Outline
1.4 List of Publications
2 Preamble: The Problem of Learning
2.1 Reminder on Supervised Learning
2.1.1 Problem and Notations
2.1.2 The No-Free-Lunch Theorem
2.1.3 Justification of Empirical Risk Minimization
2.1.4 Conclusion on Supervised Learning
2.2 Minimum Description Length and Minimum Message Length Principles
2.2.1 Learning as Compression
2.2.2 Introducing Minimum Description Length Principle
2.2.3 Introducing Minimum Message Length Principle
2.2.4 Conclusion on MDL and MML
2.3 Drifting Away from Supervised Learning
2.3.1 Unsupervised Domain Adaptation
2.3.2 Unsupervised Learning
2.3.3 Analogies
2.4 Conclusion
I A Fundamental Problem: Analogical Reasoning
3 Introduction to Analogical Reasoning
3.1 Analogies in Human Cognition
3.1.1 General Presentation of Analogy and Cognition
3.1.2 Evidences of Syntactic Priming
3.2 Formal Models of Analogical Reasoning
3.2.1 Logic Description
3.2.2 Analogical Proportion
3.2.3 Structure Mapping Theory
3.2.4 Connectionist Models
3.2.5 Copycat and Metacat
3.2.6 Analogies and Kolmogorov Complexity
3.3 Applications of Analogical Reasoning
3.3.1 Word Embedding and Analogical Proportion
3.3.2 Linguistic Analogies
3.3.3 Machine Learning Applications
3.4 Conclusion
4 Minimum Description Length Analogies on Character Strings
4.1 Introduction: Hofstadter’s Micro-World
4.1.1 Hofstadter’s Micro-World: Presentation and Discussion
4.1.2 An Application: Linguistic Analogies
4.1.3 Method Overview
4.2 Representation Bias for Hofstadter’s Problem
4.2.1 A Generative Language
4.2.2 Basic Operators
4.2.3 Using Memory
4.2.4 Remarks on the Language
4.3 Relevance of a Solution
4.3.1 Relevance: Problems and Intuitions
4.3.2 From Language to Code
4.3.3 Relevance of a Description
4.3.4 Relevance of a Solution for Analogical Equations
4.3.5 Validation
4.4 Perspectives: Finding an Optimal Representation
4.4.1 Syntactic Scanning and Semantic Phase
4.4.2 Rule Generation
4.4.3 World Mapping and Rule Slipping
4.4.4 Rule Execution
4.4.5 Cognitive Interpretation
4.5 Conclusion
5 Minimum Complexity Analogies
5.1 A General Description Language for Analogies?
5.1.1 Analogies in Structured Domains
5.1.2 Description Length and Memory Factorization
5.2 Descriptive Graphical Models
5.2.1 Description Length and Kolmogorov Complexity
5.2.2 A Key Property: The Chain Rule
5.2.3 Defining Graphical Models
5.2.4 Machine Restriction
5.2.5 Discussion: DGM and PGM
5.2.6 Algorithmic independence
5.2.7 Inference
5.3 Minimum Complexity Analogies
5.3.1 A Graphical Model for Analogical Reasoning
5.3.2 Application: Priming Effect
5.4 Conclusion
6 Geometrical analogies
6.1 Building Analogies in Concept Spaces
6.1.1 Interpretation of the Parallelogram Rule
6.1.2 General Construction of a Parallelogram
6.2 Non-Euclidean Analogies
6.2.1 Intuition: Analogies on the Sphere
6.2.2 Non-Euclidean Analogies
6.2.3 Reminder: Riemannian Geometry
6.2.4 Non-Euclidean Analogies on Differential Manifolds
6.2.5 Proportional Analogies on Manifolds
6.3 Applications
6.3.1 Non-Euclidean Analogies in Fisher Manifold
6.3.1.1 Fisher Manifold
6.3.1.2 Experimental Results
6.3.2 Non-Euclidean Analogies in Curved Concept Spaces
6.4 Conclusion
II From Analogy to Transfer Learning
7 Transfer Learning: An Introduction
7.1 What is Transfer?
7.1.1 Examples of Transfer Learning Problems
7.1.1.1 Transfer Learning for Computer Vision
7.1.1.2 The Problem of “Small Data »
7.1.2 Background and Notations
7.1.3 Historical References and Related Problems
7.1.4 A Taxonomy of Transfer Learning Settings
7.2 Trends in Transfer Learning
7.2.1 Importance Sampling and Reweighting
7.2.2 Optimal Transport
7.2.3 Mapping and Learning Representations
7.3 A Central Question: When to Transfer?
7.3.1 Introducing Negative Transfer
7.3.2 Guarantees with Small Drifts
7.3.3 Characterizing Task Relatedness
7.4 Conclusion
8 Transfer Learning with Minimum Description Length Principle
8.1 Transductive Transfer Learning with Minimum Description Length Principle
8.1.1 Transductive Transfer and Analogy: Two Related Tasks?
8.1.2 What Analogy Suggests
8.1.3 Interpretation: A General Principle?
8.2 Defining Models
8.2.1 Probabilistic models
8.2.2 A prototype-based model
8.2.2.1 Model Complexity
8.2.2.2 Data Complexity
8.3 Validation of the Framework: A Prototype-based Algorithm
8.3.1 Measuring Complexity
8.3.1.1 Complexity of real numbers
8.3.1.2 Complexity of vectors
8.3.1.3 Complexity of prototype model transfer
8.3.2 Algorithm
8.3.2.1 A Class of Functions
8.3.2.2 Unlabeled Data Description without Transfer
8.3.2.3 Labeled Data Description without Transfer
8.3.2.4 Prototype-based Transductive Transfer with Simple Transformation
8.3.3 Measuring the quality of transfer
8.3.4 Toy examples
8.3.5 Results and discussion
8.4 Conclusion
9 Beyond Transfer: Learning with Concern for Future Questions
9.1 Supervised and Semi-Supervised Problems with Transfer and without Transfer
9.1.1 Supervised and Semi-Supervised Domain Adaptation
9.1.2 Absence of Transfer
9.2 Impossibility of Transfer?
9.2.1 Two Notions of Transferability
9.2.1.1 Learnability from Source Model
9.2.1.2 Properties of Learnability
9.2.1.3 Transferable Problems
9.2.2 Non-Transferability and Negative Transfer
9.3 Learning with Concern for Future Questions
9.3.1 Transfer to Multiple Targets
9.3.2 Transfer, Transduction and Induction: Which Links?
9.3.3 Learning with No Future in Mind
9.3.4 Including Priors over the Future
9.3.5 Some Priors for Future Questions
9.3.6 Discussion: A general learning paradigm?
9.4 Conclusion
III Incremental Learning
10 From Transfer Learning to Incremental Learning
10.1 Introduction: Learning in Streaming Environments
10.1.1 A Recent Problem: Stream Mining
10.1.2 Introducing Concept Drift
10.1.3 Passive and Active Methods
10.2 Minimum Complexity Transfer for Incremental Learning
10.2.1 Notations for Online Learning
10.2.2 A Graphical Model for Incremental Learning
10.2.3 Remark: Estimating the Models Online
10.2.4 Classes of Models
10.2.4.1 Active Methods
10.2.4.2 Passive Methods
10.3 Algorithms
10.3.1 Dealing with Previous Models
10.3.2 An Algorithm for Continuous Adaptation
10.3.3 Experimental Results
10.4 Conclusion
11 Incremental Topic Modeling and Hybrid Recommendation
11.1 Online Topic Modeling
11.1.1 Topic Modeling
11.1.2 Adaptive windowing for Topic Drift Detection
11.1.2.1 Principle
11.1.2.2 Algorithm
11.1.2.3 Theoretical guarantees
11.1.2.4 Nature of the drift
11.1.3 Experimental Results
11.1.3.1 Datasets
11.1.3.2 Setting of AWILDA
11.1.3.3 Evaluation
11.1.3.4 Comparison of AWILDA and its variants on Sd4
11.1.3.5 Performance of AWILDA on controlled datasets
11.1.3.6 Comparing AWILDA with online LDA
11.1.4 Discussion
11.2 Incremental Hybrid Recommendation
11.2.1 Online Hybrid Recommendation
11.2.2 From Incremental Matrix Factorization to Adaptive Collaborative Topic Modeling
11.2.3 Experimental Results
11.2.3.1 Datasets
11.2.3.2 Evaluation protocol
11.2.3.3 Compared Methods
11.2.3.4 Results and Discussion
11.3 Perspective: Coping with Reoccurring Drifts
11.3.1 Reoccurring Drifts
11.3.2 Drift Adaptation seen as a CBR Problem
11.3.2.1 General Process
11.3.2.2 Case Representation
11.3.2.3 Case Retrieval
11.3.2.4 Case Reuse
11.3.2.5 Case Revision
11.3.2.6 Case Retainment
11.3.3 Application to AWILDA
11.4 Conclusion
12 U-shaped phenomenon in Incremental Learning
12.1 Context: Language Acquisition
12.2 A modeling Framework
12.2.1 Assumptions
12.2.2 A Complexity-Based Framework
12.2.3 Computing Complexities
12.2.3.1 Encoding the Grammar
12.2.3.2 Grammar Transfer
12.2.3.3 Encoding the Observations
12.3 Experimental Results
12.3.1 Causes of U-shaped Phenomenon
12.3.2 Finiteness of Memory
12.3.3 Uncorrected Mistakes
12.3.4 Discussion
12.4 Conclusion
IV Information Transfer in Unsupervised Learning
13 Introduction to Multi-Source Clustering
13.1 Reminder on Clustering
13.1.1 Definition and Issues
13.1.2 Families of Algorithms
13.1.3 Performance Measures
13.2 Multi-Source Clustering: An Overview
13.2.1 Overcoming the Individual Biases
13.2.2 Clustering in Distributed Environments
13.2.3 Multi-view Data
13.2.4 The Solution of Multi-Source Clustering
13.3 Cooperative Clustering
13.3.1 Consensus Based on Objects Co-Occurrence
13.3.2 Consensus Based on Median Partition
13.3.3 Discussion
13.4 Collaborative Clustering
13.5 Conclusion
14 Complexity-based Multisource Clustering
14.1 Graphical Model for Unsupervised Collaboration
14.1.1 Notations
14.1.2 A Model for Collaboration
14.2 Complexity of Local Clustering
14.2.1 Complexity of Prototype-Based Models
14.2.2 Complexity of Probabilistic Models
14.2.3 Complexity of Density-Based Models
14.2.4 Complexity of Other Models
14.3 Algorithm for Collaborative Clustering
14.3.1 Forgetting Consensus
14.3.2 Global Approach
14.3.3 Solution Mapping
14.3.4 Mapping Optimization
14.3.5 Dealing with Sparsity
14.4 Experimental Validation
14.4.1 Datasets
14.4.2 Experimental Results
14.5 Conclusion
15 Can clustering algorithms collaborate?
15.1 Collaboration: A Difficult Concept in the Absence of Supervision
15.2 Selecting the Best Collaborators
15.2.1 Introducing the problem
15.2.2 Optimizing the Collaboration
15.2.3 Discussion
15.3 Stability of Collaborative Clustering
15.3.1 Reminder: Clustering Stability
15.3.2 Definitions: Collaborative Clustering
15.3.3 Stability of Collaborative Clustering
15.3.4 Perspectives
15.4 Conclusion
V Conclusion
16 Conclusion
16.1 Contributions
16.1.1 General Contributions
16.1.2 Local Contributions
16.1.2.1 Analogical Reasoning
16.1.2.2 Transfer Learning
16.1.2.3 Data Stream Mining
16.1.2.4 A Cognitive Model
16.1.2.5 Multi-Source Clustering
16.2 Perspectives and Future Works
A Experiment on Hofstadter’s Analogies
A.1 Experiment Protocol
A.2 Filtering Results
A.3 Detailed Results
A.3.1 Raw Results
A.3.2 Ages
A.3.3 Results by Question
B Résumé en Français
B.1 Un problème fondamental: Le Raisonnement par Analogie
B.1.1 Introduction au Raisonnement par Analogie
B.1.2 Analogies à longueur de description minimale sur les chaînes de caractères
B.1.3 Analogies de complexité minimale
B.1.4 Analogies géométriques
B.2 De l’analogie à l’apprentissage par transfert
B.2.1 Introduction à l’apprentissage par transfert
B.2.2 Apprentissage par transfert et principe de longueur de description minimale
B.2.3 Au-delà du transfert: Apprentissage avec non-indifférence à la question future
B.3 Apprentissage incrémental
B.3.1 De l’apprentissage par transfert à l’apprentissage incrémental
B.3.2 Recommandation incrémentale hybride
B.3.3 Apprentissage incrémental en forme de U
B.4 Transfert d’information en apprentissage non-supervisé
B.4.1 Introduction au clustering multi-sources
B.4.2 Clustering multi-source de complexité minimale
B.4.3 Possibilité de collaboration pour les algorithmes de clustering | {"url":"https://www.bestpfe.com/minimum-description-length-analogies-on-character-strings/","timestamp":"2024-11-12T14:06:10Z","content_type":"text/html","content_length":"70497","record_id":"<urn:uuid:52807aa7-fba5-4c8f-b329-de8744c0bc6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00570.warc.gz"} |
Know Your Relations Better -
Artful parametric modeling is all about managing relationships. Not the personal kind, but the algorithmic kind.
By defining meaningful relationships among your parameters, you can encode a new and powerful morphological genus – creating a new species DNA, as it were, that can be used to generate hundreds of
variations of models.
There are two kinds of parametric relations in Archimatix: 1. inter-nodal connections, and 2. parameter expressions. In this tutorial, we will demonstrate inter-nodal connections.
When you connect parameters from different noes to each other, you are authoring behaviors for your parametric model. When you modify one parameter, all sorts of changes may ripple through the model
according to the logic you encoded via Relation connections and mathematical expressions within the Relation. Furthermore, these Relations may be bi-directional, meaning that you can change a
parameter anywhere in the graph and changes will ripple out from that parameter. The default mathematical expression set when you fist specify a Relation is equivalency, or simply “=”..
Equal Relations
A common case for an equals Relation is when you want one object to always sit atop another, regardless of how tall the bottom object is. In the example to the right, the behavior illustrated is that
the blue Cylinder is always atop the red Box.
Try This!
To set up this parametric behavior:
1. Choose a Cylinder and a Box from the 3D Library (left sidebar in the NodeGraphWindow.
2. Unfold the Controls of the Box and the
Transformations of the Cylinder.
3. Click on the red connector box on the parameter Extrude in the Box node palette.
4. Click on the red connector button next to the Trans_Y parameter on the Cylinder node palette.
5. To test: either click on the Box in the SceneView and then drag the green knob to make the Box taller, or click on the Cylinder and then drag the Y-Axis Position Handle.
You will notice that the relation is bi-directional. Modifying either parameter will alter the related parameter. This is a departure from other parametric modelers which feature uni-directional
relations. The benefit of bi-directional is that, when playing with a parametric model in the SceneView, you can click just about anywhere you like and start modifying, rather than searching for the
“master” parameter.
However, this freedom is not free: the bi-directionality requires inverse expressions to be input. In the case of our simple example, we did not edit the expression found in th relation, relying on
the default equals expression. Let’s take a look at how we might make a slightly more complex relation expression.
Expressing Relations
When would like to have more interesting Relations, you can use the ExpressionEditorWindow that pops up when you click on the green button at the center of the Relation connector cable. In the
ExpressionEditorWindow are two text fields allowing you to edit the bi-directional relationship between the two parameters.
Try This!
Lets say that we would like to simulate the movement of a piston relative to the rotation of a crankshaft in a car engine. The piston rises and falls sinusoidally as the shaft turns. The expression
is Piston.Trans_Y=Sin(Crankshaft.Rot_X). Lets go ahead and set this up:
1. Choose a Cylinder from the 3D Library.
2. Click on the name button at the top of the node palette and rename it “Piston.”
3. Use your copy and paste short cuts to create a second Cylinder. Name it “Crankshaft.”
4. Next to the Transformations foldout, click the axis button until it show “X.”
5. Open the Transformations foldout and choose “Align_X” choose “Center.”
6. Click on the Crankshaft in the SceneView and reduce the radius a bit.
7. Connect the Piston.Trans_Y to the Crankshaft.RotX.
8. Try rotating the Crankshaft – the Piston will continue upward or downward.
9. Click on the green button in the middle of the red connector cable.
10. In the ExpressionEditor, in the field filled in with Crankshaft.Rot_X, change the expression to: Sin(Crankshaft.Rot_X) and then click on the Save button just below.
11. Test by rotating the Crankshaft again. The Piston will oscillate up an down.
12. Make the Piston more responsive to the rotation, decrease the stroke and lift it higher above the Crankshaft by editing the expression again to be: 1.5+.5*Sin(2*Crankshaft.Rot_X)
We will save more elaboration on this with the addition of a piston rod, etc. please see the tutorial The Parametric Engine.
Best Practice: Relating Translations
If you find yourself connecting the Trans_X for one node and the Trans_X of another node to the same source, it is probably better to group the two nodes together with a Grouper and then relate the
Grouper’s Trans_X to the source. This is analogous to parenting two GameObjects to an “Empty” GameObject in the Unity hierarchy window. While Archimatix can handle lots of relations, by using
relation connections where a grouping would do, you will add more visual complexity to the NodeGraphWindow.
For example, the animation to the right depicts a parametric behavior whereby the red Box and the blue Cylinder are always positioned at the end of the gold Box. There are two ways we could encode
this behavior:
Method 1: This method is not preferable, but happens commonly while building up a graph. The Trans_X of the Cylinder has been related to the width parameter of the rectangular plan of the gold Box
with an expression of Cylinder.Trans_X=Rectangle.width/2. When the red Box was added to the graph, a similar relation was added between the gold Box and the red Box, as shown in the figure to the
right. Now when we drag the Handle for the Rectangle width, the blue Cylinder and the red Box translate accordingly.
The down side of this is that we have two connections and we have to enter the same mathematical expression twice (for the Cylinder and for the red Box). If we want to change that relationship, we
have to change it in two places. Also, the graph will quickly get cluttered if such translations are maintained with Relation connections all the time.
Method 2: Alternatively, we can feed the Cylinder and Box into a Grouper and then relate the Trans_X of the Grouper to the width of the Rectangle.
The behavior of our parametric model will be exactly the same, but now, if we wish to edit the expression in the relations, we are editing in only one place. Also, the graph will have fewer
parametric relations, which tends to make the graph more legible.
You must be logged in to post a comment. | {"url":"https://www.archimatix.com/tutorials/getting-to-know-your-relations","timestamp":"2024-11-15T03:19:02Z","content_type":"text/html","content_length":"79560","record_id":"<urn:uuid:bbec6291-6ce4-4ba4-997c-5e7f776ee191>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00653.warc.gz"} |