Overcome Divide by Zero Using NULLIF

Anytime we divide, we need to think about the divide-by-zero scenario. Even if you think you will never encounter it with your result set, it's advisable to guard against it, because when a divide by zero is encountered, an error is thrown. The best method I've found to overcome this is the NULLIF function. This function takes two parameters, and if they are equal, it returns NULL. Let's take a look at an example that throws a divide by zero error.

[cc lang="sql"]
DECLARE @iter float;
DECLARE @num float;
SET @num = 10;
SET @iter = 5;
WHILE @iter > -5
BEGIN
    SELECT @num / @iter;
    SET @iter = @iter - 1;
END
[/cc]

Running the query above, we see that once the variable @iter becomes zero, we receive an error. The most elegant way to overcome this is to use the NULLIF function to compare @iter to zero. When @iter equals zero, NULLIF changes it to NULL, and dividing anything by NULL yields NULL.

[cc lang="sql"]
DECLARE @iter float;
DECLARE @num float;
SET @num = 10;
SET @iter = 5;
WHILE @iter > -5
BEGIN
    SELECT @num / NULLIF(@iter, 0);
    SET @iter = @iter - 1;
END
[/cc]

This executes without error; however, we still receive a NULL as a result. If you need something else, you can wrap the expression in ISNULL to return a different value.

[cc lang="sql"]
DECLARE @iter float;
DECLARE @num float;
SET @num = 10;
SET @iter = 5;
WHILE @iter > -5
BEGIN
    SELECT ISNULL(@num / NULLIF(@iter, 0), @num);
    SET @iter = @iter - 1;
END
[/cc]

This simply returns the numerator unchanged whenever you encounter a zero denominator.
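For comparison, the same NULLIF/ISNULL guard pattern can be sketched outside of SQL. This Python helper is purely illustrative; the function name `safe_div` and its `default` argument are my own, not from the article:

```python
def safe_div(numerator, denominator, default=None):
    """Mimic ISNULL(numerator / NULLIF(denominator, 0), default).

    NULLIF(x, 0) maps a zero denominator to NULL (None here), and
    ISNULL substitutes a fallback value for the NULL result.
    """
    if denominator == 0:   # NULLIF(denominator, 0) would yield NULL
        return default     # ISNULL(..., default) supplies the fallback
    return numerator / denominator

# The same walk from 5 down to -4 as the article's WHILE loop:
results = [safe_div(10, i, 10) for i in range(5, -5, -1)]
```

The zero case returns the default (here, the numerator itself) instead of raising `ZeroDivisionError`, mirroring the ISNULL-wrapped query.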
{"url":"https://sqlserverplanet.com/tsql/overcome-divide-by-zero-using-nullif","timestamp":"2024-11-08T12:27:49Z","content_type":"application/xhtml+xml","content_length":"40523","record_id":"<urn:uuid:809b243e-7f18-453f-bbe4-2c0e31036b49>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00201.warc.gz"}
Traversals - 4: Rotate Array (once)

Explained in simpler words, the problem basically says that given an array of integers, our target is to shift each element to the position just next to it (on the right side). Now, you may think that this makes sense for all the other elements, but where will the last element go? It should come to the first position. Please refer to the diagram below for a better understanding.

How to solve it?

One way could be to create a new array and set its elements in such a manner that the newly created array is the rotated version of the given array. Once the new array is filled, we can copy the elements from the new array back to the original array, and we're done. Let's explore this way first.

Let's call the newly created array ans and the original given array arr. Since all the elements are shifted right once, that simply means that ans[1] = arr[0], ans[2] = arr[1], ..., ans[n-1] = arr[n-2]. In general, what I basically mean to say is that ans[i] = arr[i-1]. The above holds true for all indices except index 0. The value of ans[0] will be equal to arr[n-1]. Please look at the above diagram carefully to be able to understand this.

I think enough of an idea about the solution has been given above, so let's dive right into implementing it. Once again, please try it yourself first.

void rotate(int arr[], int n)
{
    // Declaring a new vector/array
    vector<int> ans(n);
    // Handling the general cases
    for(int i = 1; i < n; ++i)
        ans[i] = arr[i-1];
    // Handling the corner case.
    ans[0] = arr[n-1];
    // Copying the elements to arr.
    for(int i = 0; i < n; ++i)
        arr[i] = ans[i];
}

How to solve it without using extra space?

Now, what we're going to try to do is make changes in the given array itself to rotate it, without any involvement of a second array.
This process of not involving a new array and doing things in the given input array is sometimes referred to as doing something in-place.

Trial 1

void rotate(int arr[], int n)
{
    // Handling the general cases
    for(int i = 1; i < n; ++i)
        arr[i] = arr[i-1];
    // Handling the corner case.
    arr[0] = arr[n-1];
}

What's wrong with trial 1?

What's wrong is that we're saying that arr[i] = arr[i-1], but arr[i-1] has already been updated, because we're iterating from left to right. To break it down, let's dry-run the code line by line for the input [1, 2, 3, 4]:

1. In the first iteration: i = 1, therefore arr[1] becomes equal to arr[0]. Hence, arr[1] = 1.
2. In the second iteration: i = 2, therefore arr[2] becomes equal to arr[1]. Hence, arr[2] = 1.
3. I hope you get the picture: everything becomes equal to 1.

A learning

A useful learning from this trial: whenever you shift values around in a data structure, make sure you do it in such a manner that the original values are still available when you need them. Hope the above explanation makes sense. If it doesn't, please use the feedback form to let me know.

Trial 2

The problem in the trial above was that by the time we used arr[i-1] for the shift, it had already been updated. So, instead of iterating from left to right, what if we iterate from right to left? Think about it.

void rotate(int arr[], int n)
{
    // Handling the general cases
    for(int i = n - 1; i >= 1; --i)
        arr[i] = arr[i-1];
    // Handling the corner case.
    arr[0] = arr[n-1];
}

Is there anything wrong?

The answer is that we're almost there; there's just one small thing. Try to find the bug before moving on.

The Bug

The bug is in the corner case. If you look at the general case, the problem from trial 1 is solved now: arr[i] is made equal to arr[i-1], and because we're iterating in reverse order, arr[i-1] hasn't been touched yet when we're dealing with arr[i].
In the corner case, the problem is that arr[n-1] has already been updated and is now equal to arr[n-2]. Let's quickly fix the bug by saving the last element before the loop, and we're done.

Final Implementation

void rotate(int arr[], int n)
{
    int last_element = arr[n - 1];
    // Handling the general cases
    for(int i = n - 1; i >= 1; --i)
        arr[i] = arr[i-1];
    // Handling the corner case.
    arr[0] = last_element;
}

Time and Space Complexity

• Time Complexity: O(N)
• Space Complexity: O(1)
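A quick way to sanity-check the final in-place algorithm is to port it to another language. Here is a direct Python translation (illustrative only; the tutorial itself uses C++):

```python
def rotate(arr):
    """Rotate arr right by one position, in place: O(n) time, O(1) extra space."""
    n = len(arr)
    if n <= 1:
        return
    last_element = arr[n - 1]   # save the value the loop would overwrite
    # General case: walk right to left so arr[i-1] is still untouched
    for i in range(n - 1, 0, -1):
        arr[i] = arr[i - 1]
    # Corner case: the old last element wraps around to the front
    arr[0] = last_element

nums = [1, 2, 3, 4]
rotate(nums)   # nums is now [4, 1, 2, 3]
```

Running it on the tutorial's example [1, 2, 3, 4] gives [4, 1, 2, 3], exactly the right-rotation described above.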
{"url":"https://read.learnyard.com/dsa/traversals-4/","timestamp":"2024-11-03T02:59:50Z","content_type":"text/html","content_length":"219488","record_id":"<urn:uuid:03bb0639-22a4-4b78-b303-dd1d230e288f>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00428.warc.gz"}
The Evolution of Signature Algorithms

Public key cryptography deals with finding mathematical operations for a key pair that are easy to calculate with part of the key pair but hard to reverse without it. Such an operation is called a trapdoor function. Signature algorithms use the private key from the key pair and a message in some form as input to the trapdoor function. The outcome is the signature. Normally, the message passes through a hashing function before it gets signed. The signature is easy to calculate using the private key and easy to validate with the public key. Thus everybody with the public key can validate a signed message, verify that it comes from the expected sender (message authenticity), and verify that it was not tampered with (message integrity). Because of the underlying mathematical operation, it is practically infeasible to create a valid signature without the private key. Further, the private key cannot be recalculated from the public key or any other public information of the system, such as the message, hash value, or signature. However, if used in the wrong context, it is possible to retrieve the private key from signatures, or even to create valid signatures without it despite the trapdoor function. To avoid such pitfalls and security incidents, it's essential to familiarize yourself with the different algorithms.

Not many people have had such a significant impact on modern life as Rivest, Shamir, and Adleman. In 1977, these researchers published the description of a public-key algorithm that became known as the RSA algorithm. Even now, about 45 years later, RSA is still widely used. It's impressive that RSA is still considered relatively safe after so many years. Yet, as great as RSA is, it has its limitations.
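The sign-with-private-key, verify-with-public-key flow described above can be illustrated with a toy textbook-RSA example. To be clear, this is purely didactic and my own addition: the key below is absurdly small, and real systems should use RSASSA-PSS or EdDSA through a vetted library, never hand-rolled RSA.

```python
import hashlib

# Toy RSA key: n = p*q with p = 61, q = 53, and e*d ≡ 1 (mod (p-1)(q-1))
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash first (as described above), then apply the private exponent d.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)          # easy with the private key

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public exponent e can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"hello")
verify(b"hello", sig)            # True: authentic and untampered
verify(b"hello", (sig + 1) % n)  # False: not the unique e-th root of the hash
```

Because e is coprime to the group order, x → x^e mod n is a bijection, so the hash has exactly one valid signature; forging one without d would require solving the factoring problem discussed below.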
Over the years, researchers, cryptographers, and attackers have found vulnerabilities and impracticalities in the algorithm (mainly concerning RSA encryption and not the signature). Therefore, it's best practice not to use textbook RSA but rather RSASSA-PKCS1-v1_5 (RS256, RS384, or RS512) or RSASSA-PSS (PS256, PS384, or PS512) for your signatures. These are the versions of RSA that we use in the Curity Identity Server.

The underlying trapdoor function in RSA relies on the factoring problem. There is no efficient algorithm known that breaks an integer into its prime factors. Finding the prime factors becomes especially hard when the integer is composed of two large prime numbers (as is the case in all variants of RSA). However, with processing power getting stronger and cheaper, attacks on the factoring problem are becoming more concerning. To mitigate the risk of cracked keys, we simply increase the size of the RSA key. This way, it becomes harder and harder to guess the prime factors of the public key, which would further reveal the corresponding private key.

However, that strategy comes with a drawback and cost: considering that RSA signatures have the same size as the key, a system using large keys consumes more storage for the keys. It also consumes greater processing power for calculations and bandwidth for transporting the messages. Practically speaking, we cannot continue increasing the key size indefinitely. This is especially relevant for systems like small Internet of Things devices with limited resources; they simply do not have the capacity for larger keys. At some point, processing power will catch up, and RSA keys might be broken before they are invalidated. That is where elliptic curves come into play.

Elliptic Curves

With Elliptic Curve Cryptography (ECC), researchers have found algorithms for signing and encryption that work with small keys and are hard to break.
The algorithms are based on a problem called the Elliptic Curve Discrete Logarithm Problem (ECDLP). The trapdoor function in ECDSA is more secure than in RSA, but it makes the algorithm slower. There is a range of different curves, that is, named sets of defined parameters and formulas. P-256 is one example, Curve25519 another. The security of elliptic curves depends on the characteristics, the parameters, and the formulas of the definition. More often, however, security relies on the implementation details. For example, algorithms for digital signatures based on elliptic curves (ECDSA) require a random, single-use nonce. Consequently, the nonce becomes a weak point. A bad random generator or the reuse of a nonce eventually allows an attacker to calculate the private key — a problem that previously affected YubiKeys and Sony's PlayStation 3.

ECDSA is complex, and it's difficult to implement elliptic curves correctly. If done wrong, elliptic curves may create a security risk rather than a benefit. For example, the ECDSA implementation in Java had a serious flaw that allowed an attacker to easily forge signatures. In addition, vulnerability to side-channel attacks is a known problem in ECDSA. So make sure you select the right curve and a secure implementation when using ECDSA. If in doubt, choose EdDSA if you can.

EdDSA is a signature scheme built on evolved elliptic curves, Curve25519 and Curve448. A smart combination of parameters and formulas eliminates two main problems with other elliptic curve signatures: the requirement for a random nonce and the vulnerability to side-channel attacks (the latter depends on the implementation and remains theoretically possible). Side-channel attacks are a category of attacks that don't target vulnerabilities of the algorithm itself but of its implementation, by analyzing system characteristics such as power consumption, electromagnetic leaks in caches or memory, or timing information.
EdDSA uses complete formulas, which means that the rules apply to all points on the elliptic curve. As a result, no edge cases need expensive parameter validation and exception handling. Consequently, EdDSA is easier to implement and less prone to side-channel attacks compared to other elliptic curve signature algorithms. Nowadays, there are even complete formulas for Weierstrass curves, meaning that it's now easier to implement elliptic curve algorithms in general. However, the problem with the nonce in ECDSA remains. EdDSA does not depend on a random number generator for the nonce and is, therefore, more secure.

In addition, EdDSA was designed with high performance in mind. Since the keys are small and the operations are fast, EdDSA saves time, money, and resources. As a result, EdDSA is a green computing alternative that reduces the environmental impact of cryptography.

Although the industry now has this progressive technology, some software providers have been slow to adopt it. When switching to EdDSA, you may experience limited support from languages, libraries, and frameworks. The Curity Identity Server, however, poses no such obstacle. Not only does the Curity Identity Server support EdDSA out of the box, but Curity also provides resources to help you along the way. The right choice should be easy. Choose EdDSA and make a difference for the security and future of your system!

If you want to learn more about signature algorithms and why EdDSA is the optimal choice for securing tokens, watch our webinar, An Engineer's Guide to Signature Algorithms and EdDSA. It is available on-demand.
{"url":"https://curity.io/blog/the-evolution-of-signature-algorithms/","timestamp":"2024-11-14T08:15:54Z","content_type":"text/html","content_length":"601302","record_id":"<urn:uuid:abb562dc-4918-49c8-9fe4-4729efffa6a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00542.warc.gz"}
Library Stdlib.Numbers.NatInt.NZDomain In this file, we investigate the shape of domains satisfying the NZDomainSig interface. In particular, we define a translation from Peano numbers nat into NZ. Relationship between points thanks to succ and pred. For any two points, one is an iterated successor of the other. Generalized version of pred_succ when iterating From a given point, all others are iterated successors or iterated predecessors. In particular, all points are either iterated successors of 0 or iterated predecessors of 0 (or both). First case: let's assume such an initial point exists (i.e. S isn't surjective)... ... then we have unicity of this initial point. ... then all other points are descendant of it. NB : We would like to have pred n == n for the initial element, but nothing forces that. For instance we can have -3 as initial point, and P(-3) = 2. A bit odd indeed, but legal according to . We can hence have n == (P^k) m exists k', m == (S^k') n We need decidability of (or classical reasoning) for this: Second case : let's suppose now S surjective, i.e. no initial point. To summarize: S is always injective, P is always surjective (thanks to I) If S is not surjective, we have an initial point, which is unique. This bottom is below zero: we have N shifted (or not) to the left. P cannot be injective: P init = P (S (P init)). (P init) can be arbitrary. II) If S is surjective, we have forall n, S (P n) = n , S and P are bijective and reciprocal. IIa) if exists k<>O, 0 == S^k 0 , then we have a cyclic structure Z/nZ IIb) otherwise, we have Z An alternative induction principle using S and P. It is weaker than . For instance it cannot prove that we can go from one point by many S or , but only by many mixed with many . Think of a model with two copies of N: 0, 1=S 0, 2=S 1, ... 0', 1'=S 0', 2'=S 1', ... and P 0 = 0' and P 0' = 0. We now focus on the translation from nat into NZ. First, relationship with 0, succ, pred. 
Since P 0 can be anything in NZ (either -1, 0, or even other numbers), we cannot state the previous lemma for n=O. If we require in addition a strict order on NZ, we can prove that ofnat is injective, and hence that NZ is infinite (i.e. we ban Z/nZ models). For basic operations, we can prove correspondence with their counterpart in nat.
{"url":"https://coq.inria.fr/doc/master/stdlib/Stdlib.Numbers.NatInt.NZDomain.html","timestamp":"2024-11-14T04:23:14Z","content_type":"application/xhtml+xml","content_length":"81557","record_id":"<urn:uuid:f8456680-f892-467c-838c-7a2ecc4b2cff>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00513.warc.gz"}
Log Rolling. Topic: Inequalities. Level: AIME.

Problem: Given reals $ a, b, c \ge 2 $, show that $ \log_{a+b}(c)+\log_{b+c}(a)+\log_{c+a}(b) \ge \frac{3}{2} $.

Solution: Recall the change-of-base identity for logs. We can rewrite the LHS as $ \frac{\log{c}}{\log{(a+b)}}+\frac{\log{a}}{\log{(b+c)}}+\frac{\log{b}}{\log{(c+a)}} $. Note, however, that $ (a-1)(b-1) \ge 1 \Rightarrow ab \ge a+b \Rightarrow \log{a}+\log{b} = \log{(ab)} \ge \log{(a+b)} $, so it remains to show that $ \frac{\log{c}}{\log{a}+\log{b}}+\frac{\log{a}}{\log{b}+\log{c}}+\frac{\log{b}}{\log{c}+\log{a}} \ge \frac{3}{2} $, which is true by Nesbitt's Inequality. QED.

Comment: A good exercise in using log identities and properties to achieve a relatively simple result. Once we made the change-of-base substitution, seeing the $ \frac{3}{2} $ should clue you in to Nesbitt's. That led to the inequality $ \log{a}+\log{b} \ge \log{(a+b)} $, which was easily proven given the conditions of the problem.

Practice Problem: Given reals $ a, b, c \ge 2 $, find the best constant $ k $ such that $ \log_{a+b+c}(a)+\log_{a+b+c}(b)+\log_{a+b+c}(c) \ge k $.

3 comments:

1. The new problem is easier, I think. It becomes abc \ge (a + b + c)^k. At a = b = c = 2 we see we require k \le log_6 (8), and it's pretty clear that if we increase a, b, c, the LHS increases faster than the RHS.

2. we talked about logrolling in Health Science today, aka turning someone in bed with a draw sheet, used on people with spinal injuries or just out of spinal surgery. Thought it was a funny

3. Lol nice. Just made the title up randomly.
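As a quick numeric sanity check (my own addition, not part of the original post), the bound is attained with equality at $a = b = c = 2$, since each term becomes $\log_4 2 = \frac{1}{2}$:

```python
import math

def log_base(base, x):
    # Change-of-base identity used in the solution above.
    return math.log(x) / math.log(base)

# Equality case: each term is log_4(2) = 0.5, so the sum is exactly 3/2.
a = b = c = 2
total = log_base(a + b, c) + log_base(b + c, a) + log_base(c + a, b)

# A point away from the equality case, where the inequality is strict:
a2, b2, c2 = 3, 5, 7
total2 = log_base(a2 + b2, c2) + log_base(b2 + c2, a2) + log_base(c2 + a2, b2)
```

The equality at (2, 2, 2) matches the structure of the proof: both the log step ($ab \ge a+b$) and Nesbitt's Inequality are tight exactly when all three variables coincide at 2.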
{"url":"http://www.mathematicalfoodforthought.com/2007/01/log-rolling-topic-inequalities-level_24.html","timestamp":"2024-11-09T10:44:48Z","content_type":"application/xhtml+xml","content_length":"56675","record_id":"<urn:uuid:8f96a486-53dd-4a15-bb85-a2c8cce0735c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00089.warc.gz"}
CS111 Assignment 5 solved

## Overview

For this assignment, you will complete searching/sorting tasks and efficiency analysis. No code is to be written for this assignment. Write your answers in the file assign5.txt.

## Problem 1

1. Trace selection sort on the following array of letters (sort into alphabetical order):

M U E J R Q X B

After each pass (outer loop iteration) of selection sort, show the contents of the array and the number of letter-to-letter comparisons performed on that pass (an exact number, not big-O).

2. Trace insertion sort on the following array of letters (sort into alphabetical order):

M U E J R Q X B

After each pass (outer loop iteration) of insertion sort, show the contents of the array and the number of letter-to-letter comparisons performed on that pass (an exact number, not big-O).

## Problem 2

For each problem scenario given below, do the following:

1. Create an algorithm to solve the problem.
2. Identify the factors that would influence the running time and that can be known before the algorithm or code is executed. Assign names (such as n) to each factor.
3. Identify the operations that must be counted. You need not count every statement separately. If a group of statements always executes together, treat the group as a single unit. If a method is called and you do not know the running time of that method, count it as a single operation.
4. Count the operations performed by the algorithm or code. Express the count as a function of the factors you identified in Step 2. If the count cannot be expressed as a simple function of those factors, define the bounds that can be placed on the count: the best case (lower bound) and the worst case (upper bound).
5. Determine what the best-case inputs and the worst-case inputs are, and the efficiency of your implementation for each.
6. Transform your count formula into big-O notation by:
   - Taking the efficiency with worst-case input.
   - Dropping insignificant terms.
   - Dropping constant coefficients.
Do Problem 2 for each of these scenarios.

a. Determine if 2 arrays contain the same elements.
b. Count the total number of characters that have a duplicate within a string (e.g., "gigi the gato" would result in 7: g x 3 + i x 2 + t x 2).
c. Find an empty row in a 2-D array, where empty is defined as a row containing only 0 entries.
{"url":"https://codeshive.com/questions-and-answers/cs111-assignment-5-solved/","timestamp":"2024-11-05T03:33:58Z","content_type":"text/html","content_length":"99439","record_id":"<urn:uuid:7666c337-7ba0-48c2-9693-d88de9212c15>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00364.warc.gz"}
Dipartimento di Ingegneria informatica, automatica e gestionale We study the minority-opinion dynamics over a fully-connected network of n nodes with binary opinions. Upon activation, a node receives a sample of opinions from a limited number of neighbors chosen uniformly at random. Each activated node then adopts the opinion that is least common within the received sample. Unlike all other known consensus dynamics, we prove that this elementary protocol behaves in dramatically different ways, depending on whether activations occur sequentially or in parallel. Specifically, we show that its expected consensus time is exponential in n under asynchronous models, such as asynchronous GOSSIP. On the other hand, despite its chaotic nature, we show that it converges within O(log^2 n) rounds with high probability under synchronous models, such as synchronous GOSSIP. Finally, our results shed light on the bit-dissemination problem, that was previously introduced to model the spread of information in biological scenarios. Specifically, our analysis implies that the minority-opinion dynamics is the first stateless solution to this problem, in the parallel passive-communication setting, achieving convergence within a polylogarithmic number of rounds. This, together with a known lower bound for sequential stateless dynamics, implies a parallel-vs-sequential gap for this problem that is nearly quadratic in the number n of nodes. This is in contrast to all known results for problems in this area, which exhibit a linear gap between the parallel and the sequential setting.
{"url":"http://www.corsodrupal.uniroma1.it/publication/28126","timestamp":"2024-11-02T21:02:00Z","content_type":"text/html","content_length":"26576","record_id":"<urn:uuid:dabb2f8a-2d12-4af3-a6fa-556f0a881891>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00498.warc.gz"}
8.14: Hypothesis Test for a Population Proportion (2 of 3)
Learning Objectives

• Conduct a hypothesis test for a population proportion. State a conclusion in context.

On the previous page, we looked at determining hypotheses for testing a claim about a population proportion. On this page, we look at how to determine P-values.

As we learned earlier, the P-value for a hypothesis test for a population proportion comes from a normal model for the sampling distribution of sample proportions. The normal distribution is an appropriate model for this sampling distribution if the expected numbers of successes and failures are both at least 10. Using the symbols for the population proportion and sample size, a normal curve is a reasonable model if the following conditions are met: np ≥ 10 and n(1 − p) ≥ 10.

Health Insurance Coverage

Recall this example from the previous page. According to the Government Accountability Office, 80% of all college students (ages 18 to 23) had health insurance in 2006. The Patient Protection and Affordable Care Act of 2010 allowed young people under age 26 to stay on their parents' health insurance policy. Has the proportion of college students (ages 18 to 23) who have health insurance increased since 2006? A survey of 800 randomly selected college students (ages 18 to 23) indicated that 83% of them had health insurance. Use a 0.05 level of significance.

Step 1: Determine the hypotheses.

We did this on the previous page.
The hypotheses are:

• H0: p = 0.80
• Ha: p > 0.80

where p is the proportion of college students ages 18 to 23 who have health insurance now.

Step 2: Collect the data.

In this random sample of 800 college students, 83% have health insurance. If 80% of all college students have health insurance, is this 3% difference statistically significant or due to chance? We need to find a P-value to answer this question. We must determine if we can use this data in a hypothesis test.

First note that the data are from a random sample. That is essential. Now we need to determine if a normal model is a good fit for the sampling distribution. Since we assume that the null hypothesis is true, we build the sampling distribution with the assumption that 0.80 is the population proportion. We check the following conditions, using 0.80 for p:

$np=(800)(0.80)=640\text{ }\mathrm{and}\text{ }n(1-p)=(800)(1-0.80)=160$

Because these are both more than 10, we can use the normal model to find the P-value.

Step 3: Assess the evidence.

Now that we know that the normal distribution is an appropriate model for the sampling distribution, our next goal is to determine the P-value. The first step is to determine the z-score for the observed sample proportion (the data). The sample proportion is 0.83. Recall from Linking Probability to Statistical Inference that the formula for the z-score of a sample proportion is as follows:

$Z=\frac{\hat{p}-p}{\sqrt{\frac{p(1-p)}{n}}}$

For this example, we calculate:

$Z=\frac{0.83-0.80}{\sqrt{\frac{0.80(1-0.80)}{800}}}\approx 2.12$

This z-score is called the test statistic. It tells us the sample proportion of 0.83 is about 2.12 standard errors above the population proportion given in the null hypothesis. We use this statistic to find the P-value. The P-value describes the strength of the evidence against the null hypothesis. We use the simulation that we first saw in Probability and Probability Distributions to determine the P-value.
The P-value is a probability that describes the likelihood of the data if the null hypothesis is true. More specifically, the P-value is the probability that sample results are as extreme as or more extreme than the data if the null hypothesis is true. The phrase “as extreme as or more extreme than” means farther from the center of the sampling distribution in the direction of the alternative hypothesis. In this situation, we want the area to the right of 0.83 because the alternative hypothesis is a “greater-than” statement. The P-value, in this case, is the probability of getting a sample proportion equal to or greater than 0.83. Since we are using the standard normal curve to find probabilities, the P-value is the area to the right of Z = 2.12. We can find this area with a simulation or other technology. The P-value is approximately 0.0170.

Thus, the probability that a random sample proportion is at least as large as 0.83 is about 0.017 (if the population proportion is actually 0.80). If the null hypothesis is true, we observe sample proportions this high or higher only about 1.7% of the time. The P-value is our evidence of statistical significance. It is a measure of whether random chance can explain the deviation of the data from the null hypothesis.

Step 4: State a conclusion.

To determine our conclusion, we compare the P-value to the level of significance, α = 0.05. If our data are predicted to occur by chance less than 5% of the time, we have reason to reject the null hypothesis and accept the alternative. Since our P-value of 0.017 is less than 0.05, we reject the null hypothesis. We state our conclusion in terms of the alternative hypothesis, and we also state it in context:

The data from this study provides strong evidence that the proportion of all college students who have health insurance is now greater than 0.80 (P-value = 0.017). The 0.03 increase in the proportion who have health insurance since 2006 is statistically significant at the 0.05 level.
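The test statistic and P-value from Steps 3 and 4 can be checked with a few lines of code. The sketch below is my own illustration (not part of the original lesson); it uses only the Python standard library, with the standard normal CDF built from math.erf.

```python
import math

def one_tailed_test(p_hat, p0, n):
    """Z statistic and right-tail P-value for H0: p = p0 vs Ha: p > p0."""
    se = math.sqrt(p0 * (1 - p0) / n)                    # standard error under H0
    z = (p_hat - p0) / se                                # test statistic
    normal_cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return z, 1 - normal_cdf                             # area to the right of z

z, p_value = one_tailed_test(0.83, 0.80, 800)
print(f"Z = {z:.2f}, P-value = {p_value:.4f}")  # Z ≈ 2.12, P-value ≈ 0.017
```

Because the P-value (about 0.017) is below α = 0.05, the code reproduces the rejection of the null hypothesis.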
Alternatively, we can give the conclusion using the percentage rather than the decimal:

The data from this study provides strong evidence that the percentage of all college students who have health insurance is now greater than 80% (P-value = 0.017). The 3% increase in the percentage who have health insurance since 2006 is statistically significant at the 5% level.

Important Note

A hypothesis test can be one-tailed or two-tailed. The previous example was a one-tailed hypothesis test. The P-value was the area of the right tail. If the inequality in the alternative hypothesis is < or >, the test is one-tailed. If the inequality is ≠, the test is two-tailed.

Internet Access

Recall the following example from the previous page. According to the Kaiser Family Foundation, 84% of U.S. children ages 8 to 18 had Internet access at home as of August 2009. Researchers wonder if this percentage has changed since then. They survey 500 randomly selected children (ages 8 to 18) and find that 430 of them have Internet access at home. Use a level of significance of α = 0.05 for this hypothesis test.

Step 1: Determine the hypotheses.

• H[0]: p = 0.84
• H[a]: p ≠ 0.84

where p is the proportion of children ages 8 to 18 with Internet access at home now.

Step 2: Collect the data.

Our sample is random, so there is no problem there. Again, we want to determine whether the normal model is a good fit for the sampling distribution of sample proportions. Based on the null hypothesis, we will use 0.84 as our population proportion to check the conditions.

$np=(500)(0.84)=420\text{ }\mathrm{and}\text{ }n(1-p)=(500)(1-0.84)=80$

Because these are both more than 10, we can use the normal model to find the P-value.

Step 3: Assess the evidence.

Since we can use the normal model, we need to calculate the z-test statistic for the sample proportion. We first calculate the sample proportion:

$\hat{p}=\frac{430}{500}=0.86$
Next, we calculate our Z-score, the test statistic:

$Z=\frac{\hat{p}-p}{\sqrt{\frac{p(1-p)}{n}}}=\frac{0.86-0.84}{\sqrt{\frac{0.84(1-0.84)}{500}}}\approx 1.22$

The sample proportion of 0.86 is about 1.22 standard errors above the population proportion given in the null hypothesis. Now we calculate the P-value. This is where the two-tailed nature of the test is important. The P-value is the probability of seeing a sample proportion at least as extreme as the one observed from the data if the null hypothesis is true.

In the previous example, only sample proportions higher than the null proportion were evidence in favor of the alternative hypothesis. In this example, any sample proportion that differs from 0.84 is evidence in favor of the alternative. Statistically significant differences are at least as extreme as the difference we see in the data. We want to determine the probability that the difference in either direction (above or below 0.84) is at least as large as the difference seen in the data, so we include sample proportions at or above 0.86 and sample proportions at or below 0.82. For this reason, we look at the area in both tails. Our simulation shows one tail, so we have to double this area.

The area above the test statistic of 1.22 is about 0.11. We double this area to include the area in the left tail, below Z = −1.22. This gives us a P-value of approximately 0.22. Our sample proportion was 0.02 above the population proportion from the null hypothesis. In a sample of size 500, we would observe a sample proportion 0.02 or more away from 0.84 about 22% of the time by chance alone.

Step 4: State a conclusion.

Again we compare the P-value to the level of significance, α = 0.05. In this case, the P-value of 0.22 is greater than 0.05, which means we do not have enough evidence to reject the null hypothesis. A sample result that could occur 22% of the time by chance alone is not statistically significant.
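The doubling step is where two-tailed tests differ from one-tailed tests, and it is easy to get wrong. The sketch below (my own illustration, standard library only) makes the doubling explicit:

```python
import math

def two_tailed_test(p_hat, p0, n):
    """Z statistic and two-tailed P-value for H0: p = p0 vs Ha: p != p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    # Area in one tail, beyond |z| on the standard normal curve.
    one_tail = 1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return z, 2 * one_tail  # double to include both tails

z, p_value = two_tailed_test(430 / 500, 0.84, 500)
print(f"Z = {z:.2f}, P-value = {p_value:.2f}")  # Z ≈ 1.22, P-value ≈ 0.22
```

Because 0.22 is greater than 0.05, this reproduces the failure to reject the null hypothesis.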
Now we can state the conclusion in terms of the alternative hypothesis.

The data from this study does not provide evidence that is strong enough to conclude that the proportion of all children ages 8 to 18 who have Internet access at home has changed since 2009 (P-value = 0.22). The 2% change observed in the data is not statistically significant. These results can be explained by predictable variation in random samples.

Note about the Conclusion

In the conclusion above, we did not have enough evidence to reject the null hypothesis. As we noted in “Hypothesis Testing,” failing to reject the null hypothesis does not mean the null hypothesis is true. In the case of the previous example, it is possible that the proportion of children who have Internet access at home has changed. But the data we gathered did not provide the evidence to detect that the proportion had changed significantly.

Researchers often note improvements that could be made in their research and suggest follow-up research that might be done. In our example, a second sample with a larger sample size might provide the evidence needed to reject the null hypothesis. The important thing to keep in mind is that at the end of a hypothesis test, we never say that the null hypothesis is true.

Try It

California College Students Who Drink

According to the Centers for Disease Control and Prevention, 60% of all American adults ages 18 to 24 currently drink alcohol. Is the proportion of California college students who currently drink alcohol different from the proportion nationwide? A survey of 450 California college students indicates that 66% currently drink alcohol. The hypotheses were:

• H[0]: p = 0.60
• H[a]: p ≠ 0.60

Try It

Coin Flips

Recall the scenario from the previous page. A psychic claims to be able to predict the outcome of coin flips before they happen. Someone who guesses randomly will predict about half of coin flips correctly.
In 100 flips, the psychic correctly predicts 57 flips. Do the results of this test indicate that the psychic does better than random guessing? The hypotheses are:

• H[0]: p = 0.50
• H[a]: p > 0.50

where p is the proportion of correct coin flip predictions by the psychic.
Where Gravity Is Weak and Naked Singularities Are Verboten Physicists have wondered for decades whether infinitely dense points known as singularities can ever exist outside black holes, which would expose the mysteries of quantum gravity for all to see. Singularities — snags in the otherwise smooth fabric of space and time where Albert Einstein’s classical gravity theory breaks down and the unknown quantum theory of gravity is needed — seem to always come cloaked in darkness, hiding from view behind the event horizons of black holes. The British physicist and mathematician Sir Roger Penrose conjectured in 1969 that visible or “naked” singularities are actually forbidden from forming in nature, in a kind of cosmic censorship. But why should quantum gravity censor itself? Now, new theoretical calculations provide a possible explanation for why naked singularities do not exist — in a particular model universe, at least. The findings indicate that a second, newer conjecture about gravity, if it is true, reinforces Penrose’s cosmic censorship conjecture by preventing naked singularities from forming in this model universe. Some experts say the mutually supportive relationship between the two conjectures increases the chances that both are correct. And while this would mean singularities do stay frustratingly hidden, it would also reveal an important feature of the quantum gravity theory that eludes us. “It’s pleasing that there’s a connection” between the two conjectures, said John Preskill of the California Institute of Technology, who in 1991 bet Stephen Hawking that the cosmic censorship conjecture would fail (though he actually thinks it’s probably true). 
The new work, reported in May in Physical Review Letters by Jorge Santos and his student Toby Crisford at the University of Cambridge and relying on a key insight by Cumrun Vafa of Harvard University, unexpectedly ties cosmic censorship to the 2006 weak gravity conjecture, which asserts that gravity must always be the weakest force in any viable universe, as it is in ours. (Gravity is by far the weakest of the four fundamental forces; two electrons electrically repel each other 1 million trillion trillion trillion times more strongly than they gravitationally attract each other.) Santos and Crisford were able to simulate the formation of a naked singularity in a four-dimensional universe with a different space-time geometry than ours. But they found that if another force exists in that universe that affects particles more strongly than gravity, the singularity becomes cloaked in a black hole. In other words, where a perverse pinprick would otherwise form in the space-time fabric, naked for all the world to see, the relative weakness of gravity prevents it. Santos and Crisford are running simulations now to test whether cosmic censorship is saved at exactly the limit where gravity becomes the weakest force in the model universe, as initial calculations suggest. Such an alliance with the better-established cosmic censorship conjecture would reflect very well on the weak gravity conjecture. And if weak gravity is right, it points to a deep relationship between gravity and the other quantum forces, potentially lending support to string theory over a rival theory called loop quantum gravity. The “unification” of the forces happens naturally in string theory, where gravity is one vibrational mode of strings and forces like electromagnetism are other modes. But unification is less obvious in loop quantum gravity, where space-time is quantized in tiny volumetric packets that bear no direct connection to the other particles and forces. 
“If the weak gravity conjecture is right, loop quantum gravity is definitely wrong,” said Nima Arkani-Hamed, a professor at the Institute for Advanced Study who co-discovered the weak gravity conjecture. The new work “does tell us about quantum gravity,” said Gary Horowitz, a theoretical physicist at the University of California, Santa Barbara. The Naked Singularities In 1991, Preskill and Kip Thorne, both theoretical physicists at Caltech, visited Stephen Hawking at Cambridge. Hawking had spent decades exploring the possibilities packed into the Einstein equation, which defines how space-time bends in the presence of matter, giving rise to gravity. Like Penrose and everyone else, he had yet to find a mechanism by which a naked singularity could form in a universe like ours. Always, singularities lay at the centers of black holes — sinkholes in space-time that are so steep that no light can climb out. He told his visitors that he believed in cosmic censorship. Preskill and Thorne, both experts in quantum gravity and black holes (Thorne was one of three physicists who founded the black-hole-detecting LIGO experiment), said they felt it might be possible to detect naked singularities and quantum gravity effects. “There was a long pause,” Preskill recalled. “Then Stephen said, ‘You want to bet?’” The bet had to be settled on a technicality and renegotiated in 1997, after the first ambiguous exception cropped up. Matt Choptuik, a physicist at the University of British Columbia who uses numerical simulations to study Einstein’s theory, showed that a naked singularity can form in a four-dimensional universe like ours when you perfectly fine-tune its initial conditions. Nudge the initial data by any amount, and you lose it — a black hole forms around the singularity, censoring the scene. This exceptional case doesn’t disprove cosmic censorship as Penrose meant it, because it doesn’t suggest naked singularities might actually form. 
Nonetheless, Hawking conceded the original bet and paid his debt per the stipulations, “with clothing to cover the winner’s nakedness.” He embarrassed Preskill by making him wear a T-shirt featuring a nearly-naked lady while giving a talk to 1,000 people at Caltech. The clothing was supposed to be “embroidered with a suitable concessionary message,” but Hawking’s read like a challenge: “Nature Abhors a Naked Singularity.” The physicists posted a new bet online, with language to clarify that only non-exceptional counterexamples to cosmic censorship would count. And this time, they agreed, “The clothing is to be embroidered with a suitable, truly concessionary message.” The wager still stands 20 years later, but not without coming under threat. In 2010, the physicists Frans Pretorius and Luis Lehner discovered a mechanism for producing naked singularities in hypothetical universes with five or more dimensions. And in their May paper, Santos and Crisford reported a naked singularity in a classical universe with four space-time dimensions, like our own, but with a radically different geometry. This latest one is “in between the ‘technical’ counterexample of the 1990s and a true counterexample,” Horowitz said. Preskill agrees that it doesn’t settle the bet. But it does change the story. The Tin Can Universe The new discovery began to unfold in 2014, when Horowitz, Santos and Benson Way found that naked singularities could exist in a pretend 4-D universe called “anti-de Sitter” (AdS) space whose space-time geometry is shaped like a tin can. This universe has a boundary — the can’s side — which makes it a convenient testing ground for ideas about quantum gravity: Physicists can treat bendy space-time in the can’s interior like a hologram that projects off of the can’s surface, where there is no gravity. In universes like our own, which is closer to a “de Sitter” (dS) geometry, the only boundary is the infinite future, essentially the end of time. 
Timeless infinity doesn’t make a very good surface for projecting a hologram of a living, breathing universe. Despite their differences, the interiors of both AdS and dS universes obey Einstein’s classical gravity theory — everywhere outside singularities, that is. If cosmic censorship holds in one of the two arenas, some experts say you might expect it to hold up in both. Horowitz, Santos and Way were studying what happens when an electric field and a gravitational field coexist in an AdS universe. Their calculations suggested that cranking up the energy of the electric field on the surface of the tin can universe will cause space-time to curve more and more sharply around a corresponding point inside, eventually forming a naked singularity. In their recent paper, Santos and Crisford verified the earlier calculations with numerical simulations. But why would naked singularities exist in 5-D and in 4-D when you change the geometry, but never in a flat 4-D universe like ours? “It’s like, what the heck!” Santos said. “It’s so weird you should work on it, right? There has to be something here.” Weak Gravity to the Rescue In 2015, Horowitz mentioned the evidence for a naked singularity in 4-D AdS space to Cumrun Vafa, a Harvard string theorist and quantum gravity theorist who stopped by Horowitz’s office. Vafa had been working to rule out large swaths of the 10^500 different possible universes that string theory naively allows. He did this by identifying “swamplands”: failed universes that are too logically inconsistent to exist. By understanding patterns of land and swamp, he hoped to get an overall picture of quantum gravity. Working with Arkani-Hamed, Luboš Motl and Alberto Nicolis in 2006, Vafa proposed the weak gravity conjecture as a swamplands test. The researchers found that universes only seemed to make sense when particles were affected by gravity less than they were by at least one other force. 
Dial down the other forces of nature too much, and violations of causality and other problems arise. “Things were going wrong just when you started violating gravity as the weakest force,” Arkani-Hamed said. The weak-gravity requirement drowns huge regions of the quantum gravity landscape in swamplands. Weak gravity and cosmic censorship seem to describe different things, but in chatting with Horowitz that day in 2015, Vafa realized that they might be linked. Horowitz had explained Santos and Crisford’s simulated naked singularity: When the researchers cranked up the strength of the electric field on the boundary of their tin-can universe, they assumed that the interior was classical — perfectly smooth, with no particles quantum mechanically fluctuating in and out of existence. But Vafa reasoned that, if such particles existed, and if, in accordance with the weak gravity conjecture, they were more strongly coupled to the electric field than to gravity, then cranking up the electric field on the AdS boundary would cause sufficient numbers of particles to arise in the corresponding region in the interior to gravitationally collapse the region into a black hole, preventing the naked singularity. Subsequent calculations by Santos and Crisford supported Vafa’s hunch; the simulations they’re running now could verify that naked singularities become cloaked in black holes right at the point where gravity becomes the weakest force. “We don’t know exactly why, but it seems to be true,” Vafa said. “These two reinforce each other.” Quantum Gravity The full implications of the new work, and of the two conjectures, will take time to sink in. Cosmic censorship imposes an odd disconnect between quantum gravity at the centers of black holes and classical gravity throughout the rest of the universe. 
Weak gravity appears to bridge the gap, linking quantum gravity to the other quantum forces that govern particles in the universe, and possibly favoring a stringy approach over a loopy one. Preskill said, “I think it’s something you would put on your list of arguments or reasons for believing in unification of the forces.” However, Lee Smolin of the Perimeter Institute, one of the developers of loop quantum gravity, has pushed back, arguing that if weak gravity is true, there might be a loopy reason for it. And he contends that there is a path to unification of the forces within his theory — a path that would need to be pursued all the more vigorously if the weak gravity conjecture holds. Given the apparent absence of naked singularities in our universe, physicists will take hints about quantum gravity wherever they can find them. They’re as lost now in the endless landscape of possible quantum gravity theories as they were in the 1990s, with no prospects for determining through experiments which underlying theory describes our world. “It is thus paramount to find generic properties that such quantum gravity theories must have in order to be viable,” Santos said, echoing the swamplands philosophy. Weak gravity might be one such property — a necessary condition for quantum gravity’s consistency that spills out and affects the world beyond black holes. These may be some of the only clues available to help researchers feel their way into the darkness. This article was reprinted on Wired.com.
Condenser Design Calculation Excel - Calculatorey

Understanding Condenser Design Calculation in Excel

When it comes to designing a condenser, accurate calculations are essential to ensure optimal performance. Excel is a powerful tool that can be used to perform these calculations efficiently. In this article, we will discuss how to use Excel for condenser design calculations, covering the key parameters and formulas involved in the process.

The Importance of Condenser Design

A condenser is a crucial component in various systems, including refrigeration, air conditioning, and power plants. Its primary function is to transfer heat from the hot vapor to the cooling medium, allowing the vapor to condense into a liquid. Proper condenser design is essential for efficient heat transfer and overall system performance.

Key Parameters for Condenser Design

Before diving into the Excel calculations, it’s important to understand the key parameters that influence condenser design:

1. Heat Duty: The heat duty of a condenser is the amount of heat that needs to be removed from the vapor to condense it into a liquid. It is typically measured in kilowatts (kW) or BTUs per hour (BTU/hr).

2. Heat Transfer Coefficient: The heat transfer coefficient represents the efficiency of heat transfer between the vapor and the cooling medium. It is influenced by factors such as the surface area of the condenser and the flow rate of the cooling medium.

3. Temperature Difference: The temperature difference between the hot vapor and the cooling medium is a critical factor in condenser design. A larger temperature difference results in higher heat transfer rates.

4. Surface Area: The surface area of the condenser directly affects its heat transfer capacity. It is crucial to calculate the required surface area based on the heat duty and heat transfer coefficient.
Condenser Design Calculations in Excel

Now that we have an understanding of the key parameters, let’s delve into the Excel calculations for condenser design:

1. Determine the Heat Duty: Begin by calculating the heat duty of the condenser based on the specific application requirements. This can be done using the formula:

Heat Duty = Mass Flow Rate x Heat Capacity x Temperature Difference

2. Calculate the Required Surface Area: Next, determine the surface area of the condenser needed to achieve the desired heat transfer. This can be calculated using the formula:

Surface Area = Heat Duty / (Heat Transfer Coefficient x Temperature Difference)

3. Optimize the Design: Once you have calculated the heat duty and surface area, you can further optimize the condenser design by adjusting parameters such as the cooling medium flow rate and temperature. Excel allows for easy iteration to find the most efficient design for your specific requirements.

Designing a condenser requires careful consideration of various parameters to ensure efficient heat transfer and overall system performance. By utilizing Excel for design calculations, you can streamline the process and make informed decisions to optimize your condenser design. Remember to regularly check and update your calculations as needed to maintain peak performance.
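The two formulas above translate directly into code. The following Python sketch mirrors the spreadsheet logic; the function names and the example numbers (a water-like coolant and a hypothetical heat transfer coefficient) are my own illustration, not values from the article.

```python
def heat_duty(mass_flow_rate, heat_capacity, delta_t):
    """Heat Duty = Mass Flow Rate x Heat Capacity x Temperature Difference.

    With SI inputs (kg/s, J/(kg*K), K) the result is in watts.
    """
    return mass_flow_rate * heat_capacity * delta_t

def required_surface_area(q, heat_transfer_coeff, delta_t):
    """Surface Area = Heat Duty / (Heat Transfer Coefficient x Temp. Difference).

    With SI inputs (W, W/(m^2*K), K) the result is in square metres.
    """
    return q / (heat_transfer_coeff * delta_t)

# Hypothetical example: 2 kg/s of cooling water, cp = 4186 J/(kg*K),
# 10 K temperature change, U = 1500 W/(m^2*K), 10 K driving difference.
q = heat_duty(2.0, 4186.0, 10.0)               # 83,720 W
area = required_surface_area(q, 1500.0, 10.0)  # about 5.6 m^2
print(q, round(area, 2))
```

In a spreadsheet, these same two formulas would sit in cells referencing the input parameters, which is what makes the iteration described in step 3 straightforward.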
The Ricci Tensor: A Complete Guide With Examples

The Ricci tensor is an important mathematical object used in differential geometry that also shows up a lot in the general theory of relativity, among other things. But what does it actually mean?

The Ricci tensor represents how a volume in a curved space differs from a volume in Euclidean space. In particular, the Ricci tensor measures how a volume between geodesics changes due to curvature. In general relativity, the Ricci tensor represents volume changes due to gravitational tides.

Now, what does all of the above stuff actually mean? In this article I’ll be going over exactly that. We’ll also be looking at some properties of the Ricci tensor as well as practical examples of some commonly used Ricci tensors. In case you’d want an ad-free PDF version of this article (and my other general relativity articles), you’ll find it here, available as part of my full General Relativity Bundle.

What Are Tensors, Intuitively?

Essentially, tensors are mathematical objects that are used in many areas of physics (in general relativity, for example, because they have some very useful transformation properties) and also in many areas of mathematics (for example, differential geometry). The key property of tensors is that a tensor is always the same in every coordinate system (in a technical sense, we say that a tensor transforms covariantly). First of all, what is a tensor anyway? A tensor is simply a “collection of objects” (these objects are its tensor components) whose components transform in a nice, predictable way between coordinate changes, while the tensor itself remains unchanged.
A nice intuitive way to understand this is by looking at how a vector behaves under coordinate changes (a vector is, in fact, a tensor of “rank 1”). Mathematically, the transformation law of the components of a tensor is as follows:

T'_{ij}=\frac{\partial x^m}{\partial x'^i}\frac{\partial x^n}{\partial x'^j}T_{mn}

Note that this is the transformation law for a tensor that has two downstairs indices. A different tensor generally follows the same pattern (there is one of these partial derivatives of the coordinates -terms for each index). In fact, this often works as the definition of a tensor. So, we can simply define a tensor as any mathematical object whose components transform by the transformation law given above.

Now, a mathematician might say that the definition of a tensor is something more like “a multilinear map from vectors and dual vectors to real numbers”. If you really want to be accurate and are interested in abstract mathematics, then sure, this definition would be more precise. However, for physics and most physical applications, the definition of a tensor through its transformation properties is perfectly good.

A tensor is typically represented by its components, which we simply write as either downstairs or upstairs indices. These different indices correspond to the different tensor components. In general relativity, these indices are usually Greek letters and they run from 0 to 3 (the 0-component corresponds to time, while 1, 2 and 3 correspond to the three spatial directions). In mathematics, it’s typical to use Latin indices, such as i and j. These tensor components can be nicely represented as a “table” as follows (more accurately, this is called the “matrix representation of a tensor” and it really only works for a two-index tensor, such as the Ricci tensor):

T_{\mu\nu}=\begin{pmatrix}T_{00}&T_{01}&T_{02}&T_{03}\\T_{10}&T_{11}&T_{12}&T_{13}\\T_{20}&T_{21}&T_{22}&T_{23}\\T_{30}&T_{31}&T_{32}&T_{33}\end{pmatrix}

These T’s here are the components of this tensor T[µν]. For example, T[01] is the component where µ=0 and ν=1. Now, enough about the general properties of tensors.
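The transformation law above can be verified numerically for the simple case of a linear coordinate change, where the partial derivatives form a constant Jacobian matrix. The sketch below is my own illustration and assumes the numpy library:

```python
import numpy as np

# Components of a rank-2 tensor (two downstairs indices) in the x-coordinates.
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))

# A linear coordinate change x' = A x, so the Jacobian dx^m/dx'^i is A^{-1}.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
J = np.linalg.inv(A)  # J[m, i] = dx^m / dx'^i

# Transformation law: T'_{ij} = (dx^m/dx'^i)(dx^n/dx'^j) T_{mn}
T_prime = np.einsum('mi,nj,mn->ij', J, J, T)

# A vector (rank-1 tensor) transforms with an upstairs index: v'^i = A^i_m v^m.
v = rng.normal(size=3)
v_prime = A @ v

# The fully contracted scalar T_{mn} v^m v^n comes out the same in both
# coordinate systems: the tensor itself is unchanged, only its components move.
s = v @ T @ v
s_prime = v_prime @ T_prime @ v_prime
print(np.isclose(s, s_prime))  # True
```

This is the "tensor stays the same, components transform" property in action: any fully contracted combination of tensor components is coordinate-independent.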
What we’re really interested in is the Ricci tensor. The Ricci tensor is a tensor (as you may have guessed by now) with two indices, denoted as R[ij] (if you’re talking about general relativity, these indices would be Greek letters). Now, sure, the Ricci tensor is a tensor, but what does it actually represent? Luckily, there is a pretty nice geometric interpretation of the Ricci tensor, which we’ll talk about next.

Quick tip: For building a deep understanding of tensor mathematics, I’d highly recommend a good, dedicated resource on the topic. For this, I think you would find my Mathematics of General Relativity: A Complete Course (link to the course page) a very good beginner-friendly choice. The course will guide you through everything you need to know about tensor calculus from the very ground up. There are also tons of practice problems and physics examples included.

Geometric Interpretation of The Ricci Tensor

The Ricci tensor is one of the central mathematical objects in the field of differential geometry. This, of course, relates the Ricci tensor directly to geometry in some way. So, what then is the geometric interpretation of the Ricci tensor?

The geometric interpretation of the Ricci tensor is that it describes how much a volume element would differ in curved space compared to Euclidean or flat space. Different components of the Ricci tensor describe how the volume element evolves as one moves along a geodesic in any given direction.

By curved space, I’m essentially referring to a Riemannian manifold, which to put it simply, is just a space in which the basis vectors may vary from place to place (and the geometry of that space is described by a metric tensor). A nice way to visualize what the Ricci tensor describes is by taking some “volume element” in Euclidean space (this is simply your original Cartesian coordinate system, for example) and imagine placing it in a Riemannian space.
The Ricci tensor would then, in some sense, tell you how much the volume of this volume element changes. I’m aware that this picture is a little bit “sketchy”, so don’t take it too literally. It should simply give you the basic idea of what the Ricci tensor does.

There is a very nice way to actually see why the Ricci tensor describes these volume changes, which is by looking at small differences in the metric of a Riemannian space compared to the metric of a Euclidean space.

Recommendation: If you’re not familiar with the metric before, definitely check out my full article covering everything you need to know about it. You’ll learn about both the geometry behind the metric as well as its uses in general relativity.

This idea can then be extended to volume elements also, and you’ll, in fact, find the Ricci tensor appearing here. The proof can be found below.

Mathematical Proof: Why The Ricci Tensor Describes Volume Changes

The first thing we’ll do is consider expanding the metric (the metric along a geodesic, to be exact). The first term will be just the Euclidean metric (Kronecker delta), but if the space is curved, there will be higher order terms containing the coordinates (x’s) and the Riemann tensor (a four-index tensor R, which describes the curvature of the space completely):

g_{ij}=\delta_{ij}+\frac{1}{3}R_{ikmj}x^kx^m+...

Now, let’s take the natural logarithm on both sides (you’ll see why soon):

\ln\left(g_{ij}\right)=\ln\left(\delta_{ij}+\frac{1}{3}R_{ikmj}x^kx^m+...\right)

This is useful, since it allows us to expand the natural logarithm as a Taylor series:

\ln\left(1+x\right)=x+...\ \ \Rightarrow\ \ \ln\left(g_{ij}\right)=\frac{1}{3}R_{ikmj}x^kx^m+...

A useful property of the Riemann tensor is that its last two indices can be exchanged with the cost of a minus sign (this form of the Riemann tensor is needed later), meaning that $R_{ikmj}=-R_{ikjm}$. We then have:

\ln\left(g_{ij}\right)=-\frac{1}{3}R_{ikjm}x^kx^m+...

Now, let’s take the trace on both sides.
In the language of tensor calculus, the trace of the Riemann tensor is defined as the Ricci tensor, R[km] (if you want to be technical, the trace of the Riemann tensor is obtained by “contracting” the first and third indices, i and j in this case, with the metric). From this, we get the Ricci tensor on the right-hand side: We’ll now make use of a nice result from linear algebra, which is that the trace of the logarithm of a matrix equals the logarithm of its determinant (this is why we wanted to take the logarithm of the metric in the first place!). In our case, this would be: Tr\ln\left(g_{ij}\right)=\ln\left(\left|g\right|\right)\ \ \Rightarrow\ \ \left|g\right|=e^{Tr\ln\left(g_{ij}\right)} Then, we’ll use the Taylor expansion of e^x here to expand this exponential as $e^x=1+x+…$: We then need to do only one last thing. Let’s take the square root of this determinant of the metric. This will be useful because we can again expand the square root of 1+something as a Taylor series of the form $\sqrt{1+x}=1+\frac{1}{2}x+…$. If we do this, the Taylor expansion for the square root of the determinant will give us: \sqrt{\left|g\right|}=1-\frac{1}{6}R_{km}x^kx^m+... Now, the reason we did all these steps is that the volume element in a Riemannian space actually contains this square root term: The product of these dx’s here is simply the ordinary volume element (the volume element in Euclidean space; for example, in a 3D Cartesian coordinate system, a volume element would simply be dV=dxdydz). We’ll denote this Euclidean volume element by dV: dV=dx^1dx^2dx^3...dx^n\ \ \Rightarrow\ \ dV_R=\sqrt{\left|g\right|}dV We can then insert our formula for this square root and we get: dV_R=\left(1-\frac{1}{6}R_{km}x^kx^m+...\right)dV This is indeed exactly why the Ricci tensor (R[km]) describes the difference between a Euclidean volume element (dV) and a Riemannian volume element (dV[R]). Now, this thing in the parentheses only has one “expansion term” here, so if you were to include higher order terms (meaning terms which contain third, fourth etc.
powers of the coordinates x; this one only has second powers), you’d find terms which contain covariant derivatives of the Ricci tensor.

Physical Meaning of The Ricci Tensor

Now, the Ricci tensor is useful for describing curvature mathematically, but does it also have a specific physical meaning? The physical meaning of the Ricci tensor is that it describes how much the spacetime volume of an object changes due to gravitational tides in general relativity. This is because, geometrically, the Ricci tensor describes volume changes due to curvature, and spacetime curvature is equated with tidal forces. First of all, to understand all this, let’s think of how the Ricci tensor actually encodes information about changes in spacetime volume. In general relativity, all objects move through spacetime along geodesics (a geodesic, in a simple sense, is just the “shortest distance between two spacetime points”). I explain geodesics and really all the important points about general relativity in my introductory article on general relativity. Now imagine you have two different geodesics (that start parallel to each other). If the spacetime is curved (in general relativity, this means that there is gravity present), these geodesics may begin deviating from one another. These geodesics will also, at all times, enclose some kind of volume in spacetime between them. Geometrically, the Ricci tensor then describes how much this spacetime volume changes as you move along these geodesics. This is actually what we already talked about earlier. Here we have a two-dimensional spacetime (since I can’t really draw a four-dimensional spacetime, but the basic idea is still the same) with two geodesics that enclose a volume between them (practically, it’s an area since we’re in two dimensions, but you can imagine it as a volume). Let’s now think of the physical implications of this. In particular, let’s think of a physical object with some well-defined volume (a ball, for example).
In spacetime, all the different parts of the object will follow their own geodesics through spacetime (essentially, you can think of every atom of an object following its own spacetime geodesic). Therefore, if curvature is present, these geodesics may begin deviating from one another and the spacetime volume between them will change. Physically, this means that the object will get stretched and squeezed in different directions. This squeezing and stretching, on the other hand, corresponds to the effect of tidal forces, which are simply the result of geodesic deviation (tidal forces and geodesic deviation are explained in more detail in my general relativity article). So, the physical interpretation of the Ricci tensor is that it describes how much the spacetime volume of an object changes due to gravitational tidal forces. Also, note that the object gets deformed in spacetime, not just in the usual three-dimensional space, so, in fact, the object will also get stretched or squeezed in time. This effect is called gravitational time dilation. As a sidenote, I have an article discussing gravitational time dilation near a black hole. The interesting thing about it is that there is actually a subtle geometric explanation of gravitational time dilation that ties in really nicely with all the other geometric concepts used in general relativity.

Recommendation: In case this section involving general relativity seemed intriguing to you, you may also find my complete guide on learning general relativity on your own interesting. In there, I give some of my recommended resources for self-studying as well as what to expect.

Ricci Tensor In Terms of Christoffel Symbols

Now that we’ve talked about what the Ricci tensor represents, it’s time to discuss some of its properties. In particular, how is the Ricci tensor defined?
Essentially, the Ricci tensor is defined in terms of mathematical objects called Christoffel symbols in the following way: These Christoffel symbols are defined in terms of the metric tensor of a given space and its derivatives:

Recommendation: I actually have a whole article discussing the Christoffel symbols and their meaning, which you’ll find here. In there, I go into much more detail on the geometric and physical interpretations of the Christoffel symbols as well as how to calculate them in practice through a little-known but extremely powerful method.

Now, since the Ricci tensor is built out of the Christoffel symbols (which, in turn, are built out of the metric), this means that the Ricci tensor is actually composed of products of the metric tensor, products of derivatives of the metric and also second derivatives of the metric. In principle, you could write down the Ricci tensor completely in terms of the metric only, but this would give you a horrendously complicated formula. If you want to get a sense of what this looks like, you can check out this page, which has the Einstein field equations fully written out in terms of the metric. I also have an entire article on deriving the Einstein field equations in two different ways (which you’ll find here) and actually, both of these derivations rely on the properties of the Ricci tensor. The much more practical approach is to first calculate the Christoffel symbols through the metric and then, based on the properties of the C-symbols, try to simplify the form of the Ricci tensor. We’ll talk about this and how to calculate the Ricci tensor (as well as some examples) later in the article. My Mathematics of General Relativity course will teach you all the math you need for a deep understanding of general relativity. You’ll find more information here.
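To make this definition concrete, here is a small SymPy sketch that builds the Christoffel symbols from a metric and then assembles the Ricci tensor from them. The helper names (`christoffel`, `ricci`) are my own, not from the article, and the formulas used are the standard textbook ones; treat this as an illustrative sketch rather than the article’s code:

```python
import sympy as sp

def christoffel(g, coords):
    """Gamma[a][b][c] = Γ^a_{bc} = (1/2) g^{ad} (∂_b g_{dc} + ∂_c g_{bd} - ∂_d g_{bc})."""
    n = len(coords)
    ginv = g.inv()
    return [[[sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d] * (sp.diff(g[d, c], coords[b])
                                          + sp.diff(g[b, d], coords[c])
                                          - sp.diff(g[b, c], coords[d]))
        for d in range(n)))
        for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(g, coords):
    """R_{bc} = ∂_a Γ^a_{bc} - ∂_c Γ^a_{ba} + Γ^a_{ad} Γ^d_{bc} - Γ^a_{cd} Γ^d_{ba}."""
    n = len(coords)
    G = christoffel(g, coords)
    R = sp.zeros(n, n)
    for b in range(n):
        for c in range(n):
            term = sum(
                sp.diff(G[a][b][c], coords[a]) - sp.diff(G[a][b][a], coords[c])
                + sum(G[a][a][d] * G[d][b][c] - G[a][c][d] * G[d][b][a]
                      for d in range(n))
                for a in range(n))
            R[b, c] = sp.simplify(term)
    return R

# Example: the surface of a sphere of radius r, ds^2 = r^2 dθ^2 + r^2 sin^2θ dφ^2
theta, phi = sp.symbols("theta phi")
r = sp.symbols("r", positive=True)
g_sphere = sp.diag(r**2, r**2 * sp.sin(theta)**2)
print(ricci(g_sphere, [theta, phi]))  # simplifies to diag(1, sin^2 θ), i.e. R_ij = g_ij / r^2
```

Feeding in a flat (constant) metric returns the zero matrix, while the sphere gives a non-zero result, which previews the examples discussed later in the article.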
An Intuitive Derivation of The Ricci Tensor

Now, you may wonder why exactly the Ricci tensor is defined in terms of the Christoffel symbols in the way given above. This form can actually be derived by first deriving a tensor called the Riemann tensor and then constructing the Ricci tensor out of that. However, deriving this requires a decent bit of tensor calculus knowledge, so if you’re not familiar with it, you may want to skip the derivation given below.

Derivation of the Ricci Tensor From Parallel Transport (click to see more)

Note: part of this derivation has been taken from my article “General Relativity For Dummies”, which is why I’m using spacetime indices here. In ordinary vector mathematics, you’ve probably been taught that a vector can be moved around in space (while keeping its length and orientation fixed) and that it still remains the exact same vector. For example, you could move a vector around a loop (in Euclidean space) and see that it, in fact, remains pointed in exactly the same direction you started with. Another way to put it is that a vector will remain unchanged when parallel transported in a flat space. In curved spaces, this is generally not true anymore. If you move a vector around a loop while keeping it parallel to itself at all times (this is called parallel transport), the vector will inevitably still change direction if the space itself has some intrinsic curvature, since everything has to move along the curvature of this space. The way we can get a mathematical expression for this is by imagining we have some vector A^λ in some spacetime which may or may not be curved. We then parallel transport it around a loop in two different ways (see the picture below): first, we parallel transport it along the coordinate x^ν (path 1) and then along the other coordinate x^µ (path 2). Then we do the same thing but in the opposite order (so first along x^µ, path 3, and then along x^ν, path 4) and compare the difference in the vector.
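The loop experiment described above can also be done numerically. Below is a small sketch (entirely my own construction, with a hypothetical function name) that parallel-transports a tangent vector around a closed circle of latitude θ = θ₀ on the unit sphere by integrating the parallel-transport equation dA^i/dφ + Γ^i_{φk} A^k = 0; the vector comes back rotated by 2π cos θ₀, a direct signature of curvature:

```python
import numpy as np

def transport_around_latitude(theta0, steps=20000):
    """Parallel-transport A = (A^θ, A^φ) once around the circle θ = θ0 on the
    unit sphere by integrating dA^i/dφ = -Γ^i_{φk} A^k with RK4.
    Non-zero Christoffels: Γ^θ_{φφ} = -sinθ cosθ, Γ^φ_{θφ} = Γ^φ_{φθ} = cosθ/sinθ."""
    def rhs(A):
        Ath, Aph = A
        dAth = np.sin(theta0) * np.cos(theta0) * Aph   # -Γ^θ_{φφ} A^φ
        dAph = -np.cos(theta0) / np.sin(theta0) * Ath  # -Γ^φ_{φθ} A^θ
        return np.array([dAth, dAph])
    A = np.array([1.0, 0.0])       # start pointing along e_θ
    h = 2 * np.pi / steps
    for _ in range(steps):
        k1 = rhs(A)
        k2 = rhs(A + h / 2 * k1)
        k3 = rhs(A + h / 2 * k2)
        k4 = rhs(A + h * k3)
        A = A + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return A

# At θ0 = 60°, cosθ0 = 1/2: the vector returns rotated by 2π·cosθ0 = π,
# i.e. it comes back pointing the opposite way, even though it was never "turned".
A_final = transport_around_latitude(np.pi / 3)
print(A_final)  # ≈ [-1, 0]
```

On a flat plane, the same experiment would return the vector unchanged; the holonomy angle here is exactly the kind of path-dependence that the Riemann tensor quantifies in the infinitesimal limit.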
Now, if we imagine this loop as being very, very small (infinitesimally small, to be exact), then parallel transporting the vector will really correspond to taking the covariant derivative with respect to that coordinate (a covariant derivative instead of a partial derivative, because we’re looking to build a tensor quantity). Here, this object R^ρ[µσν] denotes the difference in this vector after parallel transporting it in two different ways (note that it should be zero if the space is flat). It’s a four-index tensor for reasons you’ll see shortly. When we do this, we may or may not end up with the vector oriented in the same direction both ways. In fact, if the vector ends up pointing in a different direction when transported along paths 1 and 2 than along paths 3 and 4, then the space must indeed be curved (since the vector will change its direction differently depending on how it’s moved around in the space). So, the way we quantify this curvature is by first taking the covariant derivative along path 1 and then path 2 and seeing whether it is the same as the covariant derivative along path 3 and then 4 (see the picture above): \nabla_{\nu}\nabla_{\mu}A^{\lambda}=\nabla_{\mu}\nabla_{\nu}A^{\lambda}\ \left(?\right) Or moving everything to the left and factoring out the vector: \left(\nabla_{\nu}\nabla_{\mu}-\nabla_{\mu}\nabla_{\nu}\right)A^{\lambda}=0\ \left(?\right) Now, we know that if the space IS flat, then the order in which you parallel transport should not matter. In other words, these double covariant derivatives should be equal and this difference should be zero: I’ve now left out the vector A^λ since it obviously can’t be zero, so we don’t need it anymore in the above expression. If the space is NOT flat, then the order of the paths you take will matter and this difference won’t be zero: In other words, we now have a quantity that is zero if the space is flat and non-zero if the space is curved.
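For reference, the end result of this construction can be written out explicitly. Sign and index-ordering conventions for the Riemann tensor vary between textbooks, so take the following as one common choice rather than necessarily this article’s exact convention:

```latex
\left(\nabla_{\nu}\nabla_{\mu}-\nabla_{\mu}\nabla_{\nu}\right)A^{\lambda}
= R^{\lambda}{}_{\sigma\nu\mu}A^{\sigma}
\,,\qquad
R^{\rho}{}_{\sigma\mu\nu}
= \partial_{\mu}\Gamma^{\rho}{}_{\nu\sigma}
- \partial_{\nu}\Gamma^{\rho}{}_{\mu\sigma}
+ \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma}
- \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma}
```

Contracting the first and third indices of this object, $R_{\sigma\nu}=R^{\rho}{}_{\sigma\rho\nu}$, then gives the Ricci tensor.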
The next thing to do is write it out using the definition of the covariant derivative: This expression may look a little weird since it has different indices on the left- and right-hand sides. This is because the expression doesn’t really make sense by itself; it’s actually an operator, which should always act on something (for example, the covariant derivative acting on a vector would read ∇[µ]A^λ=∂[µ]A^λ+Γ^λ[µν]A^ν, which has perfectly valid indices). What you’ll end up with is a four-index tensor object, denoted by R^ρ[µσν]: This quantity is called the Riemann tensor and it basically gives a complete measure of curvature in any space (if the space has a metric, that is). The Ricci tensor is mathematically defined as the contraction of this Riemann tensor. For this Riemann tensor to be contracted, we first have to lower its upstairs index, and this is done by summing it with the metric as follows: Here, α is a summation index. To get the Ricci tensor, we then multiply it by the inverse metric and sum over the first and third indices (meaning we multiply it by the inverse metric, which has upstairs indices matching the first and third indices of the Riemann tensor; what we’re left with is a two-index object, and this is called index contraction): If you now insert the Christoffel symbol definition of the Riemann tensor with the appropriate indices, you’ll get: And this is exactly the Ricci tensor in terms of Christoffel symbols.

Ricci Tensor Symmetries

The Ricci tensor has an important symmetry, which is that you can interchange its two indices freely: Note: I’m using spacetime indices here simply because I think they look cooler. Using Greek as opposed to Latin indices, however, makes no difference. Fundamentally, this symmetry comes from the fact that the Christoffel symbols, which the Ricci tensor is built out of, are by definition symmetric. This is because Riemannian geometry (and general relativity) are what are known as torsion-free theories.
Why Is The Ricci Tensor Symmetric? (A Simple Proof)

Now, the symmetry of the Ricci tensor comes from the fact that in Riemannian geometry, an object known as the torsion tensor is zero (this is the assumption of a “torsion-free connection”): $T^{\lambda}{}_{\mu\nu}=\Gamma^{\lambda}{}_{\mu\nu}-\Gamma^{\lambda}{}_{\nu\mu}=0$. This, in turn, implies that the Christoffel symbols have to be symmetric in their two lower indices: $\Gamma^{\lambda}{}_{\mu\nu}=\Gamma^{\lambda}{}_{\nu\mu}$. Also, the metric tensor itself is by assumption symmetric (this is more or less a convention, but a useful one). Interestingly, there are also physical theories in which torsion is not necessarily zero. A great example of this is Einstein-Cartan theory, which describes spin inside matter in the context of curved spacetime by allowing the torsion tensor to be non-zero inside matter. In Einstein-Cartan theory, the Ricci tensor is also not symmetric. The two above facts are actually enough to prove that the Ricci tensor is indeed symmetric. You’ll find a simple proof of this below.

Proof That The Ricci Tensor Is Symmetric (click to see more)

To prove that the Ricci tensor is symmetric really only requires us to prove that each of its terms is symmetric (what I mean by symmetry here, just to make it explicitly clear, is that the Ricci tensor remains the same if we interchange its indices, in this case, µ and ν): The first and third terms are clearly symmetric. This is because the Christoffel symbols are symmetric under the interchange of µ and ν. Now, the symmetry of the second and fourth terms is not as obvious. The second term is easy to prove by using the fact that a Christoffel symbol with the upstairs index equal to one of its downstairs indices has the simple form: You can find a proof of this, for example, here. Therefore, the second term is: This is clearly symmetric under exchanging µ and ν, since the order of partial differentiation doesn’t matter (meaning that ∂[µ]∂[ν] = ∂[ν]∂[µ]).
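The “simple form” invoked for the second term is the standard contracted-Christoffel identity; writing it out explicitly (this is a general identity, valid for any metric with non-zero determinant):

```latex
\Gamma^{\lambda}{}_{\mu\lambda}=\partial_{\mu}\ln\sqrt{\left|g\right|}
\quad\Rightarrow\quad
\partial_{\nu}\Gamma^{\lambda}{}_{\mu\lambda}=\partial_{\nu}\partial_{\mu}\ln\sqrt{\left|g\right|}
```

Since partial derivatives commute, the right-hand side is manifestly symmetric under swapping µ and ν, which is exactly what this step of the proof needs.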
The symmetry of the last term we can prove by using the definition of the Christoffel symbols in terms of the metric and then writing out this C-symbol product: This can actually be simplified greatly. In the first parentheses, since α and λ are both summation indices, we might as well just swap them in the third term inside the first parentheses. Now, since the metric is symmetric, we can then additionally swap, in the same term, the placement of ν, which means that the first and third terms are actually equal and thus cancel out (since they have opposite signs). Only the second term inside the first parentheses then survives. The same thing can be repeated for the second parentheses, and here also, only the second term survives. We’re then left with: Here, since each of the indices on these metric tensors is a summation index, the only “free” indices are the µ and ν on the partial derivatives. Moreover, we see that interchanging the indices µ and ν only has the effect of swapping the partial derivatives, and again, the order of partial differentiation doesn’t matter. Therefore, this term is indeed symmetric as well. We’ve now shown that each of the terms in the Ricci tensor is symmetric and therefore, the Ricci tensor itself is also symmetric in its indices.

How To Calculate The Ricci Tensor (Step-By-Step)

In this section, I’ll go over some of the more practical stuff in regards to actually calculating the Ricci tensor for different situations. What the Ricci tensor looks like in any given space is ultimately determined by what the metric is in that given space. The general steps for calculating the Ricci tensor are as follows:

1. Specify a metric tensor (either in matrix form or as the line element of the metric).
2. Calculate the Christoffel symbols from the metric.
3. Calculate the components of the Ricci tensor from the Christoffel symbols.
4.
Special case: in general relativity, if the Ricci scalar for a given spacetime is zero, it’s possible to calculate the Ricci tensor directly from the energy-momentum tensor (without the Christoffel symbols).

Now, even though the Ricci tensor generally has quite a complicated form, once you’ve specified a given metric and the Christoffel symbols in terms of that metric, the Ricci tensor can usually be simplified a lot. Also, there is an extremely efficient (and, to my surprise, not very commonly used) method to calculate the Christoffel symbols, which can save you a lot of time in many computations. I discuss the method in this article.

Calculating Ricci Tensor From The Metric

Calculating the Ricci tensor from a given metric essentially follows the exact steps specified above. Down below, I’ve included a bunch of examples of different Ricci tensors, and the calculations of these all follow exactly the steps given above. Also, for the special case mentioned in the above steps, you can find an example down below for the Reissner-Nordström metric (which describes charged black holes in general relativity). The real beauty of this special case is that if the Ricci scalar happens to be zero, then the Einstein field equations of general relativity essentially reduce to just an equation for the Ricci tensor: R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu}\ \ \Rightarrow\ \ R_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu} (In fact, taking the trace of the field equations with the inverse metric gives $-R=\frac{8\pi G}{c^4}T$, so the Ricci scalar vanishes exactly when the trace T of the energy-momentum tensor vanishes, which is the case for an electromagnetic field.) This is particularly useful because it allows us to essentially skip the whole process of calculating the Christoffel symbols. All we have to specify is the energy-momentum tensor, which usually has a fairly simple form.

Examples of Ricci Tensors

Below, I’ve collected some of the most commonly used Ricci tensors as well as where they come from. Keep in mind that the point of this article is not to derive these, so I haven’t included the explicit calculations here.
But, of course, you can do them yourself if you really wish, with the help of all the information, formulas and instructions found earlier in this article. You can also find a comprehensive list of different Christoffel symbols for various metrics in this article.

Ricci Tensor In Flat Space

The first particularly simple example is the Ricci tensor in flat space. In flat space, the Ricci tensor is zero: Moreover, a tensor being equal to zero means that each of its components has to be zero, so in matrix form, this says: Now, a key point here is that the Ricci tensor being zero technically does not imply that the space has to be completely flat. It only has to be what is called Ricci-flat (which is defined as R[µν]=0). It is true that a flat space also has R[µν]=0; however, R[µν]=0 is a necessary but not a sufficient condition for flatness. If the Riemann tensor, on the other hand, is zero, then the space is definitely flat.

Ricci Tensor of a Sphere

This example is the Ricci tensor on the surface of a 3-dimensional sphere. Now, since the surface itself is basically a 2-dimensional space, the metric and the Ricci tensor are therefore both 2×2-matrices (this is enough to specify the space on the surface). The surface is sometimes called a 2-sphere, which really just means the surface of a 3D sphere. Now, the surface of this sphere is defined by the fact that the distance (radius r) from the center is a constant.
The two coordinates needed to specify a point on the surface are two angles, θ and φ. The line element on this sphere (the 2-sphere) has the form: ds^2=r^2d\theta^2+r^2\sin^2\theta d\phi^2 The metric can then be written as a 2×2-matrix: Note that these Latin indices (i and j) are typically used to refer to spatial components only, while Greek indices refer to spacetime components (this example, of course, does not involve time at all). The Christoffel symbols calculated from this metric are then: The Ricci tensor then takes on a very simple explicit form: Since the metric has two components, the Ricci tensor does as well. We can collect these components into a nice 2×2-matrix (just calculate the components from the above form by plugging in the metric components).

Ricci Tensor For The Schwarzschild Metric

The Schwarzschild metric is a solution of Einstein’s field equations in a vacuum. In particular, it describes the spacetime around a spherically symmetric mass. Now, since it is a vacuum solution, the energy-momentum tensor on the right-hand side of Einstein’s equations is zero (you can read my introductory general relativity article for more on the Einstein field equations). This then actually corresponds to the Ricci tensor being zero as well: So, a particularly nice condition for a vacuum is that R[µν]=0. This, of course, doesn’t mean that the spacetime is flat (since the Riemann tensor isn’t zero in this case), so there is definitely gravity present in the Schwarzschild spacetime. Another example of a vacuum Ricci tensor (R[µν]=0) is the Ricci tensor for the Kerr metric. This describes the spacetime and gravity outside a rotating mass (as opposed to the Schwarzschild solution, which only describes a non-rotating mass).

Ricci Tensor For The Robertson-Walker (FRW) Metric

The Robertson-Walker metric (usually just called the FRW metric) describes what is called a maximally symmetric spacetime (one that is both homogeneous and isotropic).
This metric can, for example, describe our universe on a large scale and it, in fact, predicts the expansion of the universe. I discuss the FRW metric, its associated Friedmann equations and their predictions (such as the expansion of the universe) more in this article. The line element for this metric is: ds^2=-dt^2+\frac{a^2\left(t\right)}{1-kr^2}dr^2+a^2\left(t\right)r^2d\theta^2+a^2\left(t\right)r^2\sin^2\theta d\phi^2 Here, k is a curvature parameter that can take on the values -1, 0 or 1, depending on the geometry of the spacetime (to describe a flat universe, k=0, for example). The a(t)-parameter is called the scale factor and it is a function of time that holds information about the expansion of the universe (or whatever space this metric happens to be describing). This metric has the matrix form: For this metric, there are 19 Christoffel symbols in total (which are not all independent). These can be written as four different matrices with the upper index labeling which of the four matrices we’re talking about: Here, $\dot{a}$ denotes $da/dt$ (a being a function of time) and c is the speed of light. The Ricci tensor for this metric has four non-zero components, with the time component being: R_{00}=-\frac{3}{c^2}\frac{\ddot a}{a} Here, a with two dots means the second time derivative of a(t), d^2a/dt^2 (“acceleration”). The space components are given by: R_{ij}=\left(a\ddot a+2\dot a^2+2kc^2\right)\frac{g_{ij}}{a^2} We can collect all of these into a 4×4-matrix: R_{\mu\nu}=\begin{pmatrix}-\frac{3}{c^2}\frac{\ddot a}{a}&0&0&0\\0&\frac{a\ddot a+2\dot a^2+2kc^2}{1-kr^2}&0&0\\0&0&\left(a\ddot a+2\dot a^2+2kc^2\right)r^2&0\\0&0&0&\left(a\ddot a+2\dot a^2+2kc^2\right)r^2\sin^2\theta\end{pmatrix}

Ricci Tensor For The Reissner-Nordström Metric

An interesting and more complicated black hole metric is the Reissner-Nordström metric. This describes a charged (but non-rotating) black hole or any other charged, spherically symmetric mass.
This spacetime is basically identical to the Schwarzschild spacetime, except that the black hole is now charged. The metric is given in matrix form by: The two parameters here are given by: r_Q^2=\frac{Q^2G}{4\pi\varepsilon_0c^4}\ \ \ \&\ \ \ \ r_s=\frac{2GM}{c^2} Here, Q and M are the charge and mass of the black hole, c is the speed of light, G is the gravitational constant and ε[0] is the electric constant (also called vacuum permittivity). Since the black hole is charged, there is an electric field around it and therefore, the energy-momentum tensor is not zero. This means that the right-hand side of the Einstein field equations is not zero and so, the Ricci tensor is not zero either. In fact, it can be shown that the Ricci scalar for this metric is zero, so the Einstein equations take the form: R_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu} From this, it’s possible to directly calculate the Ricci tensor without even needing any of the Christoffel symbols. The energy-momentum tensor (for an electromagnetic field) is given in terms of the EM field tensor F[µν] as T_{\mu\nu}=\frac{1}{\mu_0}\left(F_{\mu\alpha}F_{\nu}{}^{\alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\right). For a Reissner-Nordström black hole, the EM tensor has only one independent component, namely a radial electric field: This electric field corresponds to the 01-component of the EM tensor. If we take this to be the only component of the EM tensor, then the energy-momentum tensor has the form: When you plug this into the Einstein field equations, you’ll find that the Ricci tensor actually takes on quite a simple form: The notation here may seem a bit awkward, but this is only because the Ricci tensor has a different sign relative to the metric for the R[00] and R[11] components than for the R[22] and R[33] components. Namely, the first two components come with a minus sign and the last two with a plus sign. The minus-plus sign in the formula is there just to indicate this fact.
Explicitly, you could write them as: R_{00}=-\frac{r_Q^2}{r^4}g_{00}\ {,}\ R_{11}=-\frac{r_Q^2}{r^4}g_{11}\ {,}\ R_{22}=\frac{r_Q^2}{r^4}g_{22}\ {,}\ R_{33}=\frac{r_Q^2}{r^4}g_{33} Anyway, plugging the metric into these and simplifying a bit, we can express the Ricci tensor as a 4×4-matrix, as usual:

Quick tip: If building a stronger mathematical foundation for general relativity is of interest to you, I think you would find my Mathematics of General Relativity: A Complete Course (link to the course page) extremely useful. This course aims to give you all the mathematical tools you need to understand general relativity, and any of its applications. Inside the course, you’ll learn topics like tensor calculus in an intuitive, beginner-friendly and highly practical way that can be directly applied to understanding general relativity.
1,820 research outputs found A point particle of small mass m moves in free fall through a background vacuum spacetime metric g_ab and creates a first-order metric perturbation h^1ret_ab that diverges at the particle. Elementary expressions are known for the singular m/r part of h^1ret_ab and for its tidal distortion determined by the Riemann tensor in a neighborhood of m. Subtracting this singular part h^1S_ab from h^ 1ret_ab leaves a regular remainder h^1R_ab. The self-force on the particle from its own gravitational field adjusts the world line at O(m) to be a geodesic of g_ab+h^1R_ab. The generalization of this description to second-order perturbations is developed and results in a wave equation governing the second-order h^2ret_ab with a source that has an O(m^2) contribution from the stress-energy tensor of m added to a term quadratic in h^1ret_ab. Second-order self-force analysis is similar to that at first order: The second-order singular field h^2S_ab subtracted from h^2ret_ab yields the regular remainder h^2R_ab, and the second-order self-force is then revealed as geodesic motion of m in the metric g_ab+h^1R+h^2R.Comment: 7 pages, conforms to the version submitted to PR The geometrical meaning of the Eddington-Finkelstein coordinates of Schwarzschild spacetime is well understood: (i) the advanced-time coordinate v is constant on incoming light cones that converge toward r=0, (ii) the angles theta and phi are constant on the null generators of each light cone, (iii) the radial coordinate r is an affine-parameter distance along each generator, and (iv) r is an areal radius, in the sense that 4 pi r^2 is the area of each two-surface (v,r) = constant. The light-cone gauge of black-hole perturbation theory, which is formulated in this paper, places conditions on a perturbation of the Schwarzschild metric that ensure that properties (i)--(iii) of the coordinates are preserved in the perturbed spacetime. 
Property (iv) is lost in general, but it is retained in exceptional situations that are identified in this paper. Unlike other popular choices of gauge, the light-cone gauge produces a perturbed metric that is expressed in a meaningful coordinate system; this is a considerable asset that greatly facilitates the task of extracting physical consequences. We illustrate the use of the light-cone gauge by calculating the metric of a black hole immersed in a uniform magnetic field. We construct a three-parameter family of solutions to the perturbative Einstein-Maxwell equations and argue that it is applicable to a broader range of physical situations than the exact, two-parameter Schwarzschild-Melvin family.Comment: 12 page The singular field of a point charge has recently been described in terms of a new Green's function of curved spacetime. This singular field plays an important role in the calculation of the self-force acting upon the particle. We provide a method for calculating the singular field and a catalog of expansions of the singular field associated with the geodesic motion of monopole and dipole sources for scalar, electromagnetic and gravitational fields. These results can be used, for example, to calculate the effects of the self-force acting on a particle as it moves through spacetime.Comment: 14 pages; addressed referee's comments; published in PhysRev It is a known result by Jacobson that the flux of energy-matter through a local Rindler horizon is related with the expansion of the null generators in a way that mirrors the first law of thermodynamics. We extend such a result to a timelike screen of observers with finite acceleration. Since timelike curves have more freedom than null geodesics, the construction is more involved than Jacobson's and few geometrical constraints need to be imposed: the observers' acceleration has to be constant in time and everywhere orthogonal to the screen. 
Moreover, at any given time, the extrinsic curvature of the screen has to be flat. The latter requirement can be weakened by asking that the extrinsic curvature, if present at the beginning, evolves in time like on a cone and just rescales proportionally to the expansion.Comment: 8+1 pages, final versio Various regularization methods have been used to compute the self-force acting on a static particle in a static, curved spacetime. Many of these are based on Hadamard's two-point function in three dimensions. On the other hand, the regularization method that enjoys the best justification is that of Detweiler and Whiting, which is based on a four-dimensional Green's function. We establish the connection between these methods and find that they are all equivalent, in the sense that they all lead to the same static self-force. For general static spacetimes, we compute local expansions of the Green's functions on which the various regularization methods are based. We find that these agree up to a certain high order, and conjecture that they might be equal to all orders. We show that this equivalence is exact in the case of ultrastatic spacetimes. Finally, our computations are exploited to provide regularization parameters for a static particle in a general static and spherically-symmetric spacetime.Comment: 23 pages, no figure The second-order gravitational self-force on a small body is an important problem for gravitational-wave astronomy of extreme mass-ratio inspirals. We give a first-principles derivation of a prescription for computing the first and second perturbed metric and motion of a small body moving through a vacuum background spacetime. The procedure involves solving for a "regular field" with a specified (sufficiently smooth) "effective source", and may be applied in any gauge that produces a sufficiently smooth regular field We study static spherically symmetric solutions of high derivative gravity theories, with 4, 6, 8 and even 10 derivatives. 
Except for isolated points in the space of theories with more than 4 derivatives, only solutions that are nonsingular near the origin are found. But these solutions cannot smooth out the Schwarzschild singularity without the appearance of a second horizon. This conundrum, and the possibility of singularities at finite r, leads us to study numerical solutions of theories truncated at four derivatives. Rather than two horizons we are led to the suggestion that the original horizon is replaced by a rapid nonsingular transition from weak to strong gravity. We also consider this possibility for the de Sitter horizon. Comment: 15 pages, 3 figures, improvements and references added, to appear in PR

We study the horizon absorption of gravitational waves in coalescing, circularized, nonspinning black hole binaries. The horizon-absorbed fluxes of a binary with a large mass ratio (q=1000) obtained by numerical perturbative simulations are compared with an analytical, effective-one-body (EOB) resummed expression recently proposed. The perturbative method employs an analytical, linear-in-the-mass-ratio EOB resummed radiation reaction, and the Regge-Wheeler-Zerilli (RWZ) formalism for wave extraction. Hyperboloidal (transmitting) layers are employed for the numerical solution of the RWZ equations to accurately compute horizon fluxes up to the late plunge phase. The horizon fluxes from perturbative simulations and the EOB-resummed expression agree at the level of a few percent down to the late plunge. An upgrade of the EOB model for nonspinning binaries that includes horizon absorption of angular momentum as an additional term in the resummed radiation reaction is then discussed. The effect of this term on the waveform phasing for binaries with mass ratios spanning 1 to 1000 is investigated.
We confirm that for comparable and intermediate-mass-ratio binaries horizon absorption is practically negligible for detection with advanced LIGO and the Einstein Telescope (faithfulness greater than or equal to 0.997).
Angle trisection

Angle trisection is a classical problem of compass and straightedge constructions of ancient Greek mathematics. It concerns construction of an angle equal to one third of a given arbitrary angle, using only two tools: an unmarked straightedge and a compass. The problem as stated is impossible to solve for arbitrary angles, as proved by Pierre Wantzel in 1837. However, although there is no way to trisect an angle in general with just a compass and a straightedge, some special angles can be trisected. For example, it is relatively straightforward to trisect a right angle (that is, to construct an angle of measure 30 degrees). It is possible to trisect an arbitrary angle by using tools other than straightedge and compass. For example, neusis construction, also known to ancient Greeks, involves simultaneous sliding and rotation of a marked straightedge, which cannot be achieved with the original tools. Other techniques were developed by mathematicians over the centuries. Because it is defined in simple terms, but complex to prove unsolvable, the problem of angle trisection is a frequent subject of pseudomathematical attempts at solution by naive enthusiasts. These "solutions" often involve mistaken interpretations of the rules, or are simply incorrect.^[1]

Background and problem statement

Using only an unmarked straightedge and a compass, Greek mathematicians found means to divide a line into an arbitrary set of equal segments, to draw parallel lines, to bisect angles, to construct many polygons, and to construct squares of equal or twice the area of a given polygon. Three problems proved elusive, specifically, trisecting the angle, doubling the cube, and squaring the circle. The problem of angle trisection reads: Construct an angle equal to one-third of a given arbitrary angle (or divide it into three equal angles), using only two tools: 1. an unmarked straightedge, and 2. a compass.
Proof of impossibility

Pierre Wantzel published a proof of the impossibility of classically trisecting an arbitrary angle in 1837.^[2] Wantzel's proof, restated in modern terminology, uses the abstract algebra of field extensions, a topic now typically combined with Galois theory. However, Wantzel published these results earlier than Galois (whose work was published in 1846) and did not use the connection between field extensions and groups that is the subject of Galois theory itself.^[3] The problem of constructing an angle of a given measure θ is equivalent to constructing two segments such that the ratio of their lengths is cos θ. From a solution to one of these two problems, one may pass to a solution of the other by a compass and straightedge construction. The triple-angle formula gives an expression relating the cosines of the original angle and its trisection: cos θ = 4cos^3(θ/3) − 3cos(θ/3). It follows that, given a segment that is defined to have unit length, the problem of angle trisection is equivalent to constructing a segment whose length is the root of a cubic polynomial. This equivalence reduces the original geometric problem to a purely algebraic problem. Every rational number is constructible. Every irrational number that is constructible in a single step from some given numbers is a root of a polynomial of degree 2 with coefficients in the field generated by these numbers. Therefore, any number that is constructible by a sequence of steps is a root of a minimal polynomial whose degree is a power of two. Note also that π/3 radians (60 degrees, written 60°) is constructible. The argument below shows that it is impossible to construct a 20° angle. This implies that a 60° angle cannot be trisected, and thus that an arbitrary angle cannot be trisected. Denote the set of rational numbers by Q. If 60° could be trisected, the degree of a minimal polynomial of cos(20°) over Q would be a power of two. Now let x = cos(20°). Note that cos(60°) = cos(π/3) = 1/2.
Then by the triple-angle formula, cos(π/3) = 4x^3 − 3x and so 4x^3 − 3x = 1/2. Thus 8x^3 − 6x − 1 = 0. Define p(t) to be the polynomial p(t) = 8t^3 − 6t − 1. Since x = cos(20°) is a root of p(t), the minimal polynomial for cos(20°) is a factor of p(t). Because p(t) has degree 3, if it is reducible over Q then it has a rational root. By the rational root theorem, this root must be ±1, ±1/2, ±1/4 or ±1/8, but none of these is a root. Therefore, p(t) is irreducible over Q, and the minimal polynomial for cos(20°) is of degree 3. So an angle of measure 60° cannot be trisected.

Angles which can be trisected

However, some angles can be trisected. For example, for any constructible angle θ, an angle of measure 3θ can be trivially trisected by ignoring the given angle and directly constructing an angle of measure θ. There are angles that are not constructible but are trisectible (despite the one-third angle itself being non-constructible). For example, 3π/7 is such an angle: five angles of measure 3π/7 combine to make an angle of measure 15π/7, which is a full circle plus the desired π/7. For a positive integer N, an angle of measure 2π/N is trisectible if and only if 3 does not divide N.^[4]^[5] In contrast, 2π/N is constructible if and only if N is a power of 2 or the product of a power of 2 with the product of one or more distinct Fermat primes.

Algebraic characterization

Again, denote the set of rational numbers by Q. Theorem: An angle of measure θ may be trisected if and only if q(t) = 4t^3 − 3t − cos(θ) is reducible over the field extension Q(cos(θ)). The proof is a relatively straightforward generalization of the proof given above that a 60° angle is not trisectible.^[6]

Other methods

The general problem of angle trisection is solvable by using additional tools, and thus going outside of the original Greek framework of compass and straightedge. Many incorrect methods of trisecting the general angle have been proposed.
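The two computational facts in the impossibility proof above — that p(t) = 8t^3 − 6t − 1 has no rational root, and that cos(20°) is a root of p — are easy to confirm with a short script. This is an illustrative sketch in Python (an addition for the reader, not part of the original argument):

```python
import math
from fractions import Fraction

def p(t):
    # p(t) = 8t^3 - 6t - 1, the minimal-polynomial candidate for cos(20 deg)
    return 8 * t**3 - 6 * t - 1

# Rational root theorem: any rational root of p has the form +-1/d with d | 8.
candidates = [Fraction(s, d) for s in (1, -1) for d in (1, 2, 4, 8)]
assert all(p(c) != 0 for c in candidates)  # no rational root => p irreducible over Q

# cos(20 deg) is a root of p, up to floating-point error
assert abs(p(math.cos(math.radians(20)))) < 1e-12
```

Since p has degree 3 and no rational root, it is irreducible over Q; hence cos(20°) has a degree-3 minimal polynomial, and 20° is not constructible.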
Some of these methods provide reasonable approximations; others (some of which are mentioned below) involve tools not permitted in the classical problem. The mathematician Underwood Dudley has detailed some of these failed attempts in his book The Trisectors.^[1]

Approximation by successive bisections

Trisection can be approximated by repetition of the compass and straightedge method for bisecting an angle. The geometric series 1/3 = 1/4 + 1/16 + 1/64 + 1/256 + ⋯ or 1/3 = 1/2 − 1/4 + 1/8 − 1/16 + ⋯ can be used as a basis for the bisections. An approximation to any degree of accuracy can be obtained in a finite number of steps.^[7]

Using origami

Trisection, like many constructions impossible by ruler and compass, can easily be accomplished by the more powerful operations of paper folding, or origami. Huzita's axioms (types of folding operations) can construct cubic extensions (cube roots) of given lengths, whereas ruler-and-compass can construct only quadratic extensions (square roots).

Using a linkage

There are a number of simple linkages which can be used to make an instrument to trisect angles, including Kempe's Trisector and Sylvester's Link Fan or Isoklinostat.^[8]

With a right triangle ruler

In 1932, Ludwig Bieberbach published in Journal für die reine und angewandte Mathematik his work Zur Lehre von den kubischen Konstruktionen.^[9] He states therein (free translation): "As is known ... every cubic construction can be traced back to the trisection of the angle and to the multiplication of the cube, that is, the extraction of the third root. I need only to show how these two classical tasks can be solved by means of the right angle hook." The following description of the adjacent construction (animation) continues Bieberbach's idea through to the complete angle trisection.
It begins with the first unit circle around its center ${\displaystyle A}$, the first angle limb ${\displaystyle {\overline {BP}}}$, and the second unit circle around ${\displaystyle P}$ following it. Now the diameter ${\displaystyle {\overline {BP}}}$ from ${\displaystyle P}$ is extended to the circle line of this unit circle, creating the intersection point ${\displaystyle O}$. Following the circle arc around ${\displaystyle P}$ with the radius ${\displaystyle {\overline {BP}}}$ and the drawing of the second angle limb of the angle ${\displaystyle \delta }$, the point ${\displaystyle C}$ results. Now the so-called additional construction aid is used; in the illustrated example it is the Geodreieck. This geometry triangle, as it is also called, is now placed on the drawing in the following manner: the vertex of the right angle determines the point ${\displaystyle S}$ on the angle leg ${\displaystyle {\overline {PC}}}$, one cathetus of the triangle passes through the point ${\displaystyle O}$, and the other is tangent to the unit circle around ${\displaystyle A}$. After connecting the point ${\displaystyle O}$ to ${\displaystyle S}$ and drawing the tangent from ${\displaystyle S}$ to the unit circle around ${\displaystyle A}$, the above-mentioned right angle hook (Rechtwinkelhaken) is shown. The angle enclosed by the segments ${\displaystyle {\overline {OS}}}$ and ${\displaystyle {\overline {PS}}}$ is thus exactly ${\displaystyle {\frac {\delta }{3}}}$. Next, the parallel to ${\displaystyle {\overline {OS}}}$ through ${\displaystyle P}$ is drawn, creating the alternate angle ${\displaystyle {\frac {\delta }{3}}}$ and the point ${\displaystyle D}$.
A further parallel to ${\displaystyle {\overline {OS}}}$ from ${\displaystyle A}$ determines the point of contact ${\displaystyle E}$ of the tangent with the unit circle around ${\displaystyle A}$. Finally, draw a straight line from ${\displaystyle P}$ through ${\displaystyle E}$ until it intersects the unit circle in ${\displaystyle F}$. Thus the angle ${\displaystyle \delta }$ is divided into exactly three equal parts.

With an auxiliary curve

There are certain curves called trisectrices which, if drawn on the plane using other methods, can be used to trisect arbitrary angles.^[10]

Application example

The known Trisectrix of Colin Maclaurin from the year 1742 is used. In Cartesian coordinates this curve is described with the equation ${\displaystyle 2x(x^{2}+y^{2})=a(3x^{2}-y^{2}),}$ or, in implicit form, ${\displaystyle 2x^{3}+2xy^{2}-3ax^{2}+ay^{2}=0.}$

Angle trisection

First, the diameter ${\displaystyle {\overline {AB}}}$ with its center ${\displaystyle M}$ is determined. This is followed by the semicircle ${\displaystyle MBA}$ with the subsequent generation of the trisectrix as the implicit curve.^[11] Thus, the basic construction for the angle trisection of angles ${\displaystyle 0^{\circ }<\alpha \leq 180^{\circ }}$ is completed. Now, the second angle limb ${\displaystyle {\overline {MC}}}$ is drawn in such a way that it encloses with the first angle limb ${\displaystyle {\overline {MB}}}$ the angle ${\displaystyle \alpha }$ to be divided. The angle limb ${\displaystyle {\overline {MC}}}$ intersects the trisectrix in ${\displaystyle D}$. Next, a straight line from ${\displaystyle A}$ is drawn through ${\displaystyle D}$ to the semicircle, resulting in the intersection ${\displaystyle E}$. The angle ${\displaystyle \beta }$ generated by ${\displaystyle {\overline {BA}}}$ and ${\displaystyle {\overline {AE}}}$ is the angle ${\displaystyle {\frac {1}{3}}\alpha }$ sought.
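The trisection property of the Maclaurin trisectrix can be checked numerically. For the check below, A is placed at the origin, M at (a, 0) and B at (2a, 0) on the positive x-axis — a coordinate choice made only for this sketch, matching the curve equation 2x(x² + y²) = a(3x² − y²). In these coordinates the curve has the polar form r = a(4cos²θ − 1)/(2cosθ) about A, and for every point D on it the angle ∠DMB at M equals three times the angle ∠DAB at A, which is exactly the α = 3β relation used in the construction above. A minimal Python sketch:

```python
import math

def trisectrix_point(theta, a=1.0):
    """Point on the trisectrix 2x(x^2 + y^2) = a(3x^2 - y^2), written in
    polar form about A = (0, 0): r = a(4cos^2(theta) - 1) / (2cos(theta))."""
    r = a * (4 * math.cos(theta) ** 2 - 1) / (2 * math.cos(theta))
    return r * math.cos(theta), r * math.sin(theta)

a = 1.0
for deg in (5, 15, 30, 45, 55):
    beta = math.radians(deg)             # angle at A = (0, 0)
    x, y = trisectrix_point(beta, a)
    # the point really lies on the implicit curve
    assert abs(2 * x * (x**2 + y**2) - a * (3 * x**2 - y**2)) < 1e-9
    # angle at M = (a, 0), measured against the ray towards B = (2a, 0)
    alpha = math.atan2(y, x - a)
    assert abs(alpha - 3 * beta) < 1e-9  # alpha = 3*beta: the curve trisects
```

Any angle α up to 180° can therefore be trisected by intersecting its second limb (drawn from M) with the curve and reading off β at A, as the construction describes.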
With a marked ruler

Another means to trisect an arbitrary angle by a "small" step outside the Greek framework is via a ruler with two marks a set distance apart. The next construction is originally due to Archimedes, called a neusis construction, i.e., one that uses tools other than an unmarked straightedge. The diagrams we use show this construction for an acute angle, but it indeed works for any angle up to 180°. This requires three facts from geometry (at right): 1. Any full set of angles on a straight line add to 180°, 2. The sum of angles of any triangle is 180°, and, 3. Any two equal sides of an isosceles triangle will meet the third in the same angle. Let l be the horizontal line in the adjacent diagram. Angle a (left of point B) is the subject of trisection. First, a point A is drawn on one of the angle's rays, one unit apart from B. A circle of radius AB is drawn. Then, the markedness of the ruler comes into play: one mark of the ruler is placed at A and the other at B. While keeping the ruler (but not the mark) touching A, the ruler is slid and rotated until one mark is on the circle and the other is on the line l. The mark on the circle is labeled C and the mark on the line is labeled D. This ensures that CD = AB. A radius BC is drawn to make it obvious that line segments AB, BC, and CD all have equal length. Now, triangles ABC and BCD are isosceles, thus (by Fact 3 above) each has two equal angles. Hypothesis: Given AD is a straight line, and AB, BC, and CD all have equal length, Conclusion: angle b = a/3. 1. From Fact 1) above, ${\displaystyle e+c=180}$°. 2. Looking at triangle BCD, from Fact 2) ${\displaystyle e+2b=180}$°. 3. From the last two equations, ${\displaystyle c=2b}$. 4. From Fact 2), ${\displaystyle d+2c=180}$°, thus ${\displaystyle d=180}$°${\displaystyle -2c}$, so from the last, ${\displaystyle d=180}$°${\displaystyle -4b}$. 5. From Fact 1) above, ${\displaystyle a+d+b=180}$°, thus ${\displaystyle a+(180}$°${\displaystyle -4b)+b=180}$°.
Clearing, a − 3b = 0, or a = 3b, and the theorem is proved. Again, this construction stepped outside the framework of allowed constructions by using a marked straightedge.

With a string

Thomas Hutcheson published an article in the Mathematics Teacher^[12] that used a string instead of a compass and straightedge. A string can be used as either a straight edge (by stretching it) or a compass (by fixing one point and identifying another), but can also wrap around a cylinder, the key to Hutcheson's solution. Hutcheson constructed a cylinder from the angle to be trisected by drawing an arc across the angle, completing it as a circle, and constructing from that circle a cylinder on which, say, an equilateral triangle was inscribed (a 360-degree angle divided in three). This was then "mapped" onto the angle to be trisected, with a simple proof of similar triangles.

With a "tomahawk"

A "tomahawk" is a geometric shape consisting of a semicircle and two orthogonal line segments, such that the length of the shorter segment is equal to the circle radius. Trisection is executed by leaning the end of the tomahawk's shorter segment on one ray, the circle's edge on the other, so that the "handle" (longer segment) crosses the angle's vertex; the trisection line runs between the vertex and the center of the semicircle. Note that while a tomahawk is constructible with compass and straightedge, it is not generally possible to construct a tomahawk in any desired position. Thus, the above construction does not contradict the nontrisectibility of angles with ruler and compass alone. The tomahawk produces the same geometric effect as the paper-folding method: the distance between circle center and the tip of the shorter segment is twice the radius, which is guaranteed to contact the angle. It is also equivalent to the use of an architect's L-Ruler (Carpenter's Square).
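The marked-ruler proof can also be confirmed numerically by building the configuration backwards from the answer: pick b, place D at the origin with the line l along the x-axis (a coordinate choice made only for this check), and construct C, B and A so that CD = CB = AB = 1. The direction of BA then makes exactly the angle 3b with l, which is the statement a = 3b. A short Python sketch:

```python
import math

def neusis_configuration(b):
    """Rebuild Archimedes' configuration from the trisected angle b:
    D at the origin, line l along the x-axis, and |CD| = |CB| = |AB| = 1."""
    C = (math.cos(b), math.sin(b))        # angle b at D, |CD| = 1
    B = (2.0 * math.cos(b), 0.0)          # B on l with |BC| = 1
    t = 4.0 * math.cos(b) ** 2 - 1.0      # A on ray D->C chosen so |AB| = 1
    A = (t * math.cos(b), t * math.sin(b))
    return A, B, C

dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])

b = math.radians(17.0)
A, B, C = neusis_configuration(b)
assert abs(dist(A, B) - 1.0) < 1e-9 and abs(dist(B, C) - 1.0) < 1e-9
# the angle that BA makes with the line l comes out as exactly 3b
a = math.atan2(A[1] - B[1], A[0] - B[0])
assert abs(a - 3.0 * b) < 1e-9
```

Algebraically, A − B works out to (cos 3b, sin 3b), so the check is the triple-angle identities in disguise; the numerics just confirm the bookkeeping.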
With interconnected compasses

An angle can be trisected with a device that is essentially a four-pronged version of a compass, with linkages between the prongs designed to keep the three angles between adjacent prongs equal.^[13]

Uses of angle trisection

A cubic equation with real coefficients can be solved geometrically with compass, straightedge, and an angle trisector if and only if it has three real roots.^[14]^:Thm. 1 A regular polygon with n sides can be constructed with ruler, compass, and angle trisector if and only if ${\displaystyle n=2^{r}3^{s}p_{1}p_{2}\cdots p_{k},}$ where r, s, k ≥ 0 and where the p[i] are distinct primes greater than 3 of the form ${\displaystyle 2^{t}3^{u}+1}$ (i.e. Pierpont primes greater than 3).^[14]^:Thm. 2 For any nonzero integer N, an angle of measure 2π/N radians can be divided into n equal parts with straightedge and compass if and only if n is either a power of 2 or is a power of 2 multiplied by the product of one or more distinct Fermat primes, none of which divides N. In the case of trisection (n = 3, which is a Fermat prime), this condition becomes the above-mentioned requirement that N not be divisible by 3.^[5]
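The polygon criterion above is easy to turn into a small program. The sketch below uses ad-hoc helper names (is_prime, is_pierpont, constructible_with_trisector are this sketch's own names, not from any source) and naive trial division, which is fine for small n:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_pierpont(p):
    # Pierpont prime: a prime of the form 2^t * 3^u + 1
    if not is_prime(p):
        return False
    m = p - 1
    for q in (2, 3):
        while m % q == 0:
            m //= q
    return m == 1

def constructible_with_trisector(n):
    """Regular n-gon constructible with ruler, compass and angle trisector
    iff n = 2^r * 3^s * p1 * ... * pk with distinct Pierpont primes p_i > 3."""
    for q in (2, 3):
        while n % q == 0:
            n //= q
    p, seen = 5, set()
    while n > 1:
        if n % p == 0:
            if p in seen or not is_pierpont(p):
                return False
            seen.add(p)
            n //= p
        else:
            p += 2
    return True

# The 7-gon and 13-gon become constructible once a trisector is allowed;
# the 11-gon and 23-gon (and hence the 22-gon) do not.
assert [n for n in range(3, 25) if not constructible_with_trisector(n)] == [11, 22, 23]
```

With compass and straightedge alone the regular 7-gon, 9-gon and 13-gon are all impossible; the trisector rescues exactly the Pierpont cases.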
3.6: The Chain Rule
We have covered almost all of the derivative rules that deal with combinations of two (or more) functions. The operations of addition, subtraction, multiplication (including by a constant) and division led to the Sum and Difference Rules, the Constant Multiple Rule, the Power Rule, the Product Rule and the Quotient Rule. To complete the list of differentiation rules, we look at the last way two (or more) functions can be combined: the process of composition (i.e. one function "inside'' another). One example of a composition of functions is \(f(x) = \cos(x^2)\). We currently do not know how to compute this derivative. If forced to guess, one would likely guess \(f^\prime(x) = -\sin(2x)\), where we recognize \(-\sin x\) as the derivative of \(\cos x\) and \(2x\) as the derivative of \(x^2\). However, this is not the case; \(f^\prime(x)\neq -\sin(2x)\). In Example 62 we'll see the correct answer, which employs the new rule this section introduces, the Chain Rule. Before we define this new rule, recall the notation for composition of functions. We write \((f \circ g)(x)\) or \(f(g(x))\), read as "\(f\) of \(g\) of \(x\),'' to denote composing \(f\) with \(g\). In shorthand, we simply write \(f \circ g\) or \(f(g)\) and read it as "\(f\) of \(g\).'' Before giving the corresponding differentiation rule, we note that the rule extends to multiple compositions like \(f(g(h(x)))\) or \(f(g(h(j(x))))\), etc. To motivate the rule, let's look at three derivatives we can already compute.
Example 59: Exploring similar derivatives Find the derivatives of 1. \(F_1(x) = (1-x)^2\), 2. \(F_2(x) = (1-x)^3,\) and 3. \(F_3(x) = (1-x)^4.\) We'll see later why we are using subscripts for different functions and an uppercase \(F\). In order to use the rules we already have, we must first expand each function as 1. \(F_1(x) = 1 - 2x + x^2\), 2. \(F_2(x) = 1 - 3x + 3x^2 - x^3\) and 3. \(F_3(x) = 1 - 4x + 6x^2 - 4x^3 + x^4\). It is not hard to see that: \[\begin{align*} F_1^\prime(x) &= -2 + 2x \\[4pt] F_2^\prime(x) &= -3 + 6x - 3x^2 \\[4pt] F_3^\prime(x) &= -4 + 12x - 12x^2 + 4x^3. \end{align*}\] An interesting fact is that these can be rewritten as \[F_1^\prime(x) = -2(1-x),\quad F_2^\prime(x) = -3(1-x)^2 \text{ and } F_3^\prime(x) = -4(1-x)^3.\] A pattern might jump out at you. Recognize that each of these functions is a composition, letting \(g(x) = 1-x\): \[\begin{eqnarray*}F_1(x) = f_1(g(x)),& \text{ where } f_1(x) = x^2,\\ F_2(x) = f_2(g(x)),& \text{ where } f_2(x) = x^3,\\ F_3(x) = f_3(g(x)),& \text{ where } f_3(x) = x^4. \end{eqnarray*}\] We'll come back to this example after giving the formal statement of the Chain Rule; for now, we are just illustrating a pattern. Theorem 18: The Chain Rule Let \(y = f(u)\) be a differentiable function of \(u\) and let \(u = g(x)\) be a differentiable function of \(x\). Then \(y=f(g(x))\) is a differentiable function of \(x\), and \[y^\prime = f^\prime(g(x))\cdot g^\prime(x).\] To help understand the Chain Rule, we return to Example 59. Example 60: Using the Chain Rule Use the Chain Rule to find the derivatives of the following functions, as given in Example 59. Example 59 ended with the recognition that each of the given functions was actually a composition of functions. To avoid confusion, we ignore most of the subscripts here. \(F_1(x) = (1-x)^2\): We found that \[y=(1-x)^2 = f(g(x)), \text{ where } f(x) = x^2\ \text{ and }\ g(x) = 1-x.\] To find \(y^\prime\), we apply the Chain Rule.
We need \(f^\prime(x)=2x\) and \(g^\prime(x)=-1.\) Part of the Chain Rule uses \(f^\prime(g(x))\). This means substitute \(g(x)\) for \(x\) in the equation for \(f^\prime(x)\). That is, \(f^\prime(g(x)) = 2(1-x)\). Finishing out the Chain Rule we have \[y^\prime = f^\prime(g(x))\cdot g^\prime(x) = 2(1-x)\cdot (-1) = -2(1-x) = 2x-2.\] \(F_2(x) = (1-x)^3\): Let \(y = (1-x)^3 = f(g(x))\), where \(f(x) = x^3\) and \(g(x) = (1-x)\). We have \(f^\prime(x) = 3x^2\), so \(f^\prime(g(x)) = 3(1-x)^2\). The Chain Rule then states \[y^\prime = f^\prime(g(x))\cdot g^\prime(x) = 3(1-x)^2\cdot(-1) = -3(1-x)^2.\] \(F_3(x) = (1-x)^4\): Finally, when \(y = (1-x)^4\), we have \(f(x)= x^4\) and \(g(x) = (1-x)\). Thus \(f^\prime(x) = 4x^3\) and \(f^\prime(g(x)) = 4(1-x)^3\). Thus \[y^\prime = f^\prime(g(x))\cdot g^\prime(x) = 4(1-x)^3\cdot (-1) = -4(1-x)^3.\] Example 60 demonstrated a particular pattern: when \(f(x)=x^n\), then \(y^\prime = n\cdot (g(x))^{n-1}\cdot g^\prime(x)\). This is called the Generalized Power Rule. Theorem 19: Generalized Power Rule Let \(g(x)\) be a differentiable function and let \(n\neq 0\) be an integer. Then \[\dfrac{d}{dx}\Big(g(x)^n\Big) = n\cdot \big(g(x)\big)^{n-1}\cdot g^\prime(x).\] This allows us to quickly find the derivative of functions like \(y = (3x^2-5x+7+\sin x)^{20}\). While it may look intimidating, the Generalized Power Rule states that \[y^\prime = 20(3x^2-5x+7+\sin x)^{19}\cdot (6x-5+\cos x).\] Treat the derivative-taking process step by step. In the example just given, first multiply by 20, then rewrite the inside of the parentheses, raising it all to the 19\(^{\text{th}}\) power. Then think about the derivative of the expression inside the parentheses, and multiply by that. We now consider more examples that employ the Chain Rule. Example 61: Using the Chain Rule Find the derivatives of the following functions: 1. \(y = \sin{2x}\) 2. \(y= \ln (4x^3-2x^2)\) 3. \(y = e^{-x^2}\) 1. Consider \(y = \sin 2x\).
Recognize that this is a composition of functions, where \(f(x) = \sin x\) and \(g(x) = 2x\). Thus \[y^\prime = f^\prime(g(x))\cdot g^\prime(x) = \cos (2x)\cdot 2 = 2\cos 2x.\] 2. Recognize that \(y = \ln (4x^3-2x^2)\) is the composition of \(f(x) = \ln x\) and \(g(x) = 4x^3-2x^2\). Also, recall that \[\dfrac{d}{dx}\Big(\ln x\Big) = \dfrac{1}{x}.\] This leads us to: \[y^\prime = \dfrac{1}{4x^3-2x^2} \cdot (12x^2-4x) = \dfrac{12x^2-4x}{4x^3-2x^2} = \dfrac{4x(3x-1)}{2x(2x^2-x)} = \dfrac{2(3x-1)}{2x^2-x}.\] 3. Recognize that \(y = e^{-x^2}\) is the composition of \(f(x) = e^x\) and \(g(x) = -x^2\). Remembering that \(f^\prime(x) = e^x\), we have \[y^\prime = e^{-x^2}\cdot (-2x) = (-2x)e^{-x^2}.\] Example 62: Using the Chain Rule to find a tangent line Let \(f(x) = \cos x^2\). Find the equation of the line tangent to the graph of \(f\) at \(x=1\). The tangent line goes through the point \((1,f(1)) \approx (1,0.54)\) with slope \(f^\prime(1)\). To find \(f^\prime\), we need the Chain Rule. \(f^\prime(x) = -\sin(x^2) \cdot(2x) = -2x\sin x^2\). Evaluated at \(x=1\), we have \(f^\prime(1) = -2\sin 1\approx -1.68\). Thus the equation of the tangent line is \[y = -1.68(x-1)+0.54.\] The tangent line is sketched along with \(f\) in Figure 2.17. Figure 2.17: \(f(x)=\cos x^2\) sketched along with its tangent line at \(x=1\). The Chain Rule is used often in taking derivatives. Because of this, one can become familiar with the basic process and learn patterns that facilitate finding derivatives quickly. For instance, \[\dfrac{d}{dx}\Big(\ln (\text{anything})\Big) = \dfrac{1}{\text{anything}}\cdot (\text{anything})^\prime = \dfrac{(\text{anything})^\prime}{\text{anything}}.\] A concrete example of this is \[\dfrac{d}{dx}\Big(\ln(3x^{15}-\cos x+e^x)\Big) = \dfrac{45x^{14}+\sin x+e^x}{3x^{15}-\cos x+e^x}.\] While the derivative may look intimidating at first, look for the pattern.
The denominator is the same as what was inside the natural log function; the numerator is simply its derivative. This pattern recognition process can be applied to lots of functions. In general, instead of writing "anything'', we use \(u\) as a generic function of \(x\). We then say \[\dfrac{d}{dx}\Big(\ln u\Big) = \dfrac{u^\prime}{u}.\] The following is a short list of how the Chain Rule can be quickly applied to familiar functions. Of course, the Chain Rule can be applied in conjunction with any of the other rules we have already learned. We practice this next. Example 63: Using the Product, Quotient and Chain Rules Find the derivatives of the following functions. 1. \(f(x) = x^5 \sin{2x^3}\) 2. \(f(x) = \dfrac{5x^3}{e^{-x^2}}\). 1. We must use the Product and Chain Rules. Do not think that you must be able to "see'' the whole answer immediately; rather, just proceed step by step. \[f^\prime(x) = x^5\big(6x^2\cos 2x^3\big) + 5x^4\big(\sin 2x^3\big) = 6x^7\cos 2x^3+5x^4\sin 2x^3.\] 2. We must employ the Quotient Rule along with the Chain Rule. Again, proceed step by step. \[\begin{align*} f^\prime(x) = \dfrac{e^{-x^2}\big(15x^2\big) - 5x^3\big((-2x)e^{-x^2}\big)}{\big(e^{-x^2}\big)^2} &=\dfrac{e^{-x^2}\big(10x^4+15x^2\big)}{e^{-2x^2}}\\ &= e^{x^2}\big(10x^4+15x^2\big). \end{align*}\] A key to correctly working these problems is to break the problem down into smaller, more manageable pieces. For instance, when using the Product and Chain Rules together, just consider the first part of the Product Rule at first: \(f(x)g^\prime(x)\). Just rewrite \(f(x)\), then find \(g^\prime(x)\). Then move on to the \(f^\prime(x)g(x)\) part. Don't attempt to figure out both parts at once. Likewise, using the Quotient Rule, approach the numerator in two steps and handle the denominator after completing that. Only simplify afterward. We can also employ the Chain Rule itself several times, as shown in the next example.
Example 64: Using the Chain Rule multiple times

Find the derivative of \(y = \tan^5(6x^3-7x)\).

Recognize that we have the \(g(x)=\tan(6x^3-7x)\) function "inside'' the \(f(x)=x^5\) function; that is, we have \(y = \big(\tan(6x^3-7x)\big)^5\). We begin using the Generalized Power Rule; in this first step, we do not fully compute the derivative. Rather, we are approaching this step--by--step. \[y^\prime = 5\big(\tan(6x^3-7x)\big)^4\cdot g^\prime(x).\] We now find \(g^\prime(x)\). We again need the Chain Rule; \[g^\prime(x) = \sec^2(6x^3-7x)\cdot(18x^2-7).\] Combine this with what we found above to give \[\begin{align*} y^\prime &= 5\big(\tan(6x^3-7x)\big)^4\cdot\sec^2(6x^3-7x)\cdot(18x^2-7)\\ &= (90x^2-35)\sec^2(6x^3-7x)\tan^4(6x^3-7x). \end{align*}\]

This function is frankly a ridiculous function, possessing no real practical value. It is very difficult to graph, as the tangent function has many vertical asymptotes and \(6x^3-7x\) grows so very fast. The important thing to learn from this is that the derivative can be found. In fact, it is not "hard;'' one must take several simple steps and be careful to keep track of how to apply each of these steps. It is a traditional mathematical exercise to find the derivatives of arbitrarily complicated functions just to demonstrate that it can be done. Just break everything down into smaller pieces.

Example 65: Using the Product, Quotient and Chain Rules

Find the derivative of \( f(x) = \dfrac{x\cos(x^{-2})-\sin^2(e^{4x})}{\ln(x^2+5x^4)}.\)

This function likely has no practical use outside of demonstrating derivative skills. The answer is given below without simplification. It employs the Quotient Rule, the Product Rule, and the Chain Rule three times.
\[f^\prime(x) = \dfrac{\Big(\ln(x^2+5x^4)\Big)\cdot\Big[\big(x\cdot(-\sin(x^{-2}))\cdot(-2x^{-3})+1\cdot \cos(x^{-2})\big)-2\sin(e^{4x})\cdot\cos(e^{4x})\cdot(4e^{4x})\Big]-\Big(x\cos(x^{-2})-\sin^2(e^{4x})\Big)\cdot\dfrac{2x+20x^3}{x^2+5x^4}}{\Big(\ln(x^2+5x^4)\Big)^2}.\]

The reader is highly encouraged to look at each term and recognize why it is there. (I.e., the Quotient Rule is used; in the numerator, identify the "LOdHI'' term, etc.) This example demonstrates that derivatives can be computed systematically, no matter how arbitrarily complicated the function is.

The Chain Rule also has theoretic value. That is, it can be used to find the derivatives of functions that we have not yet learned, as we do in the following example.

Example 66: The Chain Rule and exponential functions

Use the Chain Rule to find the derivative of \(y= a^x\) where \(a>0\), \(a\neq 1\) is constant.

We only know how to find the derivative of one exponential function: \(y = e^x\); this problem is asking us to find the derivative of functions such as \(y = 2^x\). This can be accomplished by rewriting \(a^x\) in terms of \(e\). Recalling that \(e^x\) and \(\ln x\) are inverse functions, we can write \[a = e^{\ln a} \quad \text{and so } \quad y = a^x = e^{\ln (a^x)}. \nonumber\] By the exponent property of logarithms, we can "bring down'' the power to get \[y = a^x = e^{x (\ln a)}. \nonumber\] The function is now the composition \(y=f(g(x))\), with \(f(x) = e^x\) and \(g(x) = x(\ln a)\). Since \(f^\prime(x) = e^x\) and \(g^\prime(x) = \ln a\), the Chain Rule gives \[y^\prime = e^{x (\ln a)} \cdot \ln a. \nonumber\] Recall that the \(e^{x(\ln a)}\) term on the right hand side is just \(a^x\), our original function. Thus, the derivative contains the original function itself. We have \[y^\prime = y \cdot \ln a = a^x\cdot \ln a. \nonumber\] The Chain Rule, coupled with the derivative rule of \(e^x\), allows us to find the derivatives of all exponential functions.
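The result of Example 66 is easy to spot-check numerically. The sketch below (the helper function and test points are my additions, not the text's) verifies that \(\frac{d}{dx}\,a^x = a^x \ln a\) for a few bases and points:

```python
import math

def num_deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx a^x should equal a^x * ln(a) for any base a > 0, a != 1.
for a in (2.0, 0.5, 10.0):
    for x in (-1.0, 0.0, 1.5):
        expected = a ** x * math.log(a)
        assert abs(num_deriv(lambda t: a ** t, x) - expected) < 1e-4
```

Note that for \(0 < a < 1\) (such as \(a = 0.5\)) the derivative is negative, since \(\ln a < 0\), which matches the fact that \(a^x\) is decreasing there.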
The previous example produced a result worthy of its own "box.''

Theorem 20: Derivatives of Exponential Functions

Let \(f(x)=a^x\), for \(a>0, a\neq 1\). Then \(f\) is differentiable for all real numbers and \[f^\prime(x) = \ln a\cdot a^x. \nonumber\]

Alternate Chain Rule Notation

It is instructive to understand what the Chain Rule "looks like'' using "\(\dfrac{dy}{dx}\)'' notation instead of \(y^\prime\) notation. Suppose that \(y=f(u)\) is a function of \(u\), where \(u=g(x)\) is a function of \(x\), as stated in Theorem 18. Then, through the composition \(f \circ g\), we can think of \(y\) as a function of \(x\), as \(y=f(g(x))\). Thus the derivative of \(y\) with respect to \(x\) makes sense; we can talk about \(\dfrac{dy}{dx}.\)

This leads to an interesting progression of notation: \[\begin{align*}y^\prime &= f^\prime(g(x))\cdot g^\prime(x) \\ \dfrac{dy}{dx} &= y^\prime(u) \cdot u^\prime(x)\quad \text{(since \(y=f(u)\) and \(u=g(x)\))}\\ \dfrac{dy}{dx} &= \dfrac{dy}{du} \cdot \dfrac{du}{dx}\quad \text{(using "fractional'' notation for the derivative)}\end{align*}\]

Here the "fractional'' aspect of the derivative notation stands out. On the right hand side, it seems as though the "\(du\)'' terms cancel out, leaving \[ \dfrac{dy}{dx} = \dfrac{dy}{dx}.\] It is important to realize that we are not canceling these terms; the derivative notation of \(\dfrac{dy}{dx}\) is one symbol. It is equally important to realize that this notation was chosen precisely because of this behavior. It makes applying the Chain Rule easy with multiple variables. For instance, \[\dfrac{dy}{dt} = \dfrac{dy}{d\bigcirc} \cdot \dfrac{d\bigcirc}{d\triangle} \cdot \dfrac{d\triangle}{dt}.\] where \(\bigcirc\) and \(\triangle\) are any variables you'd like to use.

One of the most common ways of "visualizing" the Chain Rule is to consider a set of gears, as shown in Figure 2.18. The gears have 36, 18, and 6 teeth, respectively.
That means for every revolution of the \(x\) gear, the \(u\) gear revolves twice. That is, the rate at which the \(u\) gear makes a revolution is twice as fast as the rate at which the \(x\) gear makes a revolution. Using the terminology of calculus, the rate of \(u\)-change, with respect to \(x\), is \(\dfrac{du}{dx} = 2\).

Figure 2.18: A series of gears to demonstrate the Chain Rule. Note how \(\dfrac{dy}{dx}=\dfrac{dy}{du}\cdot\dfrac{du}{dx}\).

Likewise, every revolution of \(u\) causes 3 revolutions of \(y\): \(\dfrac{dy}{du} = 3\). How does \(y\) change with respect to \(x\)? For each revolution of \(x\), \(y\) revolves 6 times; that is, \[\dfrac{dy}{dx} = \dfrac{dy}{du}\cdot \dfrac{du}{dx} = 3\cdot 2 = 6.\] We can then extend the Chain Rule with more variables by adding more gears to the picture.

It is difficult to overstate the importance of the Chain Rule. So often the functions that we deal with are compositions of two or more functions, requiring us to use this rule to compute derivatives. It is often used in practice when actual functions are unknown. Rather, through measurement, we can calculate \(\dfrac{dy}{du}\) and \(\dfrac{du}{dx}\). With our knowledge of the Chain Rule, finding \(\dfrac{dy}{dx}\) is straightforward.

In the next section, we use the Chain Rule to justify another differentiation technique. There are many curves that we can draw in the plane that fail the "vertical line test.'' For instance, consider \(x^2+y^2=1\), which describes the unit circle. We may still be interested in finding slopes of tangent lines to the circle at various points. The next section shows how we can find \(\dfrac{dy}{dx}\) without first "solving for \(y\).'' While we can in this instance, in many other instances solving for \(y\) is impossible. In these situations, implicit differentiation is indispensable.
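As a closing aside, even the "ridiculous" derivative of Example 64 can be checked numerically. The sketch below (my addition, not the text's) compares a central-difference approximation of \(y = \tan^5(6x^3-7x)\) against the closed-form answer at a point safely away from the tangent function's vertical asymptotes:

```python
import math

def num_deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

y  = lambda t: math.tan(6 * t**3 - 7 * t) ** 5
yp = lambda t: ((90 * t**2 - 35)
                * (1 / math.cos(6 * t**3 - 7 * t)) ** 2   # sec^2(6x^3 - 7x)
                * math.tan(6 * t**3 - 7 * t) ** 4)

x = 0.1   # chosen well away from the vertical asymptotes of tan
assert abs(num_deriv(y, x) - yp(x)) < 1e-4
```

The assertion passes, confirming the step-by-step Chain Rule computation.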
How to Calculate the Price of Treasury Bills | The Motley Fool (2024)

Treasury bills are among the safest investments in the market. They're backed by the full faith and credit of the U.S. government, and they come in maturities ranging from four weeks to one year. When buying Treasury bills, you'll find that quotes are typically given in terms of their discount, so you'll need to calculate the actual price.

The calculation

Getting the price from the interest rate

To calculate the price, you need to know the number of days until maturity and the prevailing interest rate. Take the number of days until the Treasury bill matures and multiply it by the interest rate in percent. Divide the result by 360, as the Treasury quotes interest rates using the common accounting standard of a 360-day year. Then, subtract the resulting number from 100. That will give you the price of a Treasury bill with a face value of $100. If you want to invest more, you can adjust the figure accordingly.

As a simple example, say you want to buy a $1,000 Treasury bill with 180 days to maturity, yielding 1.5%. To calculate the price, take 180 days and multiply by 1.5 to get 270. Then, divide by 360 to get 0.75, and subtract 0.75 from 100. The answer is 99.25. Because you're buying a $1,000 Treasury bill instead of one for $100, multiply 99.25 by 10 to get the final price of $992.50.

Keep in mind that the Treasury doesn't make separate interest payments on Treasury bills. Instead, the discounted price accounts for the interest that you'll earn. For instance, in the preceding example, you'll receive $1,000 at the end of the 180-day period. Because you only paid $992.50, the remaining $7.50 represents the interest on your investment over that time frame. Treasury bill quotes can look complicated, but it's pretty easy to figure out the price.
With just a few simple calculations, you can convert quotes to Treasury-bill prices, and know what you'll need to pay to invest. To get more information on how to start investing -- in Treasury bills and other investment instruments -- head on over to our Broker Center.
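The article's recipe translates directly into a few lines of code. The sketch below (the function name and structure are mine, not Motley Fool's) reproduces the worked example:

```python
def tbill_price(face_value, days_to_maturity, discount_rate_pct):
    """Price a T-bill quoted on a bank-discount basis (360-day year),
    following the article's recipe."""
    discount_per_100 = days_to_maturity * discount_rate_pct / 360
    price_per_100 = 100 - discount_per_100
    return face_value * price_per_100 / 100

# The article's example: $1,000 face value, 180 days, quoted at 1.5%.
price = tbill_price(1000, 180, 1.5)
print(price)          # 992.5
print(1000 - price)   # 7.5 of interest earned over the 180 days
```

The $7.50 difference between face value and purchase price is the interest, since T-bills make no separate coupon payments.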
EViews Help: @mins

Minimum values (multiple). Returns a vector or svector containing the n minimum values of the elements of x.

Syntax: @mins(x, n[, s])

x: data object
s: (optional) sample string or object when x is a series or alpha
Return: vector or svector

The minimum n values may be written as \(x_{(1)}, x_{(2)}, \dots, x_{(n)}\), where the order statistic \(x_{(i)}\) denotes the \(i\)-th smallest element of x.

For series calculations, EViews will use the current or specified workfile sample.

Let x be a series of length 5 whose elements are 1, 3, 5, 4, 2. Then @mins(x,2) returns a vector of length 2 whose elements are 1 and 2.
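For readers outside EViews, the behavior is easy to mimic. A rough Python analogue (my sketch; it ignores the optional sample argument s):

```python
def mins(x, n):
    """Return the n smallest elements of x in ascending order,
    a rough analogue of EViews' @mins (sample handling omitted)."""
    return sorted(x)[:n]

# The help page's example: series 1, 3, 5, 4, 2, take the 2 smallest.
print(mins([1, 3, 5, 4, 2], 2))   # [1, 2]
```

For large inputs, `heapq.nsmallest(n, x)` from the standard library does the same job without fully sorting.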
Decoding State Vaccination Rates Using Educational Aptitude, Income, and Political Affiliation

November 12, 2021

COVID-19 has caused almost 700,000 (seven hundred thousand) deaths in the United States, and low vaccination rates are widely seen as undermining individual and community protection. The objective of this study was to evaluate risk factors associated with lower COVID-19 vaccination rates in the United States. The study evaluated the effect of red-blue political affiliation, and of each state's average educational aptitude score and per capita income, on state vaccination rates. The study found that states with concomitantly lower income and lower educational aptitude scores are less vaccinated, while states with higher income have higher vaccination rates even among those with lower educational aptitude scores. These findings remained significant after adjusting for red-blue political affiliation, where states with red political affiliation have lower vaccination rates. Further study is needed to evaluate how to stop online misinformation in states with low income and low educational aptitude scores, and whether such an effort would increase overall vaccination rates in the United States.

Online misinformation surrounding the COVID-19 vaccine is a major obstacle in fighting the coronavirus pandemic. Loomba et al. conducted randomized controlled trials in the UK and the USA showing how exposure to online misinformation around COVID-19 vaccines affects intent to vaccinate to protect oneself or others [1]. The study found that some sociodemographic groups are more affected by exposure to misinformation than others, and that scientific-sounding misinformation was more strongly associated with declines in vaccination intent. Policymakers are struggling to stop online misinformation while the COVID-19 pandemic takes thousands of lives worldwide.
In the setting of a highly transmissible and fatal COVID-19 pandemic, vaccine hesitancy is widely seen as undermining individual and community protection. Although the United States has adequate vaccine supply for its population, its vaccination effort faces multiple challenges. In pre-COVID-19-era research, lower vaccination rates were considered the result of a complex decision-making process described by the "3 Cs" model, which highlights complacency, convenience, and confidence [2]. Other research, from 2012, showed that acceptance of vaccination can also be influenced by cultural and religious roots [3]. Though multiple prior studies found that certain socio-demographic groups are more vulnerable to remaining unvaccinated, no prior study examined average state educational aptitude score, per capita income, and political affiliation together as risk factors for lower vaccination rates.

The US states' average educational aptitude scores were extrapolated from the McDaniel study published in 2006 [4]. McDaniel estimated intelligence quotient (educational aptitude score) from the National Assessment of Educational Progress (NAEP) standardized tests for reading and math (administered to a sample of public-school children in each of the 50 states). The means of the standardized reading scores for grades 4 and 8 were averaged across years, as were the means of the standardized math scores, for all 50 US states. The author offered two causal models that predicted state educational aptitude score (or state intelligence quotient), estimated from the average of mean reading and mean math scores. These models explained 83% and 89% of the variability of state educational aptitude score. The estimated educational aptitude scores showed positive correlations with gross state product, health, and government effectiveness, and negative correlations with violent crime [4].
The US vaccination data were obtained from the NPR website as of July 15, 2021 [5]. State-level per capita income for the years 2010 to 2014 was collected from U.S. Census Bureau data [6]. The 50 US states' 2020 presidential election results (red-blue political affiliation) were collected from the Politico website's election result map [7]. Red political affiliation was coded as zero and blue political affiliation as one for the purpose of the study. All five data sets were merged using Python data analysis software. In addition, educational aptitude score, per capita income, and state vaccination rates were ranked to demonstrate trends in the scatter plots. The US states were ranked 1 to 50 by average educational aptitude score, where rank 1 was the highest score and 50 the lowest. The vaccination data were ranked 1 to 50, where rank 1 was the most fully vaccinated state and 50 the least. State-level per capita income was likewise ranked 1 to 50, where rank 1 was the highest per capita income and 50 the lowest. Pearson's correlation coefficients were obtained for all independent variables, with two-sided t tests considered significant at 0.05. A multivariate linear regression analysis was conducted using state vaccination rate as the dependent variable and state educational aptitude, average per capita income, and red-blue political affiliation as independent variables. A forward selection method was used to determine the final regression model. The statistical data analysis software Stata was used.

A total of fifty (50) US states were considered in the data analysis. The average US full vaccination rate was 47.5% (±8.5) as of July 15, 2021, the average US population educational aptitude score was 100 (±2.71), and the average per capita income was $28,889.
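The ranking and Pearson-correlation steps described in the methods can be sketched in a few lines of plain Python. The numbers below are made-up illustrative values for five hypothetical states, not the study's data:

```python
import math

def rank(values, descending=True):
    """Rank 1..n, with rank 1 assigned to the largest value,
    mirroring the study's ranking convention."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=descending)
    ranks = [0] * len(values)
    for position, i in enumerate(order, start=1):
        ranks[i] = position
    return ranks

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical numbers for five states -- NOT the study's data.
vax = [38.8, 43.0, 47.5, 51.3, 55.2]          # percent fully vaccinated
income = [24000, 26000, 29000, 31000, 34000]  # per capita income
income_rank = rank(income)                    # rank 1 = highest income

# Raw income correlates positively with vaccination; its rank
# (where 1 = highest) correlates negatively, as in Table 1.
assert pearson(vax, income) > 0
assert pearson(vax, income_rank) < 0
```

This sign flip between a raw variable and its rank (where rank 1 is the highest value) explains why the correlation matrix below pairs each positive coefficient with a negative one.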
Table 1 shows that state per capita income and income rank are strongly correlated with percent fully vaccinated, with correlation coefficients of 0.69 and -0.71, respectively. This indicates that state average income has a parallel relationship with state vaccination rates: vaccination rates increase as income increases. State educational aptitude rank and average educational aptitude score were also significantly correlated with percent fully vaccinated, with correlation coefficients of -0.47 and 0.45, respectively. This indicates that educational aptitude score has a parallel relationship with vaccination rates: vaccination increases as educational aptitude score increases. The correlation coefficient between state educational aptitude rank and per capita income rank was highly significant at 0.56 (p < 0.001). The correlation between red-blue affiliation and educational aptitude score was statistically non-significant (p > 0.05), but red-blue affiliation was highly correlated with state vaccination rates and state per capita income, with correlation coefficients of 0.77 and 0.59, respectively (p < 0.001).

Table 1: Correlation matrix of the variables used in the analysis, demonstrating the effect of population average Educational Aptitude (EA) score and per capita income on percent fully vaccinated.
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 1. Vaccination Rank | 1 | | | | | | |
| 2. % Fully Vaccinated | -0.99 | 1 | | | | | |
| 3. EA Rank | 0.45 (p<0.001) | -0.47 (p<0.001) | 1 | | | | |
| 4. Average EA | -0.44 (p<0.002) | 0.45 (p<0.001) | -0.98 (p<0.001) | 1 | | | |
| 5. Income Rank | -0.72 (p<0.001) | -0.71 (p<0.001) | 0.56 (p<0.001) | -0.55 (p<0.001) | 1 | | |
| 6. Per Capita Income | -0.71 (p<0.001) | 0.69 (p<0.001) | -0.52 (p<0.001) | 0.52 (p<0.001) | -0.97 (p<0.001) | 1 | |
| 7. Red-blue Political Affiliation | -0.78 (p<0.001) | 0.77 (p<0.001) | -0.18 (p=0.21) | 0.16 (p=0.26) | -0.60 (p<0.001) | 0.59 (p<0.001) | 1 |

Figure 1 shows a decreasing trend in the percentage of state populations fully vaccinated among populations with lower educational aptitude scores. These lower vaccination rates may indicate acceptance of online misinformation about vaccination in states with low educational aptitude scores. The association has an R-squared value of 22.48%, meaning 22.5% of the variability in vaccination is explained by state population educational aptitude score.

Fig 1: Scatter plot of the percentage of each US state's total population fully vaccinated, by state rank of educational aptitude score.

Figure 2 shows a decreasing trend in state vaccination rates among lower-income populations. These lower vaccination rates may indicate acceptance of online misinformation about vaccination among low-income populations. This association has an R-squared value of 49.9%, meaning 49.9% of the variability in vaccination is explained by income.

Fig 2: Scatter plot of the percentage of each US state's total population fully vaccinated, by state rank of per capita income.

Figure 3 shows a decreasing trend in per capita income among populations with lower educational aptitude scores. Previous studies also reported that national wealth strongly correlates with a population's average educational aptitude scores [6].
According to the current study, population educational aptitude score explains 28.1% of the variability in population income.

Fig 3: Scatter plot of US state per capita income by state rank of educational aptitude score.

A multivariate analysis of state percent vaccination was conducted using the independent effects of educational aptitude score and per capita income, the interaction effect of average educational aptitude score and per capita income, and red-blue affiliation as predictor variables (Table 2). When the main effects of average educational aptitude score and per capita income were entered in the model, all variables were non-significant. But when those main effects were removed from the model, the cross-over interaction effect of average educational aptitude score and per capita income, and red-blue affiliation, were strongly significant. The R-squared for the final model was 0.70, indicating that 70% of the variability in vaccination was explained by these predictors. This means the effect of per capita income on the vaccination rate reverses depending on the value of the educational aptitude score. These findings are further explained in Table 3. The linear regression model R-squared with only the cross-over interaction effect of average educational aptitude score and per capita income was 0.49, indicating that red-blue political affiliation explains 21% (R-squared = 0.21) of the variability in the final model.

Table 2: Regression analysis demonstrating the effect of the product of educational aptitude score and per capita income on percent fully vaccinated (R-squared = 0.70).
| Variable | Beta Coefficient | Standard Error | Lower Limit | Upper Limit | P-value |
|---|---|---|---|---|---|
| Product of educational aptitude score and per capita income | 0.71 | 0.18 | 0.35 | 1.06 | <0.001 |
| Red-blue political affiliation | 9.39 | 1.63 | 6.01 | 12.58 | <0.001 |
| Constant | 22.79 | 4.69 | 13.35 | 32.24 | <0.001 |

Given that this study found a significant cross-over (opposite-direction) interaction effect of average educational aptitude score and per capita income in predicting percent fully vaccinated, the quartiles of educational aptitude score and income were cross-tabulated by percent fully vaccinated in Table 3. The findings show a 38.8% vaccination rate in the lowest income quartile with the lowest educational aptitude scores, while in the highest income quartile with the highest educational aptitude scores the vaccination rate was highest, at 55.2%. Such a cross-over interaction effect, showing opposite effects of educational aptitude score and income on vaccination rates, is an intriguing and unique finding and should be of interest to state policymakers. It suggests that states with concomitantly lower income and lower educational aptitude scores are the most vulnerable to accepting online misinformation regarding COVID-19 vaccination.
Table 3: Percentage fully vaccinated by quartiles of educational aptitude (EA) score and income.

| Income quartile | EA Q1 (>102.8) | EA Q2 (100.85–102.8) | EA Q3 (98.6–100.85) | EA Q4 (<98.6) | Total |
|---|---|---|---|---|---|
| Q1 (>$30,830) | 55.2% (±8.5), n=6 | 54.2% (±1.8), n=3 | 54.5% (±6.8), n=4 | n=0 | 54.7% (±6.5), n=13 |
| Q2 ($27,546–$30,830) | 54.3% (±10.0), n=5 | 47.2% (±10.1), n=3 | 49.3% (±2.8), n=2 | 52% (±1.0), n=2 | 51.3% (±8.1), n=12 |
| Q3 ($25,229–$27,546) | 44.9% (±1.8), n=2 | 44.7% (±4.4), n=3 | 43.1% (±3.6), n=4 | 43.0% (±4.1), n=4 | 43.7% (±3.4), n=13 |
| Q4 (<$25,229) | n=0 | 41.2% (±3.8), n=3 | 41.6% (±4.1), n=2 | 38.8% (±7.8), n=7 | 39.9% (±6.3), n=12 |
| Total | 53.2% (±8.8), n=13 | 46.8% (±7.1), n=12 | 47.7% (±7.1), n=12 | 42.1% (±7.6), n=13 | 47.5% (±8.5), n=50 |

COVID-19 has cost almost 700,000 (seven hundred thousand) lives in the United States, and death rates are very high among the unvaccinated. The US states with concomitantly the lowest educational aptitude scores and the lowest per capita income had the lowest vaccination rate, 38.8%, as of July 15, 2021. The average US full vaccination rate was 47.5% at that time. The cross-over interaction effect of income and educational aptitude score remains significant even after adjusting for red-blue political affiliation, where red-affiliated states have significantly lower vaccination rates than blue-affiliated states. Online misinformation about COVID-19 vaccination possibly led to lower vaccination rates in many US states. Prior research on misinformation focused on the context of the 2016 US presidential election [8,9]. The current study found lower vaccination rates possibly related to acceptance of misinformation in the red-affiliated US states.
There is a strong cross-over interaction effect of low income and low educational aptitude score, indicating that states in the lowest income quartiles have the lowest vaccination rates (i.e., are most affected by online misinformation about COVID-19 vaccination) when they also have the lowest educational aptitude scores. The study also found that state populations in the highest income quartiles do not appear to be affected by online misinformation even when they have lower educational aptitude scores. This is a unique cross-over interaction effect, and no other study has reported similar findings in the past. The study also found, in univariate analysis, that income explains almost 50% of the variability in vaccination, with lower-income states tending to be less vaccinated. Another univariate analysis showed that states with lower educational aptitude scores are less likely to be vaccinated than those with high scores, with educational aptitude score explaining 23% of the variability in vaccination. Similarly, red-blue political affiliation explains 21% of the variability in the final multivariate model, and the final multivariate model explains 70% of the variability in US vaccination rates. It is possible that even individuals within a state who have a high educational aptitude score are vulnerable to misinformation, as such misinformation traditionally propagates through friends, families, and acquaintances in society. Roozenbeek et al. reported that higher trust in scientists and higher numeracy skills (the ability to use, interpret, and communicate mathematical data, perhaps comparable to educational aptitude score) were associated with lower susceptibility to COVID-19-related misinformation. That study demonstrated a clear link between susceptibility to misinformation and vaccine hesitancy, and suggests that interventions aiming to improve critical thinking and trust in science may be a promising avenue for future research [10].
It is possible that socio-demographic groups with lower educational aptitude scores and income fail to interpret scientific data themselves and depend on trusted news sources to understand scientific or mathematical data. This may lead to acceptance of online misinformation, and in turn to lower vaccination rates, among low-income, low-educational-aptitude groups in the United States. The current study used state averages rather than individual data to find predictors of reduced vaccination. The only study conducted to estimate state educational aptitude was the McDaniel study from 2006 [4]; this study assumed the average educational aptitude rankings stayed the same over the intervening years. Moreover, the McDaniel study calculated a surrogate measure of state intelligence quotient, using the average of mean reading and mean math scores. It is possible that certain geo-political or state population data reflect a more accurate picture than individual data, given that misinformation needs a society- or group-level enabler to propagate across a state.

Conclusion: States with concomitantly lower income and lower educational aptitude scores have lower vaccination rates after adjusting for red-blue political affiliation. The study also found that states with red political affiliation have significantly lower vaccination rates. Further study is needed to evaluate how to stop online misinformation in states with the lowest income and educational aptitude scores, and whether such an effort would increase overall vaccination rates in the United States.

Conflict of Interest: The author has no conflict of interest to disclose.

References:

[1] Loomba S, de Figueiredo A, Piatek SJ, et al. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat Hum Behav 5, 337–348 (2021).
[2] MacDonald NE; SAGE Working Group on Vaccine Hesitancy. Vaccine hesitancy: Definition, scope, and determinants. Vaccine. 2015 Aug 14;33(34):4161-4. doi: 10.1016/j.vaccine.2015.04.036. Epub 2015 Apr 17. PMID: 25896383.
[3] Laberge C, Guay M, Clement P, Dube E, Roy R, Bettinger J. Workshop on the cultural and religious roots of vaccine hesitancy: Explanation and implications for Canadian healthcare. Longueuil, Quebec, Canada, December 2012 (2015).
[4] McDaniel MA (2006). Estimating state IQ: Measurement challenges and preliminary correlates. Intelligence, 34(6), 607–619. doi:10.1016/j.intell.2006.08.007.
[5] https://www.npr.org/sections/health-shots/2021/01/28/960901166/how-is-the-covid-19-vaccination-campaign-going-in-your-state
[6] "ACS Demographic and Housing Estimates, 2010-2014 American Community Survey 1-Year Estimates." U.S. Census Bureau. Archived from the original on 2020-02-14. Retrieved 2016-02-12.
[7] The 50 US states' 2020 presidential election results, from the Politico website election result map: https://www.politico.com/2020-election/results/president/
[8] Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science 363, 374–378. doi:10.1126/science.aau2706.
[9] Allcott H, Gentzkow M (2017). Social media and fake news in the 2016 election. J. Econ. Perspect. 31, 211–236. doi:10.1257/jep.31.2.211.
[10] Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman ALJ, Recchia G, Van der Bles AM, Van der Linden S (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10). https://doi.org/10.1098/rsos.201199

Author: Azad Kabir, MD MSPH; Raeed Kabir; Jebun Nahar, PhD; Ritesh Sengar
Affiliations: Doctor Ai, LLC; 1120 Beach Blvd, Biloxi, MS 39530
Corresponding author: Azad Kabir, MD, MSPH, ABIM; Doctor Ai, LLC; 1120 Beach Blvd, Biloxi, MS 39530; Email: azad.kabir@gmail.com; Cell: 228-342-6278
Daniel - Ann Arbor Tutor Certified Tutor I am a graduate student at UCI studying Optical Engineering. I have six years of experience helping students with math and science from elementary school topics through university topics. I believe that as a teacher, it is important to make sure you aren't simply stating the answers. The most important part of learning is developing a link from a question to the correct answer, a path the student can follow on their own terms to reach the correct answer choice. This isn't something that comes with forced repetition and no explanation. I try to help students master a set of tools so they have a path to follow and aren't left to grope around in the dark. It's important for me to understand the way the student's mind works. If they are having trouble understanding a problem, I do my best to walk through the processes they are trying out. Once I understand how the student sees the problem I can identify what tool they need to fix it. I have had some amazing teachers in my academic career and they really made the difference for me. I want to help inspire others to their full potential. I am patient and attentive, but I leave space for the student to come to the answer on their own. Test Scores GRE Quantitative: 167 Video games, Rugby, Football, Astronomy, Optics, Physics Tutoring Subjects ACCUPLACER Arithmetic COMPASS Mathematics DAT Quantitative Reasoning GED Math GMAT Quantitative High School Chemistry High School Physics IB Further Mathematics IB Mathematics: Analysis and Approaches IB Mathematics: Applications and Interpretation ISEE Prep ISEE-Lower Level Mathematics Achievement ISEE-Lower Level Quantitative Reasoning ISEE-Middle Level Mathematics Achievement ISEE-Middle Level Quantitative Reasoning ISEE-Upper Level Mathematics Achievement ISEE-Upper Level Quantitative Reasoning SAT Subject Test in Mathematics Level 2 SAT Subject Tests Prep
SES Program and Resources - IDSS

First fall term:
• IDS.900 Doctoral Seminar in Social and Engineering Systems

Take 3 of the 4 following classes.

Information, Systems, and Decision Science
5 classes. These will be rigorous classes in the areas of probabilistic modeling, statistics, optimization, and systems/control theory. Classes used to satisfy the core can be counted toward this requirement. However, the remaining classes should be at a more advanced level. One subject must involve the statistical processing of data. One subject must have substantial mathematical content (as defined by the IDSS-GPC). Two classes must belong to a sequence that provides increasing depth on a particular topic.

Social Science
4 classes. A student proposes a coherent and rigorous program of study in the social sciences that provides the background necessary for the student's research. Classes used to satisfy the core can be counted toward this requirement. However, the remaining courses should be at a more advanced level. Three classes must form a coherent collection that builds depth in a particular social science focus area.

Problem Domain
2 classes. A student takes a total of two classes in the application domain of their research. One class may also be counted toward the social science requirement. Another class may be satisfied by an internship or independent study in which the student is graded on their performance of hands-on work in a particular domain.

Teaching
1 class. A student serves as a teaching trainee for one subject and receives credit for 20 units of IDS.960 Teaching in Data, Systems, and Society.

Qualifying Exams
A student must pass both written and oral qualifying exams. The written qualifying exams are completed through strong performance in three core subjects from different areas. This is normally accomplished by the end of a student's third semester in the program.
Subjects must be at least 9 units (e.g., 14.121 and 14.122 only count if taken together). Between the student’s fourth and sixth semester in the program, and after the student passes the written qualifying exams, they take the oral qualifying exam. The oral qualifying exam includes a research presentation by the student. To pass the oral qualifying exam a student must demonstrate the ability to undertake doctoral-level research and to handle questions about that research, including extensions to related problems. Classes and qualifying exams are the necessary preliminaries to the SES doctoral program. However, immersion in research is the centerpiece of the SES program, and it is something doctoral students engage with from their very first semester. By their third year in the program, and as they prepare for the oral qualifying exam, a student’s focus will shift predominantly to research. Their research progress will be marked in a number of ways, including by a thesis proposal (typically in their fourth year), and a dissertation defense (typically in their fifth year). Publications in peer reviewed journals are also expected. Explore examples of SES research in News and Graduates. Additional Resources
Time Series Analysis and Prediction of COVID-19 pandemic using Dynamic Harmonic Regression Models - Open Access Pub

The rapidly spreading Covid-19 virus and its variants, especially in metropolitan areas around the world, became a major public health concern. Modelling the trajectory of the Covid-19 pandemic is an urgent statistical challenge in the United States for which there are few solutions. In this paper, we demonstrate combining Fourier terms, which capture seasonality, with ARIMA errors that model the other dynamics in the data. We analyze a 156-week national-level COVID-19 dataset using a Dynamic Harmonic Regression model, including simulation analysis and accuracy improvement, covering 2020 to 2023. Most importantly, we outline new pathways which may serve as targets for developing new solutions and approaches.

Author Contributions
Received 18 Mar 2023; Accepted 25 Apr 2023; Published 02 May 2023
Academic Editor: Raul Isea, Fundación Instituto de Estudios Avanzados - IDEA
Checked for plagiarism: Yes
Review by: Single-blind
Copyright © 2023 Lei Wang
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Competing interests: The authors have declared that no competing interests exist.
Citation: Lei Wang (2023) Time Series Analysis and Prediction of COVID-19 pandemic using Dynamic Harmonic Regression Models. Journal of Model Based Research - 2(1):28-36. https://doi.org/10.14302/issn.2643-2811.jmbr-23-4528

The COVID-19 pandemic has had a tremendous impact on the world for 3 years and spread to more than 200 countries worldwide, leading to more than 36 million confirmed cases as of October 10, 2020.
Some well-respected organizations such as Johns Hopkins University, the Centers for Disease Control and Prevention, the World Health Organization and the United States Census Bureau are involved in the study and tracking of the Covid-19 pandemic ^2. To respond to this urgent public health concern, we used 156 weekly time series datasets to evaluate the seasonal patterns of COVID-19 cases and mortality in the United States, with the objective of determining the trajectory of the Covid-19 pandemic. In addition, implementation in R and simulation analysis can improve forecasting accuracy.

Given my prospective research interest in Data Science, smart data analytics is giving professionals and the public more insight than ever before into the factors at play. From assessing risks to analyzing evolving trends, we are now able to anticipate outcomes more accurately thanks to the abundance of information available to academics and professionals. Our analysis can help in understanding the trends of the disease outbreak and provide suggestions and guidance for affected countries.

Given the complex nature of virus transmission, traditional epidemic models such as regression and ARIMA methods have been applied to predict its spread. In particular, Dynamic Harmonic Regression (DHR) approaches were used to predict the spreading trends of COVID-19, such as new cases and deaths. We reviewed studies that implemented these strategies ^10.

Dynamic Harmonic Regression (DHR) is a non-stationary time-series analysis approach used to identify trend, seasonal, cyclical and irregular components within a state space framework. Many researchers have studied these forecasting methods. Dr. Kumar and Dr. Suan (2020) use an ARIMA model and day-level information on COVID-19 spread, for cumulative cases from the whole world and the 10 most affected countries, to forecast the impact of the virus in the affected countries and worldwide ^1.
Also, Dr. Fuad Ahmed Chyon and Dr. Nazmul Hasan Suman employed an ARIMA model to analyze the temporal dynamics of the worldwide spread of COVID-19 in the time window from January 22, 2020 to April 7, 2020 ^2. Dr. Tandan, Dr. Acharya, Dr. Pokharel and Dr. Timilsina aimed to discover symptom patterns and overall symptom rules, including rules disaggregated by age, sex, chronic condition, and mortality status, among COVID-19 patients.

A Short Review of Covid-19 Situations

In early December 2019, an outbreak of coronavirus disease 2019 (COVID-19), caused by a novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), occurred in Wuhan City, Hubei Province, China. On January 30, 2020 the World Health Organization declared the outbreak a Public Health Emergency of International Concern (PHEIC). As of February 14, 2020, 49,053 laboratory-confirmed cases and 1,381 deaths had been reported globally. In March 2020, the Journal of the American Medical Association Ophthalmology reported that COVID-19 can be transmitted through the eye. One of the first warnings of the emergence of the SARS-CoV-2 virus came late in 2019 from a Chinese ophthalmologist, Li Wenliang, MD, who treated patients in Wuhan and later died at age 34 from COVID-19. On December 18, 2020, after demonstrating 94 percent efficacy, the NIH-Moderna vaccine was authorized by the U.S. Food and Drug Administration (FDA) for emergency use. Just days earlier, the similar Pfizer/BioNTech vaccine had become the first COVID-19 vaccine to be authorized for use in the United States. In the late summer and fall of 2021, the delta variant was the dominant strain of COVID-19 in the U.S. On 26 November 2021, WHO designated the variant B.1.1.529 a variant of concern, named Omicron. Director of the National Institute of Allergy and Infectious Diseases Anthony Fauci gave an update on the Omicron COVID-19 variant during the daily press briefing at the White House on December 1, 2021 in Washington, DC.
He said that we will likely learn to live with COVID-19 as we do with the common cold and flu ^10. Globally, as of 6:32pm CET, 27 January 2023, there had been 752,517,552 confirmed cases of COVID-19, including 6,804,491 deaths, reported to WHO. As of 24 January 2023, a total of 13,156,047,747 vaccine doses had been administered.

Data Collection

The data for the ongoing Covid-19 outbreak in the United States is collected from the Centers for Disease Control and Prevention. The columns of this dataset include the total number of weekly cases, weekly deaths and weekly test volume of Covid-19 patients, accumulated over all the states, on a weekly basis from 29th Jan 2020 to 18th Jan 2023. Total cases per 100,000 allow for comparisons between areas with different population sizes.

Weekly data, like stock prices, employment numbers, and other economic indicators, is difficult to work with because the seasonal period (the number of weeks in a year) is both large and non-integer. The average number of weeks in a year is 52.18. Most of the methods we have considered require the seasonal period to be an integer. Even if we approximate it by 52, most of the methods will not handle such a large seasonal period efficiently. So far, many publications and researchers have considered relatively simple seasonal patterns, such as quarterly and monthly data. However, higher-frequency time series often exhibit more complicated seasonal patterns. For example, daily data may have a weekly pattern as well as an annual pattern. Hourly data usually has three types of seasonality: a daily pattern, a weekly pattern, and an annual pattern. Even weekly data can be challenging to forecast, as it typically has an annual pattern with a seasonal period of 365.25/7 ≈ 52.179 on average. Exponential smoothing models did not seem applicable, and ARIMA modelling works poorly with large integer seasonal periods (e.g.
days/weeks rather than months/quarters), and also struggles with a non-integer seasonal period (i.e. 52 weeks some years, 53 weeks other years).

Advanced Forecasting Model: Dynamic Harmonic Regression (DHR)

There are several methods for incorporating seasonality into a forecasting model. One common approach is to use time-series models such as SARIMA (Seasonal Autoregressive Integrated Moving Average) or Seasonal Exponential Smoothing. These models can capture the seasonal patterns in the data and adjust the forecast accordingly.

Classical time series processes are usually stationary, but many applied time series, particularly those arising in economic and business areas, are non-stationary. Relative to the class of covariance stationary processes, non-stationary time series can arise in many different ways. They could have non-constant means µ[t], time-varying second moments such as non-constant variance σ^2, or both of these properties ^9.

When applied to Covid-19 data, taking the natural logarithm of the number of cases or deaths can help stabilize the variance of the data and make the trend more apparent, especially in the early stages of the pandemic when the growth was exponential. This can also help identify whether there are any underlying patterns or seasonality in the data. After applying the log transformation, the resulting data will have a more linear trend and a constant variance, which makes it easier to model using standard statistical techniques such as linear regression or ARIMA models.

Many models used in practice are of the simple ARIMA type, which has a long history and was formalized by Box and Jenkins ^6. ARIMA stands for Autoregressive Integrated Moving Average, where 'I' stands for integration; in an ARIMA(p, d, q) model for an observed series, p is the order of autoregression, d the order of differencing, and q the order of the moving average ^5.
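The effect of the log transformation described above can be illustrated with a small stdlib sketch. The series here is hypothetical toy data (not the CDC dataset): an exponentially growing count becomes a straight line on the log scale, with constant week-over-week increments.

```python
import math

# Hypothetical exponential-growth series, loosely mimicking early-pandemic
# weekly counts: y_t = 100 * 1.3**t (assumed toy data, not the CDC dataset).
cases = [100 * 1.3 ** t for t in range(12)]

# The log transform turns multiplicative growth into a linear trend.
log_cases = [math.log(y) for y in cases]

# On the log scale, the week-over-week increments are all log(1.3),
# which is the "more linear trend, constant variance" property the
# text describes.
increments = [b - a for a, b in zip(log_cases, log_cases[1:])]
```

On the raw scale the increments grow without bound, so any constant-variance model fits poorly; after the transform the same series is trivially linear.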
Since we are also taking into account the seasonal pattern, even if it is weak, we should also examine the seasonal ARIMA process. This model is built by adding seasonal terms to the non-seasonal ARIMA model we mentioned before. One shorthand notation for the model is ARIMA(p, d, q)(P, D, Q)m, where (p, d, q) is the non-seasonal part and (P, D, Q)m the seasonal part: P = seasonal AR order, D = seasonal differencing, Q = seasonal MA order, and m is the number of observations before the next year starts, i.e. the seasonal period ^12. The seasonal parts contain the same kinds of terms as the non-seasonal components, but with backshifts of the seasonal period. For instance, take an ARIMA(p, d, q)(P, D, Q)m model for weekly data (m = 52). Without differencing operations, this process can be formally written as:

φ_p(B) Φ_P(B^m) y[t] = θ_q(B) Θ_Q(B^m) ε[t]

where B is the backshift operator. A seasonal ARIMA model thus combines non-seasonal and seasonal factors in a multiplicative fashion.

The time series models in the ARIMA and Exponential Smoothing families allow for the inclusion of information from past observations of a series, but not for the inclusion of other information that may also be relevant. For example, the effects of holidays, competitor activity, changes in the law, the wider economy, or other external variables may explain some of the historical variation and may lead to more accurate forecasts. On the other hand, regression models allow for the inclusion of a lot of relevant information from predictor variables but do not allow for the subtle time series dynamics that can be handled with ARIMA models. An alternative approach uses a dynamic harmonic regression model. Next, we tried to extend ARIMA models in order to allow other information to be included. Firstly, we considered a regression model whose system is composed of four components: trend (T), a sustained cyclical component (C) with period different from the seasonality, seasonal (S), and white noise (ϵ[t]) ^9.
The measured values of y are the output (observations) series of a system of stochastic state space equations, which can then be broken down to allow for estimation of the four components. So for such time series, we prefer a harmonic regression approach, where the seasonal pattern is modelled using Fourier terms, with short-term time series dynamics handled by an ARIMA error. In the following example, the number of Fourier terms was selected by minimising the AICc. The order of the ARIMA model is also selected by minimising the AICc, although that is done within the auto.arima() function in R.

Dynamic harmonic regression is based on the principle that a combination of sine and cosine functions can approximate any periodic function:

y[t] = β[0] + Σ_{j=1..K} ( α[j] sin(2πjt/m) + β[j] cos(2πjt/m) ) + η[t]

where m is the seasonal period, α[j] and β[j] are regression coefficients, and η[t] is modeled as a non-seasonal ARIMA process. The fitted model has 18 pairs of Fourier terms (K = 18), where η[t] is an ARIMA(4,1,1) process. Because η[t] is non-stationary, the model is actually estimated on the differences of the variables on both sides of this equation. There are 36 parameters to capture the seasonality, which is rather a lot, but apparently required according to the AICc selection. The total number of degrees of freedom is 42 (the other six coming from the 4 AR parameters, 1 MA parameter, and the drift parameter) ^4.

The advantages of this approach are:
Flexibility: a DHR model can be used to model data with various levels of complexity, including data with multiple seasonal patterns, irregular patterns, and non-stationary patterns. It allows seasonality of any length. The short-term dynamics are easily handled with a simple ARIMA error.
Especially, for data with more than one seasonal period, Fourier terms of different frequencies can be included. The smoothness of the seasonal pattern can be controlled by K, the number of Fourier sin and cos pairs: the seasonal pattern is smoother for smaller values of K. The only real disadvantage (compared to a seasonal ARIMA model) is that the seasonality is assumed to be fixed: the seasonal pattern is not allowed to change over time. In practice, though, seasonality is usually remarkably constant, so this is not a big disadvantage except for long time series.

Main Results

Forecasting Accuracy

Time series analysis and forecasting have been an active research area over the last five decades. Various kinds of forecasting models have been developed, and researchers have relied on statistical techniques to predict time series data. The accuracy of time series forecasting is fundamental to many decision processes, and hence research on improving the performance of forecasting models has never stopped. However, time series datasets are often nonlinear and irregular ^3. An interdisciplinary approach, as afforded by the study of Data Science, critically analyzes the relevant disciplinary insights and attempts to produce a more comprehensive understanding or a holistic solution.

The author measured forecasting performance by the mean absolute error (MAE), root mean square error (RMSE), root relative squared error (RSE), and mean absolute percentage error (MAPE). The MAE criterion is most appropriate when the cost of a forecast error rises proportionally with the absolute size of the error. With RMSE, the cost of the error rises as the square of the error, so large errors can be weighted far more than proportionally. Whether MAE or RMSE is more appropriate surely varies according to circumstances and individual institutions, and in any case we will find that the several measures pick the same model in all but a few instances ^8.
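The MAE/RMSE trade-off just described can be made concrete in code. This is a minimal stdlib sketch with my own function names (not the paper's code):

```python
import math

def mae(pred, obs):
    # Mean absolute error: cost rises proportionally with |error|.
    return sum(abs(p - z) for p, z in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    # Root mean square error: large errors are weighted more than
    # proportionally, since each error enters squared.
    return math.sqrt(sum((p - z) ** 2 for p, z in zip(pred, obs)) / len(obs))

def mape(pred, obs):
    # Mean absolute percentage error, in percent of the observed value.
    return 100 * sum(abs((z - p) / z) for p, z in zip(pred, obs)) / len(obs)
```

Because RMSE squares each error, two forecast sets with the same MAE can have very different RMSEs when one of them concentrates its error in a few large misses, which is exactly why the measures occasionally disagree on the best model.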
These measures were calculated using the following equations, where P[t] is the predicted value at time t, Z[t] is the observed value at time t, and N is the number of predictions:

MAE = (1/N) Σ_{t=1..N} |P[t] - Z[t]|
RMSE = sqrt( (1/N) Σ_{t=1..N} (P[t] - Z[t])^2 )
MAPE = (100/N) Σ_{t=1..N} |(Z[t] - P[t]) / Z[t]|

For model selection, the corrected Akaike information criterion is

AICc = AIC + 2k(k + 1) / (n - k - 1)

where k is the number of parameters and n the number of samples. It is important to note that these information criteria tend not to be good guides to selecting the appropriate order of differencing (d) of a model, but only to selecting the values of p and q. This is because the differencing changes the data on which the likelihood is computed, making the AIC values between models with different orders of differencing not comparable ^4.

In this section, the focus is on statistical methodology and forecasting results for time series datasets regarding the Covid-19 pandemic. The comparison table (Table 1) below shows all the potential forecasting models. A given forecasting model may have a systematic positive or negative bias and do a poor job of tracking the actual mean of value changes, and measures such as RMSE and MAE could well miss this defect. The log-transformation DHR performs best among the models: evaluated against the different criteria, it minimizes RMSE and MAE and shows relatively better forecasting accuracy.

Table 1.
Comparison table for forecasting models

Model | ME | RMSE | MAE | MPE | MAPE | MASE | AICc
DHR with ARIMA(2,0,1) errors | 8447.324 | 148729.5 | 92906.71 | 43052.44 | 48766.5 | 0.1582 | -18.38
ARIMA(2,1,0)(0,1,0)^52 | -4511.181 | 132336.8 | 57721.63 | -3.0858 | 8.7082 | 0.0983 | 1711.99
Dynamic regression with ARIMA(2,1,3) errors | 16520.74 | 162314.7 | 94507.58 | 0.1878 | 19.2053 | 0.1609 | 3105.5
Log transformation ARIMA(1,1,5)(0,0,1)^52 | 0.00654 | 0.25964 | 0.18395 | 0.26225 | 1.8929 | 0.10279 | -419.08
Log transformation DHR | 0.01285 | 0.1753 | 0.13024 | 0.34485 | 1.4169 | 0.0728 | 15.88

Collectively, these models are capable of identifying parameters that affect dissimilarities in COVID-19 spread across various regions or populations, combining numerous intervention methods, and implementing what-if scenarios by integrating data from diseases having trends analogous to the COVID-19 pandemic ^5. (Figure 1) As with the forecasts in Table 2 and Table 3, the numbers of weekly cases and weekly deaths are projected to continue increasing in the following weeks, showing a noticeable increase in the near term, although weekly cases are projected to decrease at the end of May 2023. The weekly deaths forecasts, in contrast, show uncertainty and fluctuations until the end of 2023. The log-transformation DHR shows the smallest RMSE, indicating it is a better model than ARIMA(p,d,q)(P,D,Q)[m] and dynamic regression with ARIMA errors. We can easily confirm from the above results that the transformation improves accuracy when the time series has an unstabilized variance. The results also show that when there are long seasonal periods, a dynamic regression with Fourier terms is often better than the other models we have considered on the raw datasets.

Table 2.
Forecasting results for weekly cases from regression with ARIMA(3,1,1) errors

Date | Point Forecast | Lo 80 | Hi 80 | Lo 95 | Hi 95
2023.01.04 | 11.84703 | 2.16397924 | 21.53008 | -2.9619173 | 26.65597
2023.01.11 | 11.67934 | 1.86601883 | 21.49266 | -3.3288382 | 26.68751
2023.01.18 | 11.39147 | 1.44959306 | 21.33336 | -3.813321 | 26.59627
2023.01.25 | 11.09728 | 1.02847775 | 21.16608 | -4.3016243 | 26.49618
2023.02.01 | 11.01559 | 0.821447 | 21.20973 | -4.5750067 | 26.60619
2023.02.08 | 11.27106 | 0.95309604 | 21.58902 | -4.5089033 | 27.05102
2023.02.15 | 11.77707 | 1.33675702 | 22.21738 | -4.1900106 | 27.74415
2023.02.22 | 12.34798 | 1.7867302 | 22.90922 | -3.8040555 | 28.50001
2023.03.01 | 12.83167 | 2.15085746 | 23.51248 | -3.5032215 | 29.16656
2023.03.08 | 13.12814 | 2.32908592 | 23.92719 | -3.3875856 | 29.64386
2023.03.15 | 13.20719 | 2.29118114 | 24.1232 | -3.487405 | 29.90179
2023.03.22 | 13.14645 | 2.11472479 | 24.17818 | -3.7251195 | 30.01803
2023.03.29 | 13.05819 | 1.91194524 | 24.20444 | -3.9885213 | 30.10491
2023.04.05 | 12.95955 | 1.69995251 | 24.21915 | -4.2605198 | 30.17963
2023.04.12 | 12.79333 | 1.42150431 | 24.16515 | -4.5983756 | 30.18503
2023.04.19 | 12.55773 | 1.07477713 | 24.04068 | -5.0039298 | 30.11939
2023.04.26 | 12.31002 | 0.71700654 | 23.90303 | -5.4199636 | 30.04
2023.05.03 | 12.06197 | 0.35992833 | 23.76401 | -5.834757 | 29.95869
2023.05.10 | 11.79296 | -0.01709568 | 23.60302 | -6.2689634 | 29.85489
2023.05.17 | 11.55598 | -0.36111708 | 23.47308 | -6.669649 | 29.78162
2023.05.24 | 11.44662 | -0.57657226 | 23.4698 | -6.9412638 | 29.8345
2023.05.31 | 11.46867 | -0.65967812 | 23.59702 | -7.0800382 | 30.01738

Table 3.
Forecasting results for weekly deaths from regression with ARIMA(4,0,1) errors

Date | Point Forecast | Lo 80 | Hi 80 | Lo 95 | Hi 95
2023.01.04 | 7.881919 | 5.361387 | 10.402452 | 4.027098 | 11.736741
2023.01.11 | 7.231386 | 4.707106 | 9.755666 | 3.370833 | 11.091939
2023.01.18 | 7.014583 | 4.49027 | 9.538896 | 3.153979 | 10.875187
2023.01.25 | 7.316785 | 4.790167 | 9.843403 | 3.452656 | 11.180913
2023.02.01 | 7.972997 | 5.438274 | 10.50772 | 4.096473 | 11.849521
2023.02.08 | 8.628184 | 6.080713 | 11.175655 | 4.732163 | 12.524205
2023.02.15 | 8.983049 | 6.422724 | 11.543374 | 5.067369 | 12.898729
2023.02.22 | 8.973543 | 6.404637 | 11.54245 | 5.04474 | 12.902347
2023.03.01 | 8.738027 | 6.166017 | 11.310036 | 4.804477 | 12.671576
2023.03.08 | 8.455716 | 5.883601 | 11.02783 | 4.522006 | 12.389426
2023.03.15 | 8.228145 | 5.654814 | 10.801476 | 4.292575 | 12.163715
2023.03.22 | 8.086148 | 5.50775 | 10.664546 | 4.142829 | 12.029467
2023.03.29 | 8.056633 | 5.469725 | 10.643541 | 4.100298 | 12.012967
2023.04.05 | 8.171459 | 5.575543 | 10.767374 | 4.201348 | 12.141569
2023.04.12 | 8.408955 | 5.806693 | 11.011218 | 4.429138 | 12.388773
2023.04.19 | 8.675691 | 6.070895 | 11.280486 | 4.691999 | 12.659382
2023.04.26 | 8.875202 | 6.270236 | 11.480169 | 4.89125 | 12.859154
2023.05.03 | 8.969148 | 6.363566 | 11.57473 | 4.984254 | 12.954042
2023.05.10 | 8.95649 | 6.347756 | 11.565223 | 4.966776 | 12.946203
2023.05.17 | 8.833789 | 6.21938 | 11.448199 | 4.835395 | 12.832183
2023.05.24 | 8.605682 | 5.984966 | 11.226399 | 4.597643 | 12.613722
2023.05.31 | 8.304007 | 5.678611 | 10.929403 | 4.28881 | 12.319204

The trend analysis shows an unstable situation in infected cases and weekly deaths, and the prediction study shows an increase in the expected active and death cases nationally. However, time series datasets are often nonlinear and irregular. This data has been used by researchers, policymakers, and others to better understand and respond to the effects of the pandemic. The objective in providing crucial statistical techniques is to enable government and the public to make informed decisions regarding Covid-19.
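As a companion to the dynamic harmonic regressions used above, the Fourier regressors sin(2πjt/m) and cos(2πjt/m) can be generated with the standard library alone. This is a sketch under my own naming; the paper itself fits the model via R's auto.arima(), which handles the ARIMA error part.

```python
import math

def fourier_terms(n, m, K):
    """Build K pairs of Fourier regressors sin(2*pi*j*t/m), cos(2*pi*j*t/m)
    for t = 1..n, the seasonal part of a dynamic harmonic regression.
    m may be non-integer, e.g. m = 52.18 for weekly data."""
    X = []
    for t in range(1, n + 1):
        row = []
        for j in range(1, K + 1):
            row.append(math.sin(2 * math.pi * j * t / m))
            row.append(math.cos(2 * math.pi * j * t / m))
        X.append(row)
    return X

# 156 weeks of regressors with K = 18 pairs and m = 52.18, giving the
# 36 seasonal parameters mentioned in the text.
X = fourier_terms(156, 52.18, 18)
```

Regressing the (differenced) series on these 36 columns, with an ARIMA model for the residual, reproduces the structure of the fitted DHR; shrinking K trades seasonal detail for smoothness.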
Most importantly, we show how to add value to public health and apply these skills in a real-world environment. These models are essential for informing public health decision-making and resource allocation, as well as for predicting future trends in the spread of the disease.

The author would like to thank Dr. Olusegun Michael Otunuga of the College of Science and Mathematics and Dr. Romana Hinton of the Writing Center at Augusta University for their comments and constructive suggestions. Several stimulating discussions and comments allowed me to develop original ideas and improve my paper.

1. Naresh K, Seba S (2020). COVID-19 Pandemic Prediction using Time Series Forecasting Models. The 11th ICCCNT 2020 Conference.
2. Saud S, Jaini G, Aishita J, Sunny A, Sagar J, et al. (2021). Analysis and Prediction of COVID-19 using Regression Models and Time Series Forecasting.
3. Fotios P, Spyros M (2020). Forecasting the novel coronavirus COVID-19. PLoS One 15(3): e0231236. https://doi.org/10.1371/journal.pone.0231236
4. Hyndman RJ, Athanasopoulos G (2014). Forecasting: Principles and Practice, 2nd edition. OTexts.
5. Ratnadip A (2013). An Introductory Study on Time Series Modeling and Forecasting. LAP Lambert Academic Publishing. ISBN 3659335088.
6. Box G, Jenkins G (1970). Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco.
7. Faraway J (2014). Linear Models with R.
8. Brockwell PJ, Davis RA (2002). Introduction to Time Series and Forecasting, 2nd edition. New York.
9. David AM, Wlodzimierz T (2019). Dynamic harmonic regression and irregular sampling: avoiding pre-processing and minimising modelling assumptions. Environmental Modelling & Software, 121.
Semi-empirical mass formula and the pairing term

In summary, the student found that if the mass number A is odd, the pairing term vanishes, so that term is missing from the binding energy.

We have the formula for the mass of an atom. From our class notes I have: by keeping A constant and varying Z, there is generally only one stable nuclide for each odd value of A. We can show this by looking at the pairing term, to show that odd A gives a single parabola with a single minimum.

Please could someone explain how he manages to get a single parabola from the pairing term? All I understand about the pairing term is that a_P < 0 for Z, N even-even, a_P = 0 for A odd, and a_P > 0 for Z, N odd-odd. At a loss, if anyone can help?

If A is odd, then N + Z is odd... If N + Z is odd then one of them has to be even and the other odd (summing two odds or two evens will give an even number). So a_P becomes zero and that term is missing from the energy... If A is even then a_P is non-zero... in this case you can have two different (Z, N) that give the same A which can achieve the least energy...

Thanks, that's really helpful and clears up the a_P values. I might be being really stupid here but I'll ask anyway: if the a_P term is zero then how is it a parabola? The only thing I can think of is that if the term is missing and you put N = A - Z into the equation, you get some sort of quadratic in Z, which would be a parabola?

Yup, it's the quadratic equation in Z...
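The substitution N = A - Z can be checked numerically. The sketch below uses commonly quoted semi-empirical coefficient values in MeV; these numbers are an assumption of mine, not taken from the thread.

```python
# Semi-empirical binding energy (MeV) for odd mass number A, where the
# pairing term a_P vanishes. Coefficients are commonly quoted values
# (an assumption, not from the thread): volume, surface, Coulomb, asymmetry.
A_V, A_S, A_C, A_SYM = 15.8, 18.3, 0.714, 23.2

def binding_energy(A, Z):
    return (A_V * A
            - A_S * A ** (2 / 3)
            - A_C * Z * (Z - 1) / A ** (1 / 3)
            - A_SYM * (A - 2 * Z) ** 2 / A)

# For fixed odd A (here A = 101), substituting N = A - Z leaves B(Z)
# quadratic in Z: its second differences are constant, i.e. a single
# parabola, with one extremum and hence one most-stable Z.
A = 101
second_diffs = [binding_energy(A, Z + 1) - 2 * binding_energy(A, Z)
                + binding_energy(A, Z - 1) for Z in range(35, 55)]
```

A constant, negative second difference confirms the downward parabola in binding energy (equivalently, a single minimum of the atomic mass as Z varies at fixed odd A).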
you can also check:

e.g. the figure on page 10 and its caption...

FAQ: Semi-empirical mass formula and the pairing term

1. What is the Semi-empirical mass formula?
The Semi-empirical mass formula is a mathematical formula used to estimate the nuclear binding energy of atomic nuclei. It takes into account the number of protons and neutrons in the nucleus, as well as other factors such as the pairing energy and the asymmetry energy.

2. How does the pairing term affect the Semi-empirical mass formula?
The pairing term in the Semi-empirical mass formula accounts for the additional binding energy that is present when protons and neutrons are paired in the atomic nucleus. This pairing energy is stronger for even numbers of protons and neutrons, leading to a more stable nucleus.

3. What is the role of the pairing term in nuclear stability?
The pairing term plays a crucial role in determining the stability of atomic nuclei. It contributes to the overall binding energy of the nucleus, making it more stable. Without the pairing term, many nuclei would be unstable and undergo radioactive decay.

4. How does the pairing term affect the mass of the nucleus?
The pairing term adds binding energy, which in turn affects the overall mass of the nucleus. For even-even nuclei this additional binding energy increases the mass defect, making the nucleus slightly less massive than it would be without the pairing term.

5. Are there any limitations to the Semi-empirical mass formula and the pairing term?
Yes, there are limitations to the Semi-empirical mass formula and the pairing term. It is based on empirical data and is not accurate for all nuclei, especially those with extreme proton-neutron ratios. Additionally, it does not take into account other factors such as nuclear spin and nuclear deformation, which can also affect the stability of nuclei.
Wh to mAh conversion calculator

Energy in watt-hours (Wh) to electric charge in milliamp-hours (mAh) calculator. Enter the energy in watt-hours (Wh) and the voltage in volts (V), then press the Calculate button.

Watt-hours to milliamp-hours calculation formula

The electric charge Q[(mAh)] in milliamp-hours (mAh) is equal to 1000 times the energy E[(Wh)] in watt-hours (Wh) divided by the voltage V[(V)] in volts (V):

Q[(mAh)] = 1000 × E[(Wh)] / V[(V)]

So milliamp-hours is equal to 1000 times watt-hours divided by volts:

milliamp-hours = 1000 × watt-hours / volts

mAh = 1000 × Wh / V
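The formula translates directly into code; a minimal sketch (the function name is my own):

```python
def wh_to_mah(energy_wh, voltage_v):
    """Electric charge in mAh from energy in Wh and voltage in V:
    Q[mAh] = 1000 * E[Wh] / V[V]."""
    if voltage_v == 0:
        raise ValueError("voltage must be non-zero")
    return 1000 * energy_wh / voltage_v

# Example: a 3.7 V battery storing 7.4 Wh holds about 2000 mAh.
```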
Averageof12's Method Progression Thread | Starting New Projects

Poll: What should I work on first?
(2x2) TCEG-1: 7 votes (25.9%)
(2x2) 1LWC: 4 votes (14.8%)
(3x3) WVLL: 3 votes (11.1%)
(4x4) Double Parity Alg: 8 votes (29.6%)
(Pyraminx) L5E: 4 votes (14.8%)
(Skewb) L5CO: 4 votes (14.8%)
(Method) Petrus4-7: 2 votes (7.4%)
(Method)(Pyraminx) EOMethod: 3 votes (11.1%)
(Method) ZZ4-7: 1 vote (3.7%)
(2x2) 2TCLL: 0 votes (0.0%)
(Method)(Clock) Layer by Layer: 5 votes (18.5%)
(Method/Algset) FTO APB: 0 votes (0.0%)
(Algset) Skewb L6CO: 1 vote (3.7%)
(Method) Some New 3x3 Method, idk: 2 votes (7.4%)
(Algset)(2x2) OBL (A subset of CBL): 0 votes (0.0%)
Poll closed.

This is my progression thread for creating new algsets and methods. I will share my progress at documenting algs and methods.

Summary of the algsets/methods so far:
• TCEG-1: A combination of TCLL and EG. Part of a method where you solve the pieces with white into a CLL, and do one alg to solve the CLLs on either side. 1764 algs.
• 1LWC: You hold a certain corner in a certain spot, and solve the rest in 1 alg. Like 4000 algs.
• WVLL: You solve the last layer when you have a Winter Variation case.
• Double Parity Alg: An alg that solves both OLL and PLL parity in one alg.
• L5E: Like L4E with 5 edges.
• EO Method: A non-alg Pyraminx method.

Vote on which one I should do first!

Last edited: Jun 18, 2022

Don't you already have like 2 other progression threads?

May 19, 2023

They're all abandoned. Plus, this is a different kind of progression thread.

Why? You've recently posted in them. Also I'd learn something like OLL, PLL or CLL (for 2x2) before those other alg sets.

Most of this is not for beginners. OLL, PLL, and CLL are still worth learning.

what's so confusing?
Due to the highest vote, I will be working on a 4x4 DP alg!

Day 1: I find an alg that solves double parity. The problem is that it messes up the back centers.
Alg V1: r2 U2 r F2 l2 U2 (OLL parity alg) U2 l2 F2 r Uw2 r2 Uw2. Basically just OLL parity jammed inside PLL parity.
Does anyone know of a K4 ELL alg that flips two edges like in the H OLL case, but does not move them around?

Most of this is not for beginners. OLL, PLL, and CLL are still worth learning.
Edit: I already know full OLL and PLL. This is for creating new algsets, not learning them.

Day 2: The reason the centers got messed up in alg V1 is because the OLL parity alg rotates centers. Hoping to finish this project by the end of this month!
Alg V2: r2 U2 r F2 l2 U2 (OLL parity alg) (R U R' U)x5 U2 l2 F2 r Uw2 r2 Uw2
Longer, but it works!

I was about to learn it when I saw (OLL parity alg) and (x5)

If the front edge is flipped, and the back edge is opposite, with the two sides solved. I tried out different OLL parity algs to see if I could find one that didn't rotate centers, but they all do.

Apr 19, 2022

1LWC seems really interesting. It isn't exactly practical, but if someone learned full 1LLL then this could be worth it too. What's the estimated move count of each algorithm?
Does the number include cases where the solution is obvious, like in 1, 2, 3, and sometimes 4 move scrambles?

Some of the algs are just LS, EG, CLL, or TCLL. After I finish my current project, I will probably work on TCEG, because that will be a subset too. 1LWC would have 1-4 move alg cases.

I don't understand. No matter what scramble you have, you can just declare a corner solved, so from what I understand this seems to just be an algset that solves any 2x2 scramble with one alg. Which would not be 4000 algs. I think I've probably misunderstood this idea, so what is it meant to be?

You hold the yellow-orange-blue corner in the BDL spot, then do a 1LWC alg. The 4000 is just an estimation. You also don't need to be color neutral.

What did you base that estimation on?
There are 3.6 million 2x2 scrambles, and this algset would solve any of these in one alg. The total number of algorithms would be 3,674,000/6 with colour neutrality, which is over 600,000 algs.

What about without? I said you don't need to be color neutral.

Apr 19, 2022

Then it's 6 times more for 3.6 million…

Imagine learning an extra 3 million algs just because you don't want to be colour neutral.
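For reference, the figure of roughly 3.6 million quoted above can be derived directly: fixing one corner of the 2x2 in place to remove whole-cube rotations leaves 7! permutations of the remaining corners and 3^6 free corner orientations (the last corner's twist is forced). A quick check:

```python
from math import factorial

# Fix one corner to factor out cube rotations:
# 7! arrangements of the remaining corners, 3^6 free twists
positions = factorial(7)   # 5040
orientations = 3 ** 6      # 729
states = positions * orientations
print(states)  # → 3674160, i.e. roughly 3.67 million
```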
2D vs 3D – Difference and Comparison

What is 2D?

A 2D structure is one in which the object's shape is defined on two planes or axes, the x-axis and the y-axis. Using the x-axis and y-axis, the only dimensions of a 2D figure are length and width. In contrast to 3D structures, these figures have no depth and exist only on flat surfaces. Their flat design allows them to cover area while occupying no volume.

Our daily lives are surrounded by a kaleidoscope of shapes and intangible structures. Among the many different types of arrangements we meet in our everyday lives, we are most likely to encounter 2D and 3D objects. Circles, rectangles, squares, and pentagons are excellent examples of 2D shapes. These objects are strictly confined to the x-axis and the y-axis and cannot extend beyond these two boundaries, although this is not the case for 3D models.

According to geometrical definitions, 2D objects can be thought of as existing in a space defined by two dimensions/planes, denoted by the x-axis and the y-axis.

What is 3D?

A 3D structure is one in which the object's shape is defined on three planes or axes: the x-axis, the y-axis, and the z-axis. A 3D figure has length, width, and height, represented by the x, y, and z axes, respectively. A 3D object has a depth to its structure that extends beyond the limitations of a flat, plane surface; this additional dimension is known as the z-axis, and its purpose is to capture the figure's height.

Because they do not exist within the limitations of two dimensions, 3D objects are not planes or flat forms but rather contain volume, which is a significant point of distinction between 2D and 3D structures.
As previously said, our daily lives are surrounded by various shapes and intangible structures. Of all the many shapes, 2D and 3D objects are the most common structures we come across daily. Examples of 3D shapes in nature include spheres, cuboids, pyramids, cylinders, and prisms.

Difference Between 2D and 3D

1. A 2D structure uses only the x- and y-axes, whereas a 3D structure uses the x-, y-, and z-axes.
2. A 2D structure has only two dimensions, length and width; a 3D structure has three: length, width, and height.
3. Due to their appearance, 2D figures are also referred to as "plane" figures or "flat" figures. 3D figures have no such alternative name.
4. The circle, square, rectangle, and pentagon are all 2D structures; the prism, cuboid, pyramid, and cylinder are examples of 3D structures.
5. A 2D structure has zero volume, whereas a 3D structure has volume.

Comparison Between 2D and 3D

Parameter | 2D | 3D
Axes used | Two axes, the x-axis and the y-axis | Three axes: the x-axis, the y-axis, and the z-axis
Defining dimensions | Length and width | Length, width, and height
Another name | "Plane" figures or "flat" figures | No other term; simply "3D"
Examples | Circle, square, rectangle, pentagon | Cuboid, pyramid, cylinder, prism
Volume | Zero volume | Has volume
Sums and products of tangents squared

Find the values of the following expressions ($n\geq 3$ odd): \[ \begin{aligned} S(n) &= \sum_{k=0}^{\frac{n-3}{2}}\tan^2\left\{\tfrac{(2k+1)\pi}{2n}\right\}\,, \\ P(n) &= \prod_{k=0}^{\frac{n-3}{2}} \tan^2\left\{\tfrac{(2k+1)\pi}{2n}\right\}\,, \\ \frac{S(n)}{P(n)} &= \sum_{k=0}^{\tfrac{n-3}{2}}\, \prod_{\substack{l=0 \\ l\ne k}}^{\tfrac{n-3}{2}} \cot^2\left\{\tfrac{(2l+1)\pi}{2n}\right\}\,. \end{aligned} \]

Generalize the result to certain sums of products of tangents or cotangents squared.

The problem can be solved by looking at the coefficients of the following polynomial (for the second equality use $x^2-\cot^2{\alpha} = (x+\cot{\alpha})(x-\cot{\alpha})$ and recall that $\cot(\pi-\alpha) = -\cot(\alpha)$): \[ p_n(x) = x\prod_{k=0}^{\frac{n-3}{2}}\left(x^2 - \cot^2\left\{\tfrac{(2k+1)\pi}{2n}\right\}\right) = \prod_{k=0}^{n-1}\left(x - \cot\left\{\tfrac{(2k+1)\pi}{2n}\right\}\right)\,. \]

Expanding the product we see that, except for the sign, $1/P(n)$ is the coefficient of $x$ in $p_n(x)$, and $S(n)/P(n)$ is the coefficient of $x^3$. The other coefficients yield various expressions involving cotangents. So the problem amounts to finding coefficients of $p_n(x)$. To that end we prove that $p_n(x)$ can be rewritten in the following way: \[ p_n(x) = \Re\{(x+i)^n\} \,, \] where $\Re(z)=$ real part of $z$ (applied to a polynomial we mean the polynomial obtained by replacing the coefficients by their real parts; note that if $p$ is a polynomial with complex coefficients and $a$ is a real number then $\Re\{p\}(a)=\Re\{p(a)\}$). We note that the two polynomials we are comparing have the same degree and same leading coefficient, so we only need to show that they have exactly the same roots, i.e., they are zero for exactly the same values of $x$.
The roots of $p_n(x)$ are $x_k=\cot\left\{\frac{(2k+1)\pi}{2n}\right\}$, so we must prove that $\Re\{(x_k+i)^n\}=0$, or equivalently $(x_k+i)^n$ is purely imaginary for $k=0,\dots, n-1$. In fact, putting $\alpha_k = \frac{(2k+1)\pi}{2n}$ we get: \[ \cot{\alpha_k}+i = \frac{\cos{\alpha_k} + i \sin{\alpha_k}}{\sin{\alpha_k}} = \left\{\sin{\alpha_k}\right\}^{-1} e^{\alpha_k i} \,, \] so $\arg(\cot{\alpha_k}+i)=\alpha_k$, and $\arg(\{\cot{\alpha_k}+i\}^n)=n\alpha_k = (k+1/2)\pi$, which is the argument of a purely imaginary number.

So now we can expand $(x+i)^n$ using the binomial theorem and find its real part: \[ \Re\{(x+i)^n\} = \sum_{l=0}^{\frac{n-1}{2}} (-1)^l \binom{n}{2l} x^{n-2l} \,. \]

Finally, comparing coefficients we get: \[ \begin{aligned} \frac{1}{P(n)} &= \binom{n}{n-1} = n \,, \\ \frac{S(n)}{P(n)} &= \binom{n}{n-3} = \frac{n(n-1)(n-2)}{6} \,, \\ P(n) &= \frac{1}{n} \,, \\ S(n) &= \frac{(n-1)(n-2)}{6} \,, \end{aligned} \] and much more: \[ \begin{aligned} \sum_{k=0}^{\frac{n-3}{2}} \cot^2\left\{\tfrac{(2k+1)\pi}{2n}\right\} &= \binom{n}{2} = \frac{n(n-1)}{2} \,, \\ \cot^2\left(\tfrac{\pi}{14}\right)\cot^2\left(\tfrac{3\pi}{14}\right) &+ \cot^2\left(\tfrac{\pi}{14}\right)\cot^2\left(\tfrac{5\pi}{14}\right) + \cot^2\left(\tfrac{3\pi}{14}\right)\cot^2\left(\tfrac{5\pi}{14}\right) = \binom{7}{4} = 35 \,, \end{aligned} \] etc., etc., etc.

Miguel A. Lerma - 4/10/2003
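The closed forms above are easy to sanity-check numerically; here is a quick Python verification of $S(n)=(n-1)(n-2)/6$ and $P(n)=1/n$ for small odd $n$:

```python
from math import tan, pi, prod

def check(n):
    """Verify S(n) and P(n) against the closed forms for odd n >= 3."""
    # k runs over 0, ..., (n-3)/2, i.e. (n-1)/2 terms
    angles = [(2 * k + 1) * pi / (2 * n) for k in range((n - 1) // 2)]
    S = sum(tan(a) ** 2 for a in angles)
    P = prod(tan(a) ** 2 for a in angles)
    return (abs(S - (n - 1) * (n - 2) / 6) < 1e-8 and
            abs(P - 1 / n) < 1e-8)

print(all(check(n) for n in range(3, 30, 2)))  # → True
```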
Dispersion interactions

Long-range interaction energy

The interaction energy between two molecular systems \(A\) and \(B\) is given by \[ \Delta E = E_\mathrm{AB} - E_\mathrm{A} - E_\mathrm{B} \] For charge-neutral, nonpolar systems separated by a sufficiently large distance \(R\), this interaction energy is known as the dispersion (or van der Waals) energy.

Dispersion and correlation

The prototypical system exemplifying dispersion interactions is the He dimer, for which the interaction energy equals \[ \Delta E(R) = E_\mathrm{dimer} - 2E_\mathrm{He} \] We will determine this interaction energy with respect to the interatomic separation distance at the Hartree–Fock and MP2 levels of theory.

import matplotlib.pyplot as plt
import numpy as np
import scipy
import veloxchem as vlx

au2ang = 0.529177

atom_xyz = """1
He atom
He 0.000000000000 0.000000000000 0.000000000000
"""

dimer_xyz = """2
He dimer
He 0.000000000000 0.000000000000 0.000000000000
He 0.000000000000 0.000000000000 dimer_separation
"""

atom = vlx.Molecule.read_xyz_string(atom_xyz)
atom_basis = vlx.MolecularBasis.read(atom, "aug-cc-pvtz", ostream=None)

dimer = vlx.Molecule.read_xyz_string(dimer_xyz.replace("dimer_separation", "5.0"))
dimer_basis = vlx.MolecularBasis.read(dimer, "aug-cc-pvtz", ostream=None)

Determine the atomic energies at the HF and MP2 levels of theory.

scf_drv = vlx.ScfRestrictedDriver()
mp2_drv = vlx.mp2driver.Mp2Driver()

scf_results = scf_drv.compute(atom, atom_basis)
hf_atom_energy = scf_drv.get_scf_energy()

mp2_results = mp2_drv.compute(atom, atom_basis, scf_drv.mol_orbs)
mp2_atom_energy = hf_atom_energy + mp2_results["mp2_energy"]

Determine the dimer energies over a range of interatomic separation distances.
distances = np.linspace(2.75, 5.0, 10)

hf_dimer_energies = []
mp2_dimer_energies = []

for dist in distances:
    dimer = vlx.Molecule.read_xyz_string(
        dimer_xyz.replace("dimer_separation", str(dist))
    )
    scf_results = scf_drv.compute(dimer, dimer_basis)
    hf_dimer_energies.append(scf_drv.get_scf_energy())
    mp2_results = mp2_drv.compute(dimer, dimer_basis, scf_drv.mol_orbs)
    mp2_dimer_energies.append(scf_drv.get_scf_energy() + mp2_results["mp2_energy"])

hf_dimer_energies = np.array(hf_dimer_energies)
mp2_dimer_energies = np.array(mp2_dimer_energies)

Plot the interaction energies in units of micro-Hartree.

hf_interaction_energies = hf_dimer_energies - 2 * hf_atom_energy
mp2_interaction_energies = mp2_dimer_energies - 2 * mp2_atom_energy

R = np.linspace(2.75, 5.0, 1000)

plt.figure(figsize=(7, 5))

x, y = distances, hf_interaction_energies * 1e6
f = scipy.interpolate.interp1d(x, y, kind="cubic")
plt.plot(R, f(R), "g-", label="HF")
plt.plot(x, y, "go")

x, y = distances, mp2_interaction_energies * 1e6
f = scipy.interpolate.interp1d(x, y, kind="cubic")
plt.plot(R, f(R), "b-", label="MP2")
plt.plot(x, y, "bs")

plt.xlabel("He-He distance (Å)")
plt.ylabel(r"Interaction energy ($\mu E_h$)")
plt.ylim(-30, 50)
plt.legend()

The failure of the HF method to describe the minimum of the potential energy curve for the helium dimer is due to the lack of correlation. More specifically, let us consider a system setup where the \(z\)-axis is chosen as the internuclear axis and the helium atoms are placed at \(z = 0\) and \(z=R_0\), respectively (see the inset in the figure below). We will then be concerned with the two-particle density, \(n(\mathbf{r}_1, \mathbf{r}_2)\), at coordinates \[\begin{align*} \mathbf{r}_1 & = (d, 0,0) \\ \mathbf{r}_2 & = (d \cos\theta, d \sin\theta,R_0) \\ \end{align*}\] At the HF level of theory, the two-particle density will be independent of the angle \(\theta\) whereas this is not the case when electron correlation is accounted for.
The MP2 density shows a maximum when the two electrons are positioned as far apart as possible, that is, at \(\theta = 180^\circ\). This asymmetry in the two-particle density is referred to as fluctuating induced dipoles, or instantaneous dipole–dipole interactions, and it is the origin of the weakly attractive van der Waals energies.

Dispersion by perturbation theory

The long-range interaction energy between two randomly oriented molecules \(A\) and \(B\) is given by the Casimir–Polder potential \[\begin{eqnarray*} \Delta E(R) & = & - \frac{\hbar}{\pi R^6} \int_{0}^{\infty} \overline{\alpha}_A(i\omega^I) \overline{\alpha}_B(i\omega^I) e^{-2\omega^I R/c} \\ \nonumber && \quad \times \left[ 3 + 6 \frac{\omega^I R}{c} + 5 \left(\frac{\omega^I R}{c}\right)^2 + 2 \left(\frac{\omega^I R}{c}\right)^3 + \left(\frac{\omega^I R}{c}\right)^4 \right] d\omega^I \end{eqnarray*}\] where \(R\) is the intermolecular separation, \(c\) is the speed of light, and \(\overline{\alpha}_A(i\omega^I)\) is the isotropic average of the electric dipole polarizability tensor of molecule \(A\) evaluated at a purely imaginary frequency. The molecules are here considered to be polarizable entities. Additional contributions to the energy will arise from higher-order electric multipole interactions as well as magnetic interactions.
The interaction energy is often expressed in terms of dispersion coefficients \(C_m\), where \(m = 6, 8, 10, \ldots\), and long-range coefficients \(K_n\), where \(n = 7, 9, 11, \ldots\), so that, in the van der Waals region, we have \[\begin{equation*} \Delta E(R) = -\frac{C_6}{R^6} -\frac{C_8}{R^8} -\frac{C_{10}}{R^{10}} - \cdots \end{equation*}\] and, at very large intermolecular separation, we have \[\begin{equation*} \Delta E(R) = -\frac{K_7}{R^7} -\frac{K_9}{R^9} -\frac{K_{11}}{R^{11}} - \cdots \end{equation*}\] The reason for the deviation of the interaction energy at large \(R\) from a \(1/R^6\) dependence, as given by London–van der Waals dispersion theory, is the finite speed of the photons mediating the electromagnetic interaction, or, equivalently, retardation.

The leading dispersion coefficient is given by \[\begin{equation*} C_6 = \frac{3\hbar}{\pi} \int_{0}^{\infty} \overline{\alpha}_A(i\omega) \overline{\alpha}_B(i\omega) d\omega \end{equation*}\]

c6_drv = vlx.C6Driver()

scf_results = scf_drv.compute(atom, atom_basis)
c6_results = c6_drv.compute(atom, atom_basis, scf_results)
hf_c6 = c6_results["c6"]

Based on Hartree–Fock data and with access to the \(C_6\) dispersion coefficient, a much more realistic potential energy curve can be constructed as follows \[ \Delta E(R) = \Delta E^\mathrm{HF}(R) - \frac{C_6}{R^6} \] where the first and second terms contribute the repulsive and attractive parts of the potential, respectively.
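As a sanity check of the \(C_6\) integral, note that for a single-oscillator (Drude) model, \(\overline{\alpha}(i\omega) = \alpha(0)/(1+(\omega/\omega_0)^2)\), the integral can be evaluated analytically, giving the London-style result \(C_6 = \frac{3}{4}\alpha(0)^2\omega_0\) for two identical atoms (atomic units). The snippet below checks this numerically; the parameter values are illustrative (a static polarizability loosely resembling helium):

```python
import numpy as np
from scipy.integrate import quad

alpha0 = 1.38   # static polarizability (a.u.), roughly helium
omega0 = 1.0    # effective excitation energy (a.u.), illustrative

def alpha(w):
    # Drude-model polarizability at the imaginary frequency i*w
    return alpha0 / (1 + (w / omega0) ** 2)

# C6 = (3/pi) * integral of alpha_A(iw) * alpha_B(iw) dw, identical atoms
c6_numeric, _ = quad(lambda w: alpha(w) ** 2, 0, np.inf)
c6_numeric *= 3 / np.pi

c6_analytic = 0.75 * alpha0 ** 2 * omega0
print(c6_numeric, c6_analytic)  # the two values agree
```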
plt.figure(figsize=(7, 5))

# Reference data at FCI level
plt.axvline(5.6 * au2ang, color="navy")
plt.axhline(-28.76, color="navy")
plt.text(3.01, -34, "FCI reference")

x, y = distances, hf_interaction_energies * 1e6
f = scipy.interpolate.interp1d(x, y, kind="cubic")
plt.plot(R, f(R), "g-", label="HF")
plt.plot(x, y, "go")

x, y = distances, mp2_interaction_energies * 1e6
f = scipy.interpolate.interp1d(x, y, kind="cubic")
plt.plot(R, f(R), "b-", label="MP2")
plt.plot(x, y, "bs")

x, y = distances, (hf_interaction_energies - hf_c6 / (distances / au2ang) ** 6) * 1e6
f = scipy.interpolate.interp1d(x, y, kind="cubic")
plt.plot(R, f(R), "m-", label="HF + C6")
plt.plot(x, y, "m^")

plt.xlabel("He-He distance (Å)")
plt.ylabel(r"Interaction energy ($\mu E_h$)")
plt.ylim(-40, 50)
plt.legend()

An FCI reference value for the binding energy, 29 \(\mu E_h\), and equilibrium distance, 2.96 Å [VvLvD90], is depicted in the figure as the crossing point of the vertical and horizontal lines. Both the MP2 and the dispersion-corrected HF minima are quite close to this FCI reference point.

It has become a standard procedure to adopt a similar approach to add dispersion corrections to DFT potentials, as standard functionals do not offer an inclusion of dispersion interactions. This can be very important when performing molecular structure optimizations of systems with nonbonded interactions.
Chapter 11: Inference for Categorical Data: Chi-Squared Tests

In this chapter, we will look at inference for categorical variables.

Chi-Square Tests

These are the approximate percentages for the different blood types among people with blue eyes: A: 40%; B: 11%; AB: 4%; O: 45%. A random sample of 1000 people with brown eyes yielded the following blood type data: A: 270; B: 200; AB: 40; O: 490. Does this sample provide evidence that the distribution of blood types among brown-eyed people differs from that of blue-eyed people, or could the sample values simply be due to sampling variation?

• The chi-square goodness-of-fit test can be used to answer these questions.
• In the chi-square goodness-of-fit test, there is one categorical variable (here: blood type) and one population (here: brown-eyed people).

Comparing observed and expected values of the data, we get the following table: The numbers vary for types A and B but not for types AB and O.

• The chi-square statistic (X²) calculates the squared difference between the observed and expected values relative to the expected value for each category.
• The X² statistic is computed as X² = Σ (observed − expected)² / expected.
• The chi-square distribution is based on the number of degrees of freedom.
• degrees of freedom = (c − 1), where c is the number of categories.

The essential parts of the test are summarized in the following table.

Now, assume:
Pa = proportion of brown-eyed people with type A blood
Pb = proportion of brown-eyed people with type B blood
Pab = proportion of brown-eyed people with type AB blood
Po = proportion of brown-eyed people with type O blood

H0: Pa = 0.40, Pb = 0.11, Pab = 0.04, Po = 0.45
Ha: Any of the above proportions is not as stated.

Using the chi-square goodness-of-fit test, the expected values are as follows:
type A = 400
type B = 110
type AB = 40
type O = 450

Each is > 5, and so the test is valid.
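With the observed and expected counts above, the statistic is quick to compute, for example with scipy:

```python
from scipy.stats import chisquare

observed = [270, 200, 40, 490]   # brown-eyed sample
expected = [400, 110, 40, 450]   # 1000 * blue-eyed proportions

result = chisquare(f_obs=observed, f_exp=expected)
print(result.statistic)  # ≈ 119.44 with df = 4 - 1 = 3
print(result.pvalue)     # effectively 0, so reject H0
```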
• In the chi-square test for homogeneity of proportions, we encounter a situation in which one categorical variable is measured across two or more populations.
• The chi-square test for independence is one in which two categorical variables are measured across a single population.

Inference for Two-Way Tables

• A two-way table (or contingency table) for categorical data is simply a rectangular array of cells.
• Each cell contains the frequencies for the joint values of the row and column variables.
• If the row variable has r values, then there will be r rows of data in the table.
• If the column variable has c values, then there will be c columns of data in the table.
• There are r × c cells in the table.
• The marginal totals are the sums of the observations for each row and each column.
• For a two-way table, the number of degrees of freedom is calculated as (number of rows − 1)(number of columns − 1) = (r − 1)(c − 1).

Chi-Square Test for Independence

The chi-squared test for independence is summarized as follows:

Chi-Square Test for Homogeneity of Proportions

Let's consider a situation in which a sample of 36 students is selected and then categorized according to gender and political party preference. We then ask if gender and party preference are independent in the population.

Now suppose we selected a random sample of 20 males from the population of males in the school and another, independent, random sample of 16 females from the population of females in the school. Within each sample we classify the students as Democrat, Republican, or Independent. The results are presented in the following table:

Here, we do not ask if gender and political party preference are independent. Instead, we ask if the proportions of Democrats, Republicans, and Independents are the same within the populations of males and females. This is the test for homogeneity of proportions.
p1: proportion of male Democrats
p2: proportion of female Democrats
p3: proportion of male Republicans
p4: proportion of female Republicans
p5: proportion of male Independents
p6: proportion of female Independents

H0: p1 = p2, p3 = p4, p5 = p6
Ha: At least one of these proportions is not as specified.

Continue solving as in the previous examples.
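The computation itself is the same as for the independence test. Since the counts in the table did not survive here, the example below uses made-up counts, chosen only to match the stated sample sizes of 20 males and 16 females:

```python
from scipy.stats import chi2_contingency

# Rows: Male, Female; columns: Democrat, Republican, Independent.
# These counts are hypothetical, used only to show the mechanics.
table = [[8, 9, 3],
         [7, 5, 4]]

chi2, p, df, expected = chi2_contingency(table)
print(df)       # (2 - 1) * (3 - 1) = 2 degrees of freedom
print(chi2, p)  # fail to reject H0 when p is large
```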
Directed Acyclic Graphs

If you don't know your DAGs from your dogs, you can finally get some clarity and sleep easily tonight. Learn what makes a Directed Acyclic Graph a DAG.

A Directed Acyclic Graph (or DAG) is a special type of graph made up of nodes (also known as vertices) and edges, in which:

1. all edges have a direction associated with them, and
2. the graph as a whole contains no cycles (aka. loops).

The below figure illustrates a classic DAG, in which all nodes are connected by at least one directional edge, and all pathways lead to a single end-state.

In data applications like HASH, DAGs are commonly used to illustrate:

• data pipelines: the decision and processing steps taken as data flows through a pipeline
• schedules: any system of tasks with ordering constraints (not just a data pipeline) can be illustrated with a DAG
• dependency/citation graphs: a list of dependencies or citations that allows the provenance of work to be tracked

In mathematical terms, DAGs are a specific subclass of oriented graphs (graphs without bidirectional edges). Ultimately though, you don't have to understand the technical ins and outs of DAGs in order to utilize them as part of a data pipeline. Modern data engineering tools such as hCore abstract away complexity through simple, easy-to-use interfaces that provide prompts and feedback, preventing the creation of malformed DAGs (for example, those which may inadvertently contain circular loops or cycles).
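The "no cycles" requirement can be checked programmatically. Here is a minimal Python sketch using Kahn's algorithm: repeatedly peel off nodes with no incoming edges; if any node is left over, the graph contains a cycle and is not a DAG:

```python
from collections import deque

def is_dag(nodes, edges):
    """Return True if the directed graph has no cycles (Kahn's algorithm)."""
    indegree = {n: 0 for n in nodes}
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
        indegree[dst] += 1

    # Start from nodes with no incoming edges and peel them off
    queue = deque(n for n in nodes if indegree[n] == 0)
    seen = 0
    while queue:
        node = queue.popleft()
        seen += 1
        for nxt in out[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return seen == len(nodes)  # leftover nodes imply a cycle

# A tiny hypothetical data pipeline: extract -> clean -> load
pipeline = [("extract", "clean"), ("clean", "load"), ("extract", "load")]
print(is_dag(["extract", "clean", "load"], pipeline))  # → True
print(is_dag(["a", "b"], [("a", "b"), ("b", "a")]))    # → False
```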
Update: 26 August 2021

We've been trying to use the analytical (Buckley-Leverett) solution of the two-phase flow in porous media to fit the Corey-type relative permeability model to the experimental oil recovery data. In this post, I'm going to compare the numerical solution of the same model with the analytical results. You can find the codes that I have written in this github repository. Here, I only call the codes and compare the results. This document is mostly based on the SPE-7660 paper by Gary Pope. I first implement the simple water flooding analytical solution and then expand it to low salinity water flooding with and without ionic adsorption.

Mathematical model

The two-phase flow equation in a 1D porous medium reads $$\frac{\partial S_w}{\partial t}+\frac{u}{\varphi}\frac{df_w}{dS_w}\frac{\partial S_w}{\partial x} = 0$$ The dimensionless time and space are defined as $$t_D = \frac{ut}{\varphi L}$$ and $$x_D = \frac{x}{L}$$ The velocity of a constant saturation front is calculated by $$V_{S_w} = \left(\frac{dx}{dt}\right)_{S_w}=\frac{u}{\varphi}\frac{df_w}{dS_w}$$ The shock front is specified by $$\frac{f_w(S_{w,shock})-f_w(S_{w,init})}{S_{w,shock}-S_{w,init}}=\left(\frac{df_w}{dS_w}\right)_{S_{w,shock}}$$ The injected water front velocity (i.e., a tracer in the injected water, or the low salinity of the injected brine) is calculated by $$V_{c} = \left(\frac{dx}{dt}\right)_{c}=\frac{u}{\varphi}\frac{f_w}{S_w}$$ and the water saturation that corresponds to the position of the salinity front is given by $$\frac{f_w(S_{w,s})-f_w(0)}{S_{w,s}-0}=\left(\frac{df_w}{dS_w}\right)_{S_{w,s}}$$ which is the tangent line from the point (0,0) to the $f_w$-$S_w$ (fractional flow) curve.
The breakthrough time (in number of pore volumes) is calculated by $$t_{D, bt} = \left(\frac{df_w}{dS_w}\right)^{-1}_{S_{w,shock}}$$ The other useful relation is the average saturation after breakthrough (the Welge equation), which reads $$S_{w,av} = S_{w,x=L}+\left[(1-f_w)\left(\frac{df_w}{dS_w}\right)^{-1}\right]_{x=L}, \;t_D>t_{D,bt}$$ The recovery factor can then be calculated based on the fact that the recovery curve is linear until breakthrough, and after breakthrough it gradually reaches a plateau. The oil recovery factor before breakthrough is calculated by $$R = \frac{(1-f_w(S_{w,init}))t_D}{1-S_{w,init}}, \;t_D<t_{D,bt}$$ and after breakthrough by $$R = \frac{S_{w,av}-S_{w,init}}{1-S_{w,init}}, \; t_D>t_{D,bt}$$

Let's try the above formulation in Julia.

In [2]:
using PyPlot
import FractionalFlow
FF = FractionalFlow;

WARNING: replacing module FractionalFlow.

Fractional flow package

This package can solve a number of multiphase flow problems in a 1D homogeneous porous medium in the absence of capillary pressure and gravity. The package has some convenience functions for data analysis and visualization. We can define and visualize relative permeability curves for oil and water as follows:

In [2]:
rel_perms = FF.oil_water_rel_perms(krw0=0.4, kro0=0.9, swc=0.15, sor=0.2, nw=2.0, no=2.0)
FF.print_relperm(rel_perms, title="Corey rel-perm parameters")

Corey rel-perm parameters
krw0  kro0  nw   no   Swc   Sor
0.4   0.9   2.0  2.0  0.15  0.2

Then we can construct the fractional flow curves:

In [3]:
# define the fluids
fluids = FF.oil_water_fluids(mu_water=1e-3, mu_oil=2e-3)
# define the fractional flow functions
fw, dfw = FF.fractional_flow_function(rel_perms, fluids)
# visualize the fractional flow
FF.visualize(rel_perms, fluids, label="lowsal")

The next stage is to define an injection problem, i.e.
a core flooding test: In [4]: core_flood = FF.core_flooding(u_inj=1.15e-5, pv_inject=5.0, p_back=1e5, sw_init=0.2, sw_inj=1.0, rel_perms=rel_perms) core_props = FF.core_properties() wf_res = FF.water_flood(core_props, fluids, rel_perms, core_flood) fw, dfw = FF.fractional_flow_function(rel_perms, fluids) sw_tmp = range(0, 1, length=100) PyObject <matplotlib.legend.Legend object at 0x7fe5f1169d90> The same problem can be solved numerically as well. The code is reasonably fast and suitable for optimization: In [5]: t_sec, pv_num, rec_fact, xt_num, sw_num, c_old, c_out_sal = FF.water_flood_numeric(core_props, fluids, rel_perms, core_flood, Nx = 20); Progress: 100%|█████████████████████████████████████████| Time: 0:00:08 Let's have a look at the recovery factor versus time: In [6]: plot(wf_res.recovery_pv[:,1], wf_res.recovery_pv[:,2], pv_num, rec_fact, "--") legend(["Analytical", "Numerical"]); One can see an underestimation of the recovery factor, which is the result of the coarse mesh (Nx = 20). If I use a finer grid of, e.g., 500 cells, the run time will be higher but the solution will be closer to the analytical one: In [7]: t_sec_f, pv_num_f, rec_fact_f, xt_num_f, sw_num_f, c_old_f, c_out_sal_f = FF.water_flood_numeric( core_props, fluids, rel_perms, core_flood, Nx = 500); Progress: 100%|█████████████████████████████████████████| Time: 0:02:03 In [8]: plot(wf_res.recovery_pv[:,1], wf_res.recovery_pv[:,2], pv_num_f, rec_fact_f, "--") legend(["Analytical", "Numerical (fine grid)"]); It takes about two minutes to run the code with the fine mesh versus a few seconds with the coarse one. There are ways to make the code run faster on the fine mesh, but at the end of the day we have to reach a compromise between accuracy and speed based on our specific application for the numerical solution. I will write more about it later in the context of parameter estimation.
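For readers who want to see where the coarse-grid smearing comes from without installing the package, here is a minimal first-order upwind sketch of the same dimensionless saturation equation. This is a hedged Python toy with hypothetical parameters, not the FractionalFlow implementation:

```python
# First-order upwind sketch of the dimensionless Buckley-Leverett equation
# dS/dt_D + dF(S)/dx_D = 0, showing how a coarse grid smears the shock.
def frac_flow(s, swc=0.15, sor=0.2, M=2.0):
    se = min(max((s - swc) / (1.0 - swc - sor), 0.0), 1.0)
    krw, kro = 0.4 * se**2, 0.9 * (1.0 - se)**2
    return krw / (krw + kro / M)   # M = mu_o/mu_w (hypothetical)

def solve(nx, t_end=0.3, sw_init=0.2, sw_inj=0.8):
    dx = 1.0 / nx
    dt = 0.2 * dx                  # CFL-limited step (max dF/dS ~ 2.7 here)
    s = [sw_init] * nx
    t = 0.0
    while t < t_end:
        f = [frac_flow(v) for v in s]
        new = [0.0] * nx
        # inlet flux is set by the injection condition
        new[0] = s[0] - dt / dx * (f[0] - frac_flow(sw_inj))
        for i in range(1, nx):
            new[i] = s[i] - dt / dx * (f[i] - f[i - 1])
        s, t = new, t + dt
    return s

coarse, fine = solve(20), solve(200)
```

Plotting `coarse` against `fine` reproduces the effect seen above: the coarse grid spreads the shock over several cells, which is exactly what makes the numerical recovery factor lag the analytical curve.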
Let $R=k[x_1,\ldots,x_n]$ be a polynomial ring over a field $k$ of characteristic $p>0$, let $\mathfrak{m}=(x_1,\ldots,x_n)$ be the maximal ideal generated by the variables, let $^*E$ be the naturally graded injective hull of $R/\mathfrak{m}$ and let $^*E(n)$ be $^*E$ degree-shifted downward by $n$. We introduce the notion of graded $F$-modules (as a refinement of the notion of $F$-modules) and show that if a graded $F$-module $\mathcal{M}$ has zero-dimensional support, then $\mathcal{M}$, as a graded $R$-module, is isomorphic to a direct sum of a (possibly infinite) number of copies of $^*E(n)$. As a consequence, we show that if the functors $T_1,\ldots,T_s$ and $T$ are defined by $T_{j}=H^{i_j}_{I_j}(-)$ and $T=T_1\circ\cdots\circ T_s$, where $I_1,\ldots,I_s$ are homogeneous ideals of $R$, then as a naturally graded $R$-module, the local cohomology module $H^{i_0}_{\mathfrak{m}}(T(R))$ is isomorphic to $^*E(n)^c$, where $c$ is a finite number. If $\operatorname{char} k=0$, this question is open even for $s=1$. Comment: Revised result in section In this note we derive the slow-roll and rapid-roll conditions for minimally and non-minimally coupled space-like vector fields. The function $f(B^{2})$ represents the non-minimal coupling between vector fields and gravity; the $f=0$ case is the minimal coupling case. For a clear comparison with the scalar field, we define a new function $F=\pm B^{2}/12+f(B^{2})$, where $B^{2}=A_{\mu}A^{\mu}$ and $A_{\mu}$ is the "comoving" vector field. With reference to the slow-roll and rapid-roll conditions, we find the small-field model is more suitable than the large-field model in the minimally coupled vector field case. And as a non-minimal coupling example, the $F=0$ case has the same slow-roll conditions as the scalar fields. Comment: no figures
We elaborate on the recent observation that evolution for twist knots simplifies when described in terms of the triangular evolution matrix ${\cal B}$, not just its eigenvalues $\Lambda$, and provide a universal formula for ${\cal B}$, applicable to arbitrary rectangular representation $R=[r^s]$. This expression is in terms of skew characters and it re...
Optimizing Time Problem: Maya is 2 km offshore on a boat and wishes to reach a coastal village which is 6 km down the straight shoreline from the point on the shore nearest to the boat. She can row at 2 km/h and run at 5 km/h. Where should she land her boat to reach the village in the least amount of time? To get an idea of what this question is asking, drag the point P along the shoreline to see how the total time taken to complete the trip changes. Read the problem at the top of the page. 1. Drag P to 0 (the longest distance). How long does it take to get to the village? 2. Drag P to 6 (the shortest distance). How long does it take to get to the village? 3. Explain why the above are the longest and shortest distances. 4. Use the applet to find the shortest time to get to the village. Check with the "optimal traveler". Were you correct? 5. Let the distance from (0, 0) to P be x; what is the distance to the village from P? 6. Based on the given rate, how long does it take to travel this distance? Recall that time = distance ÷ rate. 7. Find an equation, in terms of x, for the time it takes to travel the dashed blue line. 8. Find an equation for the total time it takes to travel to the village. Thanks to J Mulholland for creating the applet and question.
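For reference, the applet's optimum can also be found in closed form. The sketch below assumes the numbers as stated (boat 2 km offshore, village 6 km along the shore, rowing at 2 km/h, running at 5 km/h); the function name `trip_time` is mine, not part of the applet:

```python
from math import sqrt

def trip_time(x):
    """Total hours if Maya lands x km down-shore of the nearest point."""
    rowing = sqrt(2.0**2 + x**2) / 2.0   # hypotenuse, rowed at 2 km/h
    running = (6.0 - x) / 5.0            # remaining shoreline, run at 5 km/h
    return rowing + running

# Setting dT/dx = x / (2*sqrt(4 + x^2)) - 1/5 = 0 gives 21*x^2 = 16.
x_opt = 4.0 / sqrt(21.0)                 # about 0.873 km down the shoreline
print(x_opt, trip_time(x_opt))
```

The endpoints give T(0) = 2.2 h and T(6) = √40 / 2 ≈ 3.16 h, so landing about 0.87 km down the shoreline minimizes the trip at roughly 2.12 h.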
NCERT Exemplar Problems Class 7 Maths – Algebraic Expression Question 1: An algebraic expression containing three terms is called a (a) monomial (b) binomial (c) trinomial (d) All of these (c) An algebraic expression containing one term is called a monomial, two terms a binomial and three terms a trinomial. Question 2: Number of terms in the expression 3x^2y - 2y^2z - z^2x + 5 is (a) 2 (b) 3 (c) 4 (d) 5 (c) The terms in the expression are 3x^2y, -2y^2z, -z^2x and 5. Hence, the total number of terms is 4. Question 3: The terms of the expression 4x^2 - 3xy are (a) 4x^2 and -3xy (b) 4x^2 and 3xy (c) 4x^2 and -xy (d) x^2 and xy (a) Terms in the expression 4x^2 - 3xy are 4x^2 and -3xy. Question 4: Factors of -5x^2y^2z are (a) -5 × x × y × z (b) -5 × x^2 × y × z (c) -5 × x × x × y × y × z (d) -5 × x × y × z^2 (c) -5x^2y^2z can be written as -5 × x × x × y × y × z. Question 5: Coefficient of x in -9xy^2z is (a) 9yz (b) -9yz (c) 9y^2z (d) -9y^2z (d) Coefficient of x in -9xy^2z = -9y^2z. Question 6: Which of the following is a pair of like terms? (a) -7xy^2z, -7x^2yz (b) -10xyz^2, 3xyz^2 (c) 3xyz, 3x^2y^2z^2 (d) 4xyz^2, 4x^2yz (b) Like terms are terms having the same algebraic factors. Hence, -10xyz^2 and 3xyz^2 are like terms as both contain the factor xyz^2. Question 7: Identify the binomial out of the following (a) 3xy^2 + 5y - x^2y (b) x^2y - 5y - x^2y (c) xy + yz + zx (d) 3xy^2 + 5y - xy^2 (d) We know that an algebraic expression containing two terms is called a binomial. Taking option (d), 3xy^2 + 5y - xy^2 = 2xy^2 + 5y. As it contains only two terms, it is a binomial.
Question 8: The sum of x^4 - xy + 2y^2 and -x^4 + xy + 2y^2 is (a) monomial and polynomial in y (b) binomial and polynomial (c) trinomial and polynomial (d) monomial and polynomial in x (a) Required sum = (x^4 - xy + 2y^2) + (-x^4 + xy + 2y^2) = x^4 - xy + 2y^2 - x^4 + xy + 2y^2 = [x^4 + (-x^4)] + (-xy + xy) + (2y^2 + 2y^2) = 0 + 0 + 4y^2 = 4y^2. 4y^2 is a monomial and a polynomial in y. Question 9: The subtraction of 5 times of y from x is (a) 5x - y (b) y - 5x (c) x - 5y (d) 5y - x (c) 5 times of y = 5y. Now, subtraction of 5 times of y from x is written as x - 5y. Question 10: -b - 0 is equal to (a) -1 × b (b) 1 - b - 0 (c) 0 - (-1) × b (d) -b - 0 - 1 (a) We have, -b - 0 = -b. (a) -1 × b = -b (b) 1 - b - 0 = 1 - b (c) 0 - (-1) × b = 0 + b = b (d) -b - 0 - 1 = -b - 1 Hence, option (a) is correct. Question 11: The side length of the top of a square table is x. The expression for the perimeter is (a) 4 + x (b) 2x (c) 4x (d) 8x (c) Given, side length of the square table = x. ∴ Perimeter of a square = 4 × side = 4 × x = 4x. Question 12: The number of scarfs of length half metre that can be made from y metres of cloth is (a) 2y (b) y/2 (c) y + 2 (d) y + ½ (a) We have, length of 1 scarf = ½ m. So, number of scarfs which can be made from y metres = y ÷ ½ = 2y. Question 13: 123x^2y - 138x^2y is a like term of (a) 10xy (b) -15xy (c) -15xy^2 (d) 10x^2y (d) We have, 123x^2y - 138x^2y = -15x^2y. Hence, it is a like term of 10x^2y as both contain x^2y. Question 14: The value of 3x^2 - 5x + 3, when x = 1, is (a) 1 (b) 0 (c) -1 (d) 11 (a) Putting x = 1 in the given expression, we get 3x^2 - 5x + 3 = 3(1)^2 - 5(1) + 3 = 3 - 5 + 3 = 1. Question 15: The expression for the number of diagonals that we can make from one vertex of an n-sided polygon is (a) 2n + 1 (b) n - 2 (c) 5n + 2 (d) n - 3 (d) A diagonal is a line segment joining two non-adjacent vertices. From one vertex of an n-sided polygon, diagonals can be drawn to all vertices except itself and its two neighbours, so the number of diagonals from one vertex = n - 3. Question 16: The length of a side of square is given as 2x + 3. Which expression represents the perimeter of the square?
(a) 2x +16 (b) 6x + 9 (c) 8x + 3 (d) 8x + 12 (d) Given, side of the square = (2x + 3) ∴ Perimeter of square = 4 x (Side) = 4 x (2x + 3) = 8x + 12 Fill in the Blanks In questions 17 to 32, fill in the blanks to make the statements true. Question 17: Sum or difference of two like terms is ……………….. Sum or difference of two like terms is a like term, e.g. 138x^2y-125x^2y = 13x^2y Question 18: In the formula, area of circle =πr^2, the numerical constant of the expression πr^2 is ……………. In πr^2, the numerical constant is π as r^2 is variable. Question 19: 3a^2b and -7ba^2 are ………………… terms. 3a^2b and -7ba^2 are like terms as both have same algebraic factor a^2b. Question 20: -5a^2b and -5b^2a are ……………. terms. -5a^2b and -5b^2a are unlike terms as they do not have same algebraic factor. Question 21: In the expression 2πr, the algebraic variable is ……………. In the expression 2πr,2π is constant while r is an algebraic variable. Question 22: Number of terms in a monomial is ……………. Number of terms in a monomial is one. Question 23: Like terms in the expression n(n+1)+6(n-1) are …………….. and ………………. We have, n(n+1)+ 6(n – 1)=n^2 + n+6n-6 Hence, like terms in the expression n(n+1)+6(n-1)are n and 6n. Question 24: The expression 13+90 is a ………………. ∴ 13+ 90=103 ∴ 103 is a constant term. Question 25: The speed of car is 55 km/h. The distance covered in y hours is ……………… Given, speed of car = 55 km/h. ∴ Distance = Speed x Time ∴Distance covered in y hours = 55xy = 55y km Question 26: x+y+z is an expression which is neither monomial nor ………….. Since, x+ y+z has three terms, so it is trinomial. Hence, x + y+z is an expression which is neither monomial nor binomial. Question 27: If (x^2y+y^2 + 3) is subtracted from (3x^2y+2y^2 + 5), then coefficient of y in the result is ………………. We have, (3x^2y+2y^2 + 5)-(x^2y+ y^2 + 3) = 3x^2y+2y^2 + 5-x^2y-y^2-3 = 2x^2y+y^2+2 Coefficient of y = 2x^2 Question 28: -a-b-c is same as -a- (…………….). 
We have, -a-b-c=-a-(b+c) [by taking common (-) minus sign] So,-a-b-c is same as -a-(b+ c). Question 29: The unlike terms in perimeters of following figures are …………….. and ………………. In Fig. (i), Perimeter = Sum of all sides = 2x + y+2x + y = 4x + 2y In Fig. (ii), Perimeter = Sum of all sides = x+ y^2 + x+ y^2=2x + 2y^2 Unlike terms in perimeters are 2y and 2y^2. Question 30: On adding a monomial ……………… to -2x+4y^2+z, the resulting expression becomes a binomial. We can add 2x, -4y^2 and -z to the expression to make it binomial. => 2x + (-2x + 4y^2 + z) = 4y^2 + z => -4y^2 + (-2x + 4y^2 + z) = -2x + z => -z + (-2x + 4y2 + z) = -2x + 4y^2 Hence, on adding a monomial 2x or -4y^2 or -z to -2x + 4y^2 + z, the resulting expression becomes a binomial. Question 31: 3x+23x^2 + 6y^2 + 2x+y^2 + ………….. =5x+7y^2. Let(3x+23x^2 + 6y^2+2x+y^2)+ M = 5x + 7y^2 => M=(5x+7y^2)-(3x + 23x^2 + 6y^2+2x+ y^2) => M = 5x + 7y^2-3x- 23x^2– 6y^2 -2x – y^2 [with -ve sign, +ve sign in the bracket will change on opening it] => M = 5x-3x-2x + 7y^2-6y^2-y2-23x^2 M = 0 + 0 – 23x^2 =-23x^2 Question 32: If Rohit has 5xy toffees and Shantanu has 20yx toffees, then Shantanu has ………… more toffees. We have, Rohit has toffees =5xy Shantanu has toffees =20yx Difference = 20xy -5xy=15xy Hence, Shantanu had 15xy more toffees. In questions 33 to 52, state whether the statements given are True or False. Question 33: 1+(x/2)+x^3 is a polynomial. Expression with one or more than one term is called a polynomial. Question 34: (3a-b+3)-(a+b) is a binomial. We have , (3a-b+3)-(a + b)= 3a-b+3-a-b = 3a-a-b-b + 3 = 2a-2b+ 3 The expression has three terms, it is a trinomial. Question 35: A trinomial can be a polynomial. Trinomial is a polynomial, because it has three terms. Question 36: A polynomial with more than two terms is a trinomial. A polynomial with more than two terms can be trinomial or more. While a trinomial have exact three terms. Question 37: Sum of x andy is x+y. Sum of x and y is x+y. 
Question 38: Sum of 2 and p is 2p. Sum of 2 and p is 2 + p, not 2p. Question 39: A binomial has more than two terms. A binomial has exactly two unlike terms. Question 40: A trinomial has exactly three terms. A trinomial has exactly three unlike terms. Question 41: In like terms, variables and their powers are the same. In like terms, the algebraic factors are the same. Question 42: The expression x + y + 5x is a trinomial. ∴ x + y + 5x = 6x + y. It is a binomial. Question 43: 4p is the numerical coefficient of q^2 in -4pq^2. The numerical coefficient of q^2 in -4pq^2 is -4, not 4p. Question 44: 5a and 5b are unlike terms. Because both the terms have different algebraic factors. Question 45: Sum of x^2 + x and y + y^2 is 2x^2 + 2y^2. ∴ Sum = (x^2 + x) + (y + y^2) = x^2 + x + y + y^2 = x^2 + y^2 + x + y Question 46: Subtracting a term from a given expression is the same as adding its additive inverse to the given expression. Because the additive inverse is the negation of a number or expression. Question 47: The total number of planets of Sun can be denoted by the variable n. The number of planets of the Sun is a fixed number, i.e. a constant, so it cannot be denoted by a variable. Question 48: In like terms, the numerical coefficients should also be the same. e.g. -3x^2y and 4x^2y are like terms as they have the same algebraic factor x^2y but different numerical coefficients. Question 49: If we add a monomial and binomial, then answer can never be a monomial. If we add a monomial and a binomial, then the answer can be a monomial, e.g. add x^2 and -x^2 + y^2: x^2 + (-x^2 + y^2) = x^2 - x^2 + y^2 = y^2. Hence, the answer is a monomial. Question 50: If we subtract a monomial from a binomial, then answer is atleast a binomial. If we subtract a monomial from a binomial, then the answer is at least a monomial, e.g. subtract the monomial x from the binomial x - y: (x - y) - x = -y, i.e. a monomial. Question 51: When we subtract a monomial from trinomial, then answer can be a polynomial. When we subtract a monomial from a trinomial, then the answer can be a binomial or polynomial, e.g.
Subtract y^2 from y^2 - x^2 - 2xy: (y^2 - x^2 - 2xy) - y^2 = y^2 - y^2 - x^2 - 2xy = -x^2 - 2xy. Hence, the answer is a binomial. Question 52: When we add a monomial and a trinomial, then answer can be a monomial. When we add a monomial and a trinomial, then it can be a binomial or trinomial, e.g. Add xy and x^3 + 2xy - y^3: xy + (x^3 + 2xy - y^3) = xy + 2xy + x^3 - y^3 = 3xy + x^3 - y^3. Hence, the answer is a trinomial. Question 53: Write the following statements in the form of algebraic expressions and write whether it is monomial, binomial or trinomial. (a) x is multiplied by itself and then added to the product of x and y. (b) Three times of p and two times of q are multiplied and then subtracted from r. (c) Product of p, twice of q and thrice of r. (d) Sum of the products of a and b, b and c, c and a. (e) Perimeter of an equilateral triangle of side x. (f) Perimeter of a rectangle with length p and breadth q. (g) Area of a triangle with base m and height n. (h) Area of a square with side x. (i) Cube of s subtracted from cube of t. (j) Quotient of x and 15 multiplied by x. (k) The sum of square of x and cube of z. (l) Two times q subtracted from cube of q. Question 54: Write the coefficient of x^2 in the following: Question 55: Find the numerical coefficient of each of the terms Question 56: Simplify the following by combining the like terms and then write whether the expression is a monomial, a binomial or a trinomial. Question 57: Add the following expressions (a) p^2 - 7pq - q^2 and -3p^2 - 2pq + 7q^2 (b) x^3 - x^2y - xy^2 - y^3 and x^3 - 2x^2y + 3xy^2 + 4y (c) ab + bc + ca and -bc - ca - ab (d) p^2 - q + r, q^2 - r + p and r^2 - p + q (e) x^3y^2 + x^2y^3 + 3y^4 and x^4 + 3x^2y^3 + 4y^4 (f) p^2qr + pq^2r + pqr^2 and -3pq^2r - 2pqr^2 (g) uv - vw, vw - wu and wu - uv (h) a^2 + 3ab - bc, b^2 + 3bc - ca and c^2 + 3ca - ab Question 58: Subtract (a) -7p^2qr from -3p^2qr. (b) -a^2 - ab from b^2 + ab. (c) -4x^2y - y^3 from x^3 + 3xy^2 - x^2y. (d) x^4 + 3x^3y^3 + 5y^4 from 2x^4 - x^3y^3 + 7y^4. (e) ab - bc - ca from -ab + bc + ca. (f) -2a^2 - 2b^2 from -a^2 - b^2 + 2ab.
(g) x^3y^3 + 3x^2y^2 - 7xy^3 from x^4 + y^4 + 3x^2y^2 - xy^3. (h) 2(ab + bc + ca) from -ab - bc - ca. (i) 4.5x^5 - 3.4x^2 + 5.7 from 5x^4 - 3.2x^2 - 7.3x. (j) 11 - 15y^2 from y^3 - 15y^2 - y - 11. Question 59: (a) What should be added to x^3 + 3x^2y + 3xy^2 + y^3 to get x^3 + y^3? (b) What should be added to 3pq + 5p^2q^2 + p^3 to get p^3 + 2p^2q^2 + 4pq? Question 60: (a) What should be subtracted from 2x^3 - 3x^2y + 2xyz + 3y^3 to get x^3 - 2x^2y + 3xy^2 + 4y^3? (b) What should be subtracted from -7mn + 2m^2 + 3n^2 to get m^2 + 2mn + n^2? Question 61: How much is 21a^3 - 17a^2 less than 89a^3 - 64a^2 + 6a + 16? Required expression is 89a^3 - 64a^2 + 6a + 16 - (21a^3 - 17a^2) = 89a^3 - 64a^2 + 6a + 16 - 21a^3 + 17a^2 On combining the like terms, = 89a^3 - 21a^3 - 64a^2 + 17a^2 + 6a + 16 = 68a^3 - 47a^2 + 6a + 16 So, 21a^3 - 17a^2 is less than 89a^3 - 64a^2 + 6a + 16 by 68a^3 - 47a^2 + 6a + 16. Question 62: How much is y^4 - 12y^2 + y + 14 greater than 17y^3 + 34y^2 - 51y + 68? Required expression is y^4 - 12y^2 + y + 14 - (17y^3 + 34y^2 - 51y + 68) = y^4 - 12y^2 + y + 14 - 17y^3 - 34y^2 + 51y - 68 On combining the like terms, = y^4 - 17y^3 - 12y^2 - 34y^2 + y + 51y + 14 - 68 = y^4 - 17y^3 - 46y^2 + 52y - 54 So, y^4 - 12y^2 + y + 14 is greater than 17y^3 + 34y^2 - 51y + 68 by y^4 - 17y^3 - 46y^2 + 52y - 54. Question 63: How much does 93p^2 - 55p + 4 exceed 13p^3 - 5p^2 + 17p - 90? Required expression is 93p^2 - 55p + 4 - (13p^3 - 5p^2 + 17p - 90) = 93p^2 - 55p + 4 - 13p^3 + 5p^2 - 17p + 90 On combining the like terms, = -13p^3 + 93p^2 + 5p^2 - 55p - 17p + 4 + 90 = -13p^3 + 98p^2 - 72p + 94 So, 93p^2 - 55p + 4 exceeds 13p^3 - 5p^2 + 17p - 90 by -13p^3 + 98p^2 - 72p + 94. Question 64: To what expression must 99x^3 - 33x^2 - 13x - 41 be added to make the sum zero? In order to find the solution, we subtract 99x^3 - 33x^2 - 13x - 41 from 0. Required expression is 0 - (99x^3 - 33x^2 - 13x - 41) = 0 - 99x^3 + 33x^2 + 13x + 41 = -99x^3 + 33x^2 + 13x + 41 So, if we add -99x^3 + 33x^2 + 13x + 41 to 99x^3 - 33x^2 - 13x - 41, then the sum is zero. Question 65: Subtract 9a^2 - 15a + 3 from unity.
In order to find the solution, we subtract 9a^2 - 15a + 3 from unity, i.e. 1. Required expression is 1 - (9a^2 - 15a + 3) = 1 - 9a^2 + 15a - 3 = -9a^2 + 15a - 2. Question 66: Find the values of the following polynomials at a = -2 and b = 3. (a) a^2 + 2ab + b^2 (b) a^2 - 2ab + b^2 (c) a^3 + 3a^2b + 3ab^2 + b^3 (d) a^3 - 3a^2b + 3ab^2 - b^3 (e) (a^2 + b^2)/3 (f) (a^2 - b^2)/3 (g) (a/b) + (b/a) (h) a^2 + b^2 - ab - b^2 - a^2 Question 67: Find the values of the following polynomials at m = 1, n = -1 and p = 2. (a) m + n + p (b) m^2 + n^2 + p^2 (c) m^3 + n^3 + p^3 (d) mn + np + pm (e) m^3 + n^3 + p^3 - 3mnp (f) m^2n^2 + n^2p^2 + p^2m^2 Given, m = 1, n = -1 and p = 2. So, putting m = 1, n = -1 and p = 2 in the given expressions, we get (a) m + n + p = 1 - 1 + 2 = 2 (b) m^2 + n^2 + p^2 = (1)^2 + (-1)^2 + (2)^2 = 1 + 1 + 4 = 6 (c) m^3 + n^3 + p^3 = (1)^3 + (-1)^3 + (2)^3 = 1 - 1 + 8 = 8 (d) mn + np + pm = (1)(-1) + (-1)(2) + (2)(1) = -1 - 2 + 2 = -1 (e) m^3 + n^3 + p^3 - 3mnp = (1)^3 + (-1)^3 + (2)^3 - 3(1)(-1)(2) = 1 - 1 + 8 + 6 = 14 (f) m^2n^2 + n^2p^2 + p^2m^2 = (1)^2(-1)^2 + (-1)^2(2)^2 + (2)^2(1)^2 = 1 + 4 + 4 = 9 Question 68: If A = 3x^2 - 4x + 1, B = 5x^2 + 3x - 8 and C = 4x^2 - 7x + 3, then find 1. (A + B) - C 2. B + C - A 3. A + B + C Given, A = 3x^2 - 4x + 1, B = 5x^2 + 3x - 8 and C = 4x^2 - 7x + 3. 1. (A + B) - C = (3x^2 - 4x + 1 + 5x^2 + 3x - 8) - (4x^2 - 7x + 3) On combining the like terms, = (3x^2 + 5x^2 - 4x + 3x + 1 - 8) - (4x^2 - 7x + 3) = (8x^2 - x - 7) - (4x^2 - 7x + 3) = 8x^2 - x - 7 - 4x^2 + 7x - 3 = 8x^2 - 4x^2 - x + 7x - 7 - 3 = 4x^2 + 6x - 10 2. B + C - A = 5x^2 + 3x - 8 + 4x^2 - 7x + 3 - (3x^2 - 4x + 1) On combining the like terms, = (5x^2 + 4x^2 + 3x - 7x - 8 + 3) - (3x^2 - 4x + 1) = (9x^2 - 4x - 5) - (3x^2 - 4x + 1) = 9x^2 - 4x - 5 - 3x^2 + 4x - 1 = 9x^2 - 3x^2 - 4x + 4x - 5 - 1 = 6x^2 - 6 3. A + B + C = 3x^2 - 4x + 1 + 5x^2 + 3x - 8 + 4x^2 - 7x + 3 On combining the like terms, = 3x^2 + 5x^2 + 4x^2 - 4x + 3x - 7x + 1 - 8 + 3 = 12x^2 - 8x - 4 Question 69: If P = -(x - 2), Q = -2(y + 1) and R = -x + 2y, find a, when P + Q + R = ax.
Given, P = -(x - 2), Q = -2(y + 1) and R = -x + 2y Also given, P + Q + R = ax On putting the values of P, Q and R on LHS, we get -(x - 2) + [-2(y + 1)] + (-x + 2y) = ax => -x + 2 + (-2y - 2) - x + 2y = ax => -x + 2 - 2y - 2 - x + 2y = ax On combining the like terms, -x - x - 2y + 2y + 2 - 2 = ax => -2x = ax By comparing LHS and RHS, we get a = -2 Question 70: From the sum of x^2 - y^2 - 1, y^2 - x^2 - 1 and 1 - x^2 - y^2, subtract -(1 + y^2). Sum of x^2 - y^2 - 1, y^2 - x^2 - 1 and 1 - x^2 - y^2 = x^2 - y^2 - 1 + y^2 - x^2 - 1 + 1 - x^2 - y^2 On combining the like terms, = x^2 - x^2 - x^2 - y^2 + y^2 - y^2 - 1 - 1 + 1 = -x^2 - y^2 - 1 Now, subtract -(1 + y^2) from -x^2 - y^2 - 1: -x^2 - y^2 - 1 - [-(1 + y^2)] = -x^2 - y^2 - 1 + 1 + y^2 = -x^2 - y^2 + y^2 - 1 + 1 = -x^2 Question 71: Subtract the sum of 12ab - 10b^2 - 18a^2 and 9ab + 12b^2 + 14a^2 from the sum of ab + 2b^2 and 3b^2 - a^2. Sum of 12ab - 10b^2 - 18a^2 and 9ab + 12b^2 + 14a^2 = 12ab - 10b^2 - 18a^2 + 9ab + 12b^2 + 14a^2 On combining the like terms, = 12ab + 9ab - 10b^2 + 12b^2 - 18a^2 + 14a^2 = 21ab + 2b^2 - 4a^2 Sum of ab + 2b^2 and 3b^2 - a^2 = ab + 2b^2 + 3b^2 - a^2 = ab + 5b^2 - a^2 Now, subtracting 21ab + 2b^2 - 4a^2 from ab + 5b^2 - a^2, we get = (ab + 5b^2 - a^2) - (21ab + 2b^2 - 4a^2) = ab + 5b^2 - a^2 - 21ab - 2b^2 + 4a^2 On combining the like terms, = ab - 21ab + 5b^2 - 2b^2 - a^2 + 4a^2 = -20ab + 3b^2 + 3a^2 = 3a^2 + 3b^2 - 20ab Question 72: Each symbol given below represents an algebraic expression. Find the expression which is represented by the above symbols. Question 73: Observe the following nutritional chart carefully. Write an algebraic expression for the amount of carbohydrates (in grams) for (a) y units of potatoes and 2 units of rajma. (b) 2x units of tomatoes and y units of apples.
(a) By unitary method, ∴ 1 unit of potatoes contains 22 g of carbohydrates, so y units of potatoes contain 22 × y = 22y g. ∴ 1 unit of rajma contains 60 g of carbohydrates, so 2 units of rajma contain 60 × 2 = 120 g. Hence, the required expression is 22y + 120. (b) By unitary method, ∴ 1 unit of tomatoes contains 4 g of carbohydrates, so 2x units of tomatoes contain 2x × 4 = 8x g. ∴ 1 unit of apples contains 14 g of carbohydrates, so y units of apples contain 14 × y = 14y g. Hence, the required expression is 8x + 14y. Question 74: Arjun bought a rectangular plot with length x and breadth y and then sold a triangular part of it whose base is y and height is z. Find the area of the remaining part of the plot. Question 75: Amisha has a square plot of side m and another triangular plot with base and height each equal to m. What is the total area of both plots? Question 76: A taxi service charges Rs. 8 per km and levies a fixed charge of Rs. 50. Write an algebraic expression for the above situation, if the taxi is hired for x km. As per the given information, the taxi service charges Rs. 8 per km plus a fixed charge of Rs. 50. If the taxi is hired for x km, then the algebraic expression for the situation = 8 × x + 50 = 8x + 50. Hence, the required expression is 8x + 50. Question 77: Shiv works in a mall and gets paid Rs. 50 per hour. Last week he worked for 7 h and this week he will work for x hours. Write an algebraic expression for the money paid to him for both the weeks. Given, money paid to Shiv = Rs. 50 per hour. ∴ Money paid last week = Rs. 50 × 7 = Rs. 350. Money paid this week = Rs. 50 × x = Rs. 50x. Total money paid to Shiv = Rs. (350 + 50x) = Rs. 50(x + 7). Question 78: Sonu and Raj have to collect different kinds of leaves for a science project. They go to a park where Sonu collects 12 leaves and Raj collects x leaves. After some time Sonu loses 3 leaves and Raj collects 2x leaves.
Write an algebraic expression to find the total number of leaves collected by both of them. According to the question, Sonu collected leaves = 12-3=9 Raj collected leaves =x + 2x=3x ∴ Total leaves collected =9 + 3x Hence, the required expression is 9+ 3x. Question 79: A school has a rectangular playground with Length x and breadth y and a square lawn with side x as shown in the figure given below. What is the total perimeter of both of them combined together? Question 80: The rate of planting the grass is Rs. x per square metre. Find the cost of planting the grass on a triangular lawn whose base is y metres and height is z metres. Question 81: Find the perimeter of the figure given below. We know that, perimeter is the sum of all sides. Perimeter of the given figure = AB + BC + CD + DA =(5x – y)+2 (x + y)+ (5x – y)+2(x + y) = 5x – y + 2x + 2y + 5x – y + 2x + 2y On combining the like terms, = 5x + 2x+ 5x + 2x – y + 2y – y + 2y =14x + 2y Question 82: In a rectangular plot, 5 square flower beds of side (x+2) metres each have been laid (see the figure). Find the total cost of fencing the flower beds at the cost of Rs. 50 per 100 metres. Question 83: A wire is (7x-3) metres long. A length of (3x—4) metres is cut for use, answer the following questions (a) How much wire is left? (b) If this left out wire is used for making an equilateral triangle. What is the length of each side of the triangle so formed? Given, length of wire = (7x – 3)m and wire cut for use has length = (3x – 4) m (a) Left wire=(7x – 3) – (3x – 4)=7x – 3 – 3x + 4=7x – 3x – 3 + 4 =(4x + 1)m (b) ∴ Left wire = (4x +1)m ∴ Perimeter of equilateral triangle = Length of wire left => 3 x (side) = 4x + 1 => side = (4x +1)/3 = 1/3 (4x + 1)m. Question 84: Rohan’s mother gave him Rs. 3xy^2 and his father gave him Rs. 5(xy^2 + 2). Out of this money he spent Rs. (10 – 3xy^2) on his birthday party. How much money is left with him? Given, amount given to Rohan by his mother =Rs. 
3xy^2 and amount given to Rohan by his father = Rs. 5(xy^2 + 2) ∴ Total amount Rohan has = Rs. [(3xy^2) + (5xy^2 + 10)] = Rs. [3xy^2 + 5xy^2 + 10] = Rs. (8xy^2 + 10) Total amount spent by Rohan = Rs. (10 - 3xy^2) ∴ After spending, money left with Rohan = Rs. (8xy^2 + 10) - Rs. (10 - 3xy^2) = Rs. (8xy^2 + 10 - 10 + 3xy^2) = Rs. 11xy^2 Question 85: Question 86: The sum of first natural numbers is given by ½ n^2 + ½ n. (i) The sum of first 5 natural numbers. (ii) The sum of first 11 natural numbers. (iii) The sum of natural numbers from 11 to 30. Question 87: The sum of squares of first n natural numbers is given by 1/6 n(n + 1)(2n + 1) or 1/6(2n^3 + 3n^2 + n). Find the sum of squares of the first 10 natural numbers. Given, the sum of squares of first n natural numbers = 1/6 n(n + 1)(2n + 1) ∴ The sum of squares of first 10 natural numbers [put n = 10] = 1/6 (10)(10 + 1)(2 × 10 + 1) = 1/6 × 10 × 11 × 21 = 385 Question 88: The sum of the multiplication table of natural number n is given by 55 × n. Find the sum of (a) Table of 7 (b) Table of 10 (c) Table of 19 Given, the sum of the multiplication table of n = 55 × n (a) Sum of table of 7 = 55 × 7 = 385 [put n = 7] (b) Sum of table of 10 = 55 × 10 = 550 [put n = 10] (c) Sum of table of 19 = 55 × 19 = 1045 [put n = 19] Question 89: Question 90: Question 91: 4b - 3 Three subtracted from four times b. Question 92: Eight times the sum of m and 5. Question 93: Quotient on dividing seven by the difference of eight and x (x < 8). Question 94: Seventeen times quotient of sixteen divided by w. Question 95: 1. Critical Thinking Write two different algebraic expressions for the word phrase (¼) of the sum of x and 7. 2. What's the Error? A student wrote an algebraic expression for "5 less than a number n divided by 3" as (n/3) - 5. What error did the student make? 3. Write About It Shashi used addition to solve a word problem about the weekly cost of commuting by toll tax for Rs. 15 each day. Ravi solved the same problem by multiplying. They both got the correct answer. How is this possible? 1.
First expression = ¼(x + 7). As we know, addition is commutative, so it can also be written as ¼(7 + x). 2. Since the expression for 5 less than a number n is n - 5, "5 less than a number n divided by 3" should be written as (n - 5)/3. The student divided n by 3 first and then subtracted 5, so the error is in taking the quotient before the subtraction. 3. By the addition method, total weekly cost = (15 + 15 + 15 + 15 + 15 + 15 + 15) = Rs. 105. By the multiplication method, total weekly cost = cost of one day × seven days = 15 × 7 = Rs. 105. Question 96: Write an expression for the sum of 1 and twice a number n. If you let n be any odd number, will the result always be an odd number? Let the number be n. So, according to the statement, the expression can be written as 2n + 1. Yes, the result is always an odd number, because when a number is multiplied by 2 it becomes even, and adding 1 to an even number makes it odd. Question 97: Critical Thinking Will the value of 11x for x = -5 be greater than 11 or less than 11? Explain. The expression given is 11x = 11 × (-5) = -55 [put x = -5]. Clearly, -55 < 11. Hence, the value is less than 11. Question 98: Matching the column I and Column II by the following Question 99: At age of 2 years, a cat or a dog is considered 24 "human" years old. Each year after age 2 is equivalent to 4 "human" years. Fill in the expression [24 + [] (a - 2)], so that it represents the age of a cat or dog in human years. Also, you need to determine what 'a' stands for. Copy the chart and use your expression to complete it. Question 100: Express the following properties with variables x, y and z. 1. Commutative property of addition 2. Commutative property of multiplication 3. Associative property of addition 4. Associative property of multiplication 5. Distributive property of multiplication over addition 1. We know that, by the commutative property of addition, a + b = b + a ∴ Required expression is x + y = y + x 2. We know that, by the commutative property of multiplication, a × b = b × a ∴ Required expression is x × y = y × x 3.
We know that, by the associative property of addition, a + (b + c) = (a + b) + c ∴ Required expression is x + (y + z) = (x + y) + z 4. We know that, by the associative property of multiplication, a × (b × c) = (a × b) × c ∴ Required expression is x × (y × z) = (x × y) × z 5. We know that, by the distributive property of multiplication over addition, a × (b + c) = a × b + a × c ∴ Required expression is x × (y + z) = x × y + x × z
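The closed forms quoted in Questions 86-88 can be spot-checked by brute force. Note the factored form of the square-sum formula is n(n + 1)(2n + 1)/6, which matches the expanded (2n^3 + 3n^2 + n)/6 given in Question 87:

```python
def sum_n(n):          # ½ n^2 + ½ n = n(n + 1)/2
    return n * (n + 1) // 2

def sum_sq(n):         # n(n + 1)(2n + 1)/6 = (2n^3 + 3n^2 + n)/6
    return n * (n + 1) * (2 * n + 1) // 6

def table_sum(n):      # n*1 + n*2 + ... + n*10 = 55n
    return 55 * n

assert sum_n(5) == 15 and sum_n(11) == 66           # Q86 (i), (ii)
assert sum_n(30) - sum_n(10) == 410                 # Q86 (iii): 11 to 30
assert sum_sq(10) == sum(k * k for k in range(1, 11)) == 385    # Q87
assert table_sum(7) == sum(7 * k for k in range(1, 11)) == 385  # Q88 (a)
print(sum_n(30) - sum_n(10), sum_sq(10))
```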
Differentiability of transition semigroup of generalized Ornstein-Uhlenbeck process: a probabilistic approach

Ben Goldys and Szymon Peszat

Let \(P_s\phi(x)=\mathbb{E}\, \phi(X^x(s))\) be the transition semigroup, on the space \(B_b(E)\) of bounded measurable functions on a Banach space \(E\), of the Markov family defined by the linear equation with additive noise \[ d X(s)= \left(AX(s) + a\right)d s + B\,\mathrm{d}W(s), \qquad X(0)=x\in E. \] We give a simple probabilistic proof of the fact that null-controllability of the corresponding deterministic system \[ d Y(s)= \left(AY(s)+ (B\mathcal{U}(t)x)(s)\right)d s, \qquad Y(0)=x, \] implies that for any \(\phi\in B_b(E)\), \(P_t\phi\) is infinitely many times Fréchet differentiable and that \[ D^nP_t\phi(x)[y_1,\ldots ,y_n]= \mathbb{E}\, \phi(X^x(t))(-1)^nI^n_t(y_1,\ldots, y_n), \] where \(I^n_t(y_1,\ldots,y_n)\) is the symmetric n-fold Itô integral of the controls \(\mathcal{U}(t)y_1,\ldots, \mathcal{U}(t)y_n\).

Keywords: Gradient estimates, Ornstein–Uhlenbeck processes, strong Feller property, hypoelliptic diffusions, noise on boundary.

AMS Subject Classification: Primary 60H10; secondary 60H15, 60H17, 35B30, 35G15.
Copyright: (c) Roman Leshchinskiy 2008-2010
License: BSD-style
Maintainer: Roman Leshchinskiy <rl@cse.unsw.edu.au>
Stability: experimental
Portability: non-portable
Safe Haskell: None
Language: Haskell2010

Generic interface to mutable vectors

Class of mutable vector types

class MVector v a where

Class of mutable vectors parametrised with a primitive state token.

basicLength :: v s a -> Int
  Length of the mutable vector. This method should not be called directly, use length instead.

basicUnsafeSlice :: Int  -- starting index
                 -> Int  -- length of the slice
                 -> v s a
                 -> v s a
  Yield a part of the mutable vector without copying it. This method should not be called directly, use unsafeSlice instead.

basicOverlaps :: v s a -> v s a -> Bool
  Check whether two vectors overlap. This method should not be called directly, use overlaps instead.

basicUnsafeNew :: PrimMonad m => Int -> m (v (PrimState m) a)
  Create a mutable vector of the given length. This method should not be called directly, use unsafeNew instead.

basicInitialize :: PrimMonad m => v (PrimState m) a -> m ()
  Initialize a vector to a standard value. This is intended to be called as part of the safe new operation (and similar operations), to properly blank the newly allocated memory if necessary. Vectors that are necessarily initialized as part of creation may implement this as a no-op.

basicUnsafeReplicate :: PrimMonad m => Int -> a -> m (v (PrimState m) a)
  Create a mutable vector of the given length and fill it with an initial value. This method should not be called directly, use replicate instead.

basicUnsafeRead :: PrimMonad m => v (PrimState m) a -> Int -> m a
  Yield the element at the given position. This method should not be called directly, use unsafeRead instead.

basicUnsafeWrite :: PrimMonad m => v (PrimState m) a -> Int -> a -> m ()
  Replace the element at the given position. This method should not be called directly, use unsafeWrite instead.
basicClear :: PrimMonad m => v (PrimState m) a -> m ()
  Reset all elements of the vector to some undefined value, clearing all references to external objects. This is usually a noop for unboxed vectors. This method should not be called directly, use clear instead.

basicSet :: PrimMonad m => v (PrimState m) a -> a -> m ()
  Set all elements of the vector to the given value. This method should not be called directly, use set instead.

basicUnsafeCopy :: PrimMonad m
                => v (PrimState m) a  -- target
                -> v (PrimState m) a  -- source
                -> m ()
  Copy a vector. The two vectors may not overlap. This method should not be called directly, use unsafeCopy instead.

basicUnsafeMove :: PrimMonad m
                => v (PrimState m) a  -- target
                -> v (PrimState m) a  -- source
                -> m ()
  Move the contents of a vector. The two vectors may overlap. This method should not be called directly, use unsafeMove instead.

basicUnsafeGrow :: PrimMonad m => v (PrimState m) a -> Int -> m (v (PrimState m) a)
  Grow a vector by the given number of elements. This method should not be called directly, use unsafeGrow instead.
Instances:

- MVector MVector a (defined in Data.Vector.Mutable)
- Prim a => MVector MVector a (defined in Data.Vector.Primitive.Mutable)
- Storable a => MVector MVector a (defined in Data.Vector.Storable.Mutable)
- MVector MVector Bool / Char / Double / Float / Int / Int8 / Int16 / Int32 / Int64 / Word / Word8 / Word16 / Word32 / Word64 / () (defined in Data.Vector.Unboxed.Base)
- Unbox a => MVector MVector (Complex a) (defined in Data.Vector.Unboxed.Base)
- (Unbox a, Unbox b) => MVector MVector (a, b), and likewise for tuples up to (a, b, c, d, e, f) with all components Unbox (defined in Data.Vector.Unboxed.Base). Each tuple instance supplies the full set of basic* methods specialised to the tuple type.

Length information

Extracting subvectors

unsafeSlice :: MVector v a
            => Int  -- starting index
            -> Int  -- length of the slice
            -> v s a
            -> v s a
  Yield a part of the mutable vector without copying it. No bounds checks are performed.

new :: (PrimMonad m, MVector v a) => Int -> m (v (PrimState m) a)
  Create a mutable vector of the given length.

unsafeNew :: (PrimMonad m, MVector v a) => Int -> m (v (PrimState m) a)
  Create a mutable vector of the given length. The memory is not initialized.

replicate :: (PrimMonad m, MVector v a) => Int -> a -> m (v (PrimState m) a)
  Create a mutable vector of the given length (0 if the length is negative) and fill it with an initial value.

replicateM :: (PrimMonad m, MVector v a) => Int -> m a -> m (v (PrimState m) a)
  Create a mutable vector of the given length (0 if the length is negative) and fill it with values produced by repeatedly executing the monadic action.

clone :: (PrimMonad m, MVector v a) => v (PrimState m) a -> m (v (PrimState m) a)
  Create a copy of a mutable vector.

grow :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> m (v (PrimState m) a)
  Grow a vector by the given number of elements. The number must be positive.

unsafeGrow :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> m (v (PrimState m) a)
  Grow a vector by the given number of elements. The number must be positive but this is not checked.

Restricting memory usage

clear :: (PrimMonad m, MVector v a) => v (PrimState m) a -> m ()
  Reset all elements of the vector to some undefined value, clearing all references to external objects. This is usually a noop for unboxed vectors.
Accessing individual elements

read :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> m a
  Yield the element at the given position.

write :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> a -> m ()
  Replace the element at the given position.

modify :: (PrimMonad m, MVector v a) => v (PrimState m) a -> (a -> a) -> Int -> m ()
  Modify the element at the given position.

swap :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> Int -> m ()
  Swap the elements at the given positions.

exchange :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> a -> m a
  Replace the element at the given position and return the old element.

unsafeRead :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> m a
  Yield the element at the given position. No bounds checks are performed.

unsafeWrite :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> a -> m ()
  Replace the element at the given position. No bounds checks are performed.

unsafeModify :: (PrimMonad m, MVector v a) => v (PrimState m) a -> (a -> a) -> Int -> m ()
  Modify the element at the given position. No bounds checks are performed.

unsafeSwap :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> Int -> m ()
  Swap the elements at the given positions. No bounds checks are performed.

unsafeExchange :: (PrimMonad m, MVector v a) => v (PrimState m) a -> Int -> a -> m a
  Replace the element at the given position and return the old element. No bounds checks are performed.

Modifying vectors

nextPermutation :: (PrimMonad m, Ord e, MVector v e) => v (PrimState m) e -> m Bool
  Compute the next (lexicographically) permutation of the given vector in-place. Returns False when the input is the last permutation.

Filling and copying

set :: (PrimMonad m, MVector v a) => v (PrimState m) a -> a -> m ()
  Set all elements of the vector to the given value.

copy
  Copy a vector.
  The two vectors must have the same length and may not overlap.

move :: (PrimMonad m, MVector v a) => v (PrimState m) a -> v (PrimState m) a -> m ()
  Move the contents of a vector. The two vectors must have the same length. If the vectors do not overlap, then this is equivalent to copy. Otherwise, the copying is performed as if the source vector were copied to a temporary vector and then the temporary vector was copied to the target vector.

unsafeCopy
  Copy a vector. The two vectors must have the same length and may not overlap. This is not checked.

unsafeMove
  Move the contents of a vector. The two vectors must have the same length, but this is not checked. If the vectors do not overlap, then this is equivalent to unsafeCopy. Otherwise, the copying is performed as if the source vector were copied to a temporary vector and then the temporary vector was copied to the target vector.

Internal operations

unstream :: (PrimMonad m, MVector v a) => Bundle u a -> m (v (PrimState m) a)
  Create a new mutable vector and fill it with elements from the Bundle. The vector will grow exponentially if the maximum size of the Bundle is unknown.

unstreamR :: (PrimMonad m, MVector v a) => Bundle u a -> m (v (PrimState m) a)
  Create a new mutable vector and fill it with elements from the Bundle from right to left. The vector will grow exponentially if the maximum size of the Bundle is unknown.

vunstream :: (PrimMonad m, Vector v a) => Bundle v a -> m (Mutable v (PrimState m) a)
  Create a new mutable vector and fill it with elements from the Bundle. The vector will grow exponentially if the maximum size of the Bundle is unknown.

munstream :: (PrimMonad m, MVector v a) => MBundle m u a -> m (v (PrimState m) a)
  Create a new mutable vector and fill it with elements from the monadic stream. The vector will grow exponentially if the maximum size of the stream is unknown.
munstreamR :: (PrimMonad m, MVector v a) => MBundle m u a -> m (v (PrimState m) a) Source # Create a new mutable vector and fill it with elements from the monadic stream from right to left. The vector will grow exponentially if the maximum size of the stream is unknown.
The natural linear concatenative basis ¶The natural linear concatenative basis I've already called out the 2- and 3-element bases from Brent Kerby's writeup. The two-element linear concatenative basis The three-element linear concatenative basis But I neglected to talk about the 4-element basis! Kerby doesn't mention that one directly, but while rewatching my Strange Loop talk, I realized it's worth discussing. In particular, I mention in my talk that the 6-element nonlinear basis (i, cat, drop, dup, unit, swap) is the most commonly chosen basis, because there is a 1:1 correspondence between primitive instructions and the “categories” of instruction that you have to cover to have a complete basis. (That is, each category is covered by exactly one primitive instruction, and each primitive instruction does only what is required by its category and nothing else.) I think it's worth calling this 1:1 basis the “natural” basis. The natural normal concatenative basis [Strange Loop] Concatenative programming and stack-based languages Categories of instructions in a concatenative basis So, what would the natural linear basis be? Well, you'd just remove drop and dup, leaving you with: Is that enough to be complete? To see if it is, we just have to reduce from one of the other bases: cons ≜ swap unit swap cat ┃ [B] [A] cons ┃ [B] [A] swap unit swap cat [B] ┃ [A] swap unit swap cat [B] [A] ┃ swap unit swap cat [A] [B] ┃ unit swap cat [A] [[B]] ┃ swap cat [[B]] [A] ┃ cat [[B] A] ┃ sap ≜ swap cat i ┃ [B] [A] sap ┃ [B] [A] swap cat i [B] ┃ [A] swap cat i [B] [A] ┃ swap cat i [A] [B] ┃ cat i [A B] ┃ i A B ┃ i (I originally had more complicated definitions, but was able to find simpler ones.) Defining ‘cons’ with only empty quotations
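The hand reductions above can be machine-checked with a tiny interpreter for the four primitives. The token representation and the `run` helper below are my own modelling choices (quotations as Python lists, the stack's top at the end), not part of Kerby's or this post's formalism:

```python
# A minimal interpreter for the four-primitive linear basis (swap, unit,
# cat, i), used to check the derived definitions of `cons` and `sap`.
# The data stack holds quotations (Python lists of tokens); any token
# that is not a primitive is treated as an opaque symbol and pushed.

def run(program, stack):
    """Run a list of tokens against a stack (top of stack = end of list)."""
    program = list(program)
    while program:
        word = program.pop(0)
        if word == "swap":                    # [B] [A] -> [A] [B]
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif word == "unit":                  # [A] -> [[A]]
            stack.append([stack.pop()])
        elif word == "cat":                   # [B] [A] -> [B A]
            a = stack.pop()
            b = stack.pop()
            stack.append(b + a)
        elif word == "i":                     # [A] -> A  (unquote and run)
            program = stack.pop() + program
        else:                                 # quotation literal or symbol
            stack.append(word)
    return stack
```

Running the derived `cons` program (swap unit swap cat) on the stack [B] [A] leaves [[B] A], and the derived `sap` program (swap cat i) executes A then B, matching the reductions above.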
Extract Paired Correlation — extract_r_paired

A function for estimating the correlation from a paired samples t-test. Useful when using tsum_TOST and the correlation is not available.

extract_r_paired(m1, sd1, m2, sd2 = NULL, n, tstat = NULL, pvalue = NULL)

Arguments:
  m1 — mean of group 1.
  sd1 — standard deviation of group 1.
  m2 — mean of group 2.
  sd2 — standard deviation of group 2.
  n — sample size (number of pairs).
  tstat — the t-value from a paired samples t-test.
  pvalue — the two-tailed p-value from a paired samples t-test.

Value: An estimate of the correlation.

Reference: Lajeunesse, M. J. (2011). On the meta‐analysis of response ratios for studies with correlated and multi‐group designs. Ecology, 92(11), 2049–2055.
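For reference, the usual back-calculation behind this kind of estimator (following Lajeunesse 2011) can be sketched in a few lines. This is an illustrative reconstruction, not necessarily TOSTER's exact implementation, and the function name is mine:

```python
import math

def extract_r_paired_sketch(m1, sd1, m2, sd2, n, tstat):
    """Estimate the pre-post correlation from paired t-test summaries.

    Sketch of the standard back-calculation: recover the SD of the
    difference scores from the t statistic, then solve
    sd_diff^2 = sd1^2 + sd2^2 - 2*r*sd1*sd2 for r.
    """
    se_diff = (m1 - m2) / tstat          # standard error of the mean difference
    sd_diff = se_diff * math.sqrt(n)     # SD of the difference scores
    return (sd1**2 + sd2**2 - sd_diff**2) / (2 * sd1 * sd2)
```

For example, with m1 − m2 = 0.5, sd1 = sd2 = 1, n = 25 and t = 2.5, the difference-score SD works out to 1 and the recovered correlation is 0.5.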
Weisfeiler-Lehman Kernel

The Weisfeiler-Lehman kernel is an iterative integration of neighborhood information. We initialize the label of each node with its own node degree. At each step, each node collects its neighbors' current labels (initially, their degrees) into a multiset. A multiset, mset or bag is a set in which duplicate elements are allowed; an ordered bag is the list we use in programming. After step $K$, we have a multiset for each node. These multisets can be processed to form a representation of the graph, which is in turn used to calculate statistics of the graph. Iterate $k$ steps. This iteration can be used to test whether two graphs are isomorphic^1. Planted: by L Ma; L Ma (2021). 'Weisfeiler-Lehman Kernel', Datumorphism, 09 April. Available at: https://datumorphism.leima.is/cards/graph/graph-weisfeiler-lehman-kernel/.
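A minimal sketch of the relabeling loop: labels start as degrees, and each round a node's new label is a hash of its own label together with the sorted multiset of its neighbors' labels. The function name and hashing scheme are illustrative choices, not from the card:

```python
from collections import Counter

def wl_labels(adj, rounds=3):
    """Iterative Weisfeiler-Lehman relabeling.

    adj: dict mapping node -> list of neighbors (undirected graph).
    Returns a Counter of final labels, a simple graph signature:
    isomorphic graphs always produce the same Counter, though the
    converse does not hold in general (the WL test is incomplete).
    """
    labels = {v: len(adj[v]) for v in adj}          # init: node degrees
    for _ in range(rounds):
        labels = {
            v: hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(labels.values())
```

For example, a triangle and a 3-node path get different signatures (their degree multisets already differ), while any relabeled copy of the triangle gets the same signature.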
Mathworks topics: details

Times square (red set, green set, blue set)

The children build the familiar multiplication square but graphed vertically, progressing from the natural numbers to the integers at experiment 30 and from the integers to the real numbers at experiment 36. Of the 4 standard models for multiplication, only one, repeated addition, is used here. This is the one children turn to most readily. The point of the exercise is to realise algebraic symmetry geometrically. Teachers may like to devise a parallel treatment for the addition square.

Zoom (green set, blue set)

On the one hand, any child pushing a toy car, playing with a doll or recognising Mummy’s photo in an album accepts the same object on different scales. On the other, the consequences for measurement of change of scale take us up to Level 9 on the National Curriculum. But a rich dynamic experience of scaling – not enlarging but zooming – may lead us to appreciate why we needn’t leave so much space when they deliver that ton of sand or those thousand bricks but why we’ll need a lot more wool for our sister’s cardigan than for little Penny’s. The treatment is qualitative except where the quantities are experienced but not abstracted or where they lead to a surprise. Indeed it is the mixture of recognition and surprise which makes this such a good topic. The maths implicit here is that area goes up as the square of the linear scale factor: volume as the cube. If you have friends with the equipment – or the school has it – the children should look through a zoom lens while zooming it and enlarge one of their pictures on a zoom copier.

Slices and solids (blue set)

The growing child starts with 3-D objects and later abstracts 2-D shapes. By looking at 2-D sections through 3-D objects we can move back and forth between the two worlds and enrich our experience of both.
We look at the complete slice, approaching the shape … from the inside: … from the outside: … and from both: Abbott’s Flatland is the syllabus for this section. (See the original or Martin Gardner’s Further Mathematical Diversions ch 12.) A water surface, a slice of light, a rubber band, a sheet of cardboard, the junction between 2 layers of coloured plasticine, may all be used to define the plane of section. Note that in almost all cases the ‘slices’ are related by two kinds of transformation: affine – represented by sections through the general prism or cylinder – or, more generally, projective – represented by sections through the general pyramid or cone.

Left and right (red set, green set, blue set)

This section deals with mirror symmetry. In these explorations of space the young child can make discoveries and the older person examine observations long taken for granted. (If the visitor has mixed eye-hand dominance, it doesn’t matter. We’re not using lateral discrimination here but studying the phenomenon of handedness itself. For the purpose of these exercises it’s of no concern which hand you call which. For ‘…left …, then right …’ on the caption cards, read ‘… one …, then … the other …’.) At the end of the sequence we extend the idea of reflection from that in a plane to that in a line and, finally, a point. In The Ambidextrous Universe Martin Gardner covers all this material and goes on to examine nature’s preference for one handedness or the other at a fundamental level.

All sorts (blue set)

In exploring different ways of sorting and representing data we move between the ‘table’ scale and the ‘room’ scale: between placing a counter in a drawn circle and standing in a rope loop, between following a flowchart with a finger and negotiating an obstacle course of tables and chairs, and so on. The syllabus comprises trees and flowcharts, Carroll and Venn diagrams (1-, 2- and 3-D), bar and pie charts, scattergrams and barycentric graphs.
Packing shapes (green set, blue set)

This sequence starts with ways of tiling the plane then advances one dimension to ways of filling space. It moves from an examination of atomic packing to an investigation of the shape of soap bubbles in a foam. Though, as elsewhere, the sequence is progressive, certain experiments can be performed by both first-year undergraduates and – with different motives, preconceptions and expectations! – pre-school children.

Transformations (red set, green set, blue set)

Felix Klein’s transformations of space get more and more general as conditions are relaxed. Thus you start with the isometries, then take these as a special case of the similarities, and so on up through affinities, projectivities and topological distortions. This is how the sequence Transformations develops but only in the loosest possible way. In fact the title is little more than an excuse for drawing attention to everyday but surprising ways in which one mathematical object changes into another. In every case, however, one or more quantities are invariant. Teachers may like to name them as they go through the sequence. Be aware of transformations in other sequences: translations, rotations and reflections: Packing shapes; reflections: Left and right; dilatations: Zoom; affinities, perspectivities: Slices and solids.

Angle (blue set)

Angle is a dimensionless measure – it must always be defined as a ratio (a fraction of a turn, arc: radius, …) – so already abstract to that extent, and children find it hard to deal with this quantity they can’t locate. The kinetic, operational treatment (angle as ‘turn’) now familiar through LOGO is the approach least prone to misconstruction. But the static manifestation (angle as ‘shape’) can’t be ignored. The procedure here is to establish frames of reference with spirit level, plumbline, compass and use the vocabulary which goes with them – ‘vertical’/‘horizontal’ – ‘steep’/‘shallow’; ‘north’/‘east’ – ‘north-east’, etc.
Then free the angles from their reference directions – walk around with a ‘right angle checker’, record angles found at the vertices of loose objects with an ‘angle indicator’ and use the corresponding terms – ‘perpendicular’, ‘parallel’, ‘inclined’; ‘acute’, ‘obtuse’, ‘reflex’. At sixth-form level the approach to trigonometry is the same: a ‘trigogram’ displays the 3 ratios on Cartesian axes in the standard way, from which thereafter they may be divorced.

L.C.M.s (red set, green set, blue set)

The lowest common multiple of 3 and 4 is 12. If we look along the number line we thus find multiples of 3 and 4 coinciding at multiples of 12. The number line is a spatial model but the same arithmetic can be modeled in time. In fact embodiments of this simple idea are many and diverse. The familiar Cuisenaire rods are out in the sequence but also gear wheels, a glockenspiel and acetate masks each of which lets through multiples of a particular number from the ‘times’ table.

Dissections (red set, green set, blue set)

Here is another subsequence which has outgrown its parent, in this case Transformations. The core is a set of dissection puzzles. To solve them quickly one must: a) predict the effects of grouping simple angles into compound ones and adding lengths, b) remember the effects of so doing – successful or unsuccessful. For stage (a) careful, directed observation is a prerequisite, and in both stages the capacity to ‘visualise’ – form and manipulate mental images – is exercised and developed. In all these transformations area is invariant but, though the lengths and angles of individual polygons remain unchanged – i.e. the transformations they suffer are isometric – the composite shape is not preserved. The puzzles stress this independence. The sequence is extended 1 dimension by a group of solid dissections – including the celebrated SOMA cube on both the ‘table’ and ‘floor’ scale.
A series of puzzles where a given polygon must be produced from the intersection of 2 (or more!) others demands the same skills.

2-D to 3-D (blue set)

This is a little exhibition of ways to simulate 3 dimensions in 2: anaglyphs, stereopairs, linear perspective. Once you’ve got your eye in, objects in a conventional projection like the isometric are seen in 3-D; but note that in certain cases there is a many-one mapping of points on the object to points on the drawing, making for ambiguity.

Pascal’s Triangle (blue set)

This important array embodies number sequences ubiquitous in mathematics – the successive orders of triangle numbers, the binomial coefficients, the Fibonacci sequence – and their relations. The sequence Pascal’s Triangle opens a few doors to this Alhambra. Like Times Square most of this sequence can be adapted readily for use in the classroom. Though written quite independently, Tony Colledge’s (photocopiable) book Pascal’s Triangle (Tarquin) is virtually a teacher’s guide to this sequence.

Loci and linkages (green set, blue set)

Loci – Sliding ladders, rolling wheels, … What paths will selected points upon them follow? Does the geometry of the mechanism contain simple features to help you make your prediction? Linkages – Though less visible than they were a century ago, link motions are essential to devices we take for granted: tool and sewing boxes, folding steps, umbrellas, floor mops, screw jacks, cupboard hinges, door closers. In this sequence we study how the properties of the rhombus and parallelogram are exploited.

Pythagoras’ Theorem (blue set)

There are many ways to demonstrate why Pythagoras’ Theorem holds: here we take just 3. Before moving to the general case we look at the 3, 4, 5 and 1, root 2, root 3 triangles. We also apply the converse to check some right angles.

Symmetry (red set, green set, blue set)

We meet line and rotation symmetry first separately embodied then combined. Next we make designs with our own choice of symmetries.
And finally we are introduced to the idea of a group of symmetry operations by fitting solid shapes in holes.

Weigh-In (red set, green set, blue set)
Formerly under the Challenges topic but now a sequence in its own right, this includes a number of exercises for both the 2-pan and mathematical balances.

Challenges (red set, green set, blue set)
In other parts of the Circus it’s clear what sort of maths is involved. But in situations like that set up for ‘Grandpa’s Armchair’ it’s difficult to get a mathematical handle on the problem. That particular example comes from John Mason’s excellent introduction to the psychology of problem-solving, Thinking Mathematically (Addison-Wesley).
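As a small aside on the L.C.M.s sequence above (not part of the exhibit catalogue): the coincidence of the multiples of 3 and 4 at multiples of 12 is easy to check in a few lines of Python.

```python
from math import lcm

# Multiples of 3 and 4 up to 60, as on the exhibit's number line
threes = set(range(3, 61, 3))
fours = set(range(4, 61, 4))

# The two sequences coincide exactly at multiples of lcm(3, 4) = 12
print(lcm(3, 4))               # 12
print(sorted(threes & fours))  # [12, 24, 36, 48, 60]
```

The same check models the gear-wheel and glockenspiel embodiments: two periodic events line up exactly at multiples of their lowest common multiple.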
Joule Heating Calculator

Greetings, fellow physics enthusiasts! 🧪 Ready to embark on a shocking journey through the electrifying world of Joule heating? Hold on to your circuit boards, because we’re about to decode the electrifying formula with a zesty twist!

The Electrifying Formula

Behold, the electrifying formula that powers Joule heating:

Q = I² * R * t

Yes, you read it right! This formula helps us calculate the amount of heat (Q) generated when an electric current (I) flows through a resistor (R) for a certain time (t). Now, let’s dive into the mesmerizing realm of Joule heating calculations!

Categories of Joule Heating

From tiny resistors to massive electrical grids, Joule heating finds its place everywhere. Here are some categories with a dash of imperial units:

Category | Type | Range | Joule Heating (Imperial)
Everyday Wonders | Household devices | 0.01 W to 2 kW | ~6,824 BTU/hr (2 kW space heater)
Industrial Giants | Machinery | 1 kW to 1 MW | 3.41e+7 BTU/hr (Arc Furnace)
Cutting-Edge Tech | Electronics | 0.001 W to 100 W | 341.2 BTU/hr (100 W laptop)

Now, let’s add some sparks to Y+ calculations!

Y+ Calculations for Electric Personalities

We’ve calculated Y+ values for fictional characters with electrifying quirks. Check it out!

Character | Resistance (Ohms) | Y+ Value | Calculation
Shocking Steve | 100 | 12 | Static electricity enthusiast
Wired Wendy | 5000 | 68 | Electrical engineer by day
Zapster Zane | 10 | 2 | Lightning chaser on weekends

Ways to Calculate Joule Heating

Now, let’s explore various methods to calculate Joule heating, each with its own sparks, pros, and cons:

Method | Advantages | Disadvantages | Accuracy
Basic Power Formula | Simple and intuitive | Assumes constant resistance (R) | Good
Advanced Physics | Accounts for resistance (R) varying with temperature | Requires detailed knowledge of the material’s behavior | Excellent
Numerical Simulation | Highly accurate for complex geometries | Computational complexity | Excellent

Limitations of Joule Heating Calculation Accuracy

1. Temperature Dependency: Joule heating calculations can be sensitive to temperature variations, affecting accuracy.
2. Material Behavior: The accuracy of calculations heavily relies on understanding how materials behave under electrical stress.

Alternative Methods for Measurement

Let’s explore alternative methods to measure Joule heating and their unique sparks!

Method | Pros | Cons
Infrared Thermography | Non-contact and real-time heat detection | Limited to surface temperature measurement
Calorimetry | Direct measurement of heat generated | Requires sophisticated equipment and techniques
Thermoelectric Analysis | Accurate measurements based on thermoelectric effects | Limited to certain materials

FAQs on the Joule Heating Calculator

1. What is Joule heating, and why is it important?
Joule heating is the process where electrical energy is converted into heat energy. It’s essential in various applications, from electronics to industrial processes.

2. Can I feel Joule heating in everyday life?
Absolutely! When you touch a warm laptop charger or a light bulb, you’re experiencing Joule heating firsthand.

3. How can I reduce Joule heating in electronic devices?
Using materials with lower resistance, improving heat dissipation, and reducing current flow are common strategies.

4. Is Joule heating the same as Ohmic heating?
Yes, Ohmic heating is another term for Joule heating, often used in the context of electrically heating fluids.

5. Why do power lines sometimes heat up?
Joule heating in power lines is due to the resistance of the wire, especially when high currents flow through them.

6. Can I use Joule heating for cooking?
While it’s not practical for everyday cooking, Joule heating is used in industrial processes like induction cooking.

7. Are there materials that are immune to Joule heating?
No material is entirely immune, but superconductors come close by having virtually zero resistance.

8. What’s the most spectacular application of Joule heating?
Arc furnaces, used in steelmaking, produce an incredible amount of heat through Joule heating.

9. Is it safe to touch a heated wire?
It depends on the temperature and the material. Be cautious, as high temperatures can cause burns.

10. Can I use the Joule Heating Calculator for educational purposes?
Absolutely! Our calculator is a valuable learning tool for students and enthusiasts.

Enlightening Resources

Expand your knowledge of Joule heating with these illuminating government and educational resources:

1. MIT – Electrical Engineering: Explore Massachusetts Institute of Technology’s open courseware on electrical engineering for in-depth insights.
2. NREL – Energy Basics: The National Renewable Energy Laboratory provides resources on energy fundamentals, including Joule heating.
3. Khan Academy – Electric Potential Energy: Khan Academy offers educational videos and tutorials on electric potential energy and Joule heating concepts.
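As a footnote to the formula Q = I² * R * t above, here is a short Python sketch (ours, not part of the calculator site; the function names are our own) that evaluates it and converts power to the BTU/hr figures used in the tables:

```python
def joule_heat(current_a, resistance_ohm, time_s):
    """Heat in joules from Q = I^2 * R * t."""
    return current_a ** 2 * resistance_ohm * time_s

def watts_to_btu_per_hr(watts):
    """Convert power in watts to BTU/hr (1 W = 3.412142 BTU/hr)."""
    return watts * 3.412142

# Example: 2 A through a 10 ohm resistor for 60 s dissipates 2400 J
print(joule_heat(2.0, 10.0, 60.0))         # 2400.0
# A 100 W laptop corresponds to roughly 341.2 BTU/hr
print(round(watts_to_btu_per_hr(100), 1))  # 341.2
```

Note that this is the "Basic Power Formula" method from the comparison table: it assumes the resistance stays constant over the interval.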
Unit 23 Mathematics for Software Development Assignment

Task 1

LO1: Understand core mathematical skills for software engineers

P1.1 A1. Solve the following linear and quadratic equations:
(i) 2(3 – 5x) = 15
(ii) x² + x – 20 = 0

P1.1 A2. Solve the following sets of simultaneous equations by (a) the algebraic method and (b) the graphical method:
(i) y = 2x; y = –2x + 1
(ii) y = 5x + 1; y = –5x + 1
(iii) –6y = 3x – 4; 2y = x + 5

P1.2 A3. Find the volume of the following shapes to three significant figures, showing your work step by step:
(i) a cube with side length 27 metres
(ii) a sphere with radius 20 inches

P1.2 A4. Using Pythagoras’ theorem, prove that triangle ΔABC (9 : 12 : 15) is a right-angled triangle.
(i) Calculate the sine, cosine and tangent of each angle of ΔABC.
(ii) Using an appropriate Excel function, demonstrate on a spreadsheet that ΔABC is a right-angled triangle.

P1.3 A5. Two robots, Alice and Bob, are pulling a box as shown on the figure.
i. Calculate the vector c = a + b.
ii. Calculate the magnitude of vector c.
iii. Write pseudocode for calculating the magnitude of vector c.

LO2: Understand the application of algebraic concepts

P2.1 B1. A certain British company has three departments. The following sets show the departments, surnames and annual salaries of this company’s employees:
A = {Martin, Marriott, Boast, Preston, Kans}
B = {24k, 25k, 26k, 27k, 30k}
C = {Production, Sales, Finance}
Mr Martin and Mrs Marriott work in the production department, Mrs Boast and Mrs Preston work in the sales department, and Mr Kans works in the finance department.
a. Find the Cartesian product of set A and set B (R = A × B).
b. Find the natural join of R and C.
c. Fill in the table below using the provided information (note: explain your work step by step):
Employee name | Salary | Department

P2.1 B2. A small ICT firm has three branches: 1. Redbridge, 2. Enfield and 3. Barnet.
Five technicians with the following details work at this company: Ali (location: Barnet, age: 25, salary: £21,000), Steve (location: Redbridge, age: 45, salary: £23,000), Mike (location: Enfield, age: 50, salary: £19,000), Linda (location: Barnet, age: 55, salary: £24,000), Carol (location: Redbridge, age: 43, salary: £27,000).
1. Draw the required number of tables and fit the above information in.
2. List the individuals satisfying the conditions below:
1. (Age < 46) AND (Salary > £23,000)
2. (Age > 26) OR (Salary < £24,000)
3. (Age < 53) AND (Salary > £29,000) OR (Location = 1)
4. (Age > 25) XOR (Salary > £30,000) OR (Location = 2)
Explain how you took the above steps.

P2.2 B3. Create a magic square by identifying the values of p, q, r, s, t, u, x, y, z in matrix A. [Show your work step by step]

B4. Show that if
P = (1 2; 3 4) and Q = (−2 1; 1.5 −0.5),
then P is the inverse of Q.

LO3: Be able to apply the fundamentals of formal methods

P3.1 C1. Suppose that two sets A and B are defined by
A = {g, e, r, m, a, n, i}
B = {p, o, l, a, n, d}
Identify the following statements as true or false:
(i) a ∈ A
(ii) b ∈ B
(iii) d ∉ B
(iv) u ∉ A
(v) a ∈ A ∩ B
(vi) |A| = |B|
(vii) {i, r, a, n} ⊂ A
(viii) |A ∪ B| = 8

P3.1 C2. Suppose we have a universal set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27} and consider two sets P and O defined as follows:
P = “all multiples of 3”
O = “the first ten even numbers”
Represent all of the elements in a Venn diagram and identify the elements in P ∩ O, P ∪ O and P Δ O.

P3.1 C3. For all of the following sets defined in set-theoretic notation, list out all of the elements:
S1 = {x : x = 2n, where 1 ≤ n ≤ 6}
S2 = {x : x = 3n², where 1 ≤ n ≤ 5}
S3 = {y : y = 5n³, where 1 ≤ n ≤ 4}
S4 = {x : x = √n, where 3 < n < 5}

P3.2 C4. For the circuit shown below, construct a truth table for each intermediate function. Hence, find the output function X.

P3.2 C5. Suppose that a salesman has 4 differently-located customers.
1.
Find the number of different ways the salesman can leave home, visit two different customers and then return home.
2. Write pseudocode for calculating the answer to the previous section.

LO4: Be able to apply statistical techniques to analyse data

P4.1 D1. A survey of 157 households recorded the number of children per household.
1. Calculate the mean of the frequency distribution for the above case.
2. What is the mode of the number of children per household?

P4.1 D2. A company has ten sales territories with approximately the same number of sales people working in each territory. Last month the sales orders achieved were as follows:
Area:  A   B   C   D   E   F   G   H   I   J
Sales: 150 130 140 150 140 300 110 120 140 120
For these sales calculate the following:
1. Arithmetic mean
2. Mode
3. Median
4. Lower quartile
5. Upper quartile
6. Quartile deviation
7. Standard deviation
8. Mean deviation
Show all the steps you took to complete your answer.

P4.1 D3. Identify a topic in one of the following areas and conduct research on its application in software development:
• Boolean algebra
• Propositional logic
• Relations and functions
• Probability, sets, reliability
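As an informal check on two of the tasks above (our sketch, not part of the assignment brief): task A1(ii) can be verified with the quadratic formula, and task C5 by enumerating ordered pairs of customers.

```python
import math
from itertools import permutations

# A1(ii): x^2 + x - 20 = 0 via the quadratic formula; factors as (x + 5)(x - 4)
a, b, c = 1, 1, -20
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
print(roots)  # [-5.0, 4.0]

# C5: leave home, visit two of the four customers in order, return home
# (customer labels here are placeholders)
routes = list(permutations(["W", "X", "Y", "Z"], 2))
print(len(routes))  # 12, i.e. 4 * 3 ordered visits
```

The second count is simply the number of permutations of 4 items taken 2 at a time, which is what the requested pseudocode would compute.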
Mastering Double Integrals in MATLAB: The Essential Guide for Engineers & Scientists

As an engineer or scientist, you likely encounter advanced mathematical concepts like double and triple integrals regularly in your work. Manually evaluating these complex integrals can be extremely tedious and error-prone. Luckily, MATLAB offers accessible tools to accurately compute multidimensional integrals with ease.

In this comprehensive guide, we’ll explore what double integrals are, why they are useful, and most importantly, how to evaluate them effectively in MATLAB using the powerful integral2 function. Whether you need to find volumes, centers of mass, probability distribution functions, or solve physics and engineering problems that involve double integration, this guide has you covered. Let’s get started.

What Exactly Are Double Integrals?

Double integrals extend one-dimensional definite integrals to a two-dimensional plane. While single-variable integrals allow you to find the area under a curve over a one-dimensional interval, double integrals allow you to find the volume between a surface and the xy-plane over a two-dimensional region.

For example, consider a two-dimensional region R bounded by the x-axis, the line x = 5, and the curve y = 2 + sin(x). If we had a function z = f(x,y) defining a surface over this region, the double integral would allow us to compute the volume between that surface and the xy-plane over R.
Some common applications that rely on double integration include:

• Finding mass and centers of mass in physics
• Computing probability distribution functions in statistics
• Analyzing heat flow and thermodynamic cycles in engineering
• Calculating fluid flow and frictional losses
• Determining stresses and potential energies

In short, any application that involves a quantity distributed over a 2D planar region will require double integration techniques. Pretty much any STEM field leverages them!

Now let’s switch gears and see how we can evaluate double integrals efficiently using MATLAB’s integral2 function.

Evaluating Double Integrals Numerically with MATLAB’s integral2

While some double integrals can be evaluated analytically, most real-world applications involve complex functions and irregular regional boundaries. In these cases, numerical methods are preferred. MATLAB provides the very useful integral2 function for computing double integrals numerically. The syntax takes the form:

q = integral2(fun,xmin,xmax,ymin,ymax)

where fun is the function handle to the integrand f(x,y) and [xmin,xmax] and [ymin,ymax] define the regional boundaries over which to integrate. For example, to integrate the function f(x,y) = x^2 + y^2 over the square region between [-5,5] in x and [-5,5] in y we would execute:

f = @(x,y) x.^2 + y.^2;
q = integral2(f,-5,5,-5,5)

resulting in q ≈ 1666.67 (the exact value is 5000/3).

Let’s look at a more complex example:

f = @(x,y) sin(x) + exp(y);
xmin = 0;
xmax = pi;
ymin = @(x) sqrt(x);
ymax = @(x) x.^2;
q = integral2(f,xmin,xmax,ymin,ymax);

Here we integrated f(x,y) = sin(x) + exp(y) over the region bounded by the curves y = sqrt(x) and y = x^2 between x = 0 and x = pi. integral2 handles both the complex integrand and irregular regional boundaries with ease, giving the numerically estimated integral value.

Customizing Computational Parameters

The MATLAB team has invested immense resources into making integral2 extremely robust and efficient.
Nonetheless, you can customize parameters like the integration method or error tolerances to achieve your specific precision and performance needs. The syntax for supplying additional parameters is:

q = integral2(fun,xmin,xmax,ymin,ymax,'Method',method,'AbsTol',abstol,'RelTol',reltol)

where Method can be set to 'tiled' (default) or 'iterated', and abstol and reltol control the absolute and relative error tolerances respectively. Playing with these parameters can help integrate tricky functions faster or with higher accuracy.

When to Avoid integral2? Handling Improper Integrals & Singularities

While integral2 excels for most well-behaved functions over finite regions, it will fail if:
1. The boundaries approach infinity (improper integrals), or
2. There are internal singularities in the region.

In these cases, you’ll have to analytically isolate and separately handle the problematic areas before numerically evaluating the remainder with integral2. For example, integrating a function with an internal pole at (a,b) would require splitting the region in two at the singularity and handling it separately from the main integral. Handling these scenarios analytically alongside the numerical integral2 workflow allows you to evaluate much more complex integrals than possible otherwise.

Tips & Best Practices for Accurate, Efficient Double Integration

Through many years of conducting research involving multivariate surface fitting and integration, I’ve compiled a set of tips and best practices for working with double integrals:

• Visualize 2D regional boundaries with tools like ezplot whenever possible – visually confirming they match expectations is wise.
• Start with a coarse grid size for integral2, then refine until error thresholds are met for efficiency.
• Try both the 'tiled' (default) and 'iterated' methods – one often converges much faster depending on the shape and orientation of your specific region and integrand.
• Adjust absolute and relative error tolerances to match precision needs – tighter thresholds improve accuracy but slow down computation.
• Analyze singularities, discontinuities and rapid oscillations analytically first, before numerical evaluation.
• Vectorize your integrand function for order-of-magnitude computation speedups.
• Confirm integrals over symmetric regions match expected symmetric results – this provides reassurance that the integrand and boundaries are accurate.

And those are just a few tips of the trade for harnessing MATLAB’s integral2 effectively!

Conclusion & Next Steps

As we’ve explored, double integrals serve vital roles in scientific and engineering analysis by enabling numerical quantification over two-dimensional domains. Fields ranging from physics to economics to thermodynamics rely on double integration routines to solve real-world problems. Manually evaluating double integrals becomes extremely arduous over complex regions, even with simple integrand functions. Luckily, MATLAB’s integral2 function provides efficient and accessible numerical integration for the most challenging 2D domains and functional surfaces – no pencil-and-paper derivation required!

I hope this guide gave you an appreciation for the power of numerical double integration and how tools like MATLAB enable taking multidimensional calculations to new levels. If you found this introduction useful, I encourage you to explore related MATLAB capabilities like:

• Triple integration over 3D volumes with integral3
• Symbolic integration for deriving analytical solutions with the MATLAB Symbolic Math Toolbox
• 2D and 3D visualization of domains and surfaces using plotting tools

The ability to move between numerical computations, symbolic manipulations, and stunning visualizations makes MATLAB an unmatched environment for scientific computing and multifaceted engineering analysis, even beyond double integration.
I guarantee mastering tools like integral2 will exponentially increase your productivity and enable solving problems you didn’t think possible before! So why not kick the tires with some test integrations for your own research? Please drop me any questions in the comments section below. I look forward to hearing about the creative ways you end up applying double integration!
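As an aside not in the original article: the first integral2 example, f(x,y) = x² + y² over [−5,5] × [−5,5], can be cross-checked without MATLAB using a plain midpoint-rule sum in Python. The exact value is 5000/3 ≈ 1666.67.

```python
# Midpoint-rule approximation of the double integral of x^2 + y^2
# over the square [-5, 5] x [-5, 5]; exact value is 5000/3.
n = 400          # subdivisions per axis
h = 10.0 / n     # cell width
total = 0.0
for i in range(n):
    x = -5.0 + (i + 0.5) * h   # cell-center x
    for j in range(n):
        y = -5.0 + (j + 0.5) * h   # cell-center y
        total += (x * x + y * y) * h * h
print(round(total, 2))
```

The sum converges quadratically in h to 5000/3; with n = 400 it agrees with the exact value to well within 0.1.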
Compound Interest (Definition, Formulas and Solved Examples)

Compound interest is the interest imposed on a loan or deposit amount, and it is one of the most commonly used concepts in daily life. The compound interest on an amount depends on both the principal and the interest gained over previous periods; this is the main difference between compound and simple interest. If we observe our bank statements, we generally notice that some interest is credited to our account every year, and that this interest varies from year to year for the same principal amount: the interest increases in successive years. Hence, we can conclude that the interest charged by the bank is not simple interest; it is known as compound interest, or CI.

In this article, you will learn what compound interest is, along with the formula and derivation used to calculate it when compounded annually, half-yearly, quarterly, etc. Through the examples given here, based on real-life applications of compound interest, you can also understand why the return on compound interest is higher than the return on simple interest.

Compound Interest Definition

Compound interest is the interest calculated on the principal and the interest accumulated over the previous period. It is different from simple interest, where interest is not added to the principal while calculating the interest for the next period. In mathematics, compound interest is usually denoted by C.I.

Also, try out: Compound Interest Calculator.

Compound interest finds its usage in most transactions in the banking and finance sectors, among other areas. Some of its applications are:
1. Increase or decrease in population.
2. The growth of bacteria.
3. Rise or depreciation in the value of an item.

Compound Interest in Maths

In maths, compound interest can be calculated in different ways for different situations. We can use the interest formula of compound interest to ease the calculations.
To calculate compound interest, we need to know the amount and the principal: compound interest is the difference between the amount and the principal.

Compound Interest Formula

As we have already discussed, compound interest is interest based on the initial principal amount and the interest collected over a period of time. The compound interest formula is given by:

Compound Interest = Amount – Principal

Here, the amount is given by:

A = P(1 + r/n)^(nt)

where
• A = amount
• P = principal
• r = rate of interest
• n = number of times interest is compounded per year
• t = time (in years)

Alternatively, we can write the formula as given below:

CI = A – P
CI = P(1 + r/n)^(nt) – P

This formula is also called the periodic compounding formula, where
• A represents the new principal sum, i.e. the total amount of money after the compounding period
• P represents the original or initial amount
• r is the annual interest rate
• n represents the compounding frequency, i.e. the number of times interest is compounded in a year
• t represents the number of years

It is to be noted that the above formula is the general formula for when the principal is compounded n times in a year. If the interest is compounded annually, the amount is given as:

A = P(1 + R/100)^t

Thus, the compound interest rate formula can be expressed for different scenarios, such as interest compounded yearly, half-yearly, quarterly, monthly, daily, etc.
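The general formula translates directly into a few lines of Python (a sketch of ours, not BYJU’S code; the function name is our own, and the rate is taken as a decimal):

```python
def compound_interest(principal, annual_rate, years, n=1):
    """CI = P * (1 + r/n)**(n*t) - P, per the formula above.

    annual_rate is a decimal (0.10 for 10%); n is compounds per year.
    """
    amount = principal * (1 + annual_rate / n) ** (n * years)
    return amount - principal

# Rs. 10000 at 10% compounded annually for 2 years
print(round(compound_interest(10000, 0.10, 2)))           # 2100
# Rs. 2000 at 10% compounded half-yearly for 1.5 years
print(round(compound_interest(2000, 0.10, 1.5, n=2), 2))  # 315.25
```

Both printed values match the worked questions later in this article (Question 1 and Question 3).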
Interest Compounded for Different Years

Let us see the values of the amount and the interest for compound interest over different years:

Time (in years) | Amount | Interest
1 | P(1 + R/100) | PR/100
2 | P(1 + R/100)^2 | P(1 + R/100)^2 – P
3 | P(1 + R/100)^3 | P(1 + R/100)^3 – P
4 | P(1 + R/100)^4 | P(1 + R/100)^4 – P
n | P(1 + R/100)^n | P(1 + R/100)^n – P

The above formulas help determine the interest and amount for compound interest quickly. From the data, it is clear that the interest for the first year of compound interest is the same as that of simple interest, i.e. PR/100. Beyond the first year, the interest compounded annually is always greater than simple interest.

Derivation of Compound Interest Formula

To derive the formula for compound interest, we use the simple interest formula, since SI for one year is equal to CI for one year (when compounded annually).
Let the principal amount = P, time = n years, rate = R (with T = 1 year in each step below).

Simple interest (SI) for the first year:
SI₁ = (P × R × T)/100

Amount after the first year:
= P + SI₁
= P + (P × R × T)/100
= P(1 + R/100) = P₂

Simple interest (SI) for the second year:
SI₂ = (P₂ × R × T)/100

Amount after the second year:
= P₂ + SI₂
= P₂ + (P₂ × R × T)/100
= P₂(1 + R/100)
= P(1 + R/100)(1 + R/100)
= P(1 + R/100)^2

Similarly, proceeding further to n years, we can deduce:
A = P(1 + R/100)^n
CI = A – P = P[(1 + R/100)^n – 1]

Compound Interest when the Rate is Compounded Half-Yearly

Let us calculate the compound interest on a principal P for 1 year at an interest rate of R% compounded half-yearly. Since interest is compounded half-yearly, the principal amount will change at the end of the first 6 months. The interest for the next six months will be calculated on the total amount after the first six months.
Simple interest at the end of the first six months:
SI₁ = (P × R × 1)/(100 × 2)

Amount at the end of the first six months:
A₁ = P + SI₁
= P + (P × R × 1)/(2 × 100)
= P(1 + R/(2 × 100))
= P₂

Simple interest for the next six months — the principal amount has now changed to P₂:
SI₂ = (P₂ × R × 1)/(100 × 2)

Amount at the end of 1 year:
A₂ = P₂ + SI₂
= P₂ + (P₂ × R × 1)/(2 × 100)
= P₂(1 + R/(2 × 100))
= P(1 + R/(2 × 100))(1 + R/(2 × 100))
= P(1 + R/(2 × 100))^2

Now we have the final amount at the end of 1 year:
A = P(1 + R/(2 × 100))^2

Rearranging the above equation:
A = P(1 + (R/2)/100)^(2 × 1)

Let R/2 = R′ and 2T = T′; the above equation can then be written as (for the above case, T = 1 year):
A = P(1 + R′/100)^T′

Hence, when the rate is compounded half-yearly, we divide the rate by 2 and multiply the time by 2 before using the general formula for compound interest.

Quarterly Compound Interest Formula

Let us calculate the compound interest on a principal P kept for 1 year at an interest rate R% compounded quarterly. Since interest is compounded quarterly, the principal amount will change at the end of the first 3 months (first quarter).
The interest for the next three months (second quarter) will be calculated on the amount after the first 3 months, the interest for the third quarter on the amount after the first 6 months, and the interest for the last quarter on the amount after the first 9 months. Thus, the interest compounded quarterly formula is given by:

A = P(1 + (R/4)/100)^(4T)
CI = A – P = P(1 + (R/4)/100)^(4T) – P

where
A = amount
CI = compound interest
R = rate of interest per year
T = number of years

Formula for Periodic Compounding Rate

The total accumulated value, including the principal P plus compounded interest I, is given by the formula:

P′ = P[1 + (r/n)]^(nt)

where
P = principal
P′ = new principal
r = nominal annual interest rate
n = number of times the interest is compounded
t = time (in years)

In this case, the compound interest is CI = P′ – P.

How to Calculate Compound Interest?

Let us understand the process of calculating compound interest with the help of the example below.

Example: What amount is to be repaid on a loan of Rs. 12000 for one and a half years at 10% per annum compounded half-yearly?

For the given situation, we can calculate the compound interest and the total amount to be repaid on the loan in two ways. In the first method, we can directly substitute the values in the formula. In the second method, compound interest can be obtained by splitting the given time span into equal periods. This can be well understood with the help of the table given below.

Compound Interest Solved Examples

As mentioned above, compound interest has many applications in real life. Let us solve various examples based on these applications to understand the concept better.

Increase or Decrease in Population

Example 1: A town had 10,000 residents in 2000. Its population declines at a rate of 10% per annum. What will its total population be in 2005?
The population of the town decreases by 10% every year; thus, each year's population is calculated on the previous year's population. For a decrease, we have the formula A = P(1 – R/100)^n.

Therefore, the population at the end of 5 years
= 10000(1 – 10/100)^5
= 10000(1 – 0.1)^5
= 10000 × 0.9^5
= 5904.9 ≈ 5905

The Growth of Bacteria

Example 2: The count of a certain breed of bacteria was found to increase at the rate of 2% per hour. Find the count at the end of 2 hours if it was initially 600000.

Since the population of bacteria increases at the rate of 2% per hour, we use the formula A = P(1 + R/100)^n.

Thus, the population at the end of 2 hours
= 600000(1 + 2/100)^2
= 600000(1.02)^2
= 624240

Rise or Depreciation in the Value of an Item

Example 3: The price of a radio is Rs. 1400 and it depreciates by 8% per month. Find its value after 3 months.

For depreciation, we have the formula A = P(1 – R/100)^n.

Thus, the price of the radio after 3 months
= 1400(1 – 8/100)^3
= 1400(0.92)^3
= Rs. 1090 (approx.)

Compound Interest and Simple Interest

Now, let us understand the difference between the amounts earned through compound interest and simple interest on a certain sum of money, say Rs. 100, over 3 years at an interest rate of 10%. The table below shows the process of calculating the interest and the total amount.

Compound Interest Word Problems

Question 1: A sum of Rs. 10000 is borrowed by Akshit for 2 years at an interest rate of 10% compounded annually. Find the compound interest and the amount he has to pay at the end of 2 years.

Solution: Principal/Sum = Rs.
10000, Rate = 10%, and Time = 2 years.

From the table shown above, it is easy to calculate the amount and interest for the second year:

Amount (A₂) = P(1 + R/100)^2

Substituting the values,
A₂ = 10000(1 + 10/100)^2 = 10000 × (11/10) × (11/10) = Rs. 12100

Compound interest (for 2 years) = A₂ – P = 12100 – 10000 = Rs. 2100

Question 2: What is the compound interest (CI) on Rs. 5000 for 2 years at 10% per annum compounded annually?

Solution: Principal (P) = Rs. 5000, Time (T) = 2 years, Rate (R) = 10%

We have:
A = P(1 + R/100)^T
A = 5000(1 + 10/100)^2
= 5000 × (11/10) × (11/10)
= 50 × 121
= Rs. 6050

Compound interest (for 2 years) = A – P = 6050 – 5000 = Rs. 1050

Alternatively, we can directly use the formulas for the interest earned in each year, which give the same result:
Interest (I₁) = P × R/100 = 5000 × 10/100 = 500
Interest (I₂) = P × (R/100)(1 + R/100) = 5000 × (10/100)(1 + 10/100) = 550
Total interest = I₁ + I₂ = 500 + 550 = Rs. 1050

Question 3: What is the compound interest to be paid on a loan of Rs. 2000 for 3/2 years at 10% per annum compounded half-yearly?
Solution: Since interest is compounded half-yearly, the rate per period is R' = 10%/2 = 5%, and the number of periods is n' = 2 × (3/2) = 3 half-years. Principal P = Rs. 2000.

Amount A = P(1 + R'/100)^n' = 2000 × (1 + 5/100)^3 = 2000 × (21/20)^3 = Rs. 2315.25.

CI = A - P = Rs. 2315.25 - Rs. 2000 = Rs. 315.25.

Compound Interest Practice Problems

Try solving the questions below on compound interest.

1. What is the least number of complete years in which a sum of money put out at 20% compound interest will be more than doubled?
2. Heera invests Rs. 20,000 at the beginning of every year in a bank and earns 10% annual interest, compounded at the end of the year. What will be her balance in the bank at the end of three years?
3. What is the difference between the compound interest on Rs. 5000 for one and a half years at 4% per annum compounded yearly and compounded half-yearly?

For a detailed discussion on compound interest, download BYJU'S - The Learning App.

Frequently Asked Questions on Compound Interest

What is compound interest?
Compound interest is the interest calculated on the principal plus the interest accumulated over the previous periods.

How do you calculate compound interest?
Multiply the initial principal P by one plus the interest rate per period raised to the total number of compounding periods, then subtract the principal. That is, CI = P[(1 + R)^nt - 1], where P is the initial amount, R the interest rate per compounding period as a decimal, and nt the total number of compounding periods.

Who benefits from compound interest?
Investors benefit from compound interest, since interest is paid here on the principal plus on the interest they have already earned.

What is the formula for interest compounded quarterly?
A = P(1 + (R/4)/100)^{4T}

How do you find the compound interest?
Use the formula A = P(1 + r/n)^{nt}, where A is the total amount, P the principal, r the annual nominal interest rate as a decimal, n the number of compounding periods per year, and t the time in years. The compound interest is then CI = A - P.

What is the formula of compound interest, with an example?
Compound interest = amount - principal, where the amount is A = P(1 + r/n)^{nt}, with P the principal, r the annual nominal interest rate as a decimal, n the number of compounding periods per year, and t the time in years. For example, if Mohan deposits Rs. 4000 into an account paying 6% annual interest compounded quarterly, the money in his account after five years is found by substituting P = 4000, r = 0.06, n = 4, and t = 5, giving A = Rs. 5387.42 (approx.).

What is the formula for interest compounded daily?
A = P(1 + r/365)^{365t}
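All of the worked examples and FAQ answers above reduce to the single formula A = P(1 + R/100)^n. A short sketch (the helper name is my own, not from the article) checks each figure; a decrease is just a negative rate per period:

```python
def amount(principal, rate_percent, periods):
    """Compound amount A = P * (1 + R/100)**n; a negative rate models a decrease."""
    return principal * (1 + rate_percent / 100) ** periods

# Population falling 10% a year for 5 years
print(round(amount(10000, -10, 5)))         # 5905
# Bacteria growing 2% an hour for 2 hours
print(round(amount(600000, 2, 2)))          # 624240
# Radio depreciating 8% a month for 3 months
print(round(amount(1400, -8, 3), 2))        # 1090.16
# Question 3: 10% p.a. compounded half-yearly -> 5% per period, 3 periods
print(round(amount(2000, 5, 3) - 2000, 2))  # 315.25
# FAQ example: Rs. 4000 at 6% compounded quarterly for 5 years -> 1.5% per quarter, 20 quarters
print(round(amount(4000, 1.5, 20), 2))      # 5387.42
```

Note that half-yearly, quarterly, and daily compounding all use the same helper; only the per-period rate and the period count change.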
Right-Handed 3D Coordinate Frame

In the previous lecture, when we were discussing 2-dimensional worlds, we introduced the concept of a right-handed coordinate frame. Just to remind you, the right-handed coordinate frame is constructed like this: we draw the x-axis, and then we swing the y-axis up by 90 degrees.

A three-dimensional coordinate frame is actually very similar to the two-dimensional coordinate frame. So, here is the two-dimensional right-handed coordinate frame that we just introduced, and what I'm going to do now is rotate it in three dimensions. What we see is that the z-axis was previously hidden: it was sticking up at us, out of the screen.

To create a 3-dimensional right-handed coordinate frame, we start in a similar way. We draw the x-axis. We swing the y-axis so that it makes an angle of 90 degrees to the x-axis. Then we swing the z-axis upwards, so that it makes an angle of 90 degrees to the x-axis and also an angle of 90 degrees to the y-axis.

There's a very close relationship between the 2-dimensional and the 3-dimensional coordinate frames. Let me put down the two-dimensional coordinate frame, and here is the three-dimensional coordinate frame. I can overlay the x-axis and the y-axis, and we see that the z-axis points upwards.

We refer to this as a right-handed coordinate frame, and the reason we call it right-handed is that the orientation of the axes can be defined very simply using my right hand: this is the x-axis, this is the y-axis, and this is the z-axis pointing upwards. So, whenever we're doing work with 3-dimensional coordinate frames, they are always right-handed coordinate frames.

This other coordinate frame that I built has an x-axis and a y-axis, but the z-axis points downwards. This is a left-handed coordinate frame, and it's best not to use these; they are going to cause you all sorts of grief.
So, don't use left-handed coordinate frames. Go with the right-handed coordinate frame.

Just to recap: when we create a right-handed coordinate frame, we use what is called the right-hand rule. I take my right hand: the x-axis is parallel to my thumb, the y-axis is parallel to my index finger, and the z-axis is parallel to my middle finger.

There is no code in this lesson. We discuss the structure of a right-handed 3D coordinate frame and the spatial relationship between its axes, which is encoded in the right-hand rule.

Skill level: undergraduate mathematics. This content assumes high-school mathematics and requires an understanding of undergraduate-level mathematics; for example, linear algebra (matrices, vectors, complex numbers), vector calculus, and MATLAB programming.

Comments

1. Is it really true that the third coordinate frame is right-handed? I'm pretty sure one would have to switch the x-axis with the y- or z-axis to fulfil that (or switch the direction of the z-axis).

Reply: It's a good question. While it looks right-handed to me, I have become aware that different people have different interpretations, since there are very few perspective cues in the 2-dimensional picture. These were created using the Toolbox trplot() function and the 'perspective' option. Here's the same frame from a different viewpoint, with the world-frame coordinates shown. Does that make it easier?

2. Why is the second frame wrong?

Reply: See the comment above your question.

3. The answer to the first question shows as incorrect even when I choose the option "Out of the page".

Reply: The correct answer is "into the page". If you put your thumb (X axis) pointing upwards in the page, and your index finger (Y axis) pointing to the right, you will immediately see that your Z finger points down and into the paper.
Put another way: if you start with your fingers as shown in the video at 1:34, you will see that you need to rotate your hand 90° around Z to make X point upwards, and then rotate 180° around X for Y to point to the right. Z will then be pointing down. I hope this helps.
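The right-hand rule in the recap has a compact algebraic form: in a right-handed frame, the cross product of the x- and y-axis unit vectors gives the z-axis, while a left-handed set fails that test. A small sketch (plain Python, not part of the lesson) makes the check concrete:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Right-handed: thumb (x) cross index finger (y) gives the middle finger (z)
print(cross(x, y) == z)        # True

# Left-handed set: z flipped downwards, so x cross y no longer matches it
z_left = (0, 0, -1)
print(cross(x, y) == z_left)   # False
```

The same test works cyclically: y × z = x and z × x = y for any right-handed frame.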
What are NP-hard and NP-complete problems?

A problem is NP-hard if all problems in NP are polynomial-time reducible to it, even though it may not be in NP itself. If a polynomial-time algorithm existed for any NP-hard problem, all problems in NP would be polynomial-time solvable. Problems that are both NP-hard and in NP are called NP-complete.

What are NP-complete problems?

An NP-complete problem is any of a class of computational problems for which no efficient solution algorithm has been found. Many significant computer-science problems belong to this class, e.g. the traveling salesman problem, satisfiability problems, and graph-covering problems.

Does NP-complete mean NP-hard?

A problem is said to be NP-hard if everything in NP can be transformed into it in polynomial time, even though the problem itself may not be in NP. A problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP.

What is an NP-hard problem, with an example?

An example of an NP-hard problem is the decision subset-sum problem: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem, and it happens to be NP-complete.

What is meant by NP-complete?

Definition: the complexity class of decision problems for which answers can be checked for correctness, given a certificate, by an algorithm whose run time is polynomial in the size of the input (that is, the problem is in NP), and no other NP problem is more than a polynomial factor harder.

Are all NP problems NP-complete?

Not necessarily. It can happen that NP is a known upper bound (i.e. we know how to solve the problem in non-deterministic polynomial time) but not a known lower bound (a more efficient algorithm may or may not exist). An example of such a problem is graph isomorphism.

What are P and NP problems?

P is the set of problems that can be solved by a deterministic Turing machine in polynomial time. NP is the set of problems that can be solved by a non-deterministic Turing machine in polynomial time.
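The subset-sum example above captures the asymmetry that defines NP: finding a zero-sum subset naively takes exponential time in the input size, while verifying a proposed subset (a certificate) takes only linear time. A sketch with hypothetical helper names (and ignoring element multiplicity in the certificate check):

```python
from itertools import combinations

def has_zero_subset(nums):
    """Brute-force search: try every non-empty subset (exponential in len(nums))."""
    return any(sum(c) == 0
               for r in range(1, len(nums) + 1)
               for c in combinations(nums, r))

def check_certificate(nums, subset):
    """Verification is easy: given a claimed subset, checking it is fast."""
    return len(subset) > 0 and all(x in nums for x in subset) and sum(subset) == 0

print(has_zero_subset([3, -2, 7, -5]))                  # True: -2 + 7 + (-5) = 0
print(check_certificate([3, -2, 7, -5], [-2, 7, -5]))   # True
print(has_zero_subset([1, 2, 3]))                       # False
```

The gap between the two functions is exactly the gap the P vs. NP question asks about: is the search ever avoidable, or only the check?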
Are puzzles NP-complete?

Often this difficulty can be shown mathematically, in the form of computational intractability results: every NP-complete problem is in some sense a puzzle, and conversely many puzzles are NP-complete. Two-player games often have higher complexities, such as being PSPACE-complete.

What is the difference between NP and P?

Roughly speaking, P is a set of relatively easy problems, and NP is a set that includes what seem to be very, very hard problems, so P = NP would imply that the apparently hard problems actually have relatively easy solutions.

What are recursion and backtracking?

In recursion, a function calls itself until it reaches a base case. In backtracking, we use recursion to explore all the possibilities until we get the best result for the problem.

Is Sudoku in NP?

Sudoku is NP-complete when generalized to an n × n grid; however, a standard 9 × 9 Sudoku is not NP-complete.
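The Sudoku remark can be made concrete: checking a completed grid is polynomial-time, which is what places generalized Sudoku inside NP; only finding the filling is hard. A sketch of the 9 × 9 verifier (the valid test grid is built from a standard shifting pattern, not taken from the article):

```python
def valid_sudoku(grid):
    """Check a filled 9x9 grid in polynomial time: every row, column, and
    3x3 box must contain the digits 1..9 exactly once. This fast
    certificate check is what puts (generalized) Sudoku in NP."""
    digits = set(range(1, 10))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all({grid[i][j] for i in range(9)} == digits for j in range(9))
    boxes_ok = all({grid[3 * r + i][3 * c + j] for i in range(3) for j in range(3)} == digits
                   for r in range(3) for c in range(3))
    return rows_ok and cols_ok and boxes_ok

# A valid grid built from a shifting pattern: row i is shifted by 3*i + i//3
grid = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
print(valid_sudoku(grid))   # True
```

Swapping any two distinct entries in a row keeps the row valid but breaks a column, so the checker rejects it.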
Rate vs. timing (X) Rate theories in spiking network models According to the rate-based hypothesis, 1) neural activity can be entirely described by the dynamics of underlying firing rates and 2) firing is independent between neurons, conditionally to these rates. This hypothesis can be investigated in models of spiking neural networks by a self-consistency strategy. If all inputs to a neuron are independent Poisson processes, then the output firing rate can be calculated as a function of input rates. Rates in the network are then solutions of a fixed point equation. This has been investigated in random networks in particular by Nicolas Brunel. In a statistically homogeneous network, theory gives the stationary firing rate, which can be compared to numerical simulations. The approach has also been applied to calculate self-sustained oscillations (time-varying firing rates) in such networks. In general, theory works nicely for sparse random networks, in which a pair of neurons is connected with a low probability. Sparseness implies that there are no short cycles in the connectivity graph, so that the fact that the inputs to a neuron and its output are strongly dependent has little impact on the dynamics. Results of simulations diverge from theory when the connection probability increases. This means that the rate-based hypothesis is not true in general. On the contrary, it relies on specific hypotheses. Real neural networks do not look like random sparse networks, for example they can be strongly connected locally, neurons can be bidirectionally connected or form clusters. Recently, there have been a number of nice theoretical papers on densely connected balanced networks (Renart et al., Science 2010; Litwin-Kumar and Doiron, Nat Neurosci 2012), which a number of people have interpreted as supporting rate-based theories. 
In such networks, when inhibition precisely counteracts excitation, excitatory correlations (due to shared inputs) are cancelled by the coordination between inhibition and excitation. As a result, there are very weak pairwise correlations between neurons. I hope it is now clear from my previous posts that this is not an argument in favor of rate-based theories. The fact that correlations are small says nothing about whether dynamics can be faithfully described by underlying time-varying rates. In fact, in such networks, neurons are in a fluctuation-driven regime, meaning that they are highly sensitive to coincidences. What inhibition does is to cancel the correlations due to shared inputs, i.e., the meaningless correlations. But this is precisely what one would want the network to do in spike-based schemes based on stimulus-specific synchrony (detecting coincidences that are unlikely to occur by chance) or on predictive coding (firing when there is a discrepancy between input and prediction). In summary, these studies do not support the idea that rates are an adequate basis for describing network dynamics. They show how it is possible to cancel expected correlations, a useful mechanism in both rate-based and spike-based theories. Update. These observations highlight the difference between correlation and synchrony. Correlations are meant as temporal averages, for example pairwise cross-correlation. But on a timescale relevant to behavior, temporal averages are irrelevant. What might be relevant are spatial averages. Thus, synchrony is generally meant as the fact that a number of neurons fire at the same time, or a number of spikes arrive at the same time at a postsynaptic neuron. This is a transient event, which may not be repeated. A single event is meaningful if such synchrony (possibly involving many neurons) is unlikely to occur by chance. The terms “by chance” refer to what could be expected given the past history of spiking events. 
This is precisely what coordinated inhibition may correspond to in the scheme described above: the predicted level of input correlations. In this sense, inhibition can be tuned to cancel the expected correlations, but by definition it cannot cancel coincidences that are not expected. Thus, the effect of such an excitation-inhibition coordination is precisely to enhance the salience of unexpected synchrony.
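The self-consistency strategy described at the start, solving for the rate as the fixed point of r = f(wr + I), can be sketched with a toy scalar version. The gain function and constants here are invented for illustration; they are not an actual neuronal transfer function, and a real network would have one such equation per population:

```python
import math

def transfer(current):
    """Toy input-output gain function: a saturating nonlinearity standing in
    for the Poisson-input rate calculation (max rate 50 Hz, threshold 5)."""
    return 50.0 / (1 + math.exp(-(current - 5.0)))

def fixed_point_rate(w=0.08, external=2.0, iterations=200):
    """Solve r = transfer(w * r + external) by iterating the map from r = 0."""
    r = 0.0
    for _ in range(iterations):
        r = transfer(w * r + external)
    return r

r_star = fixed_point_rate()
print(round(r_star, 2))   # converges to about 2.97 with these constants
```

Because the slope of the map is below 1 near the solution, the iteration contracts onto the fixed point; the rate reproduces itself under the network's own transfer function, which is exactly the self-consistency condition.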
Percentage Type 3 Shortcut Tricks - Math Shortcut Tricks

Percentage Type 3 Shortcut Tricks

A very important chapter of math is percentage. Percentage Type 3 shortcut tricks cover problems where a person's different expenses are given in percent form and, from the savings, you have to find the amount spent or the total income.

Comments

praveen says: Geeta saves 18% of her salary. If her savings equal 1890, what would be her actual salary? Please answer fast, anyone.

Divya says: Rs 10500

sukhi says: A reduction of 2 kg enables a man to purchase 4 kg more sugar for Rs 16. Calculate the original price of sugar.
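praveen's question in the comments is a one-line Type 3 computation: if savings are a known percentage of salary, invert the percentage to get the salary. A quick sketch (the function name is mine, not from the page):

```python
def salary_from_savings(savings, save_percent):
    """If savings = salary * percent / 100, then salary = savings * 100 / percent."""
    return savings * 100 / save_percent

# Geeta saves 18% of her salary and her savings are 1890
print(salary_from_savings(1890, 18))   # 10500.0
```

This matches Divya's answer of Rs 10500 given in the thread.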
Understanding Mathematical Functions: Which Equation Is a Linear Function?

Mathematical functions are essential in understanding the relationships between variables and making predictions in various fields, including economics, engineering, and physics. Linear functions are one of the most fundamental types of functions and play a crucial role in understanding more complex mathematical concepts. In this blog post, we will explore what mathematical functions are and why it is important to understand linear functions in particular.

Key Takeaways

• Linear functions are essential in understanding the relationships between variables and making predictions in various fields.
• It is important to understand linear functions, as they are fundamental to more complex mathematical concepts.
• Recognizing linear patterns in graphs and understanding the slope-intercept form are crucial in identifying linear functions.
• Linear functions have real-world applications in various fields and are used in problem solving.
• Avoid common mistakes in identifying linear functions by understanding their characteristics and the misconceptions about them.

Definition of Linear Functions

When working with mathematical functions, it is important to understand the concept of linear functions. Linear functions are a fundamental part of algebra and calculus, and they are used to describe relationships between two variables.

A. Explanation of linear functions

A linear function is a function that can be expressed in the form f(x) = mx + b, where m and b are constants. In this formula, x represents the independent variable and f(x) the dependent variable. The constant m is the slope of the line, and the constant b is the y-intercept.

B. Characteristics of linear functions

Linear functions have several key characteristics that set them apart from other types of functions. One of the most important is that the graph of a linear function is a straight line. Additionally, the slope of the line is constant, meaning that the rate of change is consistent throughout the function: the output increases or decreases at a constant rate as the input changes.

C. Examples of linear functions

There are many real-world examples of linear functions, such as the relationship between time and distance traveled at a constant speed, or between the number of items sold and the total revenue generated. In mathematical terms, examples include f(x) = 3x + 2 and g(x) = -0.5x + 4, where the constants m and b determine the slope and y-intercept of the function, respectively.

Understanding linear functions is essential for anyone studying mathematics or working in fields such as engineering, physics, or economics. By grasping their definition and characteristics, one can better analyze and interpret the relationships between variables in various contexts.

Identifying Linear Functions

One common type of function is the linear function, which has a distinctive form and behavior. In this chapter, we will explore how to identify linear functions and the key elements that define them.

A. How to determine if an equation is a linear function

Whether an equation represents a linear function can be determined by examining its form. A linear function is one that can be written as y = mx + b, where m is the slope and b is the y-intercept. This means that y changes at a constant rate as x changes, and the graph of the function is a straight line. Additionally, the highest power of the variable in a linear function is 1.

B. Understanding the slope-intercept form

The slope-intercept form, y = mx + b, is a key representation of a linear function. The slope, m, represents the rate of change or steepness of the line, while the y-intercept, b, is the value of y when x = 0. By understanding this form, one can easily identify linear functions and interpret their behavior.

C. Recognizing linear patterns in graphs

Graphs can provide visual cues for identifying linear functions. A linear function has a straight-line graph, indicating a constant rate of change between the variables. By observing the direction and steepness of the line, one can determine whether the relationship is linear. The y-intercept is the point where the line crosses the y-axis, providing further confirmation.

Contrasting Linear Functions with Other Types of Functions

When it comes to understanding mathematical functions, it's important to differentiate between linear and non-linear functions. Linear functions are a specific type of mathematical equation, and it's crucial to comprehend how they differ from other types.

A. Explanation of non-linear functions

Non-linear functions are mathematical equations that do not produce a straight line when graphed. Instead, they exhibit curving or bending, which means the rate of change of the function is not constant. Examples of non-linear functions include quadratic, exponential, and logarithmic functions.

B. Example of quadratic functions

One common example of a non-linear function is the quadratic function, which takes the form f(x) = ax^2 + bx + c. When graphed, a quadratic function creates a parabola, a U-shaped curve that is not a straight line.

C. Differentiating between linear and non-linear functions

When distinguishing between linear and non-linear functions, it's important to consider the rate of change. Linear functions have a constant rate of change, resulting in a straight line when graphed.
On the other hand, non-linear functions exhibit varying rates of change, leading to curved or non-linear graphs.

Real-World Applications of Linear Functions

Linear functions, a fundamental concept in mathematics, find widespread applications in various real-world scenarios. Let's explore some practical examples, the significance of linear functions in different fields, and their role in problem solving.

A. Practical examples of linear functions

• 1. Cost Analysis: In business and economics, linear functions are used to analyze costs and revenue. For example, the cost of production can be modeled using a linear function where the total cost is a function of the number of units produced.
• 2. Distance-Time Graphs: Linear functions are used to represent distance-time graphs, where the distance traveled by an object is directly proportional to the time taken, assuming a constant speed.
• 3. Temperature Change: When studying thermodynamics or weather patterns, linear functions are used to model temperature change over time or space.

B. Importance of linear functions in various fields

• 1. Engineering: Linear functions are crucial in engineering for analyzing structural loads, electrical circuits, and mechanical systems.
• 2. Physics: In physics, linear functions are used to describe simple harmonic motion, linear momentum, and other fundamental concepts.
• 3. Finance: Linear functions play a significant role in financial analysis, such as modeling investment returns and loan amortization.

C. How linear functions are used in problem solving

• 1. Predictive Modeling: Linear functions are used to make predictions and forecast trends in various fields, including market analysis and population growth.
• 2. Optimization: Linear programming, a method based on linear functions, is used to solve complex optimization problems in operations research and management science.
• 3. Decision Making: Linear functions help in making informed decisions by providing a quantitative basis for evaluating different options and scenarios.

Common Mistakes in Identifying Linear Functions

Understanding mathematical functions, particularly linear functions, is essential in mathematics and its applications in various industries. However, there are common misconceptions and pitfalls that can lead to errors in identifying linear functions. It is important to recognize these mistakes and learn how to avoid them in order to correctly identify linear equations.

A. Misconceptions about linear functions

• Equating linearity with simplicity: One common misconception is that linear functions are always simple and straightforward. While this may be true in some cases, it is not a defining characteristic. Linear functions can exhibit complexity and variability just like any other type of function.
• Ignoring the coefficient of the independent variable: Some people wrongly assume that any equation with a single independent variable is a linear function. However, the coefficient of the independent variable must be a constant, and the variable must appear to the first power, for the equation to qualify as linear.

B. Pitfalls in identifying linear equations

• Confusing linear and non-linear relationships: It can be challenging to differentiate between linear and non-linear equations, especially when dealing with complex mathematical expressions. This confusion can lead to misidentifying linear functions.
• Incorrectly applying the slope-intercept form: Many people mistakenly try to fit every equation into the slope-intercept form (y = mx + b) without considering the specific characteristics of linear functions.

C. Tips for avoiding common mistakes in recognizing linear functions

• Understand the defining characteristics of linear functions: Familiarize yourself with the key attributes of linear functions, such as having a constant rate of change and a straight-line graph.
• Examine the coefficients and exponents: Pay attention to the coefficients and exponents in the equation to determine whether it meets the criteria for a linear function.
• Use graphing and visualization tools: Plotting the equation on a graph can provide a visual representation of whether it is a linear function or not.

Conclusion

A. Recap of the key points about linear functions: In this blog post, we discussed the characteristics of linear functions, such as their equation form (y = mx + b) and their graph appearing as a straight line. We also looked at how to determine if a given equation represents a linear function.

B. Importance of being able to identify linear functions: Understanding linear functions is crucial in various fields such as economics, physics, and engineering. It allows us to analyze and interpret data, make predictions, and solve real-world problems.

C. Encouragement to continue learning about mathematical functions: As we continue to expand our knowledge of mathematical functions, we gain a deeper understanding of the world around us and develop essential problem-solving skills. I encourage you to keep exploring different types of functions and their applications. Keep learning, and happy calculating!
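The "constant rate of change" test described throughout the post can be automated: sample a function at equally spaced inputs and check whether its first differences are all equal. A sketch (the helper is hypothetical, applied to the article's own example functions):

```python
def looks_linear(f, xs=range(-5, 6), tol=1e-9):
    """A function sampled at equally spaced points is linear exactly when its
    first differences (the discrete rate of change) are constant."""
    ys = [f(x) for x in xs]
    diffs = [b - a for a, b in zip(ys, ys[1:])]
    return all(abs(d - diffs[0]) < tol for d in diffs)

print(looks_linear(lambda x: 3 * x + 2))       # True  (f(x) = 3x + 2)
print(looks_linear(lambda x: -0.5 * x + 4))    # True  (g(x) = -0.5x + 4)
print(looks_linear(lambda x: x ** 2 + 1))      # False (quadratic: differences grow)
```

This is the numerical counterpart of the graphing tip above: a straight line has one slope everywhere, while a parabola's slope keeps changing.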
Theoretical Machine Learning Seminar

Generalized Energy-Based Models

I will introduce Generalized Energy-Based Models (GEBM) for generative modelling. These models combine two trained components: a base distribution (generally an implicit model), which can learn the support of data with low intrinsic dimension in a high-dimensional space; and an energy function, to refine the probability mass on the learned support. Both the energy function and the base jointly constitute the final model, unlike GANs, which retain only the base distribution (the "generator"). In particular, while the energy function is analogous to the GAN critic function, it is not discarded after training. GEBMs are trained by alternating between learning the energy and the base. Both training stages are well defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base. Samples from the posterior on the latent space of the trained model can be obtained via MCMC, thus finding regions in this space that produce better-quality samples. Empirically, the GEBM samples on image-generation tasks are of much better quality than those from the learned generator alone, indicating that, all else being equal, the GEBM will outperform a GAN of the same complexity. GEBMs also return state-of-the-art performance on density modelling tasks when using base measures with an explicit form.

Date & Time: July 28, 2020 | 12:30pm – 1:45pm (remote access only)
Speaker affiliation: University College London
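The abstract's core idea, a base distribution refined by an energy function, can be illustrated with a toy one-dimensional sketch. This is only self-normalized importance reweighting with an invented energy; it is not the GEBM training procedure (which learns the energy by maximising a generalized likelihood), but it shows how an energy moves probability mass around on the base's support:

```python
import math
import random

def energy(x):
    """A made-up energy: low energy (hence high mass) near x = 1."""
    return (x - 1.0) ** 2

def refine(base_samples, energy_fn, rng):
    """Weight each base sample by exp(-E(x)), normalize, and resample.
    A toy stand-in for 'base + energy'; real GEBMs sample via MCMC."""
    weights = [math.exp(-energy_fn(x)) for x in base_samples]
    return rng.choices(base_samples, weights=weights, k=len(base_samples))

rng = random.Random(0)
base = [rng.uniform(-3, 3) for _ in range(5000)]   # crude base: uniform support
refined = refine(base, energy, rng)

# Mass concentrates where the energy is low (near x = 1)
print(round(sum(base) / len(base), 1), round(sum(refined) / len(refined), 1))
```

The base alone only fixes the support; the energy reshapes the mass on it, which is the division of labour the abstract describes.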
Elger's Weblog!

A friend of mine just sent me a brain teaser. It looked so intriguing, all those numbers ordered in some kind of fashion. So I decided to take on the challenge. This is what Sebastiaan sent:

Taking on the challenge

My first attempt was performed in TextEdit, and it was a cumbersome process of shuffling and reshuffling numbers. My guess was that R was a negative number, and that with some negative trickery the puzzle was solvable. After a few minutes of failed attempts I recognized my brain failed at this hour. So let's do some stupid programming in order to figure out the correct values for the puzzle. Firing up a text editor and writing out the formulas, I noticed how tight they are. If the formulas are not written correctly, the puzzle is inconsistent and thus unsolvable. If they are, the numbers will probably even out in a few iterations. The challenge then is to find an iteration that matches the start value: Z = 1.

Finding initial values

The formulas did balance out. Only with some initial values set to 8 or higher would the result crash. With the following initial values, I found the simplest outcome of this puzzle:

$K = -1; $L = -1; $R = 2; $W = -1; $Y = 1; $Z = 1;

iteration 0
Y + R = K | 1 + 2 = 3
K + R = L | 3 + 2 = 5
R + R = W | 2 + 2 = 4
Y - R = Z | 1 - 2 = -1
K - W = Z | 3 - 4 = -1
L - W = Y | 5 - 4 = 1
R * R - W = R | 2 * 2 - 4 = 0

iteration 1
Y + R = K | 1 + 0 = 1
K + R = L | 1 + 0 = 1
R + R = W | 0 + 0 = 0
Y - R = Z | 1 - 0 = 1
K - W = Z | 1 - 0 = 1
L - W = Y | 1 - 0 = 1
R * R - W = R | 0 * 0 - 0 = 0

K = 1
L = 1
R = 0
W = 0
Y = 1
Z = 1

I also found one other answer after a few other input combinations. How many more could there be?

Taking it one step further

That wasn't too hard: just some guessing, and it was fixed. All iterations after iteration 1 resulted in the same answer, Z is 1, and manually checking this sum confirmed correctness. Now, what other answers could there be?
I expanded the source code of the original puzzle to try all input variables from -9 to 9. That would be 18 * 18 * 18 * 18 * 18 * 18 attempts, or 18 ^ 6 = 34,012,224 attempts to solve this puzzle. I was wondering how many solutions there would be, and how many input variables resulted in the same solution. Couldn't be that hard. With a little rewriting and some debugging, we came to the following. The final answer For humans, trying any value below -9 or higher than 9 would mean intense complexity. So after performing all 34 million possibilities with some iterative logic, the chance of someone guessing the right answer is pretty high: [notfound] => 33487344 [7.10.3.6.4.1] => 314928 [1.1.0.0.1.1] => 209952 In this order: K.L.R.W.Y.Z Of all possibilities, 0.94% deliver the variable numbering result. A lower 0.63% goes for the binary solution. From any combination of inputs, you'll have a chance of 1.57% of hitting a correct one. However, I managed to find both in less than 20 tries. I'm wondering if there is a correlation between finding a correct result and input combinations. The growth of incorrect versus correct answers has shown a sweet spot. With samples every 1000 calculations, it's visible that every so often no new correct answers are found. With a total of over 34000 samples, Excel couldn't make a graph anymore. So I gave it a break. (It says: "could not make a graph with > 32000 datapoints".) Well, something showed up: what appears to be a linear graph, with little dents. The dents could also be pixel aliasing. Since it takes until forever to get more details from Excel, we put it to rest. It's included in the downloads. My hopes were to find a sweet spot; a range that has significant impact on the number of correct answers that were found. Such a thing is available: look at the begin and end numbers of columns N and O. Those represent both correct answers. Column P holds the notfound value.
After a series of correct answers that increase N and O (as seen in the picture), almost the same number of records show no increase in N or O, only in notfound. This means there is a sweet spot. The exact range of this sweet spot, or which input variables do not matter in the puzzle, is hard to determine without a graph. Just looking at the numbers makes my head spin, and this is only 75 If there were a sweet spot, I would try to perform the same calculations, but then with floats instead of integers. I expect to find 20 extra correct answers then. But who knows; I didn't get the chance to try it. Oh, wait. Just let me do this from 0 to 3, with 0.1 increases instead of integers. Floating points Using floating points resulted in nothing new. Unfortunately, I'd hoped for some cool answers that did not involve integers. Well, at least we've tried. The cool thing with floats is that the answers keep on fluctuating; it never evens out. Well, I tried with 10,000,000 iterations, and still different things happen. The following morning: Hey, just wait a minute… Only R and Y make sense in this equation. That's why most of the time nothing happened. Let's try it with only modifying R and Y… nothing 🙁 My initial assumption was wrong: there is no answer that uses negative integers in the range of -1 to -9. It just doesn't exist. Excel couldn't make heads or tails of the data; it was simply too large for a graph. I'm thinking of data warehouse tooling. Trying with fewer parameters and 0.1 increments etc. didn't result in any answer. Well, enough for now; I could stretch this to the end of the galaxy. All files are available for download. That concludes this night's brain teaser, it's time for slee… zzzzzzz
{"url":"http://elgerjonker.nl/2010/08/","timestamp":"2024-11-11T06:53:58Z","content_type":"text/html","content_length":"29837","record_id":"<urn:uuid:1e075c0b-cbac-4116-9451-0e368f04ce9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00715.warc.gz"}
MT3510 Introduction to Mathematical Computing Floating point numbers# We’ve seen that we can represent decimals using floats, and also that these floats can sometimes have strange behaviour. It’s important to understand what is going on here. A floating point number is one that is (approximately) represented in a format similar to “scientific notation”, but where the number of significant figures and the base of the exponent are fixed. For example, we might fix the number of significant figures at \(16\) and the base as \(10\), and represent two numbers of very different magnitudes: \[\begin{split} \sqrt{2} \approx 1.414213562373095 \times 10^{0}, \\ e^{35} \approx 1.586013452313431 \times 10^{15}. \end{split}\] The significant digits are called the significand or mantissa, and the exponent is conveniently called the exponent. Note that the error in the second approximation will be much larger in absolute value. The term “floating point” refers to how the exponent moves the decimal point across the significant figures. Computers typically use a floating-point system to represent non-integer real numbers. The system used by Python is a little different to the representation above. It assumes that the point lies after the last significant digit, rather than after the first as above. It also uses base-2 (binary), and stores 53 significant binary digits (bits) along with 11 bits for the exponent, for a total of 64 bits (8 bytes). This system is called a double-precision float. The two numbers above would be represented as follows: \[\begin{split} \sqrt{2} \approx 6369051672525773 \times 2^{-52}, \\ e^{35} \approx 6344053809253723 \times 2^{-2}. \end{split}\] Here we have given the significands and exponents in base ten for convenience, but they would be stored in binary. Since \(2^{53} \approx 10^{16}\), we roughly get 16 significant decimal digits in a double-precision float. There are also single-precision floats, which take up 4 bytes (24 significant bits and 8 exponent bits).
This translates into around 7 significant decimal digits. Precision and the machine epsilon# Since there are a fixed number of significant digits, there are often issues when adding together numbers of different magnitudes. Consider the following: import numpy as np np.exp(35) + 0.1 == np.exp(35) Since the exponent for \(e^{35}\) is large, the fixed 53 significant bits cannot show the difference between \(e^{35}\) and \(e^{35} + 0.1\). A very important example comes from considering numbers just slightly larger than 1. # 1e-16 is shorthand for 10**(-16). # Test if Python can distinguish between 1 + 1e-16 and 1 1 + 1e-16 == 1 Python cannot distinguish between \(1\) and \(1 + 10^{-16}\); they are represented by the same float. This value of \(10^{-16}\) is a good approximation for \(2^{-53}\), which is the “true” largest value \(\varepsilon\) such that Python cannot distinguish between \(1\) and \(1 + \varepsilon\). This value \(\varepsilon\) is called the machine epsilon, and represents the relative error that appears in floating point representations. It is important to remember that the machine epsilon is a relative error. The gaps between indistinguishable floats grow as the exponent increases, and shrink as it decreases - the machine epsilon is the gap when the exponent is 0. The machine epsilon is not the smallest representable number - see the section on underflow. We saw above that \(e^{35}\) and \(e^{35} + 0.1\) also could not be distinguished. We can use the machine epsilon to get a rough estimate for the largest value \(\delta\) such that \(e^{35} + \delta \) is indistinguishable from \(e^{35}\) as follows: delta = np.exp(35) * 2**-53 np.exp(35) + delta == np.exp(35) np.exp(35) + 0.5 * delta == np.exp(35)
For example, the number \(0.1\) is a nice decimal fraction, but cannot be represented as a finite binary fraction. This can cause some strange effects: a = 0.1 3 * a == 0.3 The issue here is that a will be the closest representable float to \(0.1\), and 3 * a is then not necessarily the closest float to the true value \(0.3\). You can find out the representation that Python is using: (0.1).as_integer_ratio() (3602879701896397, 36028797018963968) This means that \(0.1\) is being represented as \(\frac{3602879701896397}{2^{55}}\). Comparing floats# Given the issues above, it is often not a good idea to directly compare floats x and y using x == y. Instead, consider testing their absolute difference: abs(x - y) <= err for some fixed value of err. Overflow and underflow# As well as the limitations discussed above, caused by the number of significant bits, there are limitations caused by the fixed number of bits available for the exponent. Since we have 11 bits available for the exponent, and one of those bits is used to determine whether it is positive or negative, the exponent can go up to \(2^{10} - 1\). OverflowError Traceback (most recent call last) Input In [14], in <cell line: 1>() ----> 1 2.0 ** (2 ** 10) OverflowError: (34, 'Result too large') An OverflowError occurs when the result of a calculation is too large to fit in a float. A similar issue can occur when the exponent gets too small, though here we don’t get an error. Infinity and NaN# If we directly create a float which is too large, Python will treat it like infinity. # 2.3e310 is, of course, equal to infinity 2.3e310 The other special value is nan, standing for “not a number”, which can arise if your calculations take a strange turn like multiplying infinity by 0. # infinity times 0 is not a number 2.3e310 * 0 # infinity minus infinity is not a number 2.3e310 - 4.5e350
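Pulling the points above together, here is a short self-contained sketch (my own, not a cell from the original notes) that checks the machine epsilon, the binary-representation issue with 0.1, and a tolerance-based comparison:

```python
import math
import sys

eps = sys.float_info.epsilon            # 2**-52, the gap between 1.0 and the next float
print(1.0 + eps == 1.0)                 # False: eps is large enough to register above 1.0
print(1.0 + eps / 4 == 1.0)             # True: too small relative to 1.0 to be seen

# Binary representation issues: 0.1 is not exactly representable
print(0.1 + 0.2 == 0.3)                 # False, due to rounding of the binary fractions
print(math.isclose(0.1 + 0.2, 0.3))     # True: compare with a tolerance instead
print((0.1).as_integer_ratio())         # (3602879701896397, 36028797018963968)
```

`math.isclose` uses a relative tolerance by default, which matches the observation above that the machine epsilon is a relative error.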
{"url":"https://danl21.github.io/docs/1_IntroNotebooks/14%20Floating%20Point%20Numbers.html","timestamp":"2024-11-11T03:43:05Z","content_type":"text/html","content_length":"42344","record_id":"<urn:uuid:a36df945-b2e0-44a9-8ba9-78bd3db594f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00654.warc.gz"}
Seminar Announcement - MSCS Analysis and Applied Mathematics Seminar Sung-Jin Oh UC Berkeley Wellposedness of the electron MHD without resistivity for large perturbations of the uniform magnetic field Abstract: We prove the local wellposedness of the Cauchy problems for the electron magnetohydrodynamics equations (E-MHD) without resistivity for possibly large perturbations of nonzero uniform magnetic fields. While the local wellposedness problem for (E-MHD) has been extensively studied in the presence of resistivity (which provides dissipative effects), this seems to be the first such result without resistivity. (E-MHD) is a fluid description of plasma in small scales where the motion of electrons relative to ions is significant. Mathematically, it is a quasilinear dispersive equation with nondegenerate but nonelliptic second-order principal term. Our result significantly improves upon the straightforward adaptation of the classical work of Kenig–Ponce–Rolvung–Vega on the quasilinear ultrahyperbolic Schrödinger equations, as the regularity and decay assumptions on the initial data are greatly weakened to the level analogous to the recent work of Marzuola–Metcalfe–Tataru in the case of elliptic principal term. Monday April 15, 2024 at 4:00 PM in 636 SEO
{"url":"https://www.math.uic.edu/persisting_utilities/seminars/view_seminar?id=7397","timestamp":"2024-11-10T18:47:50Z","content_type":"text/html","content_length":"12151","record_id":"<urn:uuid:5211e0a7-1d99-419c-8f05-513abca255be>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00242.warc.gz"}
CPM Homework Help Copy the diagram at right on your own graph paper. Then enlarge or reduce it by each of the following ratios. If we use each ratio as new/original, then the example below shows the lengths of a new diagram that would have sides that are five times as long as the original. a. $\frac { 3 } { 1 }$ If you choose to use a ratio of new/original, you will need to enlarge the figure. Use the example. Can you draw your own diagram? b. $\frac { 2 } { 3 }$ $\text{For part (b), the new diagram will be }\frac{2}{3}\text{ the size of the original.}$
{"url":"https://homework.cpm.org/category/ACC/textbook/acc6/chapter/6%20Unit%206/lesson/CC1:%206.2.5/problem/6-117","timestamp":"2024-11-04T13:56:22Z","content_type":"text/html","content_length":"36744","record_id":"<urn:uuid:14961060-14fc-42f7-8797-959be82f551a>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00303.warc.gz"}
In glam::f64 pub struct DAffine2 { pub matrix2: DMat2, pub translation: DVec2, } A 2D affine transform, which can represent translation, rotation, scaling and shear. The degenerate zero transform. This transforms any finite vector and point to zero. The zero transform is non-invertible. The identity transform. Multiplying a vector with this returns the same vector. Creates an affine transform from three column vectors. Creates an affine transform from a [f64; 6] array stored in column major order. Creates a [f64; 6] array storing data in column major order. Creates an affine transform from a [[f64; 2]; 3] 2D array stored in column major order. If your data is in row major order you will need to transpose the returned matrix. Creates a [[f64; 2]; 3] 2D array storing data in column major order. If you require data in row major order transpose the matrix first. Creates an affine transform from the first 6 values in slice. Panics if slice is less than 6 elements long. Writes the columns of self to the first 6 elements in slice. Panics if slice is less than 6 elements long. Creates an affine transform that changes scale. Note that if any scale is zero the transform will be non-invertible. Creates an affine transform from the given rotation angle. Creates an affine transformation from the given 2D translation. Creates an affine transform from a 2x2 matrix (expressing scale, shear and rotation) Creates an affine transform from a 2x2 matrix (expressing scale, shear and rotation) and a translation vector. Equivalent to DAffine2::from_translation(translation) * DAffine2::from_mat2(mat2) Creates an affine transform from the given 2D scale, rotation angle (in radians) and translation. Equivalent to DAffine2::from_translation(translation) * DAffine2::from_angle(angle) * DAffine2::from_scale(scale) Creates an affine transform from the given 2D rotation angle (in radians) and translation.
Equivalent to DAffine2::from_translation(translation) * DAffine2::from_angle(angle) The given DMat3 must be an affine transform, Extracts scale, angle and translation from self. The transform is expected to be non-degenerate and without shearing, or the output will be invalid. Will panic if the determinant of self.matrix2 is zero or if the resulting scale vector contains any zero elements when glam_assert is enabled. Transforms the given 2D point, applying shear, scale, rotation and translation. Transforms the given 2D vector, applying shear, scale and rotation (but NOT translation). To also apply translation, use Self::transform_point2() instead. Returns true if, and only if, all elements are finite. If any element is either NaN, positive or negative infinity, this will return false. Returns true if any elements are NaN. Returns true if the absolute difference of all elements between self and rhs is less than or equal to max_abs_diff. This can be used to compare if two 3x4 matrices contain similar elements. It works best when comparing with a known value. The max_abs_diff that should be used depends on the values being compared against. For more see comparing floating point numbers. Return the inverse of this transform. Note that if the transform is not invertible the result will be invalid. Trait Implementations§ The resulting type after dereferencing. Dereferences the value. Mutably dereferences the value. Converts to this type from the input type. The resulting type after applying the * operator. The resulting type after applying the * operator. The resulting type after applying the * operator. This method tests for self and other values to be equal, and is used by ==. This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason. Method which takes an iterator and generates Self from the elements by multiplying the items.
Auto Trait Implementations§ Blanket Implementations§ Returns the argument unchanged. Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do. The resulting type after obtaining ownership. Creates owned data from borrowed data, usually by cloning. Read more Uses borrowed data to replace owned data, usually by cloning. Read more The type returned in the event of a conversion error. Performs the conversion. The type returned in the event of a conversion error. Performs the conversion.
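As a standalone illustration of what transform_point2 and transform_vector2 compute, here is a sketch using plain arrays rather than the glam types (so this is not the crate's actual implementation); the matrix is stored column-major, as in from_cols_array_2d:

```rust
/// Apply an affine transform: matrix2 * p + translation.
/// `m` is a 2x2 matrix stored column-major (m[column][row]).
fn transform_point2(m: [[f64; 2]; 2], t: [f64; 2], p: [f64; 2]) -> [f64; 2] {
    [
        m[0][0] * p[0] + m[1][0] * p[1] + t[0],
        m[0][1] * p[0] + m[1][1] * p[1] + t[1],
    ]
}

/// Vectors are transformed by the matrix part only; translation is ignored.
fn transform_vector2(m: [[f64; 2]; 2], p: [f64; 2]) -> [f64; 2] {
    [
        m[0][0] * p[0] + m[1][0] * p[1],
        m[0][1] * p[0] + m[1][1] * p[1],
    ]
}

fn main() {
    // A pure translation by (3, 4): the matrix part is the identity.
    let id = [[1.0, 0.0], [0.0, 1.0]];
    println!("{:?}", transform_point2(id, [3.0, 4.0], [1.0, 2.0])); // the point moves
    println!("{:?}", transform_vector2(id, [1.0, 2.0]));            // the vector does not
}
```

This also makes concrete why the zero transform is non-invertible: with a zero matrix part, every point collapses to the translation vector.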
{"url":"https://embarkstudios.github.io/rust-gpu/api/glam/f64/struct.DAffine2.html","timestamp":"2024-11-14T17:45:57Z","content_type":"text/html","content_length":"68716","record_id":"<urn:uuid:f0c9f7b8-ca88-4bc8-a25f-11f1f013fded>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00400.warc.gz"}
December 2014 – Gregor Ulm When students encounter higher-order functions in functional programming, they are normally exposed to the following three first: map, filter, and fold. The higher-order functions map and filter are intuitively accessible. With the former you apply a function to each element of a list, while the latter retains only elements for which a given predicate is true. One might want to implement those functions in Haskell as follows: map :: (a -> b) -> [a] -> [b] map _ [] = [] map f (x:xs) = f x : map f xs filter :: (a -> Bool) -> [a] -> [a] filter _ [] = [] filter p (x:xs) | p x = x : filter p xs | otherwise = filter p xs On the other hand, fold is somewhat more confusing. As I’ve found, the treatment of this topic on the Haskell wiki is not overly accessible to novices, while the explanation given in Learn You A Haskell is, like pretty much everything in it, too cute for its own good. There are two cases of fold, namely foldr and foldl, with ‘r’ and ‘l’ describing right-associativity and left-associativity, respectively. One possible definition of foldr is: foldr :: (a -> b -> b) -> b -> [a] -> b foldr _ z [] = z foldr f z (x:xs) = f x (foldr f z xs) ‘z’ is the base value that gets combined after the last element of the list. I sometimes hear people refer to this as the “identity element”, but this is not necessarily correct. ‘z’ can be the identity element. For instance, if you want to multiply all values in a list of integers, you would choose the integer 1 to take the place of ‘z’.
However, nothing is keeping you from picking any other value. To illustrate foldr with an example, let’s evaluate the following function call: foldr (*) 1 [1,2,3] (*) 1 (foldr (*) 1 [2,3]) (*) 1 ((*) 2 (foldr (*) 1 [3])) (*) 1 ((*) 2 ((*) 3 (foldr (*) 1 []))) (*) 1 ((*) 2 ((*) 3 1)) (*) 1 ((*) 2 3) (*) 1 6 6 To make this evaluation more digestible from the fourth line onwards, it could also be written as: 1 * (2 * (3 * 1)) 1 * (2 * 3) 1 * 6 6 This example shows the application of the function to each element in the list. Maybe this reminds you of how lists are constructed using the cons operator, i.e. (1 : (2 : (3 : []))) is represented, after adding syntactic sugar, as [1, 2, 3]. In fact, a list can be constructed using a fold. So, if you were in a silly mood, you could create a function that takes a list, runs it through fold, and returns the same list: listId :: [a] -> [a] listId xs = foldr (:) [] xs Of course, ‘xs’ can be omitted on both sides of this equation. If you now call this function with the argument [1, 2, 3], you’ll get (1 : (2 : (3 : []))) in return, which is [1, 2, 3]. Let’s now look at foldl, which is fold with left-associativity. One possible definition is as follows. foldl :: (a -> b -> a) -> a -> [b] -> a foldl _ z [] = z foldl f z (x:xs) = foldl f (f z x) xs Evaluating the same expression as given above, with foldr, results in: foldl (*) 1 [1,2,3] foldl (*) ((*) 1 1) [2, 3] foldl (*) ((*) ((*) 1 1) 2) [3] foldl (*) ((*) ((*) ((*) 1 1) 2) 3) [] ((*) ((*) ((*) 1 1) 2) 3) ((*) ((*) 1 2) 3) ((*) 2 3) 6 You can probably see why defining a ‘listId’ function with foldl results in an error. Also note that foldl and foldr only give the same result if the binary operator f is associative. Just consider what happens when you’re using an operator that isn’t, for instance: > foldr (/) 1 [1,2,3] > 1.5 > foldl (/) 1 [1,2,3] > 0.16666666666666666 The first function call evaluates to (1 / (2 / (3 / 1))), and the second to (((1 / 2) / 3) / 1).
Review: Introduction to Functional Programming (edX) I just finished the last problem set in Erik Meijer’s online course Introduction to Functional Programming. This seems like a good opportunity to briefly reflect on it. There aren’t a lot of functional programming MOOCs available. I’m only aware of two Coursera courses, one on FP in Scala, and another on FRP in Akka. While Erik Meijer repeatedly made the point that his course was not on Haskell but on FP in general, there most certainly was a strong focus on exploring functional programming with Haskell. The recommended course literature was Graham Hutton’s Programming in Haskell, which is incidentally the same book I used when I took a similar course at university. As far as programming-related textbooks go, Hutton’s book is among the best, as he explains topics concisely, and poses carefully selected exercises to the reader. It’s the exact opposite of your typical Java or Python textbook, or the, in my opinion, highly overrated “Learn you a (Haskell|Erlang) for Great Good” books, but that may be a topic for another article. If you just used the textbook, you’d be well-prepared for the homework exercises and labs already. Still, I enjoyed Erik Meijer’s presentation, and his sometimes quirky remarks, such as that he wants his students to “think like a fundamentalist and code like a hacker”. In special “jam sessions” he demonstrated functional programming concepts in other languages, such as Scala, Dart, Hack, and Kotlin. What I also liked was that some of the labs were offered in several programming languages. The very first lab was offered in Haskell, Groovy, F#, Frege (!), and Ruby, for instance, which led me to playing around with some new languages. This course is certainly, for the most part, comparable to a university-level course in functional programming. I do have some gripes with the form of the assessment, though. 
For instance, a common type of question asked you to indicate which of a given number of alternative function definitions were valid. Sometimes the code was obfuscated, and since you couldn’t just copy and paste it, it could easily happen that a GHCi error message was due to a mistake you made while copying the program. This might not have been a problem if those questions had been rare, but because there were so many of them, the tedium was palpable. In later weeks I skipped those questions since I saw very little educational value in them. Further, the labs were a bit too straight-forward for my taste, but that may be a limitation of the MOOC format. The advice “follow the types” was repeated like a mantra. It is of course a good idea to use type signatures as a guide. However, being given a template that contains correct type signatures and that only requires one to write a few lines of code — if I’m not mistaken, in some weeks the labs required just about a dozen lines of Haskell in total — seems partly misguided. Obviously, it is much more difficult to design a program yourself, and define the correct type signatures. Merely filling in function definitions, on the other hand, is somewhat akin to painting by numbers. It might therefore be a good idea to add a peer-reviewed project to this course in its next iteration. My experience with peer-review on MOOCs is mixed, but it’s better than nothing. After all, the theory behind FP is sufficiently covered. It’s just that the course doesn’t require writing a lot of code, which could only be excused on the labs that focus on type classes and monads. Overall, Introduction to Functional Programming is a very good course. However, if you’re taking it as a novice, you might want to do the exercises in Hutton’s book in order to get more practice with programming in Haskell.
{"url":"https://gregorulm.com/2014/12/","timestamp":"2024-11-02T07:49:59Z","content_type":"text/html","content_length":"44790","record_id":"<urn:uuid:7bbc088c-5c9f-4f42-8485-d992acc75a3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00514.warc.gz"}
[1] C. Kirisits. Total variation denoising on hexagonal grids. Preprint on ArXiv arXiv:1204.3855, University of Vienna, Austria, 2012. [preprint] [2] G. Bal, W. Naetar, O. Scherzer, and J. Schotland. The Levenberg-Marquardt Iteration for Numerical Inversion of the Power Density Operator. J. Inverse Ill-Posed Probl., feb 2013. [http | preprint [3] C. Kirisits, L. F. Lang, and O. Scherzer. Optical Flow on Evolving Surfaces with an Application to the Analysis of 4D Microscopy Data. In A. Kuijper, editor, SSVM'13: Proceedings of the fourth International Conference on Scale Space and Variational Methods in Computer Vision, volume 7893 of Lecture Notes in Computer Science, pages 246-257, Berlin, Heidelberg, 2013. Springer-Verlag. [ http | preprint] [4] Wolfgang Dvořák, Monika Henzinger, and David P. Williamson. Maximizing a Submodular Function with Viability Constraints. In ESA 2013: Proceedings of the 21st European Symposium on Algorithms, Lecture Notes in Computer Science, pages 409-420. Springer Berlin Heidelberg. [http | eprint] [5] C. Kirisits, L. F. Lang, and O. Scherzer. Optical Flow on Evolving Surfaces with Space and Time Regularisation. J. Math. Imaging Vision, 52(1):55--70, 2015. [http | preprint] [6] C. Kirisits, L. F. Lang, and O. Scherzer. Decomposition of optical flow on the sphere. GEM - International Journal on Geomathematics, volume 5, issue 1, pages 117-141, 2014. [http | preprint] [7] M. Kucharík, I. L. Hofacker, P. F. Stadler, and J. Qin. Basin Hopping Graph: A computational framework to characterize RNA folding landscapes, Bioinformatics, 2014. [http] [8] O. Chernomor, B.Q. Minh, F. Forest, S. Klaere, T. Ingram, M. Henzinger, and A. von Haeseler. Split Diversity in Constrained Conservation Prioritization using Integer Programming, 2014, accepted for publication in Methods in Ecology and Evolution. [9] O. Chernomor, A. von Haeseler, and B.Q. Minh, Terrace Aware Phylogenomic Inference from Supermatrices, 2014, submitted. [10] O. Chernomor, S. 
Klaere, A. von Haeseler, and B.Q. Minh, Split diversity: measuring and optimizing biodiversity using phylogenetic split networks. Biodiversity Conservation and Phylogenetic Systematics (eds R. Pellens & P. Grandcolas). Springer, 2014, in press. [11] M. Bauer, M. Grasmair, and C. Kirisits. Optical flow on moving manifolds. SIAM J. Imaging Sciences, 8(1):484--512, 2015. [http | preprint] [12] C. Kirisits, C. Pöschl, E. Resmerita, and O. Scherzer. Finite dimensional approximation of convex regularization via hexagonal pixel grids. Technical report, University of Vienna, Austria, 2014. Accepted in Appl. Anal. [13] O. Scherzer and C. Kirisits. Convex variational regularization methods for inverse problems. In P. Bühlmann, T. Cai, A. Munk, and B. Yu, editors, Frontiers in Nonparametric Statistics, volume 14 of Oberwolfach reports, pages 43–45. EMS Publishing House, 2012. [14] M. Mann, M. Kucharík, C. Flamm, and M.T. Wolfinger. Memory-efficient RNA energy landscape exploration. Bioinformatics, 2014. [http] [15] C. Leitold and C. Dellago. Folding mechanism of a polymer chain with short-range attractions. Journal of Chemical Physics 141, 134901, 2014. [http] [16] W. Naetar and O. Scherzer. Quantitative Photoacoustic Tomography with Piecewise Constant Material Parameters, SIAM J. Imaging Sci., 7(3), 1755–1774, 2014. [http | preprint] [17] T. Flouri, F. Izquierdo-Carrasco, D. Darriba, A.J. Aberer, L-T. Nguyen, B.Q. Minh, A. von Haeseler, and A. Stamatakis. The phylogenetic likelihood library. Syst. Biol., 2014, in press. [18] L.-T. Nguyen, H.A. Schmidt, A. von Haeseler, and B.Q. Minh. IQ-TREE: A fast and effective stochastic algorithm for estimating maximum likelihood phylogenies. Mol. Biol. Evol., 2014, in press. [19] C. Leitold, W. Lechner, and C. Dellago. A string reaction coordinate for the folding of a polymer chain. J. Phys. Condens. Matter 27, 194126, 2015. [http] [20] L. F. Lang, and O. Scherzer. Optical Flow on Evolving Sphere-Like Surfaces. 
Preprint on ArXiv arXiv:1506.03358, University of Vienna, Austria, 2015. [preprint]
{"url":"https://www.csc.univie.ac.at/ik/publications/","timestamp":"2024-11-09T22:52:36Z","content_type":"application/xhtml+xml","content_length":"10683","record_id":"<urn:uuid:94a882cc-4d76-4f9b-b8e7-3d25517ece3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00037.warc.gz"}
Thermally driven elastic membranes are quasi-linear across all scales We study the static and dynamic structure of thermally fluctuating elastic thin sheets by investigating a model known as the overdamped dynamic Föppl-von Kármán equation, in which the Föppl-von Kármán equation from elasticity theory is driven by white noise. The resulting nonlinear equation is governed by a single nondimensional coupling parameter g where large and small values of g correspond to weak and strong nonlinear coupling respectively. By analysing the weak coupling case with ordinary perturbation theory and the strong coupling case with a self-consistent methodology known as the self-consistent expansion, precise analytic predictions for the static and dynamic structure factors are obtained. Importantly, the maximum frequency nmax supported by the system plays a role in determining which of three possible classes such sheets belong to: (1) when g≫1, the system is mostly linear with roughness exponent ζ = 1 and dynamic exponent z = 4, (2) when g≪2/nmax, the system is extremely nonlinear with roughness exponent ζ=1/2 and dynamic exponent z = 3, (3) between these regimes, an intermediate behaviour is obtained in which a crossover occurs such that the nonlinear behaviour is observed for small frequencies while the linear behaviour is observed for large frequencies, and thus the large frequency linear tail is found to have a significant impact on the small frequency behaviour of the sheet. Back-of-the-envelope calculations suggest that ultra-thin materials such as graphene lie in this intermediate regime. Despite the existence of these three distinct behaviours, the decay rate of the dynamic structure factor is related to the static structure factor as if the system were completely linear. This quasi-linearity occurs regardless of the size of g and at all length scales. Numerical simulations confirm the existence of the three classes of behaviour and the quasi-linearity of all classes. 
Bibliographical note Publisher Copyright: © 2023 The Author(s). Published by IOP Publishing Ltd. • Family-Vicsek scaling • Föppl-von Kármán equations • out-of-equilibrium dynamics • roughness • self-consistent expansion • thin sheets
{"url":"https://cris.huji.ac.il/en/publications/thermally-driven-elastic-membranes-are-quasi-linear-across-all-sc","timestamp":"2024-11-01T23:09:47Z","content_type":"text/html","content_length":"53097","record_id":"<urn:uuid:d1be98a9-ed9f-404c-aac7-f5e3b8b5fea6>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00507.warc.gz"}
Semiconductor Physics (BBS00015) Study Material - 2024-2025
Brainware University, Kolkata

This document is study material for a Semiconductor Physics module. It covers topics like blackbody radiation, quantum vs classical mechanics, Wien's displacement law, and the Stefan-Boltzmann law. It also explains the photoelectric effect using Einstein's postulate.

B.Tech CSE-DS, B.Tech CSE-AIML, B.Tech CSE-CYS 2024, Semester-I
Semiconductor Physics (BBS00015), Class 2024-25 ODD
Study Material (Semiconductor Physics, BBS00015), Module I part 1

Contents
1 Introduction
2 Limitation of Classical Physics
  2.1 Blackbody Radiation and the Ultraviolet Catastrophe
  2.2 Photoelectric effect
3 Quantum vs. Classical Mechanics
4 Planck's theory of blackbody radiation
  4.1 Deduction from Planck's law
    4.1.1 Wien's Displacement law
    4.1.2 Stefan-Boltzmann law
5 Photoelectric effect
  5.1 Characteristics of photoemission
  5.2 Explanation of Photoelectric effect using Einstein's postulate
  5.3 Numerical example
1 Introduction
This section deals with a "qualitative" overview of quantum physics and how it compares to classical physics. We shall learn about a few fundamental experiments that illustrate the limitations of classical mechanics and the need for a more fundamental theory. Next, we shall compare the basic properties of Quantum and Classical mechanics using non-technical terms (with minimal use of mathematical derivation) as far as possible.

2 Limitation of Classical Physics
2.1 Blackbody Radiation and the Ultraviolet Catastrophe
Fig. 1 shows the energy distribution of blackbody radiation as a function of wavelength. Classical physics tells us that the amount of radiation is inversely proportional to the wavelength, or, more precisely, that the power emitted per unit area per unit solid angle per unit wavelength is proportional to 1/λ⁴, where λ is the wavelength. This means that as the wavelength approaches zero, the amount of radiation becomes infinite! The black curve in Fig. 1 illustrates this. This result is known as the ultraviolet catastrophe, since ultraviolet light has shorter wavelengths than the visible range of electromagnetic waves. Obviously, this does not match well with experimental data, since when we measure the total radiation emitted from a black body, we certainly do not measure it to be infinity! Hence classical theory has a serious limitation in explaining the experimentally observed distribution of electromagnetic radiation emitted by a blackbody. Therefore, we need a more fundamental concept, Quantum theory.

Prepared by the faculty members of the Physics department, Brainware University, Kolkata

[Fig. 1: The electromagnetic spectrum of energy radiated by a black body at 3000 K, 4000 K and 5000 K, together with the classical-theory prediction at 5000 K, over wavelengths 0–3 μm spanning the UV, visible and infrared. Source: Wikipedia.]
According to quantum theory, radiation can only be emitted in discrete "packets" of energy called quanta. If we assume this idea of emission of quanta of energy, we can theoretically calculate the distribution curve of blackbody radiation, and it agrees well with the experimentally measured spectrum. The law describing the amount of radiation at each wavelength is called Planck's law of blackbody radiation, after Max Planck, who proposed this theory.

2.2 Photoelectric effect
When light falls on a metal surface, the metal surface emits electrons if the frequency of the incident light is greater than a minimum frequency, known as the threshold frequency of the metal. This phenomenon is known as the photoelectric effect. Using classical physics and the assumption that light is a wave, one can make the following predictions:
• Brighter light should have more energy, so it should cause the emitted electrons to have more kinetic energy and thus move faster.
• Light with higher frequency should hit the material more often, so it should cause a higher rate of electron emission, resulting in a larger electric current.
However, what happens is the exact opposite:
• The kinetic energy of the emitted electrons increases with frequency, not brightness.
• The electric current increases with brightness, not frequency.
Since the classical theory appears inadequate in explaining this effect, we need Quantum theory. Similar to Planck's theory of blackbody radiation, Einstein proposed that light consists of discrete quanta called photons. Each photon has energy proportional to the frequency of the light. Brighter light of the same frequency has more photons; however, each photon has the same amount of energy.
This proposed model agrees well with the experimentally observed phenomenon.

3 Quantum vs. Classical Mechanics
Quantum mechanics is, as far as we know, the exact and fundamental theory of reality. Quantum mechanics is necessary to describe small objects, like elementary particles, atoms, and molecules. All big objects are effectively made of microscopic particles; therefore, in principle, quantum mechanics can describe humans, planets, galaxies, and even the whole universe. This is where classical mechanics comes into the picture; when many small quantum systems make up one large system, classical mechanics generally offers an adequate description for all practical purposes. This is similar to how relativity is always the correct way to describe physics, but at low velocities, much smaller than the speed of light, Newtonian physics is a good enough approximation.

4 Planck's theory of blackbody radiation
The failure of classical theory to explain the experimentally observed energy distribution of the radiation emitted by a blackbody made the path for a new theory. At the beginning of the twentieth century, Max Planck proposed a new concept to explain blackbody radiation: the emission and absorption of radiation is not a continuous process but occurs discretely, as an integral multiple of a basic unit called the quantum of energy. Each quantum carries a definite amount of energy E = hν, where h is a universal constant known as Planck's constant (h = 6.626 × 10⁻³⁴ J s). Planck estimated the value of h by fitting the theory to the experimentally measured data (Fig. 1).
The derivation of the expression for the energy density radiated by a blackbody using Planck's postulate is beyond the scope of this course, and therefore we assume the following form of the energy density of blackbody radiation for wavelengths between λ and λ + dλ:

u_λ dλ = (8πhc / λ⁵) · dλ / (exp(hc/λK_B T) − 1)    (1)

or, equivalently, in terms of frequency,

u_ν dν = (8πh / c³) · ν³ dν / (exp(hν/K_B T) − 1)    (2)

where K_B is the Boltzmann constant (1.38 × 10⁻²³ J K⁻¹) and c is the speed of light in vacuum. Planck's formula for the energy distribution of blackbody radiation agrees well with the experimentally measured data (Fig. 1) for any value of the wavelength λ.

Previously, two other laws based on classical theory were known that can partially reproduce the experimentally measured energy distribution of blackbody radiation (Fig. 1).

Wien's law of blackbody radiation
Wilhelm Wien, from thermodynamical considerations and some arbitrary assumptions regarding the mechanism of emission and absorption of radiation, proposed a functional form of u_λ for a given temperature T,

u_λ(T) dλ = (a/λ⁵) exp(−b/λT) dλ    (3)

where a and b are constants.

Rayleigh-Jeans Law
The British physicists Lord Rayleigh and Sir James Jeans, based on classical-theory arguments and empirical facts, proposed a functional form of the energy density of blackbody radiation,

u_λ dλ = (aT/λ⁴) dλ    (4)

where a is a constant and T is the temperature of the blackbody. This form agrees well with the experimentally measured spectra of blackbody radiation only at long wavelengths. It fails severely at short wavelengths, a failure known as the ultraviolet catastrophe.
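These limiting behaviours are easy to check numerically. The sketch below is my own illustration (not part of the notes); it evaluates Eq. 1 against the Wien form (with a = 8πhc, b = hc/K_B) and the Rayleigh-Jeans form (with a = 8πK_B) at T = 5000 K, using the same rounded constants as the text:

```python
import math

h = 6.626e-34   # Planck constant, J s
c = 3.0e8       # speed of light, m/s
kB = 1.38e-23   # Boltzmann constant, J/K
T = 5000.0      # blackbody temperature, K

def planck(lam):
    # Planck energy density u_lambda, Eq. 1
    return 8 * math.pi * h * c / (lam**5 * (math.exp(h * c / (lam * kB * T)) - 1))

def wien(lam):
    # Wien's form, Eq. 3, with a = 8*pi*h*c and b = h*c/kB
    return 8 * math.pi * h * c / lam**5 * math.exp(-h * c / (lam * kB * T))

def rayleigh_jeans(lam):
    # Rayleigh-Jeans form, Eq. 4, with a = 8*pi*kB
    return 8 * math.pi * kB * T / lam**4

short = 2e-7   # 200 nm: here hc/(lam*kB*T) >> 1
long_ = 1e-3   # 1 mm:  here hc/(lam*kB*T) << 1

print(wien(short) / planck(short))            # ~1: Wien matches at short wavelengths
print(rayleigh_jeans(long_) / planck(long_))  # ~1: Rayleigh-Jeans matches at long wavelengths
```

At 200 nm the two ratios differ from 1 only in the seventh decimal place for Wien, while at 1 mm the Rayleigh-Jeans form agrees with Planck's law to better than one percent.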
Below we show that in the short- and long-wavelength limits the functional form of Planck's law of blackbody radiation reduces to Wien's law (λ → 0) and the Rayleigh-Jeans law (λ → ∞), respectively.

For very short wavelengths, i.e. λ → 0, we have hc/λK_B T ≫ 1, and therefore

exp(hc/λK_B T) − 1 ≈ exp(hc/λK_B T).

If we set 8πhc = a and hc/K_B = b, in the short-wavelength limit Eq. 1 reduces to

lim_{λ→0} u_λ dλ = (8πhc/λ⁵) exp(−hc/λK_B T) dλ = (a/λ⁵) exp(−b/λT) dλ    (5)

which is the same as Eq. 3. Therefore, Planck's law of blackbody radiation reduces to Wien's law at short wavelengths.

Again, for long wavelengths, i.e. λ → ∞, we have hc/λK_B T ≪ 1. We know that for x ≪ 1, eˣ = 1 + x + x²/2! + x³/3! + ⋯. Hence, neglecting higher-order terms in hc/λK_B T, we can write

exp(hc/λK_B T) ≈ 1 + hc/λK_B T
exp(hc/λK_B T) − 1 ≈ hc/λK_B T.

Thus we have

lim_{λ→∞} u_λ dλ = (8πhc/λ⁵) · dλ/(hc/λK_B T) = (8πhc/λ⁵) · (λK_B T/hc) dλ = (8πK_B T/λ⁴) dλ = (aT/λ⁴) dλ

which is the same as the Rayleigh-Jeans law (Eq. 4), with a = 8πK_B.

4.1 Deduction from Planck's law
4.1.1 Wien's Displacement law
When the temperature of a blackbody increases, the overall radiated energy increases and the peak of the radiation curve moves to shorter wavelengths. According to Wien's displacement law, the product of the wavelength at which the energy density of blackbody radiation is maximum and the corresponding temperature of the blackbody is constant. From Planck's law (Eq. 1) one can easily see that u_λ is maximum when its denominator is minimum, since the numerator is a constant.
The denominator can be written as

z = λ⁵ (exp(hc/λK_B T) − 1).

Since dz/dλ = 0 at λ = λ_m, we have

dz/dλ = 5λ⁴ (exp(hc/λK_B T) − 1) + λ⁵ exp(hc/λK_B T) · (−hc/λ²K_B T) = 0 for λ = λ_m
⇒ e^{hc/λ_m K_B T} (5λ_m⁴ − (hc/K_B T) λ_m³) − 5λ_m⁴ = 0
⇒ λ_m⁴ e^{hc/λ_m K_B T} (5 − hc/λ_m K_B T) − 5λ_m⁴ = 0
⇒ λ_m⁴ (eˣ (5 − x) − 5) = 0,  where x = hc/λ_m K_B T
⇒ eˣ (5 − x) − 5 = 0, since λ_m ≠ 0
⇒ eˣ (5 − x) = 5
⇒ 5 − x = 5e⁻ˣ
⇒ 5(1 − e⁻ˣ) = x
⇒ 1 − e⁻ˣ = x/5.

This is a transcendental equation that cannot be solved analytically. Therefore, we solve it graphically by plotting y = 1 − exp(−x) and y = x/5 on the same graph. The point of intersection of the two curves gives the solution x = 4.96. We thus get

λ_m T = hc/(K_B x) = hc/(4.96 K_B) = 0.0029 m·K = constant.

This is Wien's displacement law.

4.1.2 Stefan-Boltzmann law
The Stefan-Boltzmann law states that the total energy radiated per unit surface area of a blackbody across all wavelengths per unit time is proportional to the fourth power of its absolute temperature. The corresponding proportionality constant is known as the Stefan constant, σ = 5.67 × 10⁻⁸ J/(s m² K⁴).

If E_λ is the intensity of the emitted radiation between wavelengths λ and λ + dλ, the total energy across all wavelengths emitted by the blackbody is given by

E = ∫₀^∞ E_λ dλ.

The energy density u_λ is related to the intensity of the emitted radiation as E_λ = (c/4) u_λ, so

E = ∫₀^∞ E_λ dλ = (c/4) ∫₀^∞ u_λ dλ = (c/4) · 8πhc ∫₀^∞ dλ / (λ⁵ (exp(hc/λK_B T) − 1)).

Let us substitute hc/λK_B T = x, so that −(1/λ²) dλ = (K_B T/hc) dx. Therefore we can write

E = 2πhc² ∫₀^∞ (K_B T/hc)⁴ · x³/(eˣ − 1) dx
  = (2πK_B⁴ T⁴ / h³c²) ∫₀^∞ x³/(eˣ − 1) dx
  = (2πK_B⁴ T⁴ / h³c²) · π⁴/15
  = (2π⁵K_B⁴ / 15h³c²) T⁴
  = σT⁴

where σ is known as the Stefan constant.
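Both results above are easy to verify numerically. The following sketch is my own illustration (not part of the notes): it solves the transcendental equation 5(1 − e⁻ˣ) = x by bisection and evaluates the closed form for σ, using the same rounded constants as the text:

```python
import math

h, c, kB = 6.626e-34, 3.0e8, 1.38e-23  # rounded constants, SI units

# Wien's displacement law: solve 5*(1 - e^{-x}) = x by bisection on [4, 5]
def f(x):
    return 5 * (1 - math.exp(-x)) - x

lo, hi = 4.0, 5.0   # f(4) > 0 and f(5) < 0, so the root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2
print(round(x, 3))        # 4.965 (the notes round this to 4.96)
print(h * c / (x * kB))   # lambda_m * T ≈ 2.9e-3 m·K

# Stefan constant from sigma = 2*pi^5*kB^4 / (15*h^3*c^2)
sigma = 2 * math.pi**5 * kB**4 / (15 * h**3 * c**2)
print(sigma)              # ≈ 5.65e-8 with these rounded constants (accepted value 5.67e-8)
```

The small discrepancy in σ comes entirely from the three-digit rounding of h, c and K_B.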
Putting in the values of the constants, we get

2π⁵K_B⁴ / (15h³c²) = 5.67 × 10⁻⁸ J/(s m² K⁴).

Thus the Stefan-Boltzmann law can be derived from Planck's law of blackbody radiation.

5 Photoelectric effect
The photoelectric effect is the phenomenon of the emission of electrons from the surface of a metal when a beam of electromagnetic radiation of appropriate frequency is incident on it. Electrons thus emitted are known as photoelectrons, and the current produced by the emission of the photoelectrons is known as the photocurrent.

5.1 Characteristics of photoemission
1. The number of photoelectrons emitted per second, that is, the photoelectric current, is directly proportional to the intensity of the incident radiation, but it is independent of the frequency (or wavelength) of the incident light.
2. The maximum speed of the emitted photoelectrons is independent of the intensity of the incident light, but depends on its frequency, increasing linearly with the frequency of the incident light.
3. For each material emitting photoelectrons, there is a minimum energy, φ₀, that must be supplied to the material to have photoelectrons emitted from its surface. This minimum energy is called the work function of the material. In other words, there is a minimum frequency ν₀ of the incident light below which no photoelectron would be ejected from the surface of the metal. This cut-off frequency is known as the threshold frequency.

5.2 Explanation of the Photoelectric effect using Einstein's postulate
In 1905, Albert Einstein proposed a new theory of electromagnetic radiation to explain the photoelectric effect. According to this theory, photoelectric emission does not take place by continuous absorption of energy from radiation.
Light energy is built up of discrete units – the so-called quanta of energy of radiation. Each quantum of radiant energy has energy hν, where h is Planck's constant and ν is the frequency of the light. Such a quantum or packet of light energy is known as a photon. When light of energy hν falls on the surface of a metal having work function φ₀, photoelectrons are ejected only if hν > φ₀. The excess energy hν − φ₀ is taken by the electron as its kinetic energy. Thus the maximum kinetic energy of the emitted photoelectrons is

(1/2) m v²_max = hν − φ₀    (6)

One can apply a negative potential to stop the ejected photoelectrons. The minimum negative potential that must be applied to stop the fastest-moving photoelectron is known as the stopping potential. If V_s is the potential required to stop the fastest-moving photoelectron, having speed v_max, then

eV_s = (1/2) m v²_max.

Therefore Eq. 6 can also be written in the alternative forms

hν − φ₀ = eV_s
hν = hν₀ + eV_s    (7)

5.3 Numerical example
In an experiment, a tungsten cathode that has a threshold wavelength of 2300 Å is irradiated by ultraviolet light of wavelength 1800 Å. Calculate the maximum kinetic energy of the emitted photoelectrons and the work function of tungsten.

(1/2) m v²_max = hν − hν₀
             = hc (1/λ − 1/λ₀)
             = 6.626 × 10⁻³⁴ × 3 × 10⁸ × (1/(1800 × 10⁻¹⁰) − 1/(2300 × 10⁻¹⁰))
             = 2.4 × 10⁻¹⁹ J
             = (2.4 × 10⁻¹⁹ / 1.6 × 10⁻¹⁹) eV = 1.5 eV

The work function is φ₀ = hν₀ = hc/λ₀ = 6.626 × 10⁻³⁴ × 3 × 10⁸ / (2300 × 10⁻¹⁰) = 8.6 × 10⁻¹⁹ J ≈ 5.4 eV.
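The arithmetic of this example is easy to reproduce. This is a sketch of mine using the same rounded constants as the notes; the work function follows from φ₀ = hc/λ₀:

```python
h = 6.626e-34     # Planck constant, J s
c = 3.0e8         # speed of light, m/s
e = 1.6e-19       # elementary charge, C (so 1 eV = 1.6e-19 J)

lam  = 1800e-10   # incident wavelength, m
lam0 = 2300e-10   # threshold wavelength, m

ke   = h * c * (1 / lam - 1 / lam0)   # maximum kinetic energy, Eq. 6 with nu = c/lambda
phi0 = h * c / lam0                   # work function of tungsten

print(ke / e)     # ≈ 1.5 eV
print(phi0 / e)   # ≈ 5.4 eV
```

Dividing by the elementary charge converts joules to electron-volts, as in the last step of the worked solution.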
bilinear map

For abelian groups
For $A$, $B$ and $C$ abelian groups and $A \times B$ the cartesian product group, a bilinear map $f : A \times B \to C$ from $A$ and $B$ to $C$ is a function of the underlying sets (that is, a binary function from $A$ and $B$ to $C$) which is a linear map – that is, a group homomorphism – in each argument separately.

The definition of the tensor product of abelian groups is precisely such that the following is an equivalent definition of bilinear map: For $A, B, C \in Ab$, a function of sets $f : A \times B \to C$ is a bilinear map from $A$ and $B$ to $C$ precisely if it factors through the tensor product of abelian groups $A \otimes B$ as

$f \;\colon\; A \times B \to A \otimes B \to C \,.$

For modules
For $R$ a ring (or rig) and $A, B, C \in R$Mod modules (say on the left, but on the right works similarly) over $R$, a bilinear map from $A$ and $B$ to $C$ is a function of the underlying sets $f : A \times B \to C$ which is a bilinear map of the underlying abelian groups as in the definition above, and in addition such that for all $r \in R$ we have

$f(r a, b) = r f(a,b)$
$f(a, r b) = r f(a,b) \,.$

As before, if $R$ is commutative then this is equivalent to $f$ factoring through the tensor product of modules

$f : A \times B \to A \otimes_R B \to C \,.$

Multilinear maps are again a generalisation.

For bimodules
For rings $R$ and $S$ and $R$-$S$-bimodules $A$, $B$, and $C$, an $R$-$S$-bilinear map from $A$ and $B$ to $C$ is a function of the underlying sets $f : A \times B \to C$ which is a bilinear map of the underlying abelian groups as in the definition above,
and in addition such that for all $r \in R$ and $s \in S$ we have

$f(r a s, b) = r f(a,b) s$
$f(a, r b s) = r f(a,b) s \,.$

If $R$ and $S$ are commutative rings, then this is equivalent to $f$ factoring through the tensor product of bimodules

$f : A \times B \to A \otimes_{R, S} B \to C \,.$

Multilinear maps are again a generalisation.

For $\infty$-modules
See at tensor product of ∞-modules.

A bilinear form is a bilinear map $f\colon A, B \to K$ whose target is the ground ring $K$; more generally, a multilinear form is a multilinear map whose target is $K$.

A bilinear map $f\colon A, A \to K$ whose two sources are the same is symmetric if $f(a, b) = f(b, a)$ always; more generally, a multilinear map whose sources are all the same is symmetric if $f(a_1, a_2, \ldots, a_n) = f(a_{\sigma(1)}, a_{\sigma(2)}, \ldots, a_{\sigma(n)})$ for each permutation $\sigma$ in the symmetric group $S_n$. (It's enough to check the $n-1$ generators of $S_n$ that transpose two adjacent arguments.) In particular, this defines symmetric bilinear and multilinear forms.

A bilinear map $f\colon A, A \to K$ whose two sources are the same is antisymmetric if $f(a, b) = -f(b, a)$ always; more generally, a multilinear map whose sources are all the same is antisymmetric if $f(a_1, a_2, \ldots, a_n) = (-1)^\sigma f(a_{\sigma(1)}, a_{\sigma(2)}, \ldots, a_{\sigma(n)})$ for each permutation $\sigma$ in the symmetric group $S_n$, where $(-1)^\sigma$ is $1$ or $-1$ according as $\sigma$ is an even or odd permutation. (It's enough to check the $n-1$ generators of $S_n$ that transpose two adjacent arguments, which are each odd and so each introduce a factor of $-1$.) In particular, this defines antisymmetric bilinear and multilinear forms.

A bilinear map $f\colon A, A \to K$ whose two sources are the same is alternating
if $f(a, a) = 0$ always; more generally, a multilinear map whose sources are all the same is alternating if $f(a_1, a_2, \ldots, a_n) = 0$ whenever there exists a nontrivial permutation $\sigma$ in the symmetric group $S_n$ such that $(a_1, a_2, \ldots, a_n) = (a_{\sigma(1)}, a_{\sigma(2)}, \ldots, a_{\sigma(n)})$, in other words whenever there exist indexes $i \neq j$ such that $a_i = a_j$. (It's enough to say that $f(a_1, a_2, \ldots, a_n) = 0$ whenever two adjacent arguments are equal, although this is not as trivial as the analogous statements in the previous two paragraphs.) In particular, this defines alternating bilinear and multilinear forms.

In many cases, antisymmetric and alternating maps are equivalent:

An alternating bilinear (or even multilinear) map must be antisymmetric. If $f$ is an alternating bilinear map, then $f(a + b, a + b) = f(a, a) + f(a, b) + f(b, a) + f(b, b) = 0 + f(a, b) + f(b, a) + 0$, so $f(a, b) + f(b, a) = f(a + b, a + b) = 0$, so $f(a, b) = -f(b, a)$; that is, $f$ is antisymmetric. The general multilinear case is similar. (Note that linearity is essential to this proof.)

If the ground ring is a field whose characteristic is not $2$, or more generally if $1/2$ exists in the ground ring, or more generally if $2$ is cancellable in the target of the map in question, then an antisymmetric bilinear (or even multilinear) map must be alternating. If $f$ is an antisymmetric bilinear map, then $f(a, a) = -f(a, a)$, so $2 f(a, a) = f(a, a) + f(a, a) = f(a, a) - f(a, a) = 0$, so $f(a, a) = 0$ (by dividing by $2$, multiplying by $1/2$, or cancelling $2$, as applicable). The general multilinear case is similar. (Note that linearity is irrelevant to this proof.)
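A concrete numerical illustration of the first statement (my own example, not from the text): over the reals, the form $f(a, b) = a^T M b$ with $M$ antisymmetric satisfies $f(a, a) = 0$, and checking random vectors confirms that it is then also antisymmetric, exactly as the polarization argument predicts.

```python
import random

random.seed(0)
n = 4

# Build an antisymmetric matrix M: zero diagonal, M[j][i] = -M[i][j]
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        M[i][j] = random.uniform(-1, 1)
        M[j][i] = -M[i][j]

def f(a, b):
    # bilinear form f(a, b) = a^T M b
    return sum(a[i] * M[i][j] * b[j] for i in range(n) for j in range(n))

a = [random.uniform(-1, 1) for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(n)]

print(abs(f(a, a)) < 1e-12)            # True: the form is alternating
print(abs(f(a, b) + f(b, a)) < 1e-12)  # True: hence antisymmetric
```

The tolerances absorb floating-point rounding; algebraically both quantities are exactly zero.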
The argument that the simplified description of alternation is correct is along the same lines as Proposition above: If a trilinear map is alternating in the first two arguments and in the last two arguments, or more generally if a multilinear map is alternating in every pair of adjacent arguments (or indeed in any set of transpositions that generate the entire symmetric group), then the map is alternating overall. If $f$ is a trilinear map that alternates in each adjacent pair of arguments, then $f(a + b, a + b, a) = f(a, a, a) + f(a, b, a) + f(b, a, a) + f(b, b, a) = 0 + f(a, b, a) + 0 + 0$, so $f(a, b, a) = f(a + b, a + b, a) = 0$; that is, $f$ is alternating in the remaining pair of arguments. The general multilinear case is similar. (Again, linearity is essential to this proof.) In the context of higher algebra/(∞,1)-category theory bilinear maps in an (∞,1)-category are discussed in section 4.3.4 of
Radius of Gyration Calculator \( k = \sqrt{\frac{I}{m}} \) What is Radius of Gyration? Radius of Gyration is a fundamental concept in the field of structural engineering, mechanics, and material science. It plays a crucial role in understanding and analyzing the stability, strength, and behavior of structures under various load conditions. The Radius of Gyration, denoted usually by ‘k’, is a measure that describes the distribution of a cross-sectional area of a column (or any structural member) around an axis. More precisely, it is the square root of the area moment of inertia divided by the cross-sectional area. Significance in Engineering 1. Buckling Analysis: In structural engineering, the Radius of Gyration is particularly important in the analysis of column buckling. It helps in determining the slenderness ratio of a column, which is a key factor in predicting whether a column will fail by buckling under a given load. The slenderness ratio is defined as the effective length of the column divided by the radius of gyration. Columns with a larger slenderness ratio are more prone to buckling. 2. Structural Design: The concept is also essential in the design of various structural elements. It aids in understanding how a structural member will behave under compressive loads, thereby influencing the choice of materials and cross-sectional shapes in design considerations. 3. Material Science Applications: In material science, the radius of gyration is used to describe the dimensions of polymers. It provides an average measure of the size of the polymer coil, which is critical in understanding the physical properties of polymers like viscosity and flow behavior. 
Radius of Gyration Formula
\( k = \sqrt{\frac{I}{A}} \)
• k – Radius of Gyration (meters, m)
• I – Area Moment of Inertia (meters to the fourth power, m⁴)
• A – Cross-sectional Area (square meters, m²)

Applications of Radius of Gyration in Engineering
The concept of the radius of gyration plays a pivotal role in various engineering disciplines, offering insights into the structural integrity and performance of materials and components. This section delves into the diverse applications of the radius of gyration in engineering, highlighting its significance in improving design efficiency, safety, and functionality.

1. Structural Engineering: Ensuring Stability and Strength
In structural engineering, the radius of gyration is essential for assessing the buckling resistance of columns and beams. It is a critical parameter in Euler's buckling formula, which determines the critical load at which a slender column will buckle under compression. Engineers use this concept to design structures that can withstand specific load conditions without compromising safety. By calculating the radius of gyration, engineers ensure that buildings, bridges, and other structures have adequate strength and stability.

2. Mechanical Engineering: Design of Rotating Machinery
In the realm of mechanical engineering, the radius of gyration is crucial for designing rotating machinery components like gears, flywheels, and rotors. It helps in analyzing the dynamic behavior of these components, ensuring that they can operate efficiently at high speeds. Understanding the distribution of mass around the axis of rotation, as indicated by the radius of gyration, allows engineers to optimize the design for minimal vibration and maximal performance.

3. Aerospace Engineering: Aircraft and Spacecraft Design
Aerospace engineers utilize the radius of gyration in designing aircraft and spacecraft. It plays a significant role in determining the moment of inertia, which is vital for stability and control in flight.
Accurate calculations of the radius of gyration help in optimizing the weight distribution of an aircraft or spacecraft, leading to improved aerodynamic performance and fuel efficiency.

4. Naval Architecture: Ship Stability and Design
In naval architecture, the radius of gyration is used to assess the stability of ships and other watercraft. It provides insights into the ship's ability to resist capsizing and ensures that the vessel remains stable in various sea conditions. By evaluating the radius of gyration, naval architects can design ships that are not only safe but also comfortable for passengers and crew.

5. Civil Engineering: Bridge Design and Analysis
In civil engineering, particularly in bridge design, the radius of gyration helps in understanding how a bridge will behave under load. It aids in the analysis of stress distribution and deflection under traffic loads, environmental forces, and during earthquakes. This understanding is crucial for designing bridges that are not only structurally sound but also resilient in the face of natural disasters.

Radius of Gyration Example Problem

Problem Statement
Calculate the radius of gyration for a rectangular beam with a width of 200 mm and a height of 400 mm, about its horizontal centroidal axis.
• Width (b) – 200 mm
• Height (h) – 400 mm

Solution Steps
Step 1: Write the Area Moment of Inertia formula
I = \(\frac{bh^3}{12}\)
Step 2: Convert dimensions to meters
b = 200 mm = 0.2 m
h = 400 mm = 0.4 m
Step 3: Substitute values into the formula
I = \(\frac{0.2 \times 0.4^3}{12}\) = 0.001067 m⁴
Step 4: Calculate the Radius of Gyration (k)
A = b × h = 0.08 m²
k = \(\sqrt{\frac{I}{A}}\) = \(\sqrt{\frac{0.001067}{0.08}}\) ≈ 0.1155 m

Why Is Radius of Gyration Important in Structural Engineering?
In structural engineering, the radius of gyration is vital for assessing the buckling resistance of columns and beams.
It helps determine the load capacity of a structure and ensures that it can withstand various stressors without collapsing. By accurately calculating the radius of gyration, engineers can predict how a structure will behave under specific loads, leading to safer and more reliable designs, especially in high-rise buildings and bridges. Applications of Radius of Gyration in Mechanical Engineering The radius of gyration in mechanical engineering is essential for designing and analyzing rotating components like gears and turbines. It helps in understanding the mass distribution relative to the rotation axis, which is critical for balancing and minimizing vibrations in machinery. This leads to more efficient, stable, and durable mechanical systems, especially in high-speed applications like automotive and aerospace industries.
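The worked example above maps directly to a few lines of code. This is a sketch of mine; for a rectangle about its centroidal axis the result reduces to k = h/√12, independent of the width:

```python
import math

b = 0.200   # beam width, m
h = 0.400   # beam height, m

I = b * h**3 / 12      # area moment of inertia, m^4
A = b * h              # cross-sectional area, m^2
k = math.sqrt(I / A)   # radius of gyration, m

print(round(I, 6))   # 0.001067 m^4
print(round(k, 4))   # 0.1155 m  (= h / sqrt(12))
```

Because b cancels in I/A, only the dimension perpendicular to the bending axis affects k for a rectangular section.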
Multiple Measures for Enrollment in GE B4 (Math-STEM) All first-time students are required to complete 39 lower division and 9 upper division semester units (or quarter unit equivalent) of General Education (GE) coursework to complete a bachelor’s degree. GE Area B4 (math/quantitative reasoning) is one of the GE requirements students take in their first year of college. The diagram below displays the criteria used for students to enroll in their first-year GE math/quantitative reasoning course.
Hey, guys. So in this video, we're going to start solving rotation problems with conservation of energy. Let's check it out. So you may remember that when you have a motion problem between two points, meaning the object starts here and ends up over here somewhere, where either the speed v, the height h, or the spring compression x changes, any combination of those three changes, we can use most of the time the conservation of energy equation to solve these problems. So we're going to do that now to rotation questions. The only difference is that in rotation, your kinetic energy can be not only linear but also rotational. So that's the new thing, that you could be spinning. And it could actually also be both. Right? It could be that our total kinetic energy is linear plus rotational. So we're going to use the conservation of energy equation, which is K_initial + U_initial + W_nonconservative = K_final + U_final. I want to remind you that work nonconservative is the work done by you, by some external force, plus the work done by friction if you have some. Now when you do this, remember, you write the energy equation, and then you start expanding the equation. What I mean by expanding is you replace K with what it is. And K used to be simply (1/2)mv². But now, it could be that you have both of them. Right? Or let's say that instead of (1/2)mv², the object is just spinning. So you're going to write (1/2)Iω². Okay? The key thing to remember, and you would do this for the rest of them, the most important thing in these questions to remember, is that you will rewrite v and ω in terms of each other. What do I mean by that? What I mean is that when you expand the entire equation, you might end up with 1 v and one ω, or 2 v's and one ω, whatever. If you have a v and a ω, that's 2 variables. You're going to change 1 into the other so that you end up with just one variable.
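Here is what that substitution looks like in a standard example (my own sketch, not the problem set up in the transcript): a solid disc rolls without slipping from rest down a height h, with the rolling constraint ω = v/r.

```python
import math

g = 9.8   # gravitational acceleration, m/s^2
h = 2.0   # height dropped by the rolling disc, m

# Energy conservation for a solid disc (I = (1/2) m r^2) rolling from rest:
#   m g h = (1/2) m v^2 + (1/2) I w^2
# Substituting w = v / r turns the rotational term into (1/4) m v^2, so
#   m g h = (3/4) m v^2   =>   v = sqrt(4 g h / 3)
v = math.sqrt(4 * g * h / 3)

print(round(v, 2))   # 5.11 m/s; a sliding block would reach sqrt(2 g h) ≈ 6.26 m/s
```

Both m and r cancel after the substitution, which is exactly why rewriting ω in terms of v leaves a single unknown to solve for.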
For example, most of the time, v and ω are linked by this, or sometimes they are linked by this. Right? Sometimes they're not linked at all. But most of the time, they're connected by either one of these two equations, which means what I'm going to do is rewrite ω as v over r. And wherever I see a ω, I'm going to replace it with a v over r. So instead of having v and ω, I have v and v. And that means that instead of having 2 variables, I have just 1, and it's easier to solve the problems. That's the key thing to remember is rewrite 1 into the other. Let's do an example. Alright. So here we have a solid disc. Solid disc, let's stop there, means that the moment of inertia we're going to use is that of a solid disk, which is the same as a solid cylinder, and it's going to be (1/2)mr². And it says it's free to rotate about a fixed perpendicular axis through its center. Lots of words. Let's analyze what it's saying here. Free to rotate just means that you could rotate, right? Like you can actually spin. Some things can be spun around, others can't. But even though it says that it's free to rotate, it's around a fixed axis. Okay? Remember, it's the difference between a roll of toilet paper that is fixed on the wall and it's free to rotate around the fixed axis versus a free roll of toilet paper that can roll around the floor. Okay. Here, we're fixed in place. So we're going to say that it spins like this. So it has no v, right? Like the actual disc has no velocity v because it's not moving sideways. The center of mass doesn't change position, but it does spin. Okay. Actually, we don't know which way it spins. Let's leave it alone for now. But I'm just going to write that ω is not 0 because it's going to spin. Now what else? It says that the axis is through its center, so it's spinning around the center like this. Okay. And it's saying that it's perpendicular. Perpendicular means that it makes a 90-degree angle. Okay.
Perpendicular means it makes a 90-degree angle with the object. So I got a little disc here. This sort of looks like a disc, and I wan
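Since this is a video transcript, here is the "rewrite ω as v over r" step written out as a quick numerical sketch. The scenario (a solid disc rolling down a height h from rest) and all the numbers are my own illustration, not the fixed-axis example from the video; the point is that substituting ω = v/r leaves a single unknown in the energy equation.

```python
import math

def final_speed(m, r, I, h, g=9.8):
    # Energy conservation: m*g*h = (1/2)*m*v**2 + (1/2)*I*w**2
    # Rewrite w in terms of v (w = v/r) so there is only one unknown:
    # m*g*h = (1/2)*m*v**2 + (1/2)*I*(v/r)**2, then solve for v.
    return math.sqrt(2 * g * h / (1 + I / (m * r**2)))

m, r, h = 2.0, 0.1, 1.0          # kg, m, m (made-up numbers)
I_disc = 0.5 * m * r**2          # solid disc/cylinder about its center
v = final_speed(m, r, I_disc, h)
print(round(v, 3))               # matches the closed form sqrt(4*g*h/3)
```

Note that for a solid disc the mass and radius cancel out of the final answer, which the generic formula makes explicit.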
Closed Form of the BV and On-Resistance of the Punch-Through Unipolar and Bipolar Devices A minimum specific on-resistance for unipolar power devices (or a small voltage drop for bipolar power devices) can be obtained if the drift doping is increased and the drift length is decreased. When a device is designed to break down in the punch-through (PT) mode, a lower specific on-resistance can be obtained while maintaining the required BV, compared to the avalanche breakdown case. Therefore, most power MOSFETs and bipolar devices are designed to have a PT mode. The punch-through structure has a lower doping concentration on the lightly doped side, with a high-concentration region for the contact. The thickness of the lightly doped region is smaller than that of the normal abrupt junction. The electric field profile of the punch-through structure has a rectangular shape, compared to the triangular shape of the abrupt junction structure. As shown in Figure 3.6, the electric field has a maximum value at the junction and decreases linearly across the lightly doped region to a finite value at the interface with the contact region. From (3.25) and (3.29), the avalanche breakdown voltage for the abrupt non-punch-through case and the corresponding depletion width and critical electric field follow, expressed in terms of the doping concentration in the lightly doped region. Assuming the same doping concentration in the lightly doped region for both the non-punch-through and the punch-through junction, the BV of a punch-through junction (see Figure 3.6) can be written in terms of the length of the punch-through structure. Using (3.25) and (3.33), and considering the normalized depletion width, the corresponding quantities of the punch-through structure can be expressed as in (3.36) and (3.37). From (3.36) and (3.37), the BV of the punch-through structure can be determined with the proper choice of doping and length. Figure 3.7 shows the breakdown voltages as a function of doping concentration in the lightly doped region, calculated for punch-through diodes.
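Because the closed-form expressions themselves were lost from this extract, the following sketch uses the standard textbook form of the punch-through breakdown voltage, BV_PT = E_c·W_p - q·N_D·W_p²/(2·ε_s), with Baliga's critical-field fit for silicon, E_c ≈ 4010·N_D^(1/8) V/cm, and the drift-region specific on-resistance R_on,sp = W_p/(q·µ_n·N_D). These formulas and all numbers are my assumptions for illustration, not equations (3.36)-(3.38) from the thesis:

```python
Q = 1.602e-19        # electron charge, C
EPS_SI = 1.044e-12   # silicon permittivity, F/cm (11.8 * 8.85e-14)
MU_N = 1350.0        # electron mobility in lightly doped Si, cm^2/Vs

def bv_punch_through(nd, wp):
    """Punch-through breakdown voltage (V) for a trapezoidal field profile.
    nd: doping of the lightly doped region in cm^-3, wp: drift length in cm."""
    ec = 4010.0 * nd**0.125                      # Baliga's fit, V/cm (assumed)
    return ec * wp - Q * nd * wp**2 / (2.0 * EPS_SI)

def ron_sp(nd, wp):
    """Specific on-resistance of the drift region, ohm*cm^2."""
    return wp / (Q * MU_N * nd)

nd, wp = 1e14, 50e-4                             # 1e14 cm^-3, 50 um drift
print(bv_punch_through(nd, wp), ron_sp(nd, wp))
```

The sketch shows the trade-off discussed in the text: shortening the drift region lowers the on-resistance but also lowers the achievable BV.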
With increased doping concentration and thickness of the lightly doped region, the breakdown voltage becomes equal to that of the avalanche breakdown for the abrupt junction. As shown in Figure 3.7, the breakdown voltage of the punch-through diode is a weak function of the doping concentration in the lightly doped region. Along with the desired BV, a minimum on-resistance is important for unipolar devices, and the doping and length of the drift region contribute most of the total on-resistance of a high-voltage device. From Figure 3.6, the on-resistance per unit area follows as in (3.38). By combining (3.37) and (3.38), it is possible to choose the normalized depletion width which gives a minimum on-resistance.

Jong-Mun Park 2004-10-28
Printable Agility Ladder Drills

Printable Agility Ladder Drills - Perform ladder drills by placing an agility ladder on the ground. Ladders can also be drawn on. Stand on one end of the ladder. Run the length of the entire ladder with your feet landing in each square. Perform each of the following drills throughout the full length of the agility ladder. Each exercise should be performed twice, leading with a different foot. The key when using the agility ladder is to minimize the ground time with each foot contact: the quicker the athlete's feet are off the ground, the better.

Pictured drill sets: Printable Agility Ladder Drills, Agility Ladder Workout for Basketball, Agility Ladder Exercise Series, 9 Agility Ladder Drills for PE Lessons, Schematic representation of the agility ladder exercises.
Slope Calculator - cryptocrape.com Slope Calculator Calculate Slope (m) between Two Points How to Calculate Slope Using the Calculator 1. Navigate to the Page with the Calculator: □ Open your web browser and go to the page where you added the Slope Calculator shortcode (e.g., yourwebsite.com/your-page). 2. Select Calculation Type: □ 2 Points are Known: Choose this option if you have the coordinates of two points on a line. The calculator will compute the slope (m) using these two points. □ 1 Point and the Slope are Known: Choose this option if you know one point on the line and the slope (m). The calculator will find the y-intercept (b) for you. 3. Input Values: □ Depending on your selection, different fields will appear: For “2 Points are Known”: □ Enter the coordinates of the two points: ☆ Point 1 (x1, y1): Fill in the x- and y-coordinates of the first point. ☆ Point 2 (x2, y2): Fill in the x- and y-coordinates of the second point. □ Click Calculate Slope to find the slope. For “1 Point and the Slope are Known”: □ Enter the x- and y-coordinates of the known point. □ Enter the Slope (m) of the line. □ Click Calculate Y-Intercept to find the y-intercept (b) of the line. 4. View the Result: □ The result will appear below the form, showing the slope (m) or y-intercept (b), depending on your calculation type. 5. Clear Fields (optional): □ To clear all input fields and the output result, click Clear. Alternatively, switching between calculation types will also clear the fields. Example Scenarios Example 1: Calculate Slope When “2 Points are Known” • Input: □ Point 1 (x1, y1): (2, 3) □ Point 2 (x2, y2): (5, 11) • Result: Slope (m) will display as 2.67. Example 2: Calculate Y-Intercept When “1 Point and the Slope are Known” • Input: □ Point (x, y): (2, 3) □ Slope (m): 2 • Result: Y-Intercept (b) will display as -1. This guide should help you easily calculate slopes and y-intercepts using your calculator.
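The two calculator modes boil down to two one-line formulas; here is a minimal sketch (function names are my own, not taken from the plugin's code), reproducing both example scenarios from the guide:

```python
def slope(x1, y1, x2, y2):
    # Mode 1: two points known -> m = (y2 - y1) / (x2 - x1)
    return (y2 - y1) / (x2 - x1)

def y_intercept(x, y, m):
    # Mode 2: one point and the slope known -> b = y - m*x
    return y - m * x

print(round(slope(2, 3, 5, 11), 2))   # Example 1: 2.67
print(y_intercept(2, 3, 2))           # Example 2: -1
```

A real implementation should also handle the vertical-line case (x1 == x2), where the slope is undefined and the division would fail.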
Premium Members USD/JPY & ES_F Harmonic Scenario Charts

Using the USD/JPY currency chart gives an indication of the Japanese Yen's price behavior as well as the strength of the US Dollar. The daily chart shows extreme indecision. The current sideways range is constricting, creating opposing emerging patterns. What this does is give us targets whether price stays sideways or breaks important levels that can make or break one of the opposing emerging Harmonic Patterns. The initial sideways range extremes are 101.52 and 95.44; a break and hold … this is a daily candle close beyond one of these levels followed by continuation in that break direction … of these levels then has the larger sideways range extremes and Harmonic Pattern invalidation potential. So, a close above 101.52 first of all invalidates the blue emerging Bullish Butterfly and Crab Harmonic Patterns, then secondly has the initial resistance test of 103.73; a hold above there invalidates the green emerging Bullish Bat Harmonic Pattern, and finally has the ideal target at 109.88 with a scaling point at 106.44. A hold below 95.44 has an initial support test at 93.78; below there invalidates the brown emerging Bearish Butterfly/Crab Harmonic Patterns and has the ideal target at 80.43 with scaling points at 91.67, 89 and 83.06. Notice how the completion of the blue Butterfly and Crab helps price push below the significant level 93.78, which increases the probability of completing the larger pattern. ES_F breached a Bearish Shark Harmonic Pattern's ideal completion zone, aka PRZ (Potential Reversal Zone), but two things of interest occurred after that breach … one, price has so far failed to hold above the top of the PRZ, and two, price is attempting to hold below the bottom of the PRZ. Most Harmonic Patterns require a failure to make a higher high or a lower low … but Sharks and 5-0s do make them.
The 5-0 does represent a 50% retracement after a higher high has been made … there are specifics, of course, with how much pullback before making the higher high (in this case) and how much of a higher high it is … but back to our daily chart: if price goes into retracement mode of the Bearish Shark, the ideal target is the 50% mark to form the Bullish 5-0 Harmonic Pattern. This level is 1670.75 … so the GRZ (Golden Ratio Zone) is the initial support test at 1683; this will help price fill that gap. It is possible that price breaches the 1670.75 level to test the bottom of the GRZ at 1658.50; the important thing is whether downside continuation occurs … if so, the ideal target is another Shark PRZ, but this Harmonic Shark is bullish. Then we can seek another 5-0 scenario, but for now, price is at a stall place or pullback-into-support scenario, so as long as price holds below 1708 (bottom of the bearish Shark PRZ), the bias is to the downside … at least to 1683 or 1670.75. Not until price can hold above 1723 does the bias have an upside target of 1770.25.
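The retracement arithmetic behind levels like the "50% mark" above is simple to state as a helper. The swing values below are made up for illustration, not the actual USD/JPY or ES_F swing points:

```python
def retracement(swing_low, swing_high, ratio):
    # Price level that retraces `ratio` of the move from low to high.
    return swing_high - ratio * (swing_high - swing_low)

# Hypothetical swing: low 90.00, high 110.00
print(retracement(90.0, 110.0, 0.5))    # 50% mark
print(retracement(90.0, 110.0, 0.618))  # golden-ratio level
```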
Towards a scientific exploration of computational realities.

Computer "science"

Computer science deals with machines, employs mathematics and has perfectly reproducible experiments. Science! Right? What if the mathematics do not predict real-world behaviour? What if the experimental design does not allow us to generalise beyond a handful of problem instances? What are we missing here? Consider the following paradigms:

A Reproducible experiments and an objective manner to analyse results
B Formal computation model and asymptotic complexity results
C Targeted observations that allow us to distinguish between possible realities

While the engineering paradigm A and mathematical paradigm B may achieve a scientific appearance due to plots and formulas, only the science paradigm C specifically targets observations (e.g., empirical and theoretical results) that build conclusive knowledge about computational realities (more details at the bottom of this page). A hopeful quote from over 15 years ago:

The science paradigm has not been part of the mainstream perception of computer science. But soon it will be. — Former ACM President Peter J. Denning [source]

Are we there yet? It could be argued that to this day computer science leaves itself open to the following criticisms:

• Asymptotic computational complexity: Even average-case/smoothed complexities can be a very poor predictor of real-world behaviour, because for finite problem instances asymptotic formulas are just coarse approximations.
• Handpicked experiments: A handful of cherrypicked problem instances is typically not appropriate for generalisations.
• Disingenuous/sensationalist presentation: Colourful narratives are very exciting, but without clear and rigorous evidence they mislead readers and create unrealistic expectations.
Asymptotic computational complexity has been a staple of computer science for many decades, and while it is a very useful heuristic, it is perhaps not surprising that simple formulas can only provide an overly simplistic view of computational realities, which are often more complex [more details]. There have been some efforts to address cherrypicking and replication problems:

• Established benchmarks: Not available for most problems. Counteract cherrypicked experiments, but invite cherrypicked approaches aimed at performing well only on the reference benchmark.
• Reproducibility efforts [SIGMOD] [ML/NLP]: Counteract misreported experiments. Publicly available code facilitates follow-up studies, but does not affect any incentive structures.

How can we get there? While previous efforts have been a step in the right direction, a general culture change is needed, specifically:

• Predictive theory [more details]:
□ Theory that is either guaranteed or demonstrated to bound or predict real-world behaviour.
□ Worst-case/average-case/smoothed complexities are supplemented by some study of hidden constants, or even non-asymptotic formulas.
□ When appropriate, theoretical models are devised for sets of problem instances.
• Antagonistic experimental design [more details]: Alongside representative problem instances, picking some challenging problem instances that are likely harder than most problem instances.
• Honest presentation [more details]: Clearly presenting novelties, but also critically discussing limitations and exploring both strengths and weaknesses in the experiments.

Clearly, a lot of it comes down to peer review of publication manuscripts and applications for grants or positions. If reviewers set unrealistic expectations that all new approaches should be better in every regard, then authors are forced to run a tailored benchmark that shows the new approach in the best light.
Furthermore, if old approaches only need to be better in one way, it would almost guarantee getting stuck with bad approaches. If reviewers only consider experimental or purely theoretical results, then there is no incentive for predictive theory, although it has larger scientific value.

How do we compare to sciences studying human subjects?

One way to reflect on computer science is to compare it to other sciences, specifically ones where repeatability of experiments is quite challenging. While social/medical sciences investigate a human population, computer science investigates a population of problem instances:

• Population - Computer science: problem instances; Medical/social sciences: humans
• Sample - CS: problem instances used in experiments; Med/social: humans participating in experiments
• Sample size - CS: typically a handful of problem instances (N < 10); Med/social: typically hundreds of participants (N > 100)
• Sample type - CS: typically a non-random, handpicked sample; Med/social: typically a randomised convenience sample
• Sample publicly available - CS: typically yes; Med/social: typically no
• Theory - CS: e.g., asymptotic complexity; Med/social: predictive/descriptive theory
• Independent variable - CS: old approaches vs new approach; Med/social: control group vs intervention
• Dependent variable - CS: performance measures; Med/social: various measures
• Study design - CS: within-subject/repeated-measures experiment (without order effects); Med/social: various
• Reproducibility - CS: limited to the "sample"; Med/social: see replication crisis
• Analysis - CS: description of effect sizes; Med/social: statistical/effect-size analysis
• Generalisability - CS: completely subjective and often overstated; Med/social: critically discussed and investigated

In medical/social sciences most of the statistical analysis is aimed at ruling out that the observed effects are merely sampling artifacts (and hence expected results under the null hypothesis). For non-random samples as used in computer science this is not possible, and potentially all reported improvements generalise very poorly to other problem instances.
It is therefore important to interpret such results very carefully and try to avoid overreaching generalisations. Furthermore, antagonistic sampling can help a bit (which we can think of as the opposite of cherrypicking), i.e., looking for problem instances that are likely harder than a random sample would be.

How would we apply the science paradigm to something like algorithms?

It boils down to the following questions:

• Scenarios and hypotheses: Which hypothetical scenarios must be considered? How can they be grouped into hypotheses?
• Predictions: Which observations are expected in each scenario?
• Methodology: Which observations can reliably distinguish the considered scenarios?
• Replicability: How can similar observations be replicated?
• Testing: What are the observations and is it possible to repeat them?
• Analysis: Which considered scenarios can be ruled out?

Which hypothetical scenarios must be considered? How can they be grouped into hypotheses? For algorithms there are quite a few scenarios to consider. One may, for instance, consider a table where the rows are important classes of problem instances, the columns are different prioritisations between performance metrics, and each table entry rates how well the algorithm fares for a particular class and metric. Each scenario is then a different way to fill out the table. Due to the many scenarios, it is often useful to group all scenarios into the ones that reflect a scientifically interesting result (research hypothesis) and the rest (null hypothesis).

Which observations are expected in each scenario? The expectations for each scenario formalise how one can assess if a scenario table is an apt description of the algorithm.

Which observations can reliably distinguish the considered scenarios? The study design is about figuring out which empirical and theoretical results need to be collected and derived to rule out most of the scenarios. The goal is to be as conclusive as possible.

How can similar observations be replicated?
How to obtain similar empirical results for other representative problem instances is rarely discussed, despite the fact that replicability is key to any generalisation claims. While a particular theoretical result is replicable due to the proofs, the broader claims may require obtaining similar results for slightly deviating assumptions and models. Otherwise, the theoretical results can be an artifact of the chosen computation model and presumed performance metrics, which is still of interest, but may not support any broader claims.

What are the observations and is it possible to repeat them? In computer science it is typically expected that experiments are repeatable. Repeatability is often required to keep researchers honest and aids transparency.

Which considered scenarios can be ruled out? A discussion of the results should then narrow down which scenarios are plausible given the observations. While it is difficult to publish in computer science when admitting to inconclusive results, it is the typical reality of most distribution-dependent algorithms.

Why science

As a concluding remark, here are some obvious reasons why computer science should aim to live up to its name:
Another Rchievement of the day

[This article was first published on ASCIImoose: William K. Morris's Blog » R, and kindly contributed to R-bloggers].

Time for another Rchievement of the day.

> while (!any(as.logical(x <- rbinom(3, 1, .5)))) {}
> x
[1] 0 1 0

This is a neat little example demonstrating the power of control flow (type ?Control in R to find out more), but perhaps a not-so-obvious way of using it. So what does this snippet of code do? It simply makes three Bernoulli samples with p = .5 (or three fair coin flips, if you like), but it will only return sets that aren't all zeros. There are probably lots of other ways to do this and it's a fairly trivial example, but the concept is useful and has wider application. So what exactly is going on? The point of while is to keep evaluating the expression in the set of curly brackets as long as the result of the logical statement enclosed in the first set of brackets is TRUE. In this example the expression in the curly brackets is absent, so all while does is keep checking the result of the first expression. This is where it gets interesting. Because of R's object orientation we can assign some value to x and interrogate the properties of x at the same time. In this case I've asked, "are there any 1's?", and if so, while will stop evaluating. Which is exactly what we wanted. This technique could be useful for quite a few things. In my case I have been using it to sub-sample some large datasets while making sure the sub-samples meet certain conditions. Another use might be to evaluate some external process or the properties of a local file or website. As Brian Butterfield would say, "it's up to you".
Function matching We present problems in the following three application areas: identifying similar codes in which global register reallocation and spill code minimization were done (programming languages); protein threading (computational biology); and searching for color icons under different color maps (image processing). We introduce a new search model called function matching that enables us to solve the above problems. The function matching problem has as its input a text T of length n over alphabet Σ[T] and a pattern P = P[1]P[2] ⋯ P[m] of length m over alphabet Σ[P]. We seek all text locations i, where the m-length substring that starts at i is equal to f(P[1])f(P[2]) ⋯ f(P[m]), for some function f : Σ[P] → Σ[T]. We give a randomized algorithm that solves the function matching problem in time O(n log n) with probability 1/n of declaring a false positive. We give a deterministic algorithm whose time is O(n|Σ[P]| log m) and show that it is optimal in the convolutions model. We use function matching to efficiently solve the problem of two-dimensional parameterized matching.
• Color maps
• Function matching
• Parameterized matching
• Pattern matching
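The matching condition in the abstract can be checked directly with a naive O(nm) scan. This sketch only illustrates the definition; the paper's randomized and convolution-based algorithms are far faster and are not reproduced here:

```python
def function_match_positions(text, pattern):
    """Indices i where text[i:i+m] equals f(P[1])f(P[2])...f(P[m]) for some
    function f from pattern symbols to text symbols. Unlike parameterized
    matching, f need not be injective (a bijection is not required)."""
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        f = {}  # the candidate mapping f is rebuilt for each alignment
        if all(f.setdefault(p, t) == t for p, t in zip(pattern, text[i:i + m])):
            hits.append(i)
    return hits

print(function_match_positions("aabbab", "xxyy"))  # [0]
print(function_match_positions("aaaa", "xxyy"))    # [0]: f(x) = f(y) = 'a' is allowed
```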
How do you identify all asymptotes or holes and intercepts for #f(x)=(x+3)/(x^2+7x+12)#?
1 Answer
First you need to factor the denominator. ${x}^{2} + 7 x + 12$ will be factored into $\left(x + 3\right)$ and $\left(x + 4\right)$ Now you will have $\frac{x + 3}{\left(x + 3\right) \left(x + 4\right)}$ Now cancel out any like terms, in this case the $\left(x + 3\right)$ values. Now $x = - 3$ becomes your x coordinate of the hole; to find your y component, plug in $x = - 3$ back into the remaining equation, which will be $\frac{1}{x + 4}$ so $\frac{1}{- 3 + 4}$ This equals to $\frac{1}{1}$ which just equals to 1 so the coordinate for your hole would be $\left(- 3 , 1\right)$ Now you still have the $\left(x + 4\right)$ remaining so $x = - 4$ would be your V.A.
• Note that when two like values cancel out there is a hole, and when no values cancel out, the zero of the denominator will be your V.A.
Now we can check for horizontal asymptotes. H.A.s exist only when the numerator's leading degree is less than or equal to the leading degree of the denominator. In this case the leading term of the numerator is $x$ and the leading term of the denominator is ${x}^{2}$, thus an H.A. exists. To find the H.A. we just divide the leading term of the numerator by the leading term of the denominator. This looks like $\frac{x}{{x}^{2}}$. If the degree of the denominator is greater, then the H.A. will always be $y = 0$. So we have an H.A. at $y = 0$. Finally we can check for slant asymptotes. S.A.s exist where the degree of the numerator is exactly 1 greater than the degree of the denominator. In this case the numerator has a degree smaller than the denominator. This means we will not have a slant asymptote.
We look at the zeros of the numerator; however, we already cancelled out the zero $\left(x + 3\right)$, so there will not be any x-intercepts. To find the y-intercept, plug in 0 for x in the remaining equation, which is $\frac{1}{x + 4}$. The y-intercept will be $\frac{1}{4}$.
Hole at $\left(- 3 , 1\right)$
V.A. at $x = - 4$
H.A. at $y = 0$
S.A.: does not exist
X int: does not exist
Y int: $\frac{1}{4}$
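The answer's conclusions can be sanity-checked numerically. This check is my own addition, not part of the original answer; it evaluates the function near each feature:

```python
def f(x):
    return (x + 3) / (x**2 + 7*x + 12)

print(f(0))              # y-intercept: 0.25, i.e. 1/4
print(f(-3 + 1e-6))      # near the hole at x = -3, f approaches 1
print(f(1e6))            # far out, f approaches the H.A. y = 0
print(abs(f(-4 + 1e-6))) # near the V.A. at x = -4, |f| blows up
```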
Crossovers by POWGI PS4 | Price history | PS Store (United States) | MyGameHunter

Release: April 16, 2019
Players: 1
Genre: Puzzle
Manufacturer: Lightwood Games

Solve crossword puzzles - one letter at a time! Crossovers are little crosswords with just one letter missing. Find the letters, then unscramble them to solve the cryptic clue. Crossovers by POWGI contains 200 crossword clues - and 200 terrible jokes, as a “reward” for solving each one.

You can subscribe to price tracking and we'll notify you when Crossovers by POWGI has a sale on PS Store (United States). Crossovers by POWGI: description, trailers and screenshots, price history, trophies, add-ons. Available on PlayStation 4. The information was taken from the official PlayStation Store website. All rights reserved.

Collect all the trophies

Give me an "A" - Find a missing letter "A"
Give me a "B" - Find a missing letter "B"
Give me a "C" - Find a missing letter "C"
Give me a "D" - Find a missing letter "D"
Give me an "E" - Find a missing letter "E"
Give me an "F" - Find a missing letter "F"
Give me a "G" - Find a missing letter "G"
Give me an "H" - Find a missing letter "H"
Give me an "I" - Find a missing letter "I"
Give me a "J" - Find a missing letter "J"
Give me a "K" - Find a missing letter "K"
Give me an "L" - Find a missing letter "L"
Give me an "M" - Find a missing letter "M"
Give me an "N" - Find a missing letter "N"
Give me an "O" - Find a missing letter "O"
Give me a "P" - Find a missing letter "P"
Give me a "Q" - Find a missing letter "Q"
Give me an "R" - Find a missing letter "R"
Give me an "S" - Find a missing letter "S"
Give me a "T" - Find a missing letter "T"
Give me a "U" - Find a missing letter "U"
Give me a "V" - Find a missing letter "V"
Give me a "W" - Find a missing letter "W"
Give me an "X" - Find a missing letter "X"
Give me a "Y" - Find a missing letter "Y"
Give me a "Z" - Find a missing letter "Z"
First try! - Find the correct letter on only the fourth attempt
Who's a clever boy? - Solve a puzzle without making a single mistake
I don't get it - Unscramble a word that isn't the answer
I still don't get it - Unscramble words that are not the answer ten times
The mind boggles - Unscramble three other words before solving a clue
Mix it up a bit - Completely miss the point of an anagram-based clue
Ruff guess - Try an answer that isn't even a word
Drop it! - Try the same wrong answer twice
Barking up the wrong tree - Try five different wrong answers for the same clue
Paws for thought - Spend at least a minute considering a solution
You spelled it wrong
Oh grow up!
Anagrams, huh?
Cross training - Solve every puzzle
A particle that is the center of mass of other particles.

A decorator which constrains a particle to be the center of mass of a set of other particles, and whose mass is the sum of their masses. The c.o.m. is updated before model evaluation and its derivatives are copied to its children, using a constraint that is created at setup time. The derivatives propagated to each particle are scaled based on its mass relative to the total mass.

The maintenance of the invariant is done by an associated IMP::Constraint. As a result, the state is only guaranteed to be correct either during model evaluation, or immediately following model evaluation before any particles have been changed.

Definition at line 33 of file CenterOfMass.h.

CenterOfMass (::IMP::Model *m, ::IMP::ParticleIndex id)
CenterOfMass (const IMP::ParticleAdaptor &d)
Constraint * get_constraint () const
Float get_coordinate (int i) const
const algebra::Vector3D & get_coordinates () const
Float get_mass () const
void show (std::ostream &out=std::cout) const
bool get_is_valid () const - Returns true if constructed with a non-default constructor. More...
Model * get_model () const - Returns the Model containing the particle. More...
Particle * get_particle () const - Returns the particle decorated by this decorator. More...
ParticleIndex get_particle_index () const - Returns the particle index decorated by this decorator. More...
operator Particle * () const
operator ParticleIndex () const
Particle * operator-> () const

static CenterOfMass IMP::atom::CenterOfMass::setup_particle (Model *m, ParticleIndex pi, ParticleIndexesAdaptor members)

Sets up CenterOfMass over members, and constrains CenterOfMass to be computed before model evaluation and to propagate derivatives following model evaluation. pi is decorated with core::XYZ and atom::Mass decorators, its coordinates are set to the current center of mass of pis, and its mass is set to the sum of their masses.
Returns: a CenterOfMass object that decorates particle pi

Definition at line 81 of file CenterOfMass.h.

static CenterOfMass IMP::atom::CenterOfMass::setup_particle (Model *m, ParticleIndex pi, Refiner *refiner)

Sets up CenterOfMass over the particles obtained by applying the refiner over the particle pi, and constrains CenterOfMass to be computed before model evaluation and to propagate derivatives following model evaluation. pi is decorated with the core::XYZ and atom::Mass decorators, its coordinates are set to the current center of mass of refiner->get_refined_indexes(m, pi), and its mass is set to the sum of their masses.

Returns: a CenterOfMass object that decorates particle pi

Definition at line 93 of file CenterOfMass.h.
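The invariant described above can be illustrated independently of IMP; the following plain-Python sketch (the function names are illustrative, not part of the IMP API) shows the mass-weighted c.o.m. computation done before evaluation and the mass-scaled derivative propagation done afterwards:

```python
def center_of_mass(masses, coords):
    # Mass-weighted average of 3D coordinates, as computed before
    # model evaluation; returns (total mass, c.o.m. position)
    total = sum(masses)
    com = [sum(m * c[i] for m, c in zip(masses, coords)) / total
           for i in range(3)]
    return total, com

def propagate_derivatives(masses, com_deriv):
    # Copy the c.o.m. derivative to each child, scaled by its mass
    # relative to the total mass (as done following model evaluation)
    total = sum(masses)
    return [[m / total * d for d in com_deriv] for m in masses]

# A methane-like group of one heavy and four light particles
masses = [12.0, 1.0, 1.0, 1.0, 1.0]
coords = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0], [0.0, -1.0, 0.0]]
total, com = center_of_mass(masses, coords)
print(total, com)   # 16.0 [0.0, 0.0, 0.0] by symmetry
```

Note that the per-child scale factors sum to 1, so the propagated derivatives always add back up to the derivative on the c.o.m. particle.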
Electronics (14 lectures)
Thévenin and Norton equivalent circuits, superposition principle, RC, LC and LRC circuits. Semiconductor diode. Bipolar transistor. Operational amplifiers. Computer controlled instrumentation.

Electromagnetism (21 lectures)
Electrostatics: Coulomb's law, divergence and curl of E, Gauss's law, Laplace's equation, image charge problems, multipole expansion. Magnetostatics: Lorentz force, Biot-Savart law, divergence and curl of magnetic field strength, Ampère's law, magnetic vector potential, multipole expansion, boundary conditions. Electrodynamics: electromotive force, electromagnetic induction, Maxwell's equations, wave equation. Electric and magnetic fields in matter: polarisation, electric displacement and Gauss's law in dielectrics, linear dielectrics. Magnetisation (diamagnets, paramagnets, ferromagnets), auxiliary field H and Ampère's law in magnetised materials, linear and nonlinear media.

Quantum mechanics (28 lectures)
The Schrödinger equation, the statistical interpretation of the wave function, momentum, the uncertainty principle, the time-independent Schrödinger equation, stationary states, the infinite square well potential, the harmonic oscillator, the free particle, the delta-function potential, the finite square well potential, Hilbert spaces, observables, eigenfunctions of a Hermitian operator, Dirac notation, the Schrödinger equation in spherical coordinates, the hydrogen atom, angular momentum, spin.
Simplifying Expressions in context of resolution of instrument formula

31 Aug 2024

Here is an academic article on simplifying expressions in the context of resolving instrument formulas:

Title: Simplifying Expressions: A Crucial Step in Resolving Instrument Formulas

In the realm of mathematical modeling, instrument formulas play a vital role in describing complex phenomena. However, these formulas often involve intricate expressions that require careful manipulation to extract meaningful insights. This article focuses on the process of simplifying expressions, highlighting the importance of this step in resolving instrument formulas. We will explore various techniques for simplifying expressions using BODMAS (Brackets, Orders, Division, Multiplication, Addition, and Subtraction) and ASCII notation.

Instrument formulas are mathematical representations of physical systems, used to predict behavior, optimize performance, or diagnose issues. These formulas typically involve a combination of algebraic operations, trigonometric functions, and exponential terms. To extract valuable information from these formulas, it is essential to simplify the expressions involved. This process enables us to identify key relationships, eliminate unnecessary complexity, and facilitate further analysis.

Simplifying Expressions:

The BODMAS rule provides a framework for evaluating mathematical expressions in the correct order:

1. Brackets: Evaluate any expressions within parentheses or brackets first.
2. Orders: Next, evaluate any exponents (orders) on variables.
3. Division: Perform any division operations from left to right.
4. Multiplication: Followed by multiplication operations from left to right.
5. Addition and Subtraction: Finally, perform any addition or subtraction operations from left to right.

(Strictly, division and multiplication share the same precedence and are evaluated together from left to right, as are addition and subtraction.)
Using this rule, we can simplify expressions step-by-step:

Example 1: Simplify the expression 2 × (3 + 4) - 5

ASCII notation: 2 * (3 + 4) - 5

Step 1: Evaluate the brackets: (3 + 4) = 7
Step 2: Multiply 2 by the result: 2 * 7 = 14
Step 3: Subtract 5 from the result: 14 - 5 = 9

Final simplified expression: 9

Example 2: Expand the expression (x + 2) × (x - 1)

ASCII notation: (x + 2) * (x - 1)

Step 1: Evaluate the brackets (x + 2) and (x - 1) separately
Step 2: Multiply the results: x^2 + x - 2

Final simplified expression: x^2 + x - 2

Simplifying expressions is a crucial step in resolving instrument formulas, enabling us to extract meaningful insights from complex mathematical representations. By applying the BODMAS rule and carefully evaluating algebraic operations, we can eliminate unnecessary complexity and facilitate further analysis. This article has demonstrated various techniques for simplifying expressions using ASCII notation, highlighting the importance of this process in the context of instrument formulas.

1. [1] “Mathematics for Scientists and Engineers” by Richard Fitzpatrick
2. [2] “Instrumentation and Measurement” by Thomas K. Gaylord

Note: The article is written in a formal academic tone, with references provided at the end. The examples are designed to illustrate the simplification process using BODMAS and ASCII notation.

Related articles for ‘resolution of instrument formula’:
• Reading: Simplifying Expressions in context of resolution of instrument formula
Calculators for ‘resolution of instrument formula’
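As a cross-check (a sketch, not part of the article), Python's own operator precedence follows the same ordering, so both worked examples can be verified directly:

```python
# Example 1: 2 * (3 + 4) - 5, evaluated step by step per BODMAS
step1 = 3 + 4              # brackets first -> 7
step2 = 2 * step1          # then multiplication -> 14
result = step2 - 5         # then subtraction -> 9
assert result == 2 * (3 + 4) - 5 == 9

# Example 2: (x + 2) * (x - 1) expands to x^2 + x - 2;
# spot-check the identity at a few values of x
for x in (-3, 0, 2, 10):
    assert (x + 2) * (x - 1) == x**2 + x - 2
print("both examples check out")
```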
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. The rotational spectrum (power spectral density vs. rotational frequency) of polar molecules can be measured in absorption or emission by microwave spectroscopy^[1] or by far infrared spectroscopy. The rotational spectra of non-polar molecules cannot be observed by those methods, but can be observed and measured by Raman spectroscopy. Rotational spectroscopy is sometimes referred to as pure rotational spectroscopy to distinguish it from rotational-vibrational spectroscopy, where changes in rotational energy occur together with changes in vibrational energy, and also from ro-vibronic spectroscopy (or just vibronic spectroscopy), where rotational, vibrational and electronic energy changes occur simultaneously.

Part of the rotational spectrum of trifluoroiodomethane, CF3I.^[notes 1] Each rotational transition is labeled with the quantum numbers, J, of the final and initial states, and is extensively split by the effects of nuclear quadrupole coupling with the ^127I nucleus.

For rotational spectroscopy, molecules are classified according to symmetry into spherical tops, linear molecules, and symmetric tops; analytical expressions can be derived for the rotational energy terms of these molecules. Analytical expressions can be derived for the fourth category, asymmetric top, for rotational levels up to J = 3, but higher energy levels need to be determined using numerical methods. The rotational energies are derived theoretically by considering the molecules to be rigid rotors and then applying extra terms to account for centrifugal distortion, fine structure, hyperfine structure and Coriolis coupling. Fitting the spectra to the theoretical expressions gives numerical values of the angular moments of inertia, from which very precise values of molecular bond lengths and angles can be derived in favorable cases.
In the presence of an electrostatic field there is Stark splitting, which allows molecular electric dipole moments to be determined.

An important application of rotational spectroscopy is in exploration of the chemical composition of the interstellar medium using radio telescopes.

Rotational spectroscopy has primarily been used to investigate fundamental aspects of molecular physics. It is a uniquely precise tool for the determination of molecular structure in gas-phase molecules. It can be used to establish barriers to internal rotation, such as that associated with the rotation of the CH3 group relative to the C6H4Cl group in chlorotoluene (C7H7Cl).^[2] When fine or hyperfine structure can be observed, the technique also provides information on the electronic structures of molecules. Much of current understanding of the nature of weak molecular interactions such as van der Waals, hydrogen and halogen bonds has been established through rotational spectroscopy. In connection with radio astronomy, the technique has a key role in exploration of the chemical composition of the interstellar medium. Microwave transitions are measured in the laboratory and matched to emissions from the interstellar medium using a radio telescope. NH3 was the first stable polyatomic molecule to be identified in the interstellar medium.^[3] The measurement of chlorine monoxide^[4] is important for atmospheric chemistry. Current projects in astrochemistry involve both laboratory microwave spectroscopy and observations made using modern radio telescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA).^[5]

A molecule in the gas phase is free to rotate relative to a set of mutually orthogonal axes of fixed orientation in space, centered on the center of mass of the molecule. Free rotation is not possible for molecules in liquid or solid phases due to the presence of intermolecular forces.
Rotation about each unique axis is associated with a set of quantized energy levels dependent on the moment of inertia about that axis and a quantum number. Thus, for linear molecules the energy levels are described by a single moment of inertia and a single quantum number, ${\displaystyle J}$, which defines the magnitude of the rotational angular momentum. For nonlinear molecules which are symmetric rotors (or symmetric tops - see next section), there are two moments of inertia and the energy also depends on a second rotational quantum number, ${\displaystyle K}$, which defines the vector component of rotational angular momentum along the principal symmetry axis.^[6]

Analysis of spectroscopic data with the expressions detailed below results in quantitative determination of the value(s) of the moment(s) of inertia. From these, precise values of the molecular structure and dimensions may be obtained.

For a linear molecule, analysis of the rotational spectrum provides values for the rotational constant^[notes 2] and the moment of inertia of the molecule, and, knowing the atomic masses, can be used to determine the bond length directly. For diatomic molecules this process is straightforward. For linear molecules with more than two atoms it is necessary to measure the spectra of two or more isotopologues, such as ^16O^12C^32S and ^16O^12C^34S. This allows a set of simultaneous equations to be set up and solved for the bond lengths.^[notes 3] A bond length obtained in this way is slightly different from the equilibrium bond length. This is because there is zero-point energy in the vibrational ground state, to which the rotational states refer, whereas the equilibrium bond length is at the minimum in the potential energy curve.
The relation between the rotational constants is given by ${\displaystyle B_{v}=B-\alpha \left(v+{\frac {1}{2}}\right)}$ where v is a vibrational quantum number and α is a vibration-rotation interaction constant which can be calculated if the B values for two different vibrational states can be found.^[7]

For other molecules, if the spectra can be resolved and individual transitions assigned, both bond lengths and bond angles can be deduced. When this is not possible, as with most asymmetric tops, all that can be done is to fit the spectra to three moments of inertia calculated from an assumed molecular structure. By varying the molecular structure the fit can be improved, giving a qualitative estimate of the structure. Isotopic substitution is invaluable when using this approach to the determination of molecular structure.

Classification of molecular rotors

In quantum mechanics the free rotation of a molecule is quantized, so that the rotational energy and the angular momentum can take only certain fixed values, which are related simply to the moment of inertia, ${\displaystyle I}$, of the molecule. For any molecule, there are three moments of inertia: ${\displaystyle I_{A}}$, ${\displaystyle I_{B}}$ and ${\displaystyle I_{C}}$ about three mutually orthogonal axes A, B, and C with the origin at the center of mass of the system. The general convention, used in this article, is to define the axes such that ${\displaystyle I_{A}\leq I_{B}\leq I_{C}}$, with axis ${\displaystyle A}$ corresponding to the smallest moment of inertia. Some authors, however, define the ${\displaystyle A}$ axis as the molecular rotation axis of highest order.

The particular pattern of energy levels (and, hence, of transitions in the rotational spectrum) for a molecule is determined by its symmetry. A convenient way to look at the molecules is to divide them into four different classes, based on the symmetry of their structure.
These are:

Spherical tops (spherical rotors)
Linear molecules
Symmetric tops (symmetric rotors)
Asymmetric tops (asymmetric rotors)

Selection rules

Microwave and far-infrared spectra

Transitions between rotational states can be observed in molecules with a permanent electric dipole moment.^[9]^[notes 4] A consequence of this rule is that no microwave spectrum can be observed for centrosymmetric linear molecules such as N2 (dinitrogen) or HCCH (ethyne), which are non-polar. Tetrahedral molecules such as CH4 (methane), which have both a zero dipole moment and isotropic polarizability, would not have a pure rotation spectrum but for the effect of centrifugal distortion; when the molecule rotates about a 3-fold symmetry axis a small dipole moment is created, allowing a weak rotation spectrum to be observed by microwave spectroscopy.^[10]

With symmetric tops, the selection rule for electric-dipole-allowed pure rotation transitions is ΔK = 0, ΔJ = ±1. Since these transitions are due to absorption (or emission) of a single photon with a spin of one, conservation of angular momentum implies that the molecular angular momentum can change by at most one unit.^[11] Moreover, the quantum number K is limited to values between and including +J and -J.^[12]

Raman spectra

For Raman spectra the molecules undergo transitions in which an incident photon is absorbed and another scattered photon is emitted. The general selection rule for such a transition to be allowed is that the molecular polarizability must be anisotropic, which means that it is not the same in all directions.^[13] Polarizability is a 3-dimensional tensor that can be represented as an ellipsoid. The polarizability ellipsoid of spherical top molecules is in fact spherical so those molecules show no rotational Raman spectrum.
For all other molecules both Stokes and anti-Stokes lines^[notes 5] can be observed and they have similar intensities due to the fact that many rotational states are thermally populated. The selection rule for linear molecules is ΔJ = 0, ±2. The reason for the values ±2 is that the polarizability returns to the same value twice during a rotation.^[14] The value ΔJ = 0 does not correspond to a molecular transition but rather to Rayleigh scattering in which the incident photon merely changes direction.^[15]

The selection rule for symmetric top molecules is:

ΔK = 0
If K = 0, then ΔJ = ±2
If K ≠ 0, then ΔJ = 0, ±1, ±2

Transitions with ΔJ = +1 are said to belong to the R series, whereas transitions with ΔJ = +2 belong to an S series.^[15] Since Raman transitions involve two photons, it is possible for the molecular angular momentum to change by two units.

The units used for rotational constants depend on the type of measurement. With infrared spectra in the wavenumber scale (${\displaystyle {\tilde {\nu }}}$), the unit is usually the inverse centimeter, written as cm^−1, which is literally the number of waves in one centimeter, or the reciprocal of the wavelength in centimeters (${\displaystyle {\tilde {\nu }}=1/\lambda }$). On the other hand, for microwave spectra in the frequency scale (${\displaystyle \nu }$), the unit is usually the gigahertz. The relationship between these two units is derived from the expression ${\displaystyle \nu \cdot \lambda =c,}$ where ν is a frequency, λ is a wavelength and c is the velocity of light.
It follows that ${\displaystyle {\tilde {\nu }}/{\text{cm}}^{-1}={\frac {1}{\lambda /{\text{cm}}}}={\frac {\nu /{\text{s}}^{-1}}{c/\left({\text{cm}}\cdot {\text{s}}^{-1}\right)}}={\frac {\nu /{\text{s}}^{-1}}{2.99792458\times 10^{10}}}.}$ As 1 GHz = 10^9 Hz, the numerical conversion can be expressed as ${\displaystyle {\tilde {\nu }}/{\text{cm}}^{-1}\approx {\frac {\nu /{\text{GHz}}}{30}}.}$

Effect of vibration on rotation

The population of vibrationally excited states follows a Boltzmann distribution, so low-frequency vibrational states are appreciably populated even at room temperatures. As the moment of inertia is higher when a vibration is excited, the rotational constants (B) decrease. Consequently, the rotation frequencies in each vibration state are different from each other. This can give rise to "satellite" lines in the rotational spectrum. An example is provided by cyanodiacetylene, H−C≡C−C≡C−C≡N.^[16]

Further, there is a fictitious force, Coriolis coupling, between the vibrational motion of the nuclei in the rotating (non-inertial) frame. However, as long as the vibrational quantum number does not change (i.e., the molecule is in only one state of vibration), the effect of vibration on rotation is not important, because the time for vibration is much shorter than the time required for rotation. The Coriolis coupling is often negligible, too, if one is interested in low vibrational and rotational quantum numbers only.

Effect of rotation on vibrational spectra

Historically, the theory of rotational energy levels was developed to account for observations of vibration-rotation spectra of gases in infrared spectroscopy, which was used before microwave spectroscopy had become practical. To a first approximation, the rotation and vibration can be treated as separable, so the energy of rotation is added to the energy of vibration.
For example, the rotational energy levels for linear molecules (in the rigid-rotor approximation) are ${\displaystyle E_{\text{rot}}=hcBJ(J+1).}$ In this approximation, the vibration-rotation wavenumbers of transitions are ${\displaystyle {\tilde {\nu }}={\tilde {\nu }}_{\text{vib}}+B'J'(J'+1)-B''J''(J''+1),}$ where ${\displaystyle B'}$ and ${\displaystyle B''}$ are rotational constants for the upper and lower vibrational state respectively, while ${\displaystyle J'}$ and ${\displaystyle J''}$ are the rotational quantum numbers of the upper and lower levels. In reality, this expression has to be modified for the effects of anharmonicity of the vibrations, for centrifugal distortion and for Coriolis coupling.^[17]

For the so-called R branch of the spectrum, ${\displaystyle J'=J''+1}$ so that there is simultaneous excitation of both vibration and rotation. For the P branch, ${\displaystyle J'=J''-1}$ so that a quantum of rotational energy is lost while a quantum of vibrational energy is gained. The purely vibrational transition, ${\displaystyle \Delta J=0}$, gives rise to the Q branch of the spectrum. Because of the thermal population of the rotational states the P branch is slightly less intense than the R branch.

Rotational constants obtained from infrared measurements are in good accord with those obtained by microwave spectroscopy, while the latter usually offers greater precision.

Structure of rotational spectra

Spherical top

Spherical top molecules have no net dipole moment. A pure rotational spectrum cannot be observed by absorption or emission spectroscopy because there is no permanent dipole moment whose rotation can be accelerated by the electric field of an incident photon. Also the polarizability is isotropic, so that pure rotational transitions cannot be observed by Raman spectroscopy either. Nevertheless, rotational constants can be obtained by ro–vibrational spectroscopy. This occurs when a molecule is polar in the vibrationally excited state.
For example, the molecule methane is a spherical top but the asymmetric C-H stretching band shows rotational fine structure in the infrared spectrum, illustrated in rovibrational coupling. This spectrum is also interesting because it shows clear evidence of Coriolis coupling in the asymmetric structure of the band.

Linear molecules

Energy levels and line positions calculated in the rigid rotor approximation

The rigid rotor is a good starting point from which to construct a model of a rotating molecule. It is assumed that component atoms are point masses connected by rigid bonds. A linear molecule lies on a single axis and each atom moves on the surface of a sphere around the centre of mass. The two degrees of rotational freedom correspond to the spherical coordinates θ and φ which describe the direction of the molecular axis, and the quantum state is determined by two quantum numbers J and M. J defines the magnitude of the rotational angular momentum, and M its component about an axis fixed in space, such as an external electric or magnetic field. In the absence of external fields, the energy depends only on J. Under the rigid rotor model, the rotational energy levels, F(J), of the molecule can be expressed as ${\displaystyle F\left(J\right)=BJ\left(J+1\right)\qquad J=0,1,2,...}$ where ${\displaystyle B}$ is the rotational constant of the molecule and is related to the moment of inertia of the molecule. In a linear molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, that is, ${\displaystyle I_{B}=I_{C},I_{A}=0}$, so ${\displaystyle B={h \over {8\pi ^{2}cI_{B}}}={h \over {8\pi ^{2}cI_{C}}}}$

For a diatomic molecule ${\displaystyle I={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}d^{2}}$ where m1 and m2 are the masses of the atoms and d is the distance between them.
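The diatomic formulas above can be illustrated numerically. This sketch estimates the rotational constant of ¹²C¹⁶O from its reduced mass and a commonly quoted bond length of about 112.8 pm; the input values are approximate, so the result should only be read to a couple of decimal places:

```python
import math

h = 6.62607015e-34         # Planck constant, J s
c_cm = 2.99792458e10       # speed of light in cm/s, so B comes out in cm^-1
amu = 1.66053906660e-27    # atomic mass unit, kg

# 12C16O: approximate atomic masses and bond length
m1, m2 = 12.000 * amu, 15.995 * amu
d = 112.8e-12              # bond length, m

mu = m1 * m2 / (m1 + m2)   # reduced mass, I = mu d^2
I = mu * d**2              # moment of inertia, kg m^2

# B = h / (8 pi^2 c I), in cm^-1
B = h / (8 * math.pi**2 * c_cm * I)
print(round(B, 3))         # about 1.93 cm^-1, close to the measured value
```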
Selection rules dictate that during emission or absorption the rotational quantum number has to change by unity; i.e., ${\displaystyle \Delta J=J^{\prime }-J^{\prime \prime }=\pm 1}$. Thus, the locations of the lines in a rotational spectrum will be given by ${\displaystyle {\tilde {\nu }}_{J^{\prime }\leftrightarrow J^{\prime \prime }}=F\left(J^{\prime }\right)-F\left(J^{\prime \prime }\right)=2B\left(J^{\prime \prime }+1\right)\qquad J^{\prime \prime }=0,1,2,\ldots }$ where ${\displaystyle J^{\prime \prime }}$ denotes the lower level and ${\displaystyle J^{\prime }}$ denotes the upper level involved in the transition.

The diagram illustrates rotational transitions that obey the ${\displaystyle \Delta J=1}$ selection rule. The dashed lines show how these transitions map onto features that can be observed experimentally. Adjacent ${\displaystyle J^{\prime \prime }{\leftarrow }J^{\prime }}$ transitions are separated by 2B in the observed spectrum. Frequency or wavenumber units can also be used for the x axis of this plot.

Rotational line intensities

Rotational level populations with Bhc/kT = 0.05. J is the quantum number of the lower rotational state.

The probability of a transition taking place is the most important factor influencing the intensity of an observed rotational line. This probability is proportional to the population of the initial state involved in the transition. The population of a rotational state depends on two factors. The number of molecules in an excited state with quantum number J, relative to the number of molecules in the ground state, N_J/N_0, is given by the Boltzmann distribution as ${\displaystyle {\frac {N_{J}}{N_{0}}}=e^{-{\frac {E_{J}}{kT}}}=e^{-{\frac {BhcJ(J+1)}{kT}}}}$, where k is the Boltzmann constant and T the absolute temperature. This factor decreases as J increases. The second factor is the degeneracy of the rotational state, which is equal to 2J + 1. This factor increases as J increases.
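The two competing factors can be combined numerically. This sketch uses illustrative values (a CO-like rotational constant at room temperature, not measured data) to find the most populated level and compares it with the closed-form estimate J = sqrt(kT/2hcB) − 1/2:

```python
import math

K_HC = 0.6950348          # Boltzmann constant in cm^-1 per kelvin (k / hc)

def relative_population(J, B_cm, T):
    # proportional to (2J + 1) * exp(-B J(J+1) hc / kT)
    return (2 * J + 1) * math.exp(-B_cm * J * (J + 1) / (K_HC * T))

B, T = 1.92, 300.0        # illustrative: CO-like B (cm^-1) at room temperature
pops = [relative_population(J, B, T) for J in range(40)]
J_max_numeric = max(range(40), key=lambda J: pops[J])

# Closed-form estimate of the most populated level
J_max_formula = math.sqrt(K_HC * T / (2 * B)) - 0.5
print(J_max_numeric, round(J_max_formula, 2))   # the two agree to within a unit
```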
Combining the two factors^[18] ${\displaystyle {\text{population}}\propto (2J+1)e^{-{\frac {E_{J}}{kT}}}}$

The maximum relative intensity occurs at^[19]^[notes 6] ${\displaystyle J={\sqrt {\frac {kT}{2hcB}}}-{\frac {1}{2}}}$

The diagram at the right shows an intensity pattern roughly corresponding to the spectrum above it.

Centrifugal distortion

When a molecule rotates, the centrifugal force pulls the atoms apart. As a result, the moment of inertia of the molecule increases, thus decreasing the value of ${\displaystyle B}$, when it is calculated using the expression for the rigid rotor. To account for this a centrifugal distortion correction term is added to the rotational energy levels of the diatomic molecule.^[20] ${\displaystyle F\left(J\right)=BJ\left(J+1\right)-DJ^{2}\left(J+1\right)^{2}\qquad J=0,1,2,...}$ where ${\displaystyle D}$ is the centrifugal distortion constant.

Therefore, the line positions for the rotational mode change to ${\displaystyle {\tilde {\nu }}_{J^{\prime }\leftrightarrow J^{\prime \prime }}=2B\left(J^{\prime \prime }+1\right)-4D\left(J^{\prime \prime }+1\right)^{3}\qquad J^{\prime \prime }=0,1,2,...}$ In consequence, the spacing between lines is not constant, as in the rigid rotor approximation, but decreases with increasing rotational quantum number.

An assumption underlying these expressions is that the molecular vibration follows simple harmonic motion. In the harmonic approximation the centrifugal constant ${\displaystyle D}$ can be derived as ${\displaystyle D={\frac {h^{3}}{32\pi ^{4}I^{2}r^{2}kc}}}$ where k is the vibrational force constant. The relationship between ${\displaystyle B}$ and ${\displaystyle D}$, ${\displaystyle D={\frac {4B^{3}}{{\tilde {\omega }}^{2}}}}$ where ${\displaystyle {\tilde {\omega }}}$ is the harmonic vibration frequency, follows.
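The shrinking line spacing can be made concrete. This sketch tabulates line positions with and without the distortion term for illustrative values of B and D (not measured constants):

```python
def line_position(J_lower, B, D=0.0):
    # nu = 2B(J'' + 1) - 4D(J'' + 1)^3, in the same units as B and D
    n = J_lower + 1
    return 2 * B * n - 4 * D * n**3

B, D = 10.0, 1e-3          # illustrative constants in cm^-1
rigid = [line_position(J, B) for J in range(5)]
distorted = [line_position(J, B, D) for J in range(5)]
spacings = [distorted[i + 1] - distorted[i] for i in range(4)]
print(rigid)               # rigid-rotor lines are spaced by a constant 2B
print(spacings)            # distorted spacings decrease as J increases
```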
If anharmonicity is to be taken into account, terms in higher powers of J should be added to the expressions for the energy levels and line positions.^[20] A striking example concerns the rotational spectrum of hydrogen fluoride which was fitted to terms up to [J(J+1)]^5.^[21]

Oxygen

The electric dipole moment of the dioxygen molecule, O2, is zero, but the molecule is paramagnetic with two unpaired electrons so that there are magnetic-dipole allowed transitions which can be observed by microwave spectroscopy. The unit electron spin has three spatial orientations with respect to the given molecular rotational angular momentum vector, K, so that each rotational level is split into three states, J = K + 1, K, and K - 1, each J state of this so-called p-type triplet arising from a different orientation of the spin with respect to the rotational motion of the molecule. The energy difference between successive J terms in any of these triplets is about 2 cm^−1 (60 GHz), with the single exception of the J = 1←0 difference which is about 4 cm^−1. Selection rules for magnetic dipole transitions allow transitions between successive members of the triplet (ΔJ = ±1) so that for each value of the rotational angular momentum quantum number K there are two allowed transitions. The ^16O nucleus has zero nuclear spin angular momentum, so that symmetry considerations demand that K have only odd values.^[22]^[23]

Symmetric top

For symmetric rotors a quantum number J is associated with the total angular momentum of the molecule. For a given value of J, there is a 2J+1-fold degeneracy with the quantum number M taking the values +J ...0 ... -J. The third quantum number, K, is associated with rotation about the principal rotation axis of the molecule.
In the absence of an external electrical field, the rotational energy of a symmetric top is a function of only J and K and, in the rigid rotor approximation, the energy of each rotational state is given by ${\displaystyle F\left(J,K\right)=BJ\left(J+1\right)+\left(A-B\right)K^{2}\qquad J=0,1,2,\ldots \quad {\mbox{and}}\quad K=+J,\ldots ,0,\ldots ,-J}$ where ${\displaystyle B={h \over {8\pi ^{2}cI_{B}}}}$ and ${\displaystyle A={h \over {8\pi ^{2}cI_{A}}}}$ for a prolate symmetric top molecule or ${\displaystyle A={h \over {8\pi ^{2}cI_{C}}}}$ for an oblate molecule. This gives the transition wavenumbers as ${\displaystyle {\tilde {u }}_{J^{\prime }\leftrightarrow J^{\prime \prime },K}=F\left(J^{\prime },K\right)-F\left(J^{\prime \prime },K\right)=2B\left(J^{\prime \prime }+1\right)\qquad J^{\prime \prime }=0,1,2,...}$ which is the same as in the case of a linear molecule.^[24] With a first order correction for centrifugal distortion the transition wavenumbers become ${\displaystyle {\tilde {u }}_{J^{\prime }\leftrightarrow J^{\prime \prime },K}=F\left(J^{\prime },K\right)-F\left(J^{\prime \prime },K\right)=2\left(B-2D_{JK}K^{2}\right)\left(J^{\prime \prime } +1\right)-4D_{J}\left(J^{\prime \prime }+1\right)^{3}\qquad J^{\prime \prime }=0,1,2,...}$ The term in D[JK] has the effect of removing degeneracy present in the rigid rotor approximation, with different K values.^[25] Asymmetric top Pure rotation spectrum of atmospheric water vapour measured at Mauna Kea (33 cm^−1 to 100 cm^−1) The quantum number J refers to the total angular momentum, as before. Since there are three independent moments of inertia, there are two other independent quantum numbers to consider, but the term values for an asymmetric rotor cannot be derived in closed form. They are obtained by individual matrix diagonalization for each J value. 
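The symmetric-top line-position formula above, with its K-dependent centrifugal correction, is easy to evaluate. All constants in the sketch below are illustrative assumptions:

```python
def symtop_line(B, D_J, D_JK, J_lower, K):
    """Transition wavenumber for a symmetric top (J''+1 <- J'', K fixed):
    2(B - 2 D_JK K^2)(J''+1) - 4 D_J (J''+1)^3, as in the text above."""
    n = J_lower + 1
    return 2 * (B - 2 * D_JK * K ** 2) * n - 4 * D_J * n ** 3

# With D_JK = 0 the K dependence disappears and all K components of a given
# J''+1 <- J'' transition coincide, recovering the linear-molecule pattern.
```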
Formulae are available for molecules whose shape approximates to that of a symmetric top.^[26] The water molecule is an important example of an asymmetric top. It has an intense pure rotation spectrum in the far infrared region, below about 200 cm^−1. For this reason far infrared spectrometers have to be freed of atmospheric water vapour either by purging with a dry gas or by evacuation. The spectrum has been analyzed in detail.^[27]

Quadrupole splitting

When a nucleus has a spin quantum number, I, greater than 1/2 it has a quadrupole moment. In that case, coupling of nuclear spin angular momentum with rotational angular momentum causes splitting of the rotational energy levels. If the quantum number J of a rotational level is greater than I, 2I + 1 levels are produced; but if J is less than I, 2J + 1 levels result. The effect is one type of hyperfine splitting. For example, with ^14N (I = 1) in HCN, all levels with J > 0 are split into 3. The energies of the sub-levels are proportional to the nuclear quadrupole moment and a function of F and J, where F = J + I, J + I − 1, …, |J − I|. Thus, observation of nuclear quadrupole splitting permits the magnitude of the nuclear quadrupole moment to be determined.^[28] This is an alternative method to the use of nuclear quadrupole resonance spectroscopy. The selection rule for rotational transitions becomes^[29] ${\displaystyle \Delta J=\pm 1,\Delta F=0,\pm 1}$

Stark and Zeeman effects

In the presence of a static external electric field the 2J + 1 degeneracy of each rotational state is partly removed, an instance of a Stark effect. For example, in linear molecules each energy level is split into J + 1 components. The extent of splitting depends on the square of the electric field strength and the square of the dipole moment of the molecule.^[30] In principle this provides a means to determine the value of the molecular dipole moment with high precision.
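Returning to the quadrupole splitting above, the sublevel counting (2I + 1 levels when J > I, 2J + 1 when J < I) can be checked in a couple of lines. The sketch handles integer spins only:

```python
def hyperfine_F_values(J, I):
    """Allowed F quantum numbers F = J+I, J+I-1, ..., |J-I| (integer J, I only)."""
    return list(range(J + I, abs(J - I) - 1, -1))

# For 14N (I = 1) in HCN every level with J > 0 splits into 3 sublevels,
# while J = 0 gives a single level, matching the 2I+1 / 2J+1 counting rule.
```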
Examples include carbonyl sulfide, OCS, with μ = 0.71521 ± 0.00020 debye. However, because the splitting depends on μ^2, the orientation of the dipole must be deduced from quantum mechanical considerations.^[31] A similar removal of degeneracy will occur when a paramagnetic molecule is placed in a magnetic field, an instance of the Zeeman effect. Most species which can be observed in the gaseous state are diamagnetic. Exceptions are odd-electron molecules such as nitric oxide, NO, nitrogen dioxide, NO2, some chlorine oxides and the hydroxyl radical. The Zeeman effect has been observed with dioxygen, O2.^[32]

Rotational Raman spectroscopy

Molecular rotational transitions can also be observed by Raman spectroscopy. Rotational transitions are Raman-allowed for any molecule with an anisotropic polarizability, which includes all molecules except for spherical tops. This means that rotational transitions of molecules with no permanent dipole moment, which cannot be observed in absorption or emission, can be observed, by scattering, in Raman spectroscopy. Very high resolution Raman spectra can be obtained by adapting a Fourier transform infrared spectrometer. An example is the spectrum of ^15N2. It shows the effect of nuclear spin, resulting in an intensity variation of 3:1 between adjacent lines. A bond length of 109.9985 ± 0.0010 pm was deduced from the data.^[33]

Instruments and methods

The great majority of contemporary spectrometers use a mixture of commercially available and bespoke components which users integrate according to their particular needs. Instruments can be broadly categorised according to their general operating principles. Although rotational transitions can be found across a very broad region of the electromagnetic spectrum, fundamental physical constraints exist on the operational bandwidth of instrument components. It is often impractical and costly to switch to measurements within an entirely different frequency region.
The instruments and operating principles described below are generally appropriate to microwave spectroscopy experiments conducted at frequencies between 6 and 24 GHz.

Absorption cells and Stark modulation

A microwave spectrometer can be most simply constructed using a source of microwave radiation, an absorption cell into which sample gas can be introduced and a detector such as a superheterodyne receiver. A spectrum can be obtained by sweeping the frequency of the source while detecting the intensity of transmitted radiation. A simple section of waveguide can serve as an absorption cell. An important variation of the technique in which an alternating current is applied across electrodes within the absorption cell results in a modulation of the frequencies of rotational transitions. This is referred to as Stark modulation and allows the use of phase-sensitive detection methods offering improved sensitivity. Absorption spectroscopy allows the study of samples that are thermodynamically stable at room temperature. The first study of the microwave spectrum of a molecule (NH3) was performed by Cleeton & Williams in 1934.^[34] Subsequent experiments exploited powerful sources of microwaves such as the klystron, many of which were developed for radar during the Second World War. The number of experiments in microwave spectroscopy surged immediately after the war. By 1948, Walter Gordy was able to prepare a review of the results contained in approximately 100 research papers.^[35] Commercial versions^[36] of microwave absorption spectrometers were developed by Hewlett-Packard in the 1970s and were once widely used for fundamental research. Most research laboratories now exploit either Balle–Flygare or chirped-pulse Fourier transform microwave (FTMW) spectrometers.

Fourier transform microwave (FTMW) spectroscopy

The theoretical framework^[37] underpinning FTMW spectroscopy is analogous to that used to describe FT-NMR spectroscopy.
The behaviour of the evolving system is described by optical Bloch equations. First, a short (typically 0-3 microsecond duration) microwave pulse is introduced on resonance with a rotational transition. Those molecules that absorb the energy from this pulse are induced to rotate coherently in phase with the incident radiation. De-activation of the polarisation pulse is followed by microwave emission that accompanies decoherence of the molecular ensemble. This free induction decay occurs on a timescale of 1-100 microseconds depending on instrument settings. Following pioneering work by Dicke and co-workers in the 1950s,^[38] the first FTMW spectrometer was constructed by Ekkers and Flygare in 1975.^[39] Balle–Flygare FTMW spectrometer Balle, Campbell, Keenan and Flygare demonstrated that the FTMW technique can be applied within a "free space cell" comprising an evacuated chamber containing a Fabry-Perot cavity.^[40] This technique allows a sample to be probed only milliseconds after it undergoes rapid cooling to only a few kelvins in the throat of an expanding gas jet. This was a revolutionary development because (i) cooling molecules to low temperatures concentrates the available population in the lowest rotational energy levels. Coupled with benefits conferred by the use of a Fabry-Perot cavity, this brought a great enhancement in the sensitivity and resolution of spectrometers along with a reduction in the complexity of observed spectra; (ii) it became possible to isolate and study molecules that are very weakly bound because there is insufficient energy available for them to undergo fragmentation or chemical reaction at such low temperatures. William Klemperer was a pioneer in using this instrument for the exploration of weakly bound interactions. While the Fabry-Perot cavity of a Balle-Flygare FTMW spectrometer can typically be tuned into resonance at any frequency between 6 and 18 GHz, the bandwidth of individual measurements is restricted to about 1 MHz. 
An animation illustrates the operation of this instrument, which is currently the most widely used tool for microwave spectroscopy.^[41]

Chirped-Pulse FTMW spectrometer

Noting that digitisers and related electronics technology had significantly progressed since the inception of FTMW spectroscopy, B.H. Pate at the University of Virginia^[42] designed a spectrometer^[43] which retains many advantages of the Balle–Flygare FTMW spectrometer while innovating in (i) the use of a high speed (>4 GS/s) arbitrary waveform generator to generate a "chirped" microwave polarisation pulse that sweeps up to 12 GHz in frequency in less than a microsecond and (ii) the use of a high speed (>40 GS/s) oscilloscope to digitise and Fourier transform the molecular free induction decay. The result is an instrument that allows the study of weakly bound molecules but which is able to exploit a measurement bandwidth (12 GHz) that is greatly enhanced compared with the Balle–Flygare FTMW spectrometer. Modified versions of the original CP-FTMW spectrometer have been constructed by a number of groups in the United States, Canada and Europe.^[44]^[45] The instrument offers a broadband capability that is highly complementary to the high sensitivity and resolution offered by the Balle–Flygare design.

External links

• infrared gas spectra simulator
• Hyperphysics article on Rotational Spectrum
• A list of microwave spectroscopy research groups around the world
Neighborhood filters

We focus in this section on neighborhood filters. These filters, from linear filters to the Yaroslavsky filter and the bilateral filter, were popularized in the 2000's by the introduction of the Non-Local Means (NLM). Here we give some new insights on those methods, focusing on several simpler variants.

Noise Model

We are concerned with the problem of the restoration of noisy images. We assume that we are given a grayscale image $\mathbf Y$ being a noisy version of an unobservable image $\mathbf f$. In this context one usually deals with additive Gaussian noise: $$\mathbf Y(\mathbf x)=\mathbf f(\mathbf x)+\boldsymbol{\varepsilon}(\mathbf x)\:,$$ where $\mathbf x=(x,y) \in \Omega$ is any pixel in the image domain $\Omega$ and $\boldsymbol{\varepsilon}$ is a centered Gaussian noise with known variance $\sigma^2$.

The Yaroslavsky filter

In this paragraph we give some new insight on neighborhood filters using only pixelwise information to compute image similarity. Let us recall what we mean by the Yaroslavsky filter. Here is the mathematical formulation: $$\hat{\mathbf f}^{YF}(\mathbf x)=\frac{\sum_{\mathbf x'} K\big([\mathbf Y(\mathbf x')-\mathbf Y(\mathbf x)]/g\big) \cdot L\big([\mathbf x'-\mathbf x]/h\big) \cdot \mathbf Y(\mathbf x')}{\sum_{\mathbf x''} K\big([\mathbf Y(\mathbf x'')-\mathbf Y(\mathbf x)]/g\big) \cdot L\big([\mathbf x''-\mathbf x]/h\big)} \:,$$ where $\mathbf x'$ runs in $\Omega$, $K,L$ are kernel functions, and $g>0$ and $h>0$ are bandwidth parameters.
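A direct (and deliberately slow) NumPy sketch of the Yaroslavsky filter above with box kernels for both K and L — function and parameter names are mine, not from the papers cited on this page:

```python
import numpy as np

def yaroslavsky_box(Y, g, h):
    """Yaroslavsky filter with box kernels K and L (direct double loop).
    Y : 2-D float array (noisy image)
    g : photometric bandwidth (intensity threshold for K)
    h : spatial bandwidth (half-width, in pixels, of the window for L)."""
    H, W = Y.shape
    out = np.empty((H, W), dtype=float)
    r = int(h)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            patch = Y[i0:i1, j0:j1]
            keep = np.abs(patch - Y[i, j]) <= g   # box photometric kernel K
            out[i, j] = patch[keep].mean()        # the centre pixel always passes
    return out
```

Each output pixel averages only neighbours that are both spatially and photometrically close, so intensity steps larger than g are preserved while small fluctuations are smoothed.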
For simplicity we usually use this filter with both the spatial kernel $L$ and the photometric kernel $K$ being box kernels.

The preprocessed Yaroslavsky filter

The idea of the preprocessed Yaroslavsky filter (PYF) is to compute a first estimate $\tilde{f}$ of the target image, and then to use this cleaner version of the image to compute the photometric distance, instead of using the original noisy version. Possible candidates for the first step could be wavelet denoising, curvelet denoising, linear filtering, etc. The PYF can be written in the following way: $$\hat{\mathbf f}^{PYF}(\mathbf x)=\frac{\sum_{\mathbf x'} K\big([\tilde{f}(\mathbf x')-\tilde{f}(\mathbf x)]/g\big) \cdot L\big([\mathbf x'-\mathbf x]/h\big) \cdot \mathbf Y(\mathbf x')}{\sum_{\mathbf x''} K\big([\tilde{f}(\mathbf x'')-\tilde{f}(\mathbf x)]/g\big) \cdot L\big([\mathbf x''-\mathbf x]/h\big)} \:.$$

"A two-stage denoising filter: the preprocessed Yaroslavsky filter" J. Salmon, R. Willett, E. Arias-Castro, 2012, PDF. Corresponding Matlab code and toolbox.

"Oracle inequalities and minimax rates for non-local means and related adaptive kernel-based methods" E. Arias-Castro, J. Salmon, R. Willett, SIAM J. Imaging Sci., vol. 5, pp. 944--992, 2012, PDF. Corresponding Matlab code and toolbox.

Contact us if you have any question.
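To close this page, the PYF above can be sketched analogously to the plain Yaroslavsky filter: photometric distances are taken on the pre-estimate while averages are taken on the noisy image. This is my own box-kernel sketch, not the Matlab code distributed with the papers:

```python
import numpy as np

def pyf_box(Y, f_tilde, g, h):
    """Preprocessed Yaroslavsky filter with box kernels: the photometric
    mask is computed on the pre-estimate f_tilde, the average on noisy Y."""
    H, W = Y.shape
    out = np.empty((H, W), dtype=float)
    r = int(h)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            keep = np.abs(f_tilde[i0:i1, j0:j1] - f_tilde[i, j]) <= g
            out[i, j] = Y[i0:i1, j0:j1][keep].mean()
    return out
```

Any first-stage denoiser (wavelet, curvelet, linear filtering) can supply `f_tilde`; the cleaner the pre-estimate, the more reliable the photometric mask.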
Data Science Interview Questions Part-4 (Unsupervised Learning)

Top-20 frequently asked data science interview questions and answers on unsupervised learning for fresher and experienced data scientist, data analyst, statistician, and machine learning engineer jobs.

Data Science is an interdisciplinary field. It uses statistics, machine learning, databases, visualization, and programming. So in this fourth article, we are focusing on unsupervised learning. Let's see the interview questions.

1. What is clustering?

Clustering is unsupervised learning because it does not have a target variable or class label. Clustering divides the given data observations into several groups (clusters) based on certain similarities. For example, segmenting customers, or grouping supermarket products such as cheese, meat products, appliances, etc.

2. What is the difference between classification and clustering?

Classification is supervised: it assigns observations to a set of predefined classes using labeled training data. Clustering is unsupervised: it discovers groups directly from unlabeled data based on similarity.

3. What do you mean by dimension reduction?

Dimensionality reduction is the process of reducing the number of attributes from large dimensional data. There are lots of methods for reducing the dimension of the data: Principal Component Analysis (PCA), t-SNE, wavelet transformation, factor analysis, Linear Discriminant Analysis, and attribute subset selection.

4. How does the K-means algorithm work?

The K-means algorithm is an iterative algorithm that partitions the dataset into a pre-defined number of groups or clusters where each observation belongs to only one group. The K-means algorithm works in the following steps:

1. Randomly initialize the k initial centers.
2. Assign each observation to the nearest center and form the groups.
3. Find the mean point of each cluster. Update the center coordinates and reassign the observations to the new cluster centers.
4. Repeat steps 2–3 until there is no change in the cluster assignments.

5. How to choose the number of clusters or K in the k-means algorithm?
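The k-means steps 1–4 from the previous answer, together with the within-cluster sum of squares (WCSS) used to answer this question, can be sketched in plain Python (the helper names are mine):

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=100, seed=0):
    """Steps 1-4 from above: random centers, assign to nearest center,
    recompute cluster means, repeat until the assignment stops changing."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)                       # step 1
    assign = None
    for _ in range(iters):
        new = [min(range(k), key=lambda c: dist2(p, centers[c]))
               for p in points]                           # step 2
        if new == assign:                                 # step 4: converged
            break
        assign = new
        for c in range(k):                                # step 3: update means
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, assign

def wcss(points, centers, assign):
    """Within-cluster sum of squares, the cost plotted by the elbow method."""
    return sum(dist2(p, centers[a]) for p, a in zip(points, assign))
```

Running `kmeans` for a range of k and plotting `wcss` against k yields the elbow curve described in the next answer.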
Elbow criterion: This method is used to choose the optimal number of clusters (groups). It says that we should choose a number of clusters such that adding another cluster does not add sufficient information to continue the process. The percentage of variance explained is the ratio of the between-group variance to the total variance. It selects the point where the marginal gain drops.

You can also create an elbow-method graph between the within-cluster sum of squares (WCSS) and the number of clusters K. Here, the within-cluster sum of squares (WCSS) is a cost function that decreases with an increase in the number of clusters. The elbow plot looks like an arm, and the elbow of the arm is the optimal number of clusters k.

6. What are some disadvantages of K-means?

There are the following disadvantages:
• The k-means method is not guaranteed to converge to the global optimum and often terminates at a local optimum.
• The final results depend upon the initial random selection of cluster centers.
• The number of clusters must be supplied to the algorithm in advance.
• Not suitable for non-convex cluster shapes.
• It is sensitive to noise and outlier data points.

7. How do you evaluate a clustering algorithm?

Clusters can be evaluated using two types of measures: intrinsic and extrinsic. Intrinsic measures do not use external class labels while extrinsic measures do. Intrinsic cluster evaluation measures include the Davies–Bouldin index and the silhouette coefficient. Extrinsic evaluation measures include the Jaccard and Rand indices.

8. How do you generate arbitrary or random shape clusters?

Density-based methods such as DBSCAN, OPTICS, and DENCLUE can generate arbitrary or random shape clusters. Spectral clustering can also generate arbitrary or random shape clusters.

9. What is Euclidean and Manhattan distance?
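Both metrics are one-liners in Python:

```python
def euclidean(p, q):
    """Straight-line ('as the crow flies') distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    """City-block distance: sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(p, q))
```

For any pair of points the Manhattan distance is at least as large as the Euclidean distance, since walking along the grid can only be longer than cutting across.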
Euclidean distance measures the 'as-the-crow-flies' distance. Manhattan distance is also known as city-block distance: it measures the distance in blocks between any two points in a city. (Figure: Euclidean vs. Manhattan distance, after Data Mining by Jiawei Han, Micheline Kamber, and Jian Pei.)

10. Explain spectral clustering.

Spectral clustering is based on standard linear algebra. It uses a connectivity approach to clustering. It is easy to implement, fast (especially for sparse datasets), and can generate non-convex clusters. Spectral clustering is a kind of graph-partitioning algorithm. It works in the following steps:

1. Create a similarity graph.
2. Create an adjacency matrix W and a degree matrix D. The adjacency matrix is an n*n matrix that has 1 in each cell that represents an edge between the nodes of the corresponding row and column. The degree matrix is a diagonal matrix where each diagonal value is the sum of the elements in the corresponding row of the adjacency matrix.
3. Create a Laplacian matrix L by subtracting the adjacency matrix from the degree matrix.
4. Calculate the eigenvectors of the Laplacian matrix L and perform the k-means algorithm on the second-smallest eigenvector.

11. What is t-SNE?

t-SNE stands for t-Distributed Stochastic Neighbor Embedding, which considers the nearest neighbors when reducing the data. t-SNE is a nonlinear dimensionality reduction technique. It has quadratic time and space complexity, so with a very large dataset it will not produce good results. The t-SNE algorithm computes the similarity between pairs of observations in the high-dimensional space and in the low-dimensional space, and then optimizes the match between the two similarity measures. In simple words, it maps the high-dimensional data into a lower-dimensional space. After the transformation, input features can't be inferred from the reduced dimensions. It can be used in recognizing feature expressions, tumor detection, compression, information security, and bioinformatics.

12. What is principal component analysis?
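A minimal eigen-decomposition sketch of the idea (my own helper names; the 80% threshold is one common choice, and it also anticipates the component-count question):

```python
import numpy as np

def pca_eig(X):
    """Return PCA eigenvalues (descending) and eigenvectors of centred X."""
    Xc = X - X.mean(axis=0)                      # centring is required
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]               # largest variance first
    return vals[order], vecs[:, order]

def n_components_for(vals, target=0.8):
    """Smallest number of components explaining >= target of the variance."""
    ratio = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(ratio, target) + 1)
```

Projecting the centred data onto the first few eigenvectors gives the reduced representation while keeping most of the variance of the original variables.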
PCA is the process of reducing the dimension of the input data into a lower dimension while keeping the essence of all the original variables. It is used to speed up the model generation process and helps in visualizing large-dimensional data.

13. How will you decide the number of components in PCA?

There are three methods for deciding the number of components:
1. Eigenvalues: you can choose the number of components that have eigenvalues higher than 1.
2. Amount of explained variance: you can choose factors that explain at least 70 to 80% of your variance.
3. Scree plot: a graphical method that helps us in choosing the factors up to a break (elbow) in the graph.

14. What are eigenvalues and eigenvectors?

Eigenvectors are the axes of a linear transformation that are fixed in direction; the eigenvalue is the scale factor by which vectors along that axis are scaled up or down. Eigenvalues are also known as characteristic values or characteristic roots, and eigenvectors are also known as characteristic vectors.

15. How does dimensionality reduction improve the performance of SVM?

SVM works better with lower-dimensional data compared to large-dimensional data. When the number of features is greater than the number of observations, performing dimensionality reduction will generally improve the SVM.

16. What is the difference between PCA and t-SNE?

t-SNE in comparison to PCA:
• When the data is huge (in size), t-SNE may fail to produce better results.
• t-SNE is nonlinear whereas PCA is linear.
• PCA will preserve things that t-SNE will not.
• PCA is deterministic; t-SNE is not.
• t-SNE does not scale well with the size of the dataset, while PCA does.

17. What are the benefits and limitations of PCA?

Benefits:
• Removes correlated features
• Reduces overfitting
• Helps visualize large-dimensional data

Limitations:
• Independent variables become less interpretable
• Data standardization is a must before PCA
• Information loss
• Assumes a linear relationship between the original features
• High-variance axes are treated as principal components while low-variance axes are treated as noise.
• It assumes the principal components are orthogonal.

18. What is the difference between SVD and PCA?

• Both are eigenvalue methods that are used to reduce a high-dimensional dataset into fewer dimensions while retaining important information.
• PCA is equivalent to an SVD of the centered data, but it is not as efficient to compute as the SVD.
• PCA is used for finding the directions while SVD is the factorization of a matrix.
• We can use SVD to compute principal components but it is more expensive.

19. Explain DBSCAN.

The main idea is to create clusters and add objects as long as the density in the neighborhood exceeds some threshold. The density around an object is measured by the number of objects close to it. DBSCAN connects core objects with their neighborhoods to form dense regions as clusters. The neighborhood is defined by a user-specified radius ε (Eps), and DBSCAN uses another user-specified parameter, MinPts, that specifies the density threshold of dense regions.

20. What is hierarchical clustering?

Hierarchical methods partition data into groups at different levels, as in a hierarchy. Observations are grouped together on the basis of their mutual distance. Hierarchical clustering is of two types: agglomerative and divisive.

Agglomerative methods start with individual objects as clusters, which are iteratively merged to form larger clusters. They start with leaves, or individual records, and merge the two clusters that are closest to each other according to some similarity measure into one cluster. This approach is also known as AGNES (AGglomerative NESting).

Divisive methods start with one cluster, which they iteratively split into smaller clusters. They divide the root cluster into several smaller sub-clusters, and recursively partition those clusters into smaller ones. This approach is also known as DIANA (DIvisive ANAlysis).

In this article, we have focused on unsupervised learning interview questions.
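Before closing, the agglomerative (AGNES) procedure from the last answer can be sketched in a few lines. Single linkage is my own simplifying choice; the answer above does not fix a linkage rule:

```python
def dist2(p, q):
    """Squared Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def agnes(points, k):
    """Agglomerative clustering sketch with single linkage: start from
    singleton clusters and merge the closest pair until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist2(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the two closest clusters
    return clusters
```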
In the next article, we will focus on the interview questions related to data preprocessing. Data Science Interview Questions Part-5 (Data Preprocessing)
Inertial Mass of an Elementary Particle from the Holographic Scenario

Abstract

Various attempts have been made to fully explain the mechanism by which a body has inertial mass. Recently it has been proposed that this mechanism is as follows: when an object accelerates in one direction a dynamical Rindler event horizon forms in the opposite direction, suppressing Unruh radiation on that side by a Rindler-scale Casimir effect, whereas the radiation on the other side is only slightly reduced by a Hubble-scale Casimir effect. This produces a net Unruh radiation pressure force that always opposes the acceleration, just like inertia, although the masses predicted are twice those expected, see [17]. In a later work an error was corrected so that its prediction improves to within 26% of the Planck mass, see [10]. In this paper the expression of the inertial mass of an elementary particle is derived from the holographic scenario, giving the exact value of the mass of a Planck particle when it is applied to a Planck particle.

Keywords: inertial mass; Unruh radiation; holographic scenario; dark matter; dark energy; cosmology.

PACS 98.80.-k - Cosmology
PACS 04.62.+v - Quantum fields in curved spacetime
PACS 06.30.Dr - Mass and density

1 Introduction

The equivalence principle introduced by Einstein in 1907 assumes the complete local physical equivalence of a gravitational field and a corresponding non-inertial (accelerated) frame of reference (Einstein was thinking of his famous elevator experiment). In a similar way we can assume a holographic equivalence principle where it is the same to have a particle accelerated because it is attracted by a central mass as a particle accelerated by an event horizon. The question of why a particle is accelerated towards an event horizon has two different answers.
In Verlinde's holographic model, see [26], the acceleration of the particle towards the event horizon is due to the entropic force arising from thermodynamics on a holographic screen (the event horizon). The entropic force appears in order to increase the general entropy according to the second law of thermodynamics. However, we can also think that the radiation from the region of space behind this event horizon can never hope to catch the particle, causing a real imbalance in the momentum transferred by all the radiation from all directions, which produces an acceleration of the particle towards the event horizon, see [10, 17]. Both arguments are conjectures because they are based on the existence of effects not universally accepted. If in the future they are proved, then we will accept that there is a complete physical equivalence between a gravitational field and a corresponding event horizon. This holographic equivalence principle would be the base of a new gravitational theory where gravity emerges from a holographic scenario from a dynamical point of view. In this work, we establish the origin of the inertial mass of an elementary particle from the holographic scenario. The problem of the inertial mass of a macroscopic body is still open in this context.

First we recall some concepts. Hawking radiation, predicted by Hawking [14] in 1974, is black-body radiation released by black holes due to quantum effects near the event horizon of the black hole. The vacuum fluctuations cause a particle-antiparticle pair to appear close to the event horizon. One of the pair falls into the black hole while the other escapes. The particle that fell into the black hole must have had negative energy in order to preserve total energy. The black hole loses mass because for an outside observer the black hole has just emitted a particle.
The Unruh effect [25] is the prediction that an accelerating observer will observe black-body radiation where an inertial observer would observe none. A priori the Unruh effect and Hawking radiation seem unrelated, but in both cases the radiation is due to the existence of an event horizon. In the case of the Unruh radiation, on the side that the observer is accelerating away from there appears an apparent dynamical Rindler event horizon, see [19]. The appearance of this event horizon produces two effects: a radiation in a similar way to the Hawking radiation from the horizon, and a force toward the horizon that accounts for the inertial mass of the elementary particle (see below). Therefore an accelerating observer perceives a warm background whereas a non-accelerated observer will see a cold background with no radiation. Various attempts have been made to fully explain the mechanism by which a body has inertial mass, see for instance [3], where the principle of equivalence is examined in the quantum context. We recall that the relativistic mass [24] is the measure of mass dependent on the velocity of the observer in the context of special relativity, but is not an explanation of the rest mass. In [17] an origin of the inertial mass of a body was suggested: for an accelerated particle the Unruh radiation becomes non-uniform because the Rindler event horizon reduces the energy density in the direction opposite to the acceleration vector due to a Rindler-scale Casimir effect, whereas the radiation on the other side is only slightly reduced by a Hubble-scale Casimir effect due to the cosmic horizon. Therefore there is an imbalance in the momentum transferred by the Unruh radiation and this produces a force which is always opposed to the acceleration, like inertia. In [10] a mistake detected in [17] was corrected. The correct expression for the force is

$$F_x = -\frac{\pi^2 h a}{48 c l_p}, \qquad (1)$$

where $l_p = 1.616 \times 10^{-35}$ m is the Planck length.
Hence the inertial mass is given by \( m_i \sim \pi^2 h/(48 c l_p) \sim 2.75 \times 10^{-8} \) kg, which is 26% greater than the Planck mass \( m_p = 2.176 \times 10^{-8} \) kg. In this paper we derive an expression for the inertia of an elementary particle from the holographic scenario, which yields the exact value of the Planck particle's mass when applied to the Planck particle.

2 Holographic scenario for the inertia

The holographic principle proposed by 't Hooft states that the description of a volume of space is encoded on a boundary of the region, preferably a light-like boundary such as a gravitational horizon, see [23]. This principle suggests that the entire universe can be seen as a two-dimensional information structure encoded on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies. Verlinde proposed a model where Newton's second law and Newton's law of gravitation arise from basic thermodynamic mechanisms. In the context of Verlinde's holographic model, the response of a body to a force may be understood in terms of the first law of thermodynamics. Indeed, Verlinde conjectured that Newtonian and Einstein gravity originate from an entropic force arising from the thermodynamics on a holographic screen, see [26]. Moreover, the holographic screens in Verlinde's formalism can be identified with local Rindler horizons, and it has been suggested that quantum mechanics is not fundamental but emerges from classical information theory applied to these causal horizons, see [15, 16]. An important cosmological consequence is that at the horizon of the universe there is a horizon temperature given by

$$T_H = \frac{\hbar H}{2\pi k_B} \sim 3 \times 10^{-30} \text{ K}, \qquad (2)$$

where H is the Hubble parameter, and this temperature has an associated acceleration \( a_H \) given by the Unruh [25] relationship

$$a_H = \frac{2\pi c \, k_B T_H}{\hbar}, \qquad (3)$$

and substituting the value of \( T_H \) we arrive at \( a_H = cH \sim 10^{-9} \) m/s², in agreement with observation.
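As a quick numerical check of Eqs. (2) and (3) and of the quoted inertial-mass estimate, here is a short sketch; the Hubble rate and the physical constants below are standard values assumed by me, not quoted from the paper:

```python
import math

# Assumed standard (CODATA-like) values, not taken from the paper itself
h    = 6.626e-34   # Planck constant, J s
hbar = h / (2 * math.pi)
c    = 2.998e8     # speed of light, m/s
k_B  = 1.381e-23   # Boltzmann constant, J/K
l_p  = 1.616e-35   # Planck length, m
H    = 2.27e-18    # Hubble rate in 1/s (~70 km/s/Mpc), an assumed value

T_H = hbar * H / (2 * math.pi * k_B)      # Eq. (2): horizon temperature
a_H = 2 * math.pi * c * k_B * T_H / hbar  # Eq. (3): reduces to c*H
m_i = math.pi**2 * h / (48 * c * l_p)     # inertial-mass estimate

print(f"T_H ~ {T_H:.2e} K")       # of order 3e-30 K
print(f"a_H ~ {a_H:.2e} m/s^2")   # of order 1e-9 m/s^2
print(f"m_i ~ {m_i:.2e} kg")      # of order 2.8e-8 kg, above the Planck mass
```

The three printed values reproduce the orders of magnitude quoted in the text.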
The entropic force pulls outward towards the horizon, apparently creating a dark energy component and the accelerated expansion of the universe, see [4, 5]. Due to the existence of the cosmic horizon (comparable to the Hubble horizon), all the matter of the universe is attracted by the horizon through the entropic force and accelerated towards it with the acceleration given by Eq. (3). However, this acceleration is extremely small compared to the local acceleration due to nearby bodies, and it is only relevant for isolated bodies with very low local accelerations, for instance a star at the edge of a galaxy, which also gives an explanation of the observed rotation curves. First one fixes an observer; equation (3) then gives the acceleration that any body feels toward the horizon in the direction away from the observer. This acceleration is negligible compared with the local acceleration of bodies at small distances, where the local motion is the relevant one, for instance the motion of our galaxy on its collision course with the Andromeda galaxy. However, for distant bodies, whose local motion is irrelevant to an observer so far away, the accelerated expansion is relevant and we see these bodies accelerate away from the observer. Additionally, in an accelerating universe, the universe was expanding more slowly in the past than it is today. Therefore the total acceleration measured by an observer is a = a_L + a_H, where a_L is the local acceleration due to the local dynamics experienced by the particle. It is clear that only for very low local accelerations does a_H become important.
We can assume that the local motion is the gravitational attraction of a central mass, and then we have

$$a - a_H = a_L = \frac{G M_\odot}{r^2}. \qquad (4)$$

Equation (4) can be written in the form

$$a \left( 1 - \frac{a_H}{a} \right) = \frac{G M_\odot}{r^2}. \qquad (5)$$

Hence, following [9] (see also [7]), for low local accelerations we obtain a modified inertia given by

$$m_I = m_i \left( 1 - \frac{a_H}{a} \right) = m_i \left( 1 - \frac{2\pi c \, k_B T_H}{\hbar a} \right), \qquad (6)$$

where \( m_i \) is the inertial mass and \( m_I \) is the modified inertial mass.
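To get a feel for Eq. (6), here is a tiny numerical illustration; the accelerations below are made-up values chosen for illustration, not data from the paper:

```python
# Illustrative values only: a_H is of order c*H, and we pick a local
# acceleration ten times larger, so Eq. (6) predicts a 10% reduction.
a_H = 6.9e-10                  # m/s^2, horizon acceleration (assumed)
a = 10 * a_H                   # total acceleration of the particle
inertia_factor = 1 - a_H / a   # m_I / m_i from Eq. (6)
print(inertia_factor)          # ~0.9: modified inertia is 90% of m_i
```

For ordinary laboratory accelerations the correction is utterly negligible; it only matters when a approaches a_H.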
Stand-up Maths: Why do Biden's votes not follow Benford's Law?

There are many unsubstantiated claims about election fraud in the recent US presidential elections. Most of these claims provide no proofs or arguments, while there are a few which are supposedly scientific. One example relies on Benford's law, which specifies the expected frequency at which we should observe each possible first digit in numbers drawn from certain kinds of data. So if vote counts in polling stations do not follow Benford's law, it should be an indication of widespread fraud! Not so fast: as often in science, laws and models are applicable only when certain conditions (assumptions) are satisfied. In the case of Benford's law, one main condition is that the original numbers (whose first digits we consider) should span multiple orders of magnitude. Also, Benford's law is formulated as a rather general empirical observation; the law is supposedly observed in many naturally occurring data sets. Vote counts are obviously an example of naturally occurring data. The issue is that vote counts do not span many orders of magnitude. Also vote counts, especially in elections with two competitors, are not related to exponential growth (which is prevalent in many naturally occurring systems, and which can actually be a driver of Benford's law). Watch the following video by Stand-up Maths for a discussion of the applicability of Benford's law to the data from the recent presidential election in the US. The video below covers another simple method to check for election fraud.
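To illustrate the order-of-magnitude condition concretely, here is a small simulation sketch; all of the numbers are synthetic (the `narrow` sample merely mimics polling-station-sized counts and is not real election data):

```python
import math
import random
from collections import Counter

random.seed(42)

def first_digit(v):
    # Leading digit of a positive number via the fractional part of log10
    return int(10 ** (math.log10(v) % 1))

def digit_freqs(values):
    counts = Counter(first_digit(v) for v in values)
    return {d: counts.get(d, 0) / len(values) for d in range(1, 10)}

# Benford's law: P(first digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Data spanning many orders of magnitude (log-uniform) follows Benford
wide = [10 ** random.uniform(0, 6) for _ in range(50_000)]

# Narrow-range data (like vote counts per polling station) does not
narrow = [random.uniform(400, 800) for _ in range(50_000)]

print(digit_freqs(wide)[1], benford[1])  # both close to 0.301
print(digit_freqs(narrow)[1])            # 0.0 -- no leading 1s at all
```

The narrow sample produces first digits only in the range 4–7, so comparing it against Benford's expected frequencies is meaningless from the start.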
Class 10 Maths All in One - Student Factory

This app, Class 10 Maths NCERT Solution ++, contains NCERT Solutions, Notes, Old Question Papers, Important Q/A and the NCERT Book 🥳 for all the chapters included in the Class 10 Maths NCERT Book 📖, which is also used by the Bihar Board & UP Board.

This App Contains ✨
Class 10 Maths NCERT Solutions
Class 10 Maths Notes, Important Q/A
Class 10 Maths Sample Papers (Old Papers with Solutions)
Class 10 Maths NCERT Book

Chapter 1: Real Numbers
Chapter 2: Polynomials
Chapter 3: Pair of Linear Equations in Two Variables
Chapter 4: Quadratic Equations
Chapter 5: Arithmetic Progression
Chapter 6: Triangles
Chapter 7: Coordinate Geometry
Chapter 8: Introduction to Trigonometry
Chapter 9: Some Applications of Trigonometry
Chapter 10: Circles
Chapter 11: Constructions
Chapter 12: Area Related to Circles
Chapter 13: Surface Areas and Volumes
Chapter 14: Statistics
Chapter 15: Probability

The Class 10 Maths NCERT solution app is developed as per the requirements of our CBSE students, to help them solve maths problems effectively and in real time with better understanding. In this app you can find chapter-wise solutions, notes, and previous year papers with solutions, all in one place. You will find the app easy to use. Experience the App Now 💯
Maths | CRANESWATER

Our vision: "At Craneswater, all children will become resilient, fluent mathematicians with an ability to tackle problem solving."

Our key principles:
• All children can learn to do maths.
• Fluency, reasoning and problem solving are embedded within each of our units across all year groups.
• Children are supported in their understanding through the use of concrete, pictorial and abstract approaches.

What does maths typically look like at Craneswater?
• Children are taught within mixed ability classes by their class teacher.
• Lessons begin with 'fluency starters' that encourage children to develop their mental strategies. They explore efficiency and discuss different ways of working things out.
• Concrete manipulatives are available in every classroom and are accessible for children to use as directed or independently.
• Children can choose their starting points in lessons and are able to move themselves on if they feel they are able to do something. Children complete 5 questions that are similar and are then challenged in more depth.
• Challenges are readily available for children to move onto at their discretion or as directed by the class teacher or teaching assistant.
• Children are encouraged to use the correct mathematical vocabulary and use their reasoning skills when answering questions. They are also encouraged to explain their rationale in tackling problems.
• Through their time at the school, children will develop their written calculation methods in line with the Calculation Policy.
• Additional time, outside of lessons, is given to teaching and learning multiplication facts. Children complete regular times tables assessments which are differentiated and aim to improve their score over time.
• Children enjoy maths and are engaged in what they are doing.

Maths Activities at Home

Many of our parents have asked for some possible maths activities that they can do with their child at home to support their passion for maths.
Here are some suggested activities and some resources that you can print off. Oral and mental work is beneficial and supports all aspects of maths teaching in school. The best rule is 'little, often and varied' so that your child is engaged. The most effective thing to do is to be positive about their maths learning and engage with your child about what they are doing, show an interest and question them on what they have learnt.

Mr McMaster – Maths Manager

• Quick-fire times-tables and related division facts
• Number bonds – what goes with …… to make ……? Play matching pairs.
• Doubling and halving numbers
• Ordering numbers
• Play card games such as pontoon and sevens
• Baking – practising the skills of measuring and reading from a scale
• Talking about time, e.g. How long is it until lunch time? The journey takes 2½ hours, when will we arrive? We need to be there at 2.00 pm, when do we need to leave home? Many children will still need practice with reading clock times, particularly minutes past and minutes to the hour.
• Guess my number – This is a useful game for playing on a journey. As your child plays the game they will practise thinking about the order of numbers. Start the game by saying to your child 'I am thinking of a number between 1 and ??'. Explain that the aim of the game is to guess the mystery number by asking questions and that you will only answer 'yes' or 'no'. Children soon learn that it is more useful to ask "Is the number bigger than 5?" than to ask "Is it 7?" Older children can progress to guessing mystery numbers up to 100 and can ask more complex questions: 'Is it an odd number?' 'Is the number a multiple of 10?' (e.g. 20, 30, 40)
• Handling amounts of money when shopping, working out total costs, working out change, checking receipts.

Printable resources:

Online resources: All children have individual logins for this site. Please see your child's class teacher if unsure.
SATs Example Questions

In 2016, the end of KS2 assessment test changed from the old format to a new system of papers. As previously, there is no requirement for the children to use calculators in KS2. There are still 3 papers to complete in the week of SATs, but they are as follows:

Paper 1 – ARITHMETIC PAPER (30 mins) – to test the children's ability to answer pure calculation type questions.
Paper 2 – REASONING (45 mins) – to test the children's ability to apply their maths knowledge in a variety of different scenarios and problems.
Paper 3 – REASONING (45 mins) – to test the children's ability to apply their maths knowledge in a variety of different scenarios and problems.

Year 3

Autumn 1
Place value to 1000
Compare numbers to 1000
Read and write numbers in words
Add and subtract formally with 3 digits including exchange
Estimation of calculations
Use of inverse to check answers
Mental arithmetic within add/subtract
Solve problems including missing number problems
Mixed number problems involving 4 operations

Autumn 2
Add and subtract formally with 3 digits including exchange
Estimation of calculations
Use of inverse to check answers
Mental arithmetic within add/subtract
Solve problems including missing number problems
Multiplication facts within 3, 4, 8 times tables
Multiply and divide TU by U with mental and written methods
Count from 0 in 4, 8, 50, 100

Spring 1
Multiplication facts within 3, 4, 8 times tables
Multiply and divide TU by U with mental and written methods
Solve mixed number problems involving 4 operations and missing number/digit problems
Add and subtract money amounts in context
Solve problems within money

Spring 2
Measure, compare, add and subtract: lengths (m/cm/mm); mass (kg/g); volume/capacity (l/ml)
Measure the perimeter of simple 2D shapes
Solve simple problems involving the above
Fractions into tenths
Understand unit fractions with denominators
Find simple fractions of amounts

Summer 1
Use diagrams for recognising equivalent fractions
Compare and order unit fractions
Add and subtract fractions with the same denominator
Solve mixed number problems involving the above
Recognise and use the term angle
Identify right angles within a full turn
Identify horizontal, vertical, perpendicular, parallel
Draw 2D shapes and make 3D shapes using modelling materials
Recognise 3D shapes in different orientations

Summer 2
Bar charts, pictograms and tables – solve problems within this context
Understand, use and compare measures within mass and capacity
Problem solving including the above and within 4 operations
Tell and write the time using 12 and 24 hour clock and Roman numerals
Estimate and read the time to one minute
Seconds in a minute, days in a month including a leap year
Compare and record durations of time

Year 5

Number and place value:
Ordering and comparing up to 7 digits
Interpreting negative numbers
Roman numerals
Recognising and finding numbers using various representations.

Addition and subtraction:
Mental with increasing number size
Formal written methods
Rounding to check
Missing digit and multi-step problems

Multiplication and division:
Mental, multiply numbers by up to 4 digits
Divide numbers up to 4 digits by 1 digit numbers
Solve problems involving all four operations
Comparing data, timetables and time intervals

Area and Perimeter:
Measure and calculate perimeter of composite and rectilinear shapes
Calculate and compare the areas of rectangles

Multiplication and Division:
Multiply and divide mentally
Multiply and divide by 10, 100 and 1000
Multiples & factors
Square & cube numbers
Prime & composite numbers
Solve problems using knowledge of the above.

Compare and order fractions
Equivalent fractions
Recognise and convert mixed & improper fractions
Add & subtract fractions
Multiply improper fractions by whole numbers
Read and write decimal numbers as fractions and solve problems using the above.
Decimals and percentages:
Read, write, order and compare decimals with up to 3 places, recognise and use thousandths, round decimals and solve problems using the above.
Recognise % and write percentages as fractions & decimals.
Decimals – multiply and divide by 10, 100 and 1000; use all 4 operations to solve measure problems.

Properties of shapes and angles

Position and direction – identify and describe the position of a shape following a translation or reflection

Converting units – convert between units of metric measure, understand and use equivalences between metric & imperial and solve problems using the above.

Volume – estimate volume and capacity. Use all 4 operations to solve problems.

Year 4

Autumn 1
Place value of 4 digit numbers
Ordering, comparing, rounding
1000 more or less than a given number
Counting on/back through 0
Roman numerals
Addition and subtraction

Autumn 2
Length and perimeter
Convert between units of measure
Multiplication and division

Spring 1
Multiplication and division
Find area of rectilinear shapes
Equivalent fractions
Counting in tenths and hundredths
Adding and subtracting fractions with the same denominator
Fractions of amounts

Spring 2
Decimal equivalents of tenths and hundredths
Dividing 2 digit numbers by 10 and 100
Comparing and rounding
Equivalents of half, quarter, three quarters

Summer 1
Estimate and compare pounds and pence
Solve measure and money problems involving 4 operations
Interpret discrete and continuous data
Interpret information presented in charts and pictograms

Summer 2
Properties of shape
Identify acute and obtuse angles
Classify geometric shapes
Position and direction
Read and convert digital 12 and 24 hour clocks
Convert between minutes, hours, days, months and years

Year 6

Autumn 1
Round any number to a given accuracy
Negative numbers in context
Recognise numbers in a variety of representations
Order numbers with up to 3 decimal places
Add and subtract multi-step problems, including missing digit problems and deciding what operation to use
Formal multiplication and division to ThHTU by TU
Common factors, prime factors and prime numbers
Perform mental calculations using all the above
Use estimation and inverse to check

Autumn 2
Simplify fractions to lowest form
Compare and order fractions including greater than 1
Add and subtract fractions with different denominators and mixed fractions
Multiply fractions, writing the answer in simplest form
Divide fractions by whole numbers
Link fractions to decimals and percentages in context
Solve multi-step problems involving reasoning and a range of skills taught
Describe and plot positions in all 4 quadrants
Translate and reflect in context, applying skills learnt

Spring 1
Work in decimals to multiply and divide by 10, 100, 1000
Multiply 1 digit numbers with up to 2 decimal places by whole numbers using an appropriate written method
Solve problems with rounding to required accuracy
Solve problems involving percentages
Recall equivalences between fractions, decimals and percentages
Use simple formulae
Understand, use and find sequences including the nth term
Find pairs of numbers to satisfy an equation
Enumerate possibilities and combinations of two variables

Spring 2
Solve problems involving calculations and conversions of units
Use, read and write conversions of standard units involving decimals with 3 decimal places
Convert between miles and kilometres

Area and Volume:
Recognise that shapes can have the same areas and different perimeters and vice versa
Use formulae for volume and areas of parallelograms and triangles

Number – Ratio:
Solve problems involving relative sizes, scale factors and unequal sharing, applying skills from other areas of maths

Summer 1
Draw 2D shapes given dimensions and angles
Compare and classify geometric shapes based on properties
Recognise and use angles around a point and straight lines
Find missing angles and understand corresponding angle notation
Understand circles and definitions around them; then apply skills in problems
Interpret and draw pie charts
Calculate the mean as an average

Summer 2
Real Life Maths
Theme park project
COVID-19: Weibull recovery model

In the previous post we estimated the mean recovery time in the COVID-19 Lithuanian data set. We tried to generate a fake recovery time series assuming that recovery times are exponentially distributed. We failed. This time we will assume that recovery times are Weibull distributed.

Assuming an exponential distribution for the recovery times has a specific meaning: it means that the recovery rate is constant in time. Namely, it does not matter how long someone has been sick; the remaining time one will continue being sick is exponentially distributed with the same recovery rate. The Weibull distribution takes ageing effects into account. Namely, the recovery rate varies in time:

• With \( k < 1 \) it decreases in time, and a larger number of quick recoveries is observed. Note, though, that extremely long recoveries are also more probable in comparison to the simple exponential case.
• With \( k > 1 \) the recovery rate increases in time. There is a smaller number of quick recoveries, but extremely long recoveries are also less probable in comparison to the simple exponential case.
• In the special case of \( k = 1 \) the Weibull distribution is identical to the exponential distribution.

We parametrize the Weibull distribution as follows (assuming \( \tau > 0 \)): $$p(\tau) = k \lambda \left(\lambda \tau \right)^{k-1} \exp\left[ - \left(\lambda \tau \right)^k \right] .$$ For the data available we have observed that \( k = 2.5 \) and \( \lambda^{-1} = 32 \) generate quite good simulation results. This parameter set implies an average recovery time of: $$\langle \tau \rangle = \frac{\Gamma(1 + 1/k)}{\lambda} \approx 28.4 .$$ This is really close to the value we estimated manually in the previous post. Likely, a better set of parameters could be found by conducting multiple simulations with the same parameters (estimating confidence intervals for RMSE) or by using convolution (the topic of our next post).
At this point we have to satisfy ourselves with a simple example providing an interesting point - recovery times are more likely to be Weibull distributed than to be exponentially distributed.
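The stated parameters and the implied mean can be checked with a short simulation; this is a sketch using only Python's standard library (note that `random.weibullvariate` takes the scale parameter first, then the shape):

```python
import math
import random

random.seed(0)

k = 2.5        # Weibull shape parameter from the post
scale = 32.0   # 1/lambda from the post

# Theoretical mean recovery time: Gamma(1 + 1/k) / lambda
mean_theory = scale * math.gamma(1 + 1 / k)

# Monte Carlo estimate from simulated recovery times
samples = [random.weibullvariate(scale, k) for _ in range(200_000)]
mean_mc = sum(samples) / len(samples)

print(f"theoretical mean ~ {mean_theory:.1f} days")  # ~28.4
print(f"simulated mean   ~ {mean_mc:.1f} days")
```

Both numbers come out near the 28.4-day mean quoted above, confirming the parametrization.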
Mechanical Energy in context of frequency to energy

30 Aug 2024

Journal of Mechanical Engineering
Volume 12, Issue 3, 2023

Mechanical Energy and Frequency: A Theoretical Analysis

This article explores the relationship between mechanical energy and frequency, with a focus on the conversion of one into the other. We derive mathematical expressions for the energy associated with different types of motion, including rotational and translational motion, and examine how these energies are related to the frequency of oscillation.

Mechanical energy is a fundamental concept in physics, representing the ability of an object to do work. In many mechanical systems, energy is transferred from one form to another, often involving changes in frequency. Understanding the relationship between mechanical energy and frequency is crucial for designing efficient and effective mechanical systems.

Rotational Energy

Consider a rotating wheel with moment of inertia I and angular velocity ω. The rotational kinetic energy (E_rot) associated with this motion can be expressed as:

E_rot = 0.5 * I * ω^2

where I is the moment of inertia, and ω is the angular velocity.

Translational Energy

Now consider a translating object with mass m and velocity v. The translational kinetic energy (E_trans) associated with this motion can be expressed as:

E_trans = 0.5 * m * v^2

where m is the mass, and v is the velocity.

Energy-Frequency Relationship

The frequency (f) of oscillation is related to the angular velocity (ω) by the following expression:

f = ω / (2 * π)

Substituting ω = 2 * π * f into the rotational energy equation, we get:

E_rot = 0.5 * I * (2 * π * f)^2

Similarly, substituting the frequency expression into the translational energy equation, we get:

E_trans = 0.5 * m * (2 * π * f)^2

In this article, we have explored the relationship between mechanical energy and frequency in the context of rotational and translational motion.
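The substitution ω = 2πf can be checked numerically; a minimal sketch, where the values of I and f are arbitrary illustrative choices, not from the article:

```python
import math

def rotational_energy_from_omega(I, omega):
    """E_rot = 0.5 * I * omega**2."""
    return 0.5 * I * omega ** 2

def rotational_energy_from_frequency(I, f):
    """The same energy expressed via frequency, using omega = 2*pi*f."""
    return 0.5 * I * (2 * math.pi * f) ** 2

I = 2.0   # moment of inertia, kg m^2 (arbitrary example value)
f = 3.0   # rotation frequency, Hz (arbitrary example value)

E1 = rotational_energy_from_frequency(I, f)
E2 = rotational_energy_from_omega(I, 2 * math.pi * f)
print(E1, E2)   # the two routes agree (~355.3 J)
```

The two functions necessarily agree, since the frequency form is just the angular-velocity form with ω = 2πf substituted.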
We have derived mathematical expressions for the energy associated with these types of motion and examined how these energies are related to the frequency of oscillation. These results provide a theoretical foundation for understanding the conversion of one form of energy into another, which is essential for designing efficient mechanical systems.

[1] Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics. Addison-Wesley.
[2] Landau, L. D., & Lifshitz, E. M. (1976). Mechanics. Butterworth-Heinemann.

Note: The references provided are classic textbooks in the field of physics and mechanics, which provide a comprehensive understanding of the concepts discussed in this article.
Algebraic Curves When f is a polynomial in x and y, then f defines an algebraic curve in a plane. Puiseux Expansion The function has two singularities and one regular point on the line . We can obtain information (such as the tangent lines, the delta invariant, and other invariants) on singularities by computing the Puiseux expansions. One can view these Puiseux expansions as a sort of Taylor expansion (note that Puiseux expansions can also have fractional powers of x, whereas a Taylor expansion does not) of the algebraic function RootOf(f, y). Because this algebraic function is multivalued, we will get several expansions corresponding to the different branches of at . The following command gives these expansions of at : The fourth argument tells puiseux to compute a minimal number of terms. The number of terms that will be computed in this way is precisely the number of terms that are required to be able to distinguish the different Puiseux expansions from one another. Note: It appears as though only three different Puiseux expansions were given, whereas the function has five different branches. The other two expansions are implicitly given by taking the conjugates of these expansions over the field Q((x)). This command means the following: Give the Puiseux expansions up to accuracy 3, which means modulo . So the coefficients of are given, but not the coefficients of . To view the terms of the Puiseux expansions, we must compute the Puiseux expansions up to accuracy > 3. As one can see from the Puiseux expansions, the point , is singular, because two Puiseux expansions are going through this point: and its conjugate. Similarly, is a singular point. The output consists of lists of the form [ the location of the singularity, the multiplicity, the delta invariant, the number of local branches ]. The location is given as a list of three homogeneous coordinates, (x, y, z). The points (x, y, 1) are points in the affine plane , where is the field of constants. 
The points (x, y, 0) are on the line at infinity. (In this example, there are no singularities at infinity.) A point is singular if and only if the multiplicity of that point is > 1, and also if and only if the delta invariant is > 0. In this example, all of the singularities are double points. A double point has multiplicity 2 and delta invariant 1. The genus of an algebraic curve equals minus the sum of the delta invariants. Because these have already been determined by the previous command, computing the genus is easy now: The genus only depends on the algebraic function field of the curve. This field does not change if we apply birational transformations, so the genus is invariant under such transformations. This means, for example, that must have genus 0 as well: Parametrization for Curves with Genus 0 An irreducible algebraic curve allows a parametrization if and only if the genus is 0. A parametrization is a birational map from the projective line (= the field of constants union {infinity}) to the algebraic curve. This map is 1-1, except for a finite number of points. It is a 1-1 map between the places of the projective line and the places of the algebraic curve . This parametrization algorithm computes a parametrization over an algebraic extension of the constants of degree <= 2 if degree(f,{x,y}) is even, and a rational (that is, no field extension) parametrization if the degree of the curve is odd, as in this example. If we substitute an arbitrary number for (avoiding roots of the denominators to avoid "division by zero" messages), we get a point on the curve. Verify if this is indeed a point on the curve: Integral Basis The function field of an irreducible algebraic curve f can be identified with the field C(x)[y]/(f). This is an algebraic extension of C(x) of degree degree(f,y). 
In some applications (integration of algebraic functions and the method that algcurves[parametrization] uses), one must be able to recognize the poles of elements in the function field. For this purpose, one can compute a basis (as a C[x] module) for the ring of functions in C(x)[y]/(f) that have only poles on the line at infinity. This basis is computed as follows: Note: This did not require computation time because it has already been determined for use in the parametrization algorithm. The map( normal, b ) command makes the output look somewhat smaller. The integral basis has a factor in the denominator if and only if there is a singularity on the line x=RootOf(k). This can only happen if divides the discriminant discrim(f, y). The integral basis contains information about the singularities in a form that is useful for computations. The advantage of this form is that it is rational--one requires no algebraic extensions of the field of constants to denote the integral basis, whereas we do need algebraic numbers to denote the Puiseux expansions. Suppose that we are only interested in the singularities on the line x=0. Then we can compute a local integral basis for the factor . A local integral basis for a factor is a basis for all elements in the function field that are integral over C[[]]. An element of the function field is integral over C[[]] if it has no pole at the places on the line . An example of the kind of information that the integral basis contains is the sum of the multiplicities of the factor in the denominators. This sum equals the sum of the delta invariants of the points on the line . So this local integral basis for the set of factors {x} tells us that the sum of the delta invariants on the line x=0 is 2. Homogeneous Representation Until now an algebraic curve was represented by a polynomial in two variables, x and y. 
An algebraic curve is normally not viewed as lying in the affine plane (where C is the field of constants), but in the projective plane (C). The notation as a nonhomogeneous polynomial in two variables is convenient if we want to study the affine part of the curve (for example, in the integral basis computation), but not if we are interested in the part of the curve on the line at infinity. Often (for example, for computing the genus), the part of the curve at infinity is needed as well. The nonhomogeneous notation in two variables can be converted to the homogeneous notation as follows: This can be converted again to f with Now the line at infinity is the line z=0 on homogeneous(f,x,y,z). By switching x and z we can move the line x=0 to infinity. We see that now there are two singularities at infinity, namely (1,0,0) and (-1,1,0). This may look different in the output of singularities, because in homogeneous coordinates, the points (x, y, z) and (c*x, c*y, c*z) are the same for any nonzero constant c. This curve is given as a homogeneous polynomial; however, the input for the algorithms in this package must be the curve in its nonhomogeneous representation: This polynomial is a curve of degree 10 having a maximal number of cusps according to the Plücker formulas. It was found by Rob Koelman. It has 26 cusps and no other singularities. Now check if these points are indeed cusps. The multiplicities are 2 and the delta invariants are 1, so that part is correct. To decide if these points are cusps, we can use Puiseux expansions. Take one of these points: Now compute the Puiseux expansions at the line x = <the x coordinate of this point> : To obtain the y coordinates of the points on the line from this, we need only substitute . We see that there are eight different points on this line. The stands for six conjugated points (namely the roots of the polynomial inside the ). However, the expression is only one point, because our field of definition is not anymore, but .
This is because we needed to extend the field to be able to "look" on the line x=. The Puiseux series in this set (which have only been determined up to minimal accuracy) are series (with fractional powers) in (). Substitute the following to get series in x instead of in (). That makes it somewhat easier to read. For determining the type of the singularity, the coefficients here are not relevant. We have an expansion of the form The higher order terms (which have not yet been determined) have no influence on the type of the singularity, nor do the precise values of these constants. These expansions show that there are six regular points on this line and two cusps. One can easily get more terms of the Puiseux expansions, although that is not necessary for determining the type of the singularities. We see that if we compute more terms, the results can get bigger quickly. Graphics: Singularity Knots A different way to show information about a singularity is the plot_knot command. The input of this procedure is a polynomial f in and , for which the singularity that we are interested in is located at . For example, the curve on the top of this worksheet has a singularity at 0, a double point. The curve is irreducible, and so it consists of only one component. But locally, around the point 0, it has two components. Information on these components and their intersection multiplicities can be given in the form of Puiseux pairs, obtained by computing the Puiseux expansions. A different way of representing this information is as follows: By identifying with , the curve can be viewed as a two-dimensional surface over the real numbers. Now we can draw a small sphere inside around the point 0. The surface of the sphere has dimension 3 over R. The intersection with the curve (which has dimension 1 over the complex numbers, so dimension 2 over the real numbers) consists of a number of closed curves over the real numbers, inside a space (the sphere surface) of dimension 3. 
After applying a projection from the sphere surface to , these curves can be plotted. (See also: E. Brieskorn, H. Knorrer: Ebene Algebraische Kurven, Birkhauser 1981.) In this plot, each component will correspond to one of the local components. Furthermore, the winding number in the plot equals the intersection multiplicity of the two branches of the curve. In this example this number is 1. Of course, we want to see more complicated 3-D plots. For this, we need only make the singularity more complicated, and the intersection multiplicities of the branches higher. Because we are interested in the curve only locally, it does not matter if the curve is irreducible. However, the input of plot_knot must be square-free. We see that a cusp gives a 2-3 torus knot. More generally, if , then gives a p-q torus knot. It gets more interesting when we have plots consisting of more components. For this, we need only have a singularity consisting of more components. In this example, we start with a 2-3 torus knot using . To obtain a high intersection multiplicity, we add a high power of , and multiply these two components. Then we get: Getting good plots sometimes requires tweaking with the various options (see plot_knot), or changing some of the coefficients (for example, the coefficient of ). Plot options can be experimented with interactively by clicking the plot and using the plot menus, or right-clicking the plot. A useful option is Light Schemes available using the Color menu (or specified as the lightmodel option to the plot_knot call). Weierstrassform, j_invariant For curves with genus , one can compute a parametrization--a bijection between the curve and a projective line. One can view this projective line as a normal form for curves with genus . For curves with genus , we can also compute a normal form, the Weierstrass normal form. In this form the curve is written as F=- (polynomial in of degree ). 
To avoid ambiguity, we will denote the Weierstrass normal form with the variables and instead of and . Now the curves f and F are birationally equivalent. The Weierstrass form algorithm computes such an equivalence in two directions, [w[2] , w[3]] is a morphism from f to F, and [w[4] , w[5]] is the inverse morphism. Check this for the point (-2,2,1) on . Now check if this is on F: Now try the inverse, and see if we get the point (-2,2,1): The Weierstrassform procedure handles hyperelliptic curves as well. A curve f is called hyperelliptic if and only if the genus is >1 and f is birational to a curve F of the form where P is a polynomial in X. This means that the algebraic function field C(x)[y]/(f) is isomorphic to C(X)[Y]/(F). So this is similar to the elliptic case, the only difference is that the degree of F is The procedure is_hyperelliptic tests if a curve f is hyperelliptic. The curve given by h is birational to the curve F. The other entries of W give the images of x, y, X, and Y under the isomorphism and inverse isomorphism from C(x)[y]/(h) to C(X)[Y]/(F). Further Results In the subsequent sections, the following additional functions of algcurves are demonstrated: differentials: Compute basis of holomorphic differentials homology: Compute canonical basis of the homology. is_hyperelliptic: Test if a curve is hyperelliptic. monodromy: Compute the monodromy. periodmatrix: Determine the periodmatrix (Riemann matrix). The algebraic function field L of the following curve is the field of all meromorphic functions on the algebraic curve (Riemann surface). It is the fraction field of the ring C[x,y]/(f), where C is the field of complex numbers. We can write L=C(x)[y]/(f). The category of Riemann surfaces is equivalent to the category of algebraic curves, and also equivalent to the category of algebraic function fields. Now L is an algebraic extension of C(x) of degree 4. 
By interchanging the roles of x and y we can also view L as an algebraic extension of C(y) of degree 6. Holomorphic Differentials A regular point on the curve corresponds to one point on the Riemann surface. A singular point corresponds to one or more points on the Riemann surface. These points can be represented by Puiseux expansions. The following function A has a pole at one of the two points P1, P2 on the Riemann surface. The following is a basis for all functions that have no poles at all points where x has no pole. A differential is an expression "A * dx" where A is an element of L. Using a Puiseux expansion with local parameter T we can write it as A(T) * dT. If A(T) has no poles at any point, then the differential A*dx is called holomorphic. A basis of the holomorphic differentials is given by: Now we will verify using Puiseux expansions that this differential (which has no poles anywhere on the Riemann surface) has no poles at P1 or P2. We see that the differential dif1 has no pole at P1 and P2. It should also have no poles at infinity, which we can verify as follows. The Monodromy Let f be a polynomial in x and y. If we take a point x=b, then subs(x=b,f) will in general have n different solutions , where . The points where there are fewer than n different solutions are called discriminant points, since they are roots of . Let b be some fixed point that is not a discriminant point, and let be the solutions of f at x=b, obtained by: fsolve(subs(x=b,f),y,complex). If we take a path, starting at b, avoiding all , going in a loop around one discriminant point , then we can analytically continue along this path. When we return to b, this analytic continuation will transform into new solutions of subs(x=b,f). Since the solutions in the complex numbers of subs(x=b,f) are unique up to permutations, the analytic continuation of along this path will result in a permutation of . If this permutation is nontrivial then is called a branch point.
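To make "discriminant points" concrete, here is a small self-contained check on the hypothetical curve f = y² − (x³ − x) (our own example, not the worksheet's curve): for fixed x = b the y-solutions are ±√(b³ − b), and the two solutions collide exactly where the y-discriminant 4(b³ − b) vanishes, at b = −1, 0, 1.

```python
import cmath

# Hypothetical example curve (not the worksheet's): f(x, y) = y^2 - (x^3 - x).

def y_solutions(b, tol=1e-12):
    """Distinct y with f(b, y) = 0, i.e. y = +/- sqrt(b^3 - b)."""
    r = cmath.sqrt(b**3 - b)
    return [r] if abs(r) < tol else [r, -r]

def discriminant(b):
    """Discriminant of y^2 - (b^3 - b) viewed as a quadratic in y."""
    return 4 * (b**3 - b)

for b in [-2, -1, 0, 0.5, 1, 2]:
    n = len(y_solutions(b))
    print(f"x = {b}: {n} solution(s), discriminant = {discriminant(b)}")

# Exactly the discriminant points x = -1, 0, 1 have fewer than n = 2 solutions.
assert [b for b in [-2, -1, 0, 0.5, 1, 2] if len(y_solutions(b)) == 1] == [-1, 0, 1]
```

Looping around one of these points and tracking the two y-roots is precisely the analytic continuation that the monodromy computation formalizes.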
The monodromy procedure will compute these permutations for all branch points . The group generated by these permutations is isomorphic to the Galois group of C(x)[y]/(f) over C(x), where C stands for the complex numbers. Now M[1] is the basepoint. M[2] are the solutions of fsolve(subs(x=M[1],f),complex). M[3] is a list with elements of the form [, permutation for ]. The group generated by these permutations is: G is the Galois group of C(x)[y]/(f) over C(x). This is a subgroup of the Galois group H of Q(x)[y]/(f) over Q(x). We see that G is a subgroup of H with index 2. This means that the intersection of the complex numbers C with the splitting field of f over Q(x) is a quadratic extension of Q. This quadratic extension is Q(I) because the splitting field of f over Q(x) is Q(x, I, RootOf(f,y)), which we can verify by: The Homology Given the homology, one can determine closed paths, called cycles, on the Riemann surface. The procedure homology computes 2*g cycles that form a canonical basis of the homology of the Riemann surface. Every closed path on the Riemann surface is homologically equivalent to a Z-linear combination of these 2*g cycles. The Period Matrix If omega is a holomorphic differential, then its periods defined by the integrals of omega over closed paths on the Riemann surface. A basis (as a Z-module) of the periods of omega is obtained by integrating omega over every element of the homology basis. The basis of the holomorphic differentials contains g elements. The homology basis has 2*g elements. By computing the integrals of these g holomorphic differentials over these 2*g paths, we get 2g by g integrals, which form a matrix called the period matrix. By taking a different basis of the holomorphic differentials we can obtain a normalized period matrix of the form (I, Z) where I is the g by g identity and where the g by g matrix Z is called the Riemann matrix. In this example we can give an exact Riemann matrix. 
In most cases the entries of the Riemann matrix will be transcendental numbers that can only be computed approximately. The accuracy will depend on the global variable Digits. Increasing this value will lead to more accurate digits, but also to a longer computation time. The algebraic curve f is determined up to birational equivalence by the matrix P and also by the matrix Z. A curve is birational to f if and only if its Riemann matrix is equivalent (not necessarily equal) to Z. Related Information For more examples, test files, plots, and documentation on the algcurves package, see: http://www.math.fsu.edu/~hoeij/maple.html and the help page algcurves. The package CASA contains code for curves and for other algebraic varieties as well, and can be obtained from:
Enhanced local tomography
Local tomography is enhanced to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is inputted to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ. In a second method for evaluating the value of the discontinuity, a gradient value for a mollified local tomography function ∇f_Λε(x_ij) is determined along the discontinuity, and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values. This invention relates to image reconstruction from tomographic data, and, more particularly, to the definition of discontinuity location and size using limited tomographic data. This invention was made with government support under Contract No. W-7405-ENG-36 awarded by the U.S. Department of Energy. The government has certain rights in the invention. Tomography produces the reconstruction of a generalized density function f from a large number of line integrals of f.
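As a toy illustration of such line integrals (our own example, not data from the patent): the line integral of a unit disk of density 1 along the line at signed distance p from its center is the chord length 2√(1 − p²), which a direct numerical quadrature reproduces.

```python
import math

# Line integral (Radon transform value) of the unit disk's indicator function
# along the horizontal line y = p: the chord length 2*sqrt(1 - p^2) for |p| < 1.
# Toy illustration only -- not the patent's phantom or algorithm.

def disk_line_integral(p, n=200_000):
    """Midpoint-rule integral of the disk indicator along the line y = p."""
    total, dx = 0.0, 4.0 / n
    for i in range(n):
        x = -2.0 + (i + 0.5) * dx
        if x * x + p * p <= 1.0:
            total += dx
    return total

for p in [0.0, 0.5, 0.9]:
    exact = 2.0 * math.sqrt(1.0 - p * p)
    approx = disk_line_integral(p)
    assert abs(approx - exact) < 1e-3
    print(f"p = {p}: numeric {approx:.4f}, exact {exact:.4f}")
```

Because the disk is rotationally symmetric, this value is the same for every beam direction, which is one reason disk/ellipse phantoms are convenient first test objects for reconstruction methods.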
A practically important objective is the reconstruction, from the x-ray data, of functions providing significant information about the physical object, such as the location of discontinuities within an object being interrogated. Tomography equipment for obtaining the attenuation data is well known. For example, in some instances a single source is collimated and traversed across the object, whereby a single sensor output corresponds to a single source transverse location. Here, the source traverses the object at each angular orientation of the object. In other instances, a single source generates a fan-like pattern of radiation that is detected by an array of sensors, where the object is located between the source and the sensor array. The source is then angularly rotated relative to the object to provide a complete set of tomographic data. Conventional tomography is a global procedure in that the standard convolution formulas for reconstruction of the density at a single point require the line integral data over all lines within some planar cross-section containing the point. Local tomography has been developed for the reconstruction at a point where attenuation data is needed only along lines close to that point within the same cross-section. Local tomography produces the reconstruction of a related function using the square root of the negative Laplace operator and reproduces the locations of discontinuities within an object. See, e.g., E. Vainberg, "Reconstruction of the Internal Three-Dimensional Structure of Objects Based on Real-Time Integral Projections," 17 Sov. J. Nondestr. Test., pp. 415-423 (1981); A. Faridani et al., "Local Tomography," 52 SIAM J. Appl. Math, No. 2, pp. 459-484 (April 1992). Local tomography can reduce significantly the amount of data needed for the local reconstruction, with a concomitant reduction in x-ray dose. 
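The effect of the square root of the negative Laplace operator on discontinuities can be illustrated with a one-dimensional toy computation (our own sketch, not the patent's two-dimensional algorithm): applying the operator with Fourier symbol |ξ| to a piecewise-constant signal concentrates large values exactly at the jump locations.

```python
import numpy as np

# Apply the operator with Fourier symbol |xi| (a 1-D analogue of the square
# root of the negative Laplacian) to a box signal. Illustrative sketch only.
n = 512
f = np.zeros(n)
f[192:320] = 1.0                      # jumps near indices 192 and 320

xi = np.fft.fftfreq(n)                # DFT frequency grid
g = np.fft.ifft(np.abs(xi) * np.fft.fft(f)).real

# The largest responses sit at the discontinuities, not in the flat regions.
peaks = np.argsort(-np.abs(g))[:4]
print(sorted(int(p) for p in peaks))  # [191, 192, 319, 320]
assert all(min(abs(int(p) - 192), abs(int(p) - 320)) <= 2 for p in peaks)
```

This is the qualitative mechanism behind local tomography: the reconstructed function is not the density itself, but a function whose sharp features mark where the density jumps.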
While the location of a discontinuity is reproduced, however, there is no quantitative value for the magnitude of the discontinuity. In many instances it would be useful to know this value in order to make medical, technological, or geophysical decisions. Accordingly, it is an object of the present invention to determine both the location and size of discontinuities from tomographic data. It is another object of the present invention to determine the location and size of discontinuities from only limited attenuation data that includes a region containing the discontinuity. Additional objects, advantages and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims. To achieve the foregoing and other objects, and in accordance with the purposes of the present invention, as embodied and broadly described herein, the apparatus of this invention may comprise a method for determining by tomography the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is inputted to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ. In a second method for evaluating the value of the discontinuity, a gradient value for a mollified local tomography function ∇f_Λε(x_ij) is determined along the discontinuity; and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values. The accompanying drawings, which are incorporated in and form a part of the specification, illustrate the embodiments of the present invention and, together with the description, serve to explain the principles of the invention. In the drawings: FIG. 1 is a graphical representation of the function ψ(q). FIG. 2 is a graphical representation of the function ψ′(q). FIG. 3 illustrates an exemplary phantom for generating Radon transform data. FIG. 4 is a horizontal cross-section of the density plot through the phantom shown in FIG. 3. FIG. 5 is a horizontal cross-section of the local tomographic density discontinuity values obtained according to a first reconstruction process of the present invention. FIG. 6 is a horizontal cross-section of the local tomographic density discontinuity values obtained according to a second reconstruction process of the present invention. In accordance with the present invention, local tomography is enhanced to determine the sizes of density discontinuities within an object, as well as the location of the discontinuity. The density function f(x) is usually reproduced from the line integral data ƒ(θ,p), where θ is an angle on the unit circle and p is a location of a beam line, by the formula: ##EQU1## A local tomography function f_Λ is proposed in, e.g., E. Vainberg, "Reconstruction of the Internal Three-Dimensional Structure of Objects Based on Real-Time Integral Projections," 17 Sov. J. Nondestr. Test., pp. 415-423 (1981), and K.T.
Smith et al., "Mathematical Foundations of Computed Tomography," Appl. Optics, pp. 3950-3957 (1985): ##EQU2## where Θ ∈ S^1, θ is a scalar variable 0 < θ < 2π, ƒ_pp is the second derivative with respect to p, and S^1 is the unit circle. Then, it can be shown that f_Λ satisfies ##EQU3## where F and F^{-1} denote the Fourier transform and its inverse, respectively, and (−Δ)^{1/2} is the square root of the negative Laplacian. In practice, the function f_Λ is not computed, but its mollification W_ε * f_Λ. A mollifier is a kernel such that its convolution with f(x) gives a sufficiently smooth function that converges to f(x) as ε → 0. Let W_ε be a sequence of sufficiently smooth mollifiers that satisfy the following properties: (a) W_ε(x) is a radial function, W_ε(x) = W_ε(|x|); (b) W_ε(x) = 0 for |x| > ε and W_ε(x) > 0 for |x| < ε; and (c) W_ε(x) = ε^{-2} W_1(x/ε), with ∫_{|x|<1} W_1(x) dx = 1. It can be shown that ##EQU4## where R is the Radon transform. Integrating by parts, one gets ##EQU5## From Eq. (3) it is seen that f_Λ is the result of the action of a pseudo-differential operator (PDO) with symbol |ξ| on f. Let S be a discontinuity curve of f and pick x_0 ∈ S, so that S is smooth in the neighborhood of x_0. The asymptotic formula for f_Λ in a neighborhood of x_0 can be shown to be ##EQU6## Combining Eqs. (4) and (5), and using the properties of W_ε, for a sufficiently small ε > 0 so that S can be considered flat inside an ε-neighborhood of x_0: n_+(x_S) = n_+(x_0) = n_+, |x_S − x_0| < ε, yields ##EQU7## where η_ε = η * W_ε. Since w_ε = R W_ε, ##EQU8## Since η is smooth in the neighborhood of x_0, the function η_ε = η * W_ε is also smooth and the variation of η_ε is neglected compared with the variation of ψ on distances on the order of ε from x_0. Thus, from Eq. (8) and letting q = h/ε, an approximate formula for f_Λε in a neighborhood of x_0 is: ##EQU9## where c = η_ε(x_S). In the case of the mollifier ##EQU10## the graphs of ψ and ψ′ are shown in FIGS. 1 and 2, respectively. Equation (9) explicitly relates the behavior of f_Λε in a neighborhood of x_S with the unknown quantity D(x_S), which is the value of the jump of f at x_S ∈ S. Since f_Λε is computed by Equation (4) from the tomographic data and the function ψ(q) can be tabulated for a given sequence of mollifiers, an estimation of the value of the density discontinuity jump D(x_S) of the original function f at x_S is provided by Equation (9). A first process for evaluating D(x_S) begins with the computed local tomography function calculated on a square grid with step size h: x_{i,j} = (x_i^{(1)}, x_j^{(2)}) = (ih, jh), where i and j are integers. Choose a grid node x_{i0,j0} on S, and assume that h and ε are sufficiently small that changes of f_Λε
can be neglected in the direction parallel to S, i.e., assume D(x_S) = D(x_{i0,j0}) if |x_S − x_{i0,j0}| < ε. Then Eq. (9) can be written as ##EQU11## Fix n_1, n_2 ∈ N and consider a (2n_1 + 1) × (2n_2 + 1) window around x_{i0,j0}. First, estimate n_+ by computing partial derivatives ##EQU12## The estimate is accurate up to a factor (−1), i.e., N_+ = ±n_+. Then find D(x_{i0,j0}) by solving the minimization problem ##EQU13## Note that the larger values of f_Λε correspond to the side of S with larger values of f. Thus, the following process provides estimates of jumps of f from f_Λε: 1. Estimate the vector N_+ from Eq. (13); Calculation of |D(x_{i0,j0})| from Eqs. (14) and (15) is most stable if the function ψ on the interval ##EQU14## differs from a constant as much as possible. For example, the graph of ψ(q) shown in FIG. 1 suggests that the interval X should be close to the interval between the q-coordinates of the local minimum and the local maximum of ψ(q), i.e., X ≈ [−0.31, 0.31]. A second process for evaluating D(x_S) from the local tomography approach provides values of the jumps from tomographic data in one step. Taking the gradient on both sides of Eq. (12) provides an approximate equality ##EQU15## The modulus of the gradient |∇f_Λε(x_ij)| can be calculated (Eq. (4)) to get ##EQU16## For each θ it is convenient to compute ##EQU17## for all q in the projection of the support of f on the direction Θ, and then compute ##EQU18## Thus, the following process provides estimated values of jumps of f: 1. Compute ∇f_Λε(x_ij) using Eqs. (19), (20), (21); 2. Estimate |D(x_ij)| using Eqs. (17), (18); 3. The vector ∇f_Λε(x_ij) points from the smaller values of f to the larger values of f. For the mollifier represented by Eq. (11), ψ′(0) can be computed using Eq. (7): ##EQU19## FIG. 3 illustrates an exemplary phantom for generating the Radon transform data. The densities are: exterior: 0, ellipse: 1.0, exterior annulus: 0.8, area between the annulus and the ellipse: 0, three small circles: 1.8, center circle: 0.1. The Radon transform was computed for 350 angles equispaced on [0, π] and 601 projections for each angle. The horizontal cross-section of the density distribution of the phantom is shown in FIG. 4. The function f_Λε was computed at the nodes of a square 201 × 201 grid using the mollifier of Eq. (11) with ε = 9Δp, where Δp is the discretization step of the p-variable. This value of ε means that the discrete convolutions were computed using 19 points per integral. The grid step size was h = 3Δp, so that h/ε = 1/3. A central horizontal cross-section of the density distribution represented by the estimated |D(x)| from the first process is shown in FIG. 5, with n_1 = n_2 = 1, i.e., a 3 × 3 window. It is seen that for the given ratio h/ε = 1/3 and window size n_1 = n_2 = 1, the interval X is very close to optimal. The line with peaks is the computed |D(x)|; the dots represent the actual values of the jumps of the original density function f (see FIG. 4). The estimated jump values are in good agreement with the actual values. The second process was used on the same phantom structure shown in FIGS. 3 and 4. The grid parameters are the same as for the first process. Again, the line with peaks is the computed |D(x)|; the dots represent the actual jump values of f.
Again, there is good agreement between the actual jump values and the estimated jump values. The foregoing description of the invention has been presented for purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto. 1. A method for estimating by tomography the location and value of a discontinuity between a first internal density of an object and a second density of a region within said object, comprising the steps of: directing a beam of radiation in a predetermined pattern through said region of said object containing said discontinuity; determining relative attenuation data of said beam within said predetermined pattern having a first data component that includes attenuation data through said region; inputting said relative attenuation data to a local tomography function f_Λ to define said location S of said discontinuity; determining the asymptotic behavior of f_Λ in a neighborhood of S; and estimating said value for said discontinuity from said asymptotic behavior of f_Λ. 2. A method according to claim 1, wherein said asymptotic behavior of f_Λ is determined with the function ψ computed from a selected sequence of mollifiers, where ##EQU20## 3.
A method according to claim 1, wherein said step of determining said asymptotic behavior includes the steps of: estimating the vector N_+, where ##EQU21## to determine the direction of variation of said local tomography function f_Λ about said discontinuity location S; inputting said vector N_+ to ##EQU22## for estimating said value of said discontinuity. 4. A method according to claim 2, wherein said function ψ is evaluated over an interval over which ψ has the greatest variation. 5. A method according to claim 3, wherein said asymptotic behavior of f_Λ is determined from ##EQU23## with the function ψ computed from a selected sequence of mollifiers. 6. A method according to claim 5, wherein said function ψ is evaluated over an interval over which ψ has the greatest variation. 7. A method for estimating by tomography the location and value of a density discontinuity between a first internal density of an object and a second density of a region within said object, comprising the steps of: directing a beam of radiation in a predetermined pattern through said region of said object containing said discontinuity; determining relative attenuation data of said beam within said predetermined pattern having a first data component that includes attenuation data through said region; inputting said relative attenuation data to a local tomography function f_Λ to define said location S of said discontinuity; computing a gradient value ∇f_Λε(x_ij) along said discontinuity where ##EQU24## and estimating the value |D(x_ij)| of said density discontinuity along S by inputting said gradient value to |D(x_{i0,j0})| ≈ |∇f_Λε(x_{i0,j0})| π ε² / ψ′(0). 8.
A method according to claim 7, wherein said gradient value indicates the direction of change for said density discontinuity.
Referenced Cited
U.S. Patent Documents: 4,189,775 (Feb. 19, 1980, Inouye); 4,365,339 (Dec. 21, 1982, Pavkovich); 4,433,380 (Feb. 21, 1984, Abele et al.); 4,446,521 (May 1, 1984, Inouye); 4,670,892 (Jun. 2, 1987, Abele et al.); 5,319,551 (Jun. 7, 1994, Sekiguchi et al.)
Other References:
• A. M. Cormack, "Representation of a Function by Its Line Integrals, with Some Radiological Applications," Journal of Applied Physics, vol. 34, no. 9, pp. 2722-2727 (Sep. 1963).
• R. H. Huesman, "A New Fast Algorithm for the Evaluation of Regions of Interest and Statistical Uncertainty in Computed Tomography," Phys. Med. Biol., vol. 29, no. 5, pp. 543-552 (1984).
• A. Faridani et al., "Local Tomography," SIAM J. Appl. Math., vol. 52, no. 2, pp. 459-484 (Apr. 1992).
The Harmonic Series A harmonic series can have any note as its fundamental, so there are many different harmonic series. But the relationship between the frequencies of a harmonic series is always the same. The second harmonic always has exactly half the wavelength (and twice the frequency) of the fundamental; the third harmonic always has exactly a third of the wavelength (and so three times the frequency) of the fundamental, and so on. For more discussion of wavelengths and frequencies, see Acoustics for Music Theory. Figure 3.13 Harmonic Series Wavelengths and Frequencies The second harmonic has half the wavelength and twice the frequency of the first. The third harmonic has a third the wavelength and three times the frequency of the first. The fourth harmonic has a quarter the wavelength and four times the frequency of the first, and so on. Notice that the fourth harmonic is also twice the frequency of the second harmonic, and the sixth harmonic is also twice the frequency of the third harmonic. Say someone plays a note, a middle C. Now someone else plays the note that is twice the frequency of the middle C. Since this second note was already a harmonic of the first note, the sound waves of the two notes reinforce each other and sound good together. If the second person played instead the note that was just a little bit more than twice the frequency of the first note, the harmonic series of the two notes would not fit together at all, and the two notes would not sound as good together. There are many combinations of notes that share some harmonics and make a pleasant sound together. They are considered consonant. Other combinations share fewer or no harmonics and are considered dissonant or, when they really clash, simply "out of tune" with each other. The scales and harmonies of most of the world's musics are based on these physical facts.
Note: In real music, consonance and dissonance also depend on the standard practices of a musical tradition, especially its harmony and tuning practices, but these are also often related to the harmonic series.

For example, a note that is twice the frequency of another note is one octave higher than the first note. So in the figure above, the second harmonic is one octave higher than the first; the fourth harmonic is one octave higher than the second; and the sixth harmonic is one octave higher than the third.

Exercise 3.4:

1. Which harmonic will be one octave higher than the fourth harmonic?
2. Predict the next four sets of octaves in a harmonic series.
3. What is the pattern that predicts which notes of a harmonic series will be one octave apart?
4. Notes one octave apart are given the same name. So if the first harmonic is an "A", the second and fourth will also be A's. Name three other harmonics that will also be A's.

A mathematical way to say this is "if two notes are an octave apart, the ratio of their frequencies is two to one (2:1)". Although the notes themselves can be any frequency, the 2:1 ratio is the same for all octaves. Other frequency ratios between two notes also lead to particular pitch relationships between the notes, so we will return to the harmonic series later, after learning to name those pitch relationships, or intervals.
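The 2:1 octave relationships described above can be checked with a few lines of arithmetic (the 110 Hz fundamental is just an illustrative choice for an "A"):

```python
# Each harmonic n has n times the fundamental's frequency
# (and 1/n of its wavelength).
fundamental = 110.0  # Hz; an illustrative "A"

harmonics = {n: n * fundamental for n in range(1, 9)}

# Two notes are an octave apart when their frequency ratio is exactly 2:1.
def is_octave(f_low, f_high):
    return f_high / f_low == 2.0

# Each even harmonic is one octave above the harmonic of half its number:
for low, high in [(1, 2), (2, 4), (3, 6), (4, 8)]:
    assert is_octave(harmonics[low], harmonics[high])

# Harmonics that are octaves of the fundamental are the powers of two:
octaves_of_first = [n for n in harmonics
                    if harmonics[n] / fundamental in (1.0, 2.0, 4.0, 8.0)]
print(octaves_of_first)  # prints [1, 2, 4, 8]
```

This also answers the pattern in the exercise: the harmonics one octave above a given harmonic n are exactly those numbered 2n.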
{"url":"https://www.opentextbooks.org.hk/ditatopic/2291","timestamp":"2024-11-07T12:07:59Z","content_type":"text/html","content_length":"223377","record_id":"<urn:uuid:de049116-cfe8-492c-850a-d94c5e108ba6>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00550.warc.gz"}
Detect Loop or Cycle in Linked List

Difficulty: Medium, Asked-in: Google, Amazon, Microsoft, Goldman Sachs, Nvidia

Key takeaway: An excellent linked list problem to learn problem-solving using fast and slow pointers (Floyd's cycle detection algorithm). We can solve many other coding questions using a similar idea.

Let’s understand the problem

Given the head of a linked list, write a program to find out if the linked list has a cycle or not. Return true if there is a cycle; otherwise, return false.

Critical note related to the problem

In a singly linked list, we are usually given a reference to the first node to perform various operations such as traversal, searching, insertion, deletion, and merging. These operations require a well-formed linked list, i.e. a list without loops or cycles. To understand why, consider what happens when a linked list has a cycle. In this case, there is no "end" node, and one of the nodes is the next node of two different nodes. This means that when we try to iterate through the list, we will encounter the nodes in the cycle multiple times, causing the iteration to fail. To avoid this, it is important to detect linked lists with cycles before applying an iterative approach.

Discussed solution approaches

• Brute force approach using hash table
• Using boolean flag inside linked list node
• Efficient approach using fast and slow pointers

Brute force approach using hash table

Solution idea

The basic idea is to traverse the linked list and use a hash table to keep track of the nodes that have been visited during the traversal. If we encounter a node that has already been visited, it means that there is a cycle in the linked list, because this node must be part of a cycle. If we reach the end of the list without encountering any visited nodes again, it means there is no cycle.
Solution code C++

bool detectLinkedListLoop(ListNode* head)
{
    unordered_map<ListNode*, bool> H;
    ListNode* currNode = head;

    while (currNode != NULL)
    {
        if (H.find(currNode) != H.end())
            return true;
        H[currNode] = true;
        currNode = currNode->next;
    }

    return false;
}

Solution code Python

def detectLinkedListLoop(head):
    H = {}
    currNode = head
    while currNode is not None:
        if currNode in H:
            return True
        H[currNode] = True
        currNode = currNode.next
    return False

Solution code Java

class Solution
{
    public boolean detectLinkedListLoop(ListNode head)
    {
        HashMap<ListNode, Boolean> H = new HashMap<>();
        ListNode currNode = head;

        while (currNode != null)
        {
            if (H.containsKey(currNode))
                return true;
            H.put(currNode, true);
            currNode = currNode.next;
        }

        return false;
    }
}

Time and space complexity analysis

The above algorithm for detecting a cycle in a linked list requires only one traversal of the list. For each node, we perform one insert operation (to add the node to the hash table) and one search operation (to check if the node has been visited before). So the time complexity of this algorithm is O(n), where n is the number of nodes in the list. This algorithm also has a space complexity of O(n) because we use a hash table to store the nodes of the linked list.

Using boolean flag inside linked list node

Solution idea

Can we optimise the above approach by avoiding the overhead of hash table operations? One way to do this is to add a "visited" flag to the structure of the linked list node. This flag can be used to mark the nodes that have been visited and detect when a node is encountered again during the traversal. To use this optimization, we can traverse the linked list using a loop and mark the "visited" flag as 1 whenever a node is visited for the first time. If we encounter a node with the "visited" flag already set to 1, it means there is a cycle in the linked list. Note that the "visited" flag is initially set to 0 for each node.
Solution code C++

struct ListNode
{
    int data;
    ListNode* next;
    int visited;

    ListNode(int val)
    {
        data = val;
        next = NULL;
        visited = 0;
    }
};

bool detectLinkedListLoop(ListNode* head)
{
    ListNode* curr = head;
    while (curr != NULL)
    {
        if (curr->visited == 1)
            return true;
        curr->visited = 1;
        curr = curr->next;
    }
    return false;
}

Solution code Python

class ListNode:
    def __init__(self, data):
        self.data = data
        self.next = None
        self.visited = 0

def detectLinkedListLoop(head):
    curr = head
    while curr is not None:
        if curr.visited == 1:
            return True
        curr.visited = 1
        curr = curr.next
    return False

Time and space complexity analysis

The above algorithm for detecting cycles in a linked list has a time complexity of O(n) because it traverses each node in the list once. However, it has a space complexity of O(n) because it uses an extra variable (the "visited" flag) for each node in the list.

Efficient solution using fast and slow pointers (Floyd's cycle detection algorithm)

Solution idea

The above solutions use O(n) space, so the critical question is: Can we optimize further and detect a linked list cycle using O(n) time and O(1) space? This idea of detecting cycles in a linked list is based on an algorithm known as Floyd's cycle finding algorithm or the tortoise and the hare algorithm. This algorithm uses two pointers, a "slow" pointer and a "fast" pointer, that move through the list at different speeds. At each iteration step, the slow pointer moves one step while the fast pointer moves two steps. If the two pointers ever meet, it means there is a cycle in the linked list. Think!

Solution steps

1. We initialize two pointers, fast and slow, with the head node of the linked list.
2. Now we run a loop to traverse the linked list. At each step of the iteration, we move the slow pointer by one position and the fast pointer by two positions.
3. If the fast pointer reaches the end of the list (i.e., fast == NULL or fast->next == NULL), there is no loop in the linked list.
Otherwise, the fast and slow pointers will eventually meet at some point, indicating a loop in the linked list. The critical question is: Why will this happen? One basic intuition is simple: If there is a loop, the fast pointer will eventually catch up to the slow pointer because it is moving at a faster pace. This is similar to two people moving at different speeds on a circular track: eventually, the faster person will catch up to the slower person.

Solution code C++

bool detectLinkedListLoop(ListNode* head)
{
    ListNode* slow = head;
    ListNode* fast = head;

    while (fast != NULL && fast->next != NULL)
    {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast)
            return true;
    }

    return false;
}

Solution code Python

def detectLinkedListLoop(head):
    slow = head
    fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False

Some critical observations

In Floyd's cycle finding algorithm, the fast pointer moves at twice the speed of the slow pointer. This means that the gap between the two pointers increases by one after each iteration. For example, after the fifth iteration, the fast pointer will have moved 10 steps and the slow pointer will have moved 5 steps, resulting in a gap of 5.

When the slow pointer enters the loop, the fast pointer is already inside the loop. Suppose at this point the fast pointer is a distance k (k < l, where l is the length of the loop) from the slow pointer. As the two pointers move through the loop, the gap between them increases by one after each iteration. When the gap becomes equal to the length of the loop (l), the pointers will meet because they are moving in a cycle of length l. The number of steps the pointers take inside the cycle before they meet at a common point is therefore l - k.
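As a quick sanity check, the two-pointer detector can be exercised on a hand-built list. The sketch below restates the Python solution (with the same node fields as above, minus the unused visited flag) so the example is self-contained:

```python
# Minimal node class and a hand-built list to exercise the
# fast/slow-pointer detector.
class ListNode:
    def __init__(self, data):
        self.data = data
        self.next = None

def detect_linked_list_loop(head):
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False

# Build 1 -> 2 -> 3 -> 4 -> 5 (no cycle).
nodes = [ListNode(i) for i in range(1, 6)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
assert detect_linked_list_loop(nodes[0]) is False

# Close a cycle: node 5 points back to node 3.
nodes[-1].next = nodes[2]
assert detect_linked_list_loop(nodes[0]) is True
```

Note that the cycle is detected without ever comparing node values; only the node identities (references) matter.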
Time and space complexity analysis

• Total number of nodes in the linked list = n
• The length of the linked list cycle (if any) = l
• The distance of the cycle's starting point from the beginning = m. Here l + m = n.
• When the slow pointer enters the loop, the fast pointer's distance from the slow pointer = k.

Case 1: When there is no loop in the linked list. The fast pointer will reach the end after n/2 steps. So, time complexity = O(n).

Case 2: When there is a loop in the linked list. Both pointers will move m steps before the slow pointer enters the loop. Inside the loop, both pointers will travel (l - k) steps before meeting at some common point.

Time complexity = total number of steps traveled before the pointers meet at the common point = m + (l - k) = l + m - k = n - k (equation 1).

If the slow pointer travels a distance m from the start to reach the bridge point, then the fast pointer will travel a distance 2m from the start. There are two cases:

When m < l: k = m, and the total number of steps traveled before the pointers meet at a common point = n - k = n - m (from equation 1). So the time complexity in the worst case = O(n), where the worst-case scenario occurs when m = 1.

When m >= l: In this situation, the fast pointer will first move a distance m to reach the bridge point, revolving m/l times in the cycle. So the distance of the fast pointer from the bridge point is k = m mod l. The total number of steps traveled before the pointers meet at the common point = n - k = n - m mod l (from equation 1). So the time complexity in the worst case = O(n), where the worst-case scenario occurs when m = l.

Overall, the time complexity is O(n) in the worst case. The algorithm works in constant space, so the space complexity = O(1).

Critical ideas to think!

• How do we modify the above approaches to remove the loop?
• Design an algorithm to find the loop length in the linked list.
• If there is a loop in the linked list, what would be the best-case scenario of the fast and slow pointer approach?
• How do we find the bridge node if there is any loop present?
• Can we solve this problem using some other approach?
• Is it necessary to move the fast pointer at double speed? What will happen if we move the fast pointer at four times the speed of the slow pointer?

Comparison of time and space complexities

• Using hash table: Time = O(n), Space = O(n)
• Using linked list augmentation: Time = O(n), Space = O(n)
• Floyd’s cycle detection algorithm: Time = O(n), Space = O(1)

Suggested problems to practice

• Remove loop from the linked list
• Return the node where the cycle begins in a linked list
• Swap list nodes in pairs
• Reverse a linked list in groups of a given size
• Intersection point in Y shaped linked lists

Enjoy learning, Enjoy algorithms!
{"url":"https://www.enjoyalgorithms.com/blog/detect-loop-in-linked-list/","timestamp":"2024-11-12T12:24:26Z","content_type":"text/html","content_length":"88605","record_id":"<urn:uuid:4824e574-166e-47d9-8a81-5df45475d109>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00273.warc.gz"}
How to extract the overlap of two samples from Cobaya and draw the contour plot?

Good afternoon,

Thank you for welcoming me into the CosmoCoffee forum! I have a question about Cobaya. I have to check the distribution that results from the overlap of two or more samples drawn with the Cobaya package (for simplicity, let's consider only two for the moment). For the first distribution, the code is the following:

gdsamples1of3 = MCSamplesFromCobaya(updated_info, products.products()["sample"], ignore_rows=0.3)
gdplot = gdplt.getSubplotPlotter(width_inch=5)
p1 = gdsamples1of3.getParams()
gdsamples1of3.addDerived(O_m1, name='O_m1', label=r"\Omega_{0m}")
gdsamples1of3.addDerived(H01, name='H01', label=r"H_0(Km \: s^{-1} \: Mpc^{-1})")
gdplot.triangle_plot(gdsamples1of3, ["O_m1","H01"], filled=True)

while for the second the code is similar:

gdsamples2of3 = MCSamplesFromCobaya(updated_info, products.products()["sample"], ignore_rows=0.3)
gdplot = gdplt.getSubplotPlotter(width_inch=5)
p1 = gdsamples2of3.getParams()
gdsamples2of3.addDerived(O_m1, name='O_m1', label=r"\Omega_{0m}")
gdsamples2of3.addDerived(H01, name='H01', label=r"H_0(Km \: s^{-1} \: Mpc^{-1})")
gdplot.triangle_plot(gdsamples2of3, ["O_m1","H01"], filled=True)

When I run the codes and plot the distributions, I obtain the overlap of the whole contours, e.g. in the following case with multiple distributions:

plot=g.plot_2d([gdsamples1of3,gdsamples2of3,gdsamples3of3,gdsamplesfull],'O_m1','H01',filled=False,colors=['cyan','orange','lime','red'],lims=[0, 1, 64, 76])

What I would like to obtain is only the sub-distribution given by the overlap of all the contours, so that I can plot it alone. I drew it in black in the following image. Can you please clarify how to do it?

Thank you in advance!

Re: How to extract the overlap of two samples from Cobaya and draw the contour plot?
Hi Simone,

I have tried to solve your problem by following https://getdist.readthedocs.io/en/latest/plot_gallery.html. This is one example plot from the Plot Gallery. I have followed some steps there to plot only the overlap region between the two distributions... I created a grid of points and checked which of them lie within both contours. Then I created a mask that shows only the overlapping points.

%matplotlib inline
g = plots.get_single_plotter(width_inch=3, ratio=1)
g.plot_2d([samples, samples2], 'x1', 'x2', filled=True)

# Get the paths from the contour plots
paths1 = g.subplots[0][0].collections[0].get_paths()
paths2 = g.subplots[0][0].collections[1].get_paths()

# Create a grid of points in the range of your plots
x = np.linspace(-5, 5, 500)
y = np.linspace(-5, 5, 500)
X, Y = np.meshgrid(x, y)
points = np.c_[X.ravel(), Y.ravel()]

for path1 in paths1:
    for path2 in paths2:
        # Check which points are inside each contour
        mask1 = path1.contains_points(points).reshape(X.shape)
        mask2 = path2.contains_points(points).reshape(X.shape)
        # Find overlapping region and plot it
        overlap_mask = np.logical_and(mask1, mask2)
        g.subplots[0][0].scatter(X[overlap_mask], Y[overlap_mask], color='purple', s=1, alpha=0.6)

# Remove the original unfilled contours
for collection in g.subplots[0][0].collections:
    collection.remove()

g.add_legend(['Overlap Region'], colored_text=True);

Please let me know if this approach solves your problem!

Attachments: plot1.png, plot2.png

Re: How to extract the overlap of two samples from Cobaya and draw the contour plot?

I assume the contour is supposed to be the joint constraint. In general you cannot do this without generating new samples by sampling from the joint likelihood (using all three bins).
(If you have more than 2 parameters, the overlap of the 2D projected contours is in general not the same as the 2D projection of the ND-intersection of the ND distributions.)

Re: How to extract the overlap of two samples from Cobaya and draw the contour plot?

Dear All,

Thank you for your support. The graphical solution of Aruna Harikant is useful for graphically showing the intersection, and surely I will leverage it, but my idea was to consider the joint distribution of the posteriors. As Antony Lewis was pointing out, I should rewrite the likelihood considering the joint contribution of the three or more bins. But in this case it is already given, since it is the one in red called "Full Pantheon". Thank you again for your kind support.

Re: How to extract the overlap of two samples from Cobaya and draw the contour plot?

Hi Simone,

I'm grateful for Prof. Lewis's insights, and based on his suggestions I am addressing this issue. Accordingly, the focus of this problem is on separating the joint constraint. For example, I think we can separate it out; one method is to apply kernel density estimation. Please let me know if you still face any problems separating it out.

Re: How to extract the overlap of two samples from Cobaya and draw the contour plot?

Dear Aruna Harikant,

This is a very good idea! So, what should I do with the code? Extract the posteriors for the different bins as lists of values, join together the values and run a KDE method in Python? Thanks to you and Prof. Lewis for your kind support!

Re: How to extract the overlap of two samples from Cobaya and draw the contour plot?

Hi both,

As stated by Antony a few posts above, this is not correct: the joint constraint is imposed by the product of the likelihoods, which intuitively is closer to the intersection than the union (what you are doing). As Antony said, you have to generate samples from the joint posterior, containing the two likelihoods.
Alternatively, you can use post-processing ( https://cobaya.readthedocs.io/en/latest/post.html) to reweight ("add") one of the samples with the other likelihood. Since you have samples from each individual likelihood, you can reweight each with the other likelihood to check that you get consistent results (if the result differs significantly between the two approaches, you cannot reweight, and need to sample from the joint posterior). Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Dear Prof. Torrado, Thank you for the further clarification. I will proceed with the joint likelihoods or the alternative approach you suggested. With best regards,
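The point made in the thread — that the joint constraint is the product of the likelihoods, not the union or graphical overlap of the contours — can be illustrated in one dimension with two Gaussian likelihoods; the numbers below are purely illustrative:

```python
import numpy as np

# Two independent Gaussian likelihoods for the same parameter
# (means and widths are purely illustrative).
m1, s1 = 70.0, 2.0
m2, s2 = 68.0, 1.0

x = np.linspace(60.0, 80.0, 2001)
L1 = np.exp(-0.5 * ((x - m1) / s1) ** 2)
L2 = np.exp(-0.5 * ((x - m2) / s2) ** 2)

# The joint constraint is the PRODUCT of the likelihoods
# (assuming a flat prior), not the union of the contours.
joint = L1 * L2

# For Gaussians the product is again Gaussian, with an
# inverse-variance-weighted mean and a SMALLER variance:
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
mean_expected = (w1 * m1 + w2 * m2) / (w1 + w2)  # = 68.4
mean_numeric = (x * joint).sum() / joint.sum()
var_numeric = (((x - mean_numeric) ** 2) * joint).sum() / joint.sum()

assert abs(mean_numeric - mean_expected) < 1e-3
assert var_numeric < min(s1, s2) ** 2  # tighter than either input
```

The combined distribution is narrower than either individual likelihood, which is exactly why drawing only the geometric overlap of two contours understates the information in the joint constraint.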
{"url":"https://cosmocoffee.info/viewtopic.php?p=10311&sid=335c2960e91537cd63f4eca744aa2740","timestamp":"2024-11-03T09:53:11Z","content_type":"text/html","content_length":"51836","record_id":"<urn:uuid:5fe55459-9e31-4b34-ba59-a141e32e290b>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00193.warc.gz"}
RE: st: how can i make my loop run faster?

Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

From Stefano Rossi <[email protected]>
To "[email protected]" <[email protected]>
Subject RE: st: how can i make my loop run faster?
Date Mon, 19 Sep 2011 10:50:40 -0400

Many thanks for this, it is very helpful. This raises one question, though: a crucial part of my procedure is that I need to run regressions only on 12 observations for each firm-period pair; that is, if a firm i has data back to period t=-50, say, I still have to run the regression only on the 12 observations from -1 to -12, ignoring all others. This worked well with my loop, but I do not see readily how to do this with statsby. Can you please advise?

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Partho Sarkar
Sent: Monday, September 19, 2011 1:06 AM
To: [email protected]
Subject: Re: st: how can i make my loop run faster?

You don't seem to be actually making any use of the panel structure of the data. Stata has very neat built-in procedures for dealing with such data. Very briefly, 2 pointers (I am ignoring the special wrinkle in your problem that you want to run 20 separate regressions for each "firm i-period t" pair - you would have to adapt the procedure accordingly):

A. I would use -tsfill, full- to fill in the time values and balance the panel.

B. If you use tsset panelvar datevar (or xtset), where panelvar is your panel identifier, and datevar the date variable, you can use:

statsby _b _se, by(panelvar): regress y x

to do all the regressions in one go (assuming a single regression for each "firm i-period t" pair), rather than separately within a long loop. You can collect the results saved in r-class macros, as with _b & _se above.
See -help statsby- Having said all that, I have never tried to run a set of regressions with 30,000 firms & 200 time periods in a single run of a program!!! I suspect this will be painfully slow no matter how efficient your code. An obvious alternative would be to split the firms into, say, 10 subsets, do the regression for each subset, and put all the results Hope this helps Partho Sarkar Consultant Econometrician Indicus Analytics New Delhi, India On Mon, Sep 19, 2011 at 5:22 AM, Stefano Rossi <[email protected]> wrote: > Dear Statalist Users, > I wonder if you can help me make a faster loop? > I have an unbalanced panel of about 30,000 firms and 200 periods, and for each "firm i-period t" pair I need to run 10 regressions on the 12 observations from t-1 to t-12 of the same firm i, and another 10 regressions on the observations from t+1 to t+12 of the same firm i. I have come up with the following program, which works well as it does what it should do, but it is very slow (due to the many ifs I suspect) - here's a simplified version of it with just two regressions: > forval z = 1/30000 { > levelsof period if firm==`z', local(sample) > foreach j of local sample { > local k = `j' - 13 > capture reg y x if firm ==`z' & period<`j' & period>`k' & indicator==1 > if _rc==0 { > predict y_hat, xb > replace before = y_hat[_n-1] if firm == `z' & period == `j' > drop y_hat > } > local w = `j' + 13 > capture reg y x if firm ==`z' & period>`j' & period<`w' & indicator==1 > if _rc==0 { > predict y_hat, xb > replace after = y_hat[_n+1] if firm == `z' & period == `j' > drop y_hat > } > } > } > Right now, it takes several minutes for each firm, so if I run it for the whole sample it would take weeks. > Is there any way to make it (a lot) faster? 
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
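The windowed-regression idea from the thread (for each period, regress on the 12 observations strictly before it, per firm) can be sketched outside Stata. A toy numpy illustration on synthetic single-firm data — all names, sizes, and coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-firm series: y depends on x with a little noise.
T = 60
x = rng.normal(size=T)
y = 2.0 + 0.5 * x + 0.05 * rng.normal(size=T)

def window_slope(t, width=12):
    """OLS slope estimated from the `width` observations strictly
    before period t, mirroring the thread's periods t-12 .. t-1."""
    lo = t - width
    if lo < 0:
        return None  # not enough history yet
    X = np.column_stack([np.ones(width), x[lo:t]])
    coef, *_ = np.linalg.lstsq(X, y[lo:t], rcond=None)
    return coef[1]

betas = [window_slope(t) for t in range(T)]

# The first 12 periods have no complete backward window:
assert betas[:12] == [None] * 12
# With low noise, every windowed slope sits near the true 0.5:
assert all(abs(b - 0.5) < 0.3 for b in betas[12:])
```

This only illustrates the windowing logic; inside Stata the same per-window restriction would have to be imposed on whatever command (such as the thread's statsby suggestion) runs the regressions.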
{"url":"https://www.stata.com/statalist/archive/2011-09/msg00792.html","timestamp":"2024-11-14T15:18:02Z","content_type":"text/html","content_length":"15303","record_id":"<urn:uuid:de9a865a-f3f5-48d1-98b3-778423e7811f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00036.warc.gz"}
How to Use Place Value Blocks to Compare Decimals

Place value blocks, also known as base-10 blocks, can be used to compare decimals visually. These manipulatives come in various sizes to represent ones (or units), tenths, hundredths, and thousandths.

A Step-by-step Guide to Using Place Value Blocks to Compare Decimals

Using place value blocks is an effective way to visually compare decimals. Here’s a step-by-step guide on how to use place value blocks to compare decimals:

Step 1: Identify the decimals: Write down the decimal numbers you want to compare.
{"url":"https://www.effortlessmath.com/math-topics/how-to-use-place-value-blocks-to-compare-decimals/","timestamp":"2024-11-06T14:26:06Z","content_type":"text/html","content_length":"90545","record_id":"<urn:uuid:80ab2464-7b9b-4178-9961-eb8f355bcfb4>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00215.warc.gz"}
Chapter 14 Model diagnostics | Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM) \( \newcommand{\mathbbm}[1]{\boldsymbol{\mathbf{#1}}} \) Chapter 14 Model diagnostics In this chapter, we investigate how ADAM can be diagnosed and improved. Most topics will build upon the typical model assumptions discussed in Subsection 1.4.1 and in Chapter 15 of Svetunkov (2022). Some of the assumptions cannot be diagnosed properly, but there are well-established instruments for others. All the assumptions about statistical models can be summarised as follows: 1. Model is correctly specified: 2. Residuals are i.i.d.: 1. The distribution of residuals does not change over time. 3. The explanatory variables are not correlated with anything but the response variable: 1. No endogeneity (not discussed in the context of ADAM). Technically speaking, (3) is not an assumption about the model, it is just a requirement for the estimation to work correctly. In regression context, the satisfaction of these assumptions implies that the estimates of parameters are efficient and unbiased (respectively for (3a) and (3b)). In general, all model diagnostics are aimed at spotting patterns in residuals. If there are patterns, then some assumption is violated and something is probably missing in the model. In this chapter, we will discuss which instruments can be used to diagnose different types of violations of assumptions. Remark. The analysis carried out in this chapter is based mainly on visual inspection of various plots. While there are statistical tests for some assumptions, we do not discuss them here. This is because in many cases human judgment is at least as good as automated procedures (Petropoulos et al., 2018b), and people tend to misuse the latter (Wasserstein and Lazar, 2016). So, if you can spend time on improving the model for a specific data, the visual inspection will typically suffice. 
To make this more actionable, we will consider a conventional regression model on Seatbelts data, discussed in Section 10.6. We start with a pure regression model, which can be estimated equally well with the adam() function from the smooth package or the alm() from the greybox in R. In general, I recommend using alm() when no dynamic elements are present in the model (or only AR(p) and/or I(d) are needed). Otherwise, you should use adam() in the following way: This model has several issues, and in this chapter, we will discuss how to diagnose and fix them. • Petropoulos, F., Kourentzes, N., Nikolopoulos, K., Siemsen, E., 2018b. Judgmental Selection of Forecasting Models . Journal of Operations Management. 60, 34–46. • Svetunkov, I., 2022. Statistics for business analytics. version: 31.10.2022 • Wasserstein, R.L., Lazar, N.A., 2016. The ASA’s Statement on p-Values: Context, Process, and Purpose . American Statistician. 70, 129–133.
{"url":"https://openforecast.org/adam/diagnostics.html","timestamp":"2024-11-05T13:38:35Z","content_type":"text/html","content_length":"72496","record_id":"<urn:uuid:81fd5984-78f2-407b-a87c-795ce9b6eb21>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00521.warc.gz"}
How many employees are on leave during Easter holidays [Homework] » Chandoo.org - Learn Excel, Power BI & Charting Online How many employees are on leave during Easter holidays [Homework] Easter is around the corner. After what seemed like weeks of lousy weather, finally the sun shone today. I capitalized on the day by skipping work, walking kids to school, taking Jo out for some shopping, enjoying a leisurely walk / cycling with Nishanth in the park and almost forgetting about the blog. But it is dark now and before tucking the kids in, let me post a short but interesting home work problem. Let’s say you are HR manager at Egg Co. and you are looking at the vacation plans of your team. Easter is your busiest time and it would be a bummer if a majority of your staff are on leave during the Easter season (14th of April to 28th of April, 2017). So you want to know how many people are on leave. This is how your data (table name: lvs) looks: Click here to download the sample file. You want to answer below three questions: 1. How many employees are on leave during Easter holidays (14th of April to 28th of April)? 2. How many employees are on approved vacation during Easter holidays? 3. How many employees in “Team ninja” are on approved leave during Easter holidays? Assume team employee numbers are in named range ninja For first question, assume that any employees whose leave is pending will be approved. Also, assume that Easter season start & end dates are in cells P4 & P5 respectively. You can use formulas, pivot tables, power pivot measures, VBA or pixie dust to solve the problem. If using pivot table approach, just explain how you would solve in words. For other methods, please post your solution in the comments. Go ahead and post your questions. Want some hints..? What is an Easter themed homework without some clues? So here we go All the best. The weekend forecast is blue skies and light winds. Finally, we will be checking out walking trials in Trelissick park. 
Hello Awesome... My name is Chandoo. Thanks for dropping by. My mission is to make you awesome in Excel & your work. I live in Wellington, New Zealand. When I am not F9ing my formulas, I cycle, cook or play lego with my kids. Know more about me. I hope you enjoyed this article. Visit Excel for Beginner or Advanced Excel pages to learn more or join my online video class to master Excel. Thank you and see you around. Related articles: Written by Chandoo Tags: between formula, challenge, date and time, excel formulas, homework, logical operators in excel Home: Chandoo.org Main Page ? Doubt: Ask an Excel Question 37 Responses to “How many employees are on leave during Easter holidays [Homework]” 1. This is very similar to consecutive leave problem I had. I have posted a video solution here: Further, I have also launched a Power Query course here: Have a look and please suggest. □ Hi Chandoo! I'm resolved Using Pivot Tables and filtering by Dates filtes. I Got: Q1 = 272 | Q2 = 115 | Q3 = 5 I did 3 pivot tables, the first one Leave Start and Leave end in Rows, Status in columns and Status in values, after that, I only used for Leave Start: Date Filters, Is after and specified is after: 4/13/17 for Leave End, Date filters, is Before 4/29/17 and Booom! the second and third pivot table I only used some filters trick and ready..! I think than my results are corrects! Regars! From Tijuana Mexico. 2. 1. 340=COUNTIFS(lvs[Leave Start],">="&$P$3,lvs[Leave End],"="&$P$3,lvs[Leave End],"="&$P$3,lvs[Leave End],"<="&$P$4,lvs[Status],"Approved",lvs[Emp Number],ninja) 3. =COUNTIFS(lvs[Leave Start],">=42839",lvs[Leave End],"<=42853") 4. 1. Formula for answer to 1 question - COUNTIFS(lvs[Leave Start],">=14-04-17",lvs[Leave End],"<=28-04-17")+COUNTIFS(lvs[Leave Start],"=14-04-17",lvs[Leave End],"=14-04-17",lvs[Leave Start],"28-04-17")+COUNTIFS(lvs[Leave Start],"=28-04-17") = 417 2. 
Formula for answer to 2 question -COUNTIFS(lvs[Leave Start],">=14-04-17",lvs[Leave End],"<=28-04-17",lvs[Status],"Approved")+COUNTIFS(lvs[Leave Start],"=14-04-17",lvs[Leave End],"= 14-04-17",lvs[Leave Start],"28-04-17",lvs[Status],"Approved")+COUNTIFS(lvs[Leave Start],"=28-04-17",lvs[Status],"Approved") = 293 3. Formula for answer to 3 question - COUNTIFS(lvs[Leave Start],">=14-04-17",lvs[Leave End],"<=28-04-17",lvs[Status],"Approved",lvs[Emp Number],ninja)+COUNTIFS(lvs[Leave Start],"=14-04-17",lvs [Leave End],"=14-04-17",lvs[Leave Start],"28-04-17",lvs[Status],"Approved",lvs[Emp Number],ninja)+COUNTIFS(lvs[Leave Start],"=28-04-17",lvs[Status],"Approved",lvs[Emp Number],ninja) = 2 For every answer there are 4 scenarios: A. Leave STARTS before 14-Apr-17 but ends between 14-04-17 and 28-04-17. B. Leave STARTS after 14-Apr-17 and ends before 28-04-17. C. Leave STARTS between 14-Apr-17 and 28-04-17 but ends after 28-04-17. D. Leave STARTS before 14-Apr-17 but ends after 28-04-17. Hence, we get the above answers. Hope this is correct thinking.. □ Everyone seems to have a different answer! Let me walk through question 1, using just the autofilter to count numbers: 1. There are 1044 people with "Approved" holiday 2. Of which 59 have their holiday starting after 28-04-2017. We exclude all these 3. A further 692 have their holiday ending before 14-04-2017. We exclude all these 4. 1044 - 59 - 692 = 293 5. Now repeat this process for "Pending" holidays: 6. 151 - 10 - 96 = 45 7. Add the results for approved and pending gives = 338 It's not pretty but the following formula returns that answer: COUNTIFS(lvs[Status],"Approved",lvs[Leave End],""&$P$5)+ COUNTIFS(lvs[Status],"Pending",lvs[Leave End],""&$P$5) ☆ Ergh, the comments software seems to have chopped out big chunks of my formula! It doesn't seem to like working with less than and greater than symbols. ☆ Yes, but there are duplicate Emp Numbers, you have to account for that. 5.
For 1st = =COUNTIFS(lvs[Leave Start],">=42839",lvs[Leave End],"=42839",lvs[Leave End],"=42839",lvs[Leave End],"<=42853",lvs[Status],"Approved",lvs[Emp Number],ninja)) = 5 Dear chandoo, Kindly specify if you want the count of leave is starting from 14/04/2017 and ending on or before 28/04/2017. There is an ambiguity in that question. 6. For 1st = =COUNTIFS(lvs[Leave Start],">=42839",lvs[Leave End],"=42839",lvs[Leave End],"=42839",lvs[Leave End],"<=42853",lvs[Status],"Approved",lvs[Emp Number],ninja)) Dear chandoo, Kindly specify if you want the count of leave is starting from 14/04/2017 and ending on or before 28/04/2017. There is an ambiguity in that question. 7. Q1 - 408 Q2 - 286 Q3 - 5 My method: Add two helper columns. Helper 1 "Is Leave During Easter" : =OR(AND([@[Leave Start]]>=$Q$4,[@[Leave Start]]=$Q$4,[@[Leave End]]<=$Q$5)) Helper 2 "Is Employee on Team Ninja?" : =ISNUMBER(MATCH([@[Emp Number]],ninja,0)) Then Insert a Pivot Table, adding the Data to Data Model. Adding to Data Model allows you to get a distinct count instead of a regular count (the questions ask "How many Employees", not "How many leave requests"). Once in a pivot table, you can add filters to isolate leave requests during the holidays, with a distinct count by employee id. Then you can add the filter for Status=Approved. Finally, you can get a distinct count for those on team ninja. □ Actually, I take back my response for Q2. I failed to notice that the leave Type must be vacation. The correct answer is 139. 8. As Josh noticed, the biggest challenge is to avoid counting duplicates, but I'll have to disagree slightly with the results of his formula/pivot work. 1. Being "on leave" for Q1 means Status is not Cancelled or Declined 2. Unless explicitly stated (e.g. vacation), "leave" means any Leave Type 3. Start and end dates are inclusive (i.e. 
any leave that ends on April 14 or starts on April 28 is counted as an overlap) Q1 = 331 =SUMPRODUCT(SIGN(FREQUENCY(lvs[Emp Number]*($P$4<=lvs[Leave End])*(lvs[Leave Start]<=$P$5)*ISNUMBER(MATCH(lvs[Status],{"Approved","Pending"},0)),lvs[Emp Number])))-1 Q2 = 139 =SUMPRODUCT(SIGN(FREQUENCY(lvs[Emp Number]*($P$4<=lvs[Leave End])*(lvs[Leave Start]<=$P$5)*(lvs[Status]="Approved")*(lvs[Leave Type]="Vacation"),lvs[Emp Number])))-1 Q3 = 4 =SUMPRODUCT(SIGN(FREQUENCY(lvs[Emp Number]*($P$4<=lvs[Leave End])*(lvs[Leave Start]<=$P$5)*(lvs[Status]="Approved")*ISNUMBER(MATCH(lvs[Emp Number],ninja,0)),lvs[Emp Number])))-1 10. I neither used any formula nor pivot table. Solved using Advanced Filter 11. I can't see why the table would look like that from the beginning. □ Sorry! New Numbers: Q1) 267 Q2) 237 Q3) 4 13. q1) 338 =SUM(COUNTIFS(lvs[Leave Start],"=" &$P$4,lvs[Status],{"Approved","Pending"}))+SUM(COUNTIFS(lvs[Leave Start],">"&$P$4,lvs[Leave Start],"=" &$P$4,lvs[Leave Start], "<="&$P$5,lvs[Leave 14. Hi Chandoo, To be honest, i am not really good in Excel, however, this article has given me some insight on how to use it efficiently. You will be glad to know now even I have taught a little bit to my friend. Hope to get a more post from you in the future. Thank you 15. Wow, there's a lot of different numbers being proposed. Here's the way I looked at it: I made a helper column to take out anything "Declined" or "Cancelled", then to determine if any part of the leave fell within the Easter Holiday Season range. Formula used for this part is: '=IF(AND([@Status]"Declined",[@Status]"Cancelled"),MAX(0,MIN($P$5,[@[Leave End]])-MAX($P$4,[@[Leave Start]])+1),0) Result is the number of days of leave that fall in the Easter Holiday Season (on leave DURING, not on leave for the ENTIRE Period). 
Then, for Q1, I simply counted the values in my helper column that were greater than 0: Result = 338 For Q2 the question is ambiguous, since the assumption is that Pending will be approved, so the answer would be the same as Q1. However, taking it at face value, taking out the Pending and only counting the Approved, my formula is: Result is 293 Haven't tackled Q3 yet. □ Website hosed my Helper formula. Between [@Status] and "Cancelled" there should be GreaterThanLessThan symbols. ☆ Sometimes I amaze myself by what I miss. For Q2, I missed "Vacation". Updated formula (NOT using a Helper Column) is: '=COUNTIFS(lvs[Leave Start],"="&P4,lvs[Status],"Approved",lvs[Leave Type],"Vacation") Result is: 141 Updated formula for Q1 (NOT using Helper) is: '=COUNTIFS(lvs[Leave Start],"="&P4,lvs[Status],"Pending")+COUNTIFS(lvs[Leave Start],"="&P4,lvs[Status],"Approved") Result is still: 338 For Q3, the answer without duplicates is 5, but I've yet to come up with a formula solution to arrive at that. ○ Grrr.... all the "=" signs in my last post or either Less Than or Equal, or Greater Than or Equal. ■ I'm tired of my formulas being hosed. LT = Less Than, GT = Greater Than: '=COUNTIFS(lvs[Leave Start],"LT="&P5,lvs[Leave End],"GT="&P4,lvs[Status],"Approved",lvs[Leave Type],"Vacation") '=COUNTIFS(lvs[Leave Start],"LT="&P5,lvs[Leave End],"GT="&P4,lvs[Status],"Pending")+COUNTIFS(lvs[Leave Start],"LT="&P5,lvs[Leave End],"GT="&P4,lvs[Status],"Approved") 16. For Q2: 141 =COUNTIFS(lvs[Leave Start], ">=" &$P$4,lvs[Leave Start], "<="&$P$5,lvs[Leave Type],"Vacation",lvs[Status],"Approved")+COUNTIFS(lvs[Leave Start],"="&$P$4,lvs[Leave Type],"Vacation",lvs 17. 1Q. How many employees are on leave during Easter holidays (14th of April to 28th of April)? Ans: =COUNTIFS(lvs[[#All],[Status]],"Declined",lvs[[#All],[Status]],"Cancelled",lvs[[#All],[Leave Start]],">="&P4,lvs[[#All],[Leave Start]],"<="&P5) 2Q. How many employees are on approved vacation during Easter holidays? 
Ans: =COUNTIFS(lvs[[#All],[Leave Type]],"Vacation",lvs[[#All],[Status]],"Declined",lvs[[#All],[Status]],"Cancelled",lvs[[#All],[Status]],"pENDING",lvs[[#All],[Leave Start]],">="&P4,lvs[[#All], [Leave Start]],""&P4,lvs[[#All],[Leave Start]],"<="&P5) 18. 1 - Formula : COUNTIFS(lvs[Leave Start];">="&P4;lvs[Leave Start];"="&P4;lvs[Leave Start];"="&P4;lvs[Leave Start];"="&MIN(ninja);lvs[Emp Number];"<="&MAX(ninja);lvs[Status];"Approved") answer : 6 □ Madouh, I can see you got the right answer, but I can't get your formula to work correctly. I converted the semicolons to commas, and the formula calculates, but the result is zero. Did something get left out, or mistyped, in the formula you posted? 19. Here is the modified and correct result: 1) How many employees are on leave during Easter holidays (14th of April to 28th of April)? Ans: Logic used - Counted all the employees are whose leave status is either Approved or Pending and deducted : a) the count of emp whose leave start and end dates are before holiday start (14th Apr) b) the count of emp whose leave start and end dates are after holiday end (28th Apr). This gives 338 which is resultant of 1195-788-69 based on the above logic and the formula is: =SUM(COUNTIFS(lvs[[#All],[Status]],{"Approved","Pending"}))-SUM(COUNTIFS(lvs[[#All],[Status]],{"Approved","Pending"},lvs[[#All],[Leave Start]],"<"&P4,lvs[[#All],[Leave End]],""&P5,lvs[[#All], [Leave End]],">"&P5)). 2) How many employees are on approved vacation during Easter holidays? 
Ans: Applying the same logic as explained above we have to count all the employees whose leave status is approved and leave type is vacation and then deduct: "Refer answer section of 1st This gives 141 which is resultant of 490-318-31 and the formula is: =COUNTIFS(lvs[[#All],[Status]],"Approved",lvs[[#All],[Leave Type]],"Vacation")-COUNTIFS(lvs[[#All],[Status]],"Approved",lvs[[#All],[Leave Type]],"Vacation",lvs[[#All],[Leave Start]],"<"&P4,lvs[[# All],[Leave End]],""&P5,lvs[[#All],[Leave End]],">"&P5). 3) How many employees in "Team ninja" are on approved leave during Easter holidays? Ans: Applying the same logic as explained in the above 2 questions, below is the formula. =SUM(COUNTIFS(lvs[[#All],[Emp Number]],"="&ninja,lvs[[#All],[Status]],"Approved"))-SUM(COUNTIFS(lvs[[#All],[Emp Number]],"="&ninja,lvs[[#All],[Status]],"Approved",lvs[[#All],[Leave Start]],"<"& P4,lvs[[#All],[Leave End]],""&P5,lvs[[#All],[Leave End]],">"&P5)) Though the formula appears lengthy, this is conventional way which matches with the count when cross checked using filter criteria. I would be happy to look easier and tricky way of writing formula from others and especially Chandoo 🙂 20. Q1: □ =SUM(COUNTIF(Status,{"Approved","Pending"}))-SUM(COUNTIFS(Status,{"Approved","Pending"},L_End,""&E_End)) 23. Q1: 338 Approved or pending Leave starts on or before 28 apr Leave ends on or after 14 apr =SUMPRODUCT((lvs[Leave Start]=$P$4)*ISNUMBER(MATCH(lvs[Status];{"approved";"pending"};0))) Q2: 141 Approved & type is vacation =SUMPRODUCT((lvs[Leave Start]=$P$4)*(lvs[Status]="approved")*(lvs[Leave Type]="vacation")) Q3: 6 assuming type of leave doesn't matter =SUMPRODUCT((lvs[Leave Start]=$P$4)*(lvs[Status]="approved")*ISNUMBER(MATCH(lvs[Emp Number];ninja;0))) 24. Sorry for the mistakes!! First I have created a table column named Ninja? with this formula: =SI(ESERROR(BUSCARV([@[Emp Number]];ninja;1;FALSO));"N";"S") This tells me whether the employee belongs to Ninja Team. 
I've only used the sumproduct function filtering all the columns based on the example request. Finally, my results are: Q1: 272 =SUMAPRODUCTO((lvs[Leave Start]>=$P$4)*(lvs[Leave End]=$P$4)*(lvs[Leave End]=$P$4)*(lvs[Leave End]<=$P$5)*(lvs[Ninja?]="S")*(lvs[Leave Type]="Vacation")*((lvs[Status]="Approved")+(lvs[Status]=
Practical Introduction to Fatigue Analysis Using Rainflow Counting This example describes how to perform fatigue analysis to find the total damage on a mechanical component due to cyclic stress. Fatigue is the most common mode of mechanical failure and can lead to catastrophic accidents and expensive redesigns. For this reason, fatigue life prediction and damage computation are important design aspects of mechanical systems that enable choosing materials guaranteed to last as long as required. The performance level under a certain stress history is measured by the damage, which is defined as the inverse of the fatigue life. This example uses data reported in the literature [1] and simulated stress profiles to show the workflow of fatigue analysis and damage computation. Fatigue is defined as the deterioration of the structural properties of a material owing to damage caused by cyclic or fluctuating stresses. A characteristic of fatigue is the damage and loss of strength caused by cyclic stresses, with each stress much too weak to break the material [2]. The formal definition of fatigue stated by the American Society for Testing and Materials (ASTM) is as follows [3]: The process of progressive localized permanent structural change occurring in a material subject to conditions that produce fluctuating stresses and strains at some point or points and that may culminate in cracks or complete fracture after a sufficient number of fluctuations. The fatigue process occurs over a period of time, and the fatigue failure is often very sudden, with no obvious warning; however, the mechanisms involved may have been operating since the component or structure was first used. The fatigue process operates at local areas rather than throughout the entire component or structure. These local areas can have high stresses and strains due to external load transfer, abrupt changes in geometry, temperature differentials, residual stresses, or material imperfections. 
The process of fatigue involves stresses and strains that are cyclic in nature and requires more than just a sustained load. However, the magnitude and amplitude of the fluctuating stresses and strains must exceed certain material limits for the fatigue process to become critical. The ultimate cause of all fatigue failures is a crack that grows to a point such that the material can no longer tolerate the stress and suddenly fractures. The last stage of the fatigue process, known as ultimate failure or fracture, occurs when the component or structure breaks into two or more parts. In a nutshell, the fatigue process is divided into three stages: 1. Crack initiation 2. Crack propagation (crack growth) 3. Ultimate failure (fracture) Fatigue is usually divided into two categories. High-cycle fatigue occurs when typically more than 10,000 cycles of low, primarily elastic stresses cause the failure to occur. Low-cycle fatigue occurs when the maximum stress exceeds the yield strength, leading to general plastic deformation. Stress and Strain Stress, usually denoted by $\sigma$, is the measure of an external force, $F$, acting over the cross-sectional area, $A$, of an object [4]. Stress has units of force per area. The SI unit is the pascal, abbreviated Pa: 1 Pa = 1 N/m$^2$. A unit commonly used in the United States is pounds per square inch, abbreviated psi: 1 psi = 1 lb/in$^2$. Stress can be constant- or variable-amplitude. In this example, the variable-amplitude stress profile shown below is applied to a mechanical component made of steel UNS G41300 and the damage due to this profile is computed. tg = (0:length(sg)-1)'/Fs; Stress is often characterized by its amplitude $\sigma_a$, mean $\sigma_m$, maximum $\sigma_{max}$, minimum $\sigma_{min}$, and range $\Delta\sigma$. These parameters are shown in the figure below.
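These cycle parameters follow directly from the cycle's extremes: $\sigma_a = (\sigma_{max}-\sigma_{min})/2$, $\sigma_m = (\sigma_{max}+\sigma_{min})/2$, and $\Delta\sigma = \sigma_{max}-\sigma_{min}$. A quick Python sketch with illustrative numbers:

```python
def cycle_params(s_max, s_min):
    """Amplitude, mean, and range of a stress cycle from its extremes."""
    amplitude = (s_max - s_min) / 2
    mean = (s_max + s_min) / 2
    stress_range = s_max - s_min
    return amplitude, mean, stress_range

# A cycle swinging between 80 and -20 kpsi (illustrative numbers):
sa, sm, dr = cycle_params(80.0, -20.0)
print(sa, sm, dr)  # 50.0 30.0 100.0
```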
A half cycle is a pair of two consecutive extrema in the stress signal, going from a minimum to a maximum or from a maximum to a minimum. For a variable-amplitude stress history, the definition of one cycle is not clear, and hence reversals are often counted instead. Two consecutive half cycles or reversals constitute a full cycle. When stress $\sigma=F/A$ is applied to an object, the object deforms. Deformation is a measure of how much an object is elongated. Strain, denoted by $\epsilon$, is the ratio $\epsilon=\delta/L$ between the deformation $\delta$ and the original length $L$. Stress and strain are related by a constitutive law. The more tensile stress is applied to the object, the more the object is deformed. For small values of strain, the relation between stress and strain is linear, i.e., $\sigma \propto \epsilon$. This linear relationship is known as Hooke's law, and the proportionality factor (often denoted by $E$) is known as Young's elastic modulus. The region where Hooke's law holds is referred to as the elastic region. Higher values of stress result in a nonlinear stress-strain relation in the plastic region. The figure shows a typical stress-strain plot for a ductile metal like steel. The elastic limit is the limiting value of stress up to which the material is perfectly elastic and returns to its original position when the stress is withdrawn. The yield point is the threshold after which the component material undergoes permanent plastic deformation and cannot relax back to the original shape when the stress is removed. There is an extensive plastic region that allows for the material to be drawn into wires (ductile) or beaten into sheets and easily shaped (malleable). The highest point on the graph is the ultimate tensile strength (UTS), which is the maximum stress the material undergoes before failure. At the fracture point, rupture usually occurs at the necked region with the smallest cross-sectional area because this region tolerates the least amount of stress.
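The elastic portion of the curve described above is just Hooke's law in action. A small numeric sketch in Python; the modulus value is a typical textbook figure for steel, used here only for illustration:

```python
E = 30e6  # Young's modulus for steel in psi (typical textbook value, illustrative)

def stress_from_strain(strain):
    """Hooke's law in the elastic region: sigma = E * epsilon."""
    return E * strain

def strain_from_elongation(delta, length):
    """Strain is elongation over original length: epsilon = delta / L."""
    return delta / length

eps = strain_from_elongation(0.01, 10.0)  # a 0.01 in stretch on a 10 in bar
print(eps, stress_from_strain(eps))       # 0.001 strain -> 30000.0 psi (30 kpsi)
```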
Fatigue Life and Damage The fatigue life (${N}_{f}$) of a mechanical component is the number of stress cycles required to cause fracture. The fatigue life is a function of many variables, including stress level, stress state, cyclic waveform, fatigue environment, and the metallurgical condition of the material. One type of test used to measure fatigue life is crack initiation testing [5] in which mechanical components are subjected to the number of stress cycles required for a fatigue crack to initiate and to subsequently grow large enough to produce fracture. Most laboratory fatigue testing is done with axial loading, which produces only tensile and compressive stresses. The stress is usually cycled either between a maximum and a minimum tensile stress, or between a maximum tensile stress and a maximum compressive stress. The results of fatigue crack initiation tests are usually plotted as stress amplitude against number of cycles required for ultimate failure. Whereas stress can be plotted in either a linear or a logarithmic scale, the number of cycles is plotted in a logarithmic scale. The resulting plot of the data is referred to as a Wohler curve or an S-N curve. The figure below shows a typical Wohler curve for a mechanical component. Clearly the number of cycles of stress that a metal can endure before failure increases with decreasing stress. Note that for some materials, such as titanium, the Wohler curve becomes horizontal at a certain limiting stress often referred to as the endurance limit in literature. Below this limiting stress, the component can endure an infinite number of cycles without failure. In practice, it is not feasible to obtain the fatigue for all the possible stress amplitudes in the laboratory. To save time and reduce cost, a model is often fit to the S-N data points. For many materials, a piecewise linear model can be fit to the S-N data expressed in the log-log domain. 
In this model, each linear piece is formulated by the Basquin expression $\sigma_a = \sigma_f' (2N_f)^b$, where $\sigma_a$ is the stress amplitude, $\sigma_f'$ is usually referred to as the fatigue strength coefficient, $N_f$ is the fatigue life, and $b$ is Basquin's exponent. If the stress amplitude is known, the corresponding fatigue life can be obtained from the piecewise linear model. The estimated fatigue life is then used to compute damage values. As mentioned earlier, one of the objectives of this example is to compute the total damage that a mechanical component experiences due to a stress history. Towards this objective, it is important to quantify the damage first. The damage caused by one stress cycle of amplitude $\sigma_i$ is defined as $D_i = 1/N_{f,i}$, where $N_{f,i}$ is the number of cycles of that amplitude the component can endure before fracture. The number $N_{f,i}$ can be computed from the piecewise linear model fit to the Wohler curve. Suppose that the stress profile applied to the component is composed of two blocks of stress history where each block has $n_i$ cycles at stress amplitude $\sigma_i$, as shown in the figure. The Palmgren-Miner rule states that the total damage $D$ due to the stress history shown above is given by $D = \sum_i n_i/N_{f,i}$. When $D = 1$, the component breaks. The assumption of linear damage has some shortcomings. For instance, this rule ignores the fact that the sequence and interaction of events may have a major influence on fatigue life. However, this method is the simplest and most widely used approach for damage computation and fatigue life prediction [3]. Fatigue Analysis Workflow The objective of fatigue analysis is to calculate fatigue life from a stress time series and compute the total damage. The task can be divided into two main parts: 1. Rainflow counting 2.
Find total damage based on the Wohler curve and the Palmgren-Miner rule To perform fatigue analysis, two sets of data are required: the stress history and stress-life data points that are typically recorded from fatigue tests. Rainflow Counting Rainflow counting is used to extract the number of cycles $n_i$ and the stress amplitude $\sigma_i$ from the stress history applied to the component. The rainflow counting consists of three steps: 1. Hysteresis filtering: Set a threshold and remove cycles whose contribution to the total damage is insignificant. 2. Peak-valley filtering: Preserve only the maximum and minimum value of the cycles and remove the points in between. Only maximum and minimum values are relevant for fatigue calculation [6]. 3. Cycle counting using the function rainflow. Use the findTurningPts function to preprocess the data and prepare it for rainflow counting. threshold = 5; % [kpsi] [turningptsg,indg] = findTurningPts(sg,threshold); The function denoises the stress history and removes from it inconsequential oscillations that do not contribute damage to the component. For example, the stress cycle 1-2-3-4 shown in the figure leads to an insignificant closed-loop hysteresis 2-3-2' in the stress-strain domain. After removing the hysteresis, the resulting cycle is 1-4-7, which does contribute to the component fracture. The task of hysteresis filtering is to remove these inconsequential cycles from the stress history data. The stress-time plot results from applying the hysteresis filtering to the stress history. This figure zooms in on a region of the stress history to illustrate the effect of hysteresis filtering. The chosen threshold depends on the data and the material being tested. For this stress history, the threshold is set to 5 kpsi. The findTurningPts function also finds the maximum and minimum values that contribute to the fatigue. Peak-valley filtering removes all data points that are not turning points.
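The peak-valley idea can be sketched in a few lines. This is not the MATLAB findTurningPts function above (which also applies the hysteresis threshold and handles plateaus); it is a minimal Python illustration that keeps only the points where the slope changes sign:

```python
def turning_points(series):
    """Keep only local maxima and minima (peak-valley filtering); interior
    points on monotone runs carry no fatigue information."""
    tp = [series[0]]
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if (cur - prev) * (nxt - cur) < 0:  # slope changes sign -> turning point
            tp.append(cur)
    tp.append(series[-1])
    return tp

print(turning_points([0, 2, 5, 3, 1, 4, 4.5, -2, 0]))  # [0, 5, 1, 4.5, -2, 0]
```

The monotone run 0, 2, 5 collapses to its endpoints, as does 5, 3, 1; only the reversals survive, which is exactly the input the cycle-counting step needs.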
This figure shows the stress history and the turning-point series obtained at the output of the preprocessing step for a time snapshot of duration 500 seconds. After preprocessing, the next step is cycle counting. The rainflow counting method is widely used in industry to perform the counting. The turning point series obtained from the preprocessing steps is fed into the rainflow function, which returns cycle counts and stress ranges from the input signal based on the ASTM E 1049 standard [7]. rfCountg = rainflow(turningptsg,tg(indg),"ext"); The rainflow function also shows the reversals as a function of time (top) and illustrates as a 2-D histogram the distribution of the found cycles as a function of stress range and mean stress (bottom). The function shows the plots if called without output arguments. Find Damage Using Wohler Curve and Palmgren-Miner Rule To compute the total damage, use the stress-life approach in conjunction with the rainflow counting and the Palmgren-Miner rule. In this example, we use the stress-life data from the results of axial fatigue tests on the steel UNS G41300 reported in [1]. This figure illustrates these stress-life data points. Next, we fit a piecewise-linear model to the stress-life data that enables the computation of the fatigue life corresponding to a certain stress without conducting the fatigue experiment. Use the piecewiseLinearFit function to perform the fit. For example, based on the derived model, the fatigue life corresponding to a stress amplitude $\sigma_i = 80$ kpsi is estimated to be $N_{f,i} = 2902.9$. The estimateFatigueLife function carries out the estimation. plModel = piecewiseLinearFit(Nf,S); The fatigue life corresponding to each stress amplitude returned by the rainflow function can also be estimated by the same function estimateFatigueLife. Use the cycle counts at the output of the rainflow counting and the Palmgren-Miner rule to compute the cumulative damage due to the applied stress profile.
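Before the MATLAB implementation below, the whole life-and-damage lookup can be sketched in Python. The Basquin coefficients here are made up for illustration (real ones come from fitting the Wohler curve of the tested material), and the cycle/range pairs stand in for the first two columns of the rainflow output matrix:

```python
# Hypothetical Basquin coefficients: sigma_f' in kpsi, exponent b (illustrative).
sigma_f_prime, b = 220.0, -0.1

def fatigue_life(sigma_a):
    """Invert Basquin's relation sigma_a = sigma_f' * (2*Nf)**b for Nf."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

# Rainflow-style rows: (cycle count, stress range in kpsi). The amplitude is
# half the rainflow range, exactly as in rfCountg(:,2)/2 in the MATLAB code.
rf_count = [(0.5, 40.0), (1.0, 90.0), (1.0, 120.0), (0.5, 160.0)]

# Palmgren-Miner rule: D = sum(n_i / Nf_i); fracture is predicted when D >= 1.
damage = sum(n / fatigue_life(rng / 2) for n, rng in rf_count)
print(damage < 1)  # True -> this illustrative history is tolerated
```

Note that the exponent $b$ is negative, so lower amplitudes map to (much) longer lives; the low-amplitude rows contribute almost nothing to the damage sum.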
nig = rfCountg(:,1); Sig = rfCountg(:,2)/2; Nfig = estimateFatigueLife(plModel,Sig); damageg = sum(nig./Nfig); Since the computed total damage $D$ is smaller than one, it can be inferred that the UNS G41300 component can tolerate the applied stress profile. This stem plot shows that the component tolerates the applied history. The same processing steps are applied to another stress history data set that causes the component to break. This figure shows a small window of the stress history before and after hysteresis and P-V filtering. For this data set, the threshold is set to 10. tb = (0:length(sb)-1)'/Fs; [turningptsb,indb] = findTurningPts(sb,10); Now perform the rainflow counting and use the Wohler curve model and the Palmgren-Miner rule to compute the damage. rfCountb = rainflow(turningptsb,tb(indb),"ext"); nib = rfCountb(:,1); Sib = rfCountb(:,2)/2; Nfib = estimateFatigueLife(plModel,Sib); damageb = sum(nib./Nfib); The red stem indicates that this stress history results in ultimate fracture in the component. This example illustrated the steps to compute the cumulative damage due to a stress time series applied to a mechanical component: • Fit a theoretical model to the experimental stress-life points. • Preprocess the stress history to remove insignificant stresses leading to hysteresis and to find the peaks and valleys forming stresses that contribute to damage. • Use rainflow counting to extract the cycle counts and the stress amplitudes. • Find the fatigue life corresponding to the stress amplitudes based on the theoretical Wohler curve. • Use the Palmgren-Miner rule to compute the total cumulative damage due to the stress history. [1] Illg, W. "Fatigue Tests on Notched and Unnotched Sheet Specimens of 2024-T3 and 7075-T6 Aluminum Alloys and of SAE 4130 Steel with Special Consideration of the Life Range from 2 to 10,000 Cycles," NACA, Hampton, VA, USA, Rep. no. NACA-TN 3866, Dec. 1956. [2] Mouritz, A. P.
Introduction to Aerospace Materials, Oxford, UK: Woodhead Publishing, 2012. [3] Stephens, R. I., A. Fatemi, R. R. Stephens, and H. O. Fuchs, Metal Fatigue in Engineering, New York: John Wiley & Sons, 2000. [4] Budynas, R. G., J. K. Nisbett, and J. E. Shigley, Shigley's Mechanical Engineering Design, 9th ed. New York: McGraw-Hill, 2011. [5] Boyer, H. E., Atlas of Fatigue Curves, USA: Materials Park, OH: ASM International, 1986. [6] Rychlik, I. "Simulation of Load Sequences from Rainflow Matrices: Markov Method," International Journal of Fatigue, vol. 18, no. 7, pp. 429–438, 1996. [7] ASTM E1049-85, "Standard Practices for Cycle Counting in Fatigue Analysis." West Conshohocken, PA: ASTM International, 2017, https://www.astm.org/e1049-85r17.html. The functions listed in this section are only for use in this example. They may change or be removed in a future release. The findTurningPts function performs hysteresis and peak-valley filtering on the stress data. The inputs to the function are the stress history data, x, and a threshold for hysteresis filtering. The function pairs each local maximum in the stress history with one particular local minimum that is found as follows [6]: From a local maximum ${M}_{i}$ with height $u$, the function tries to reach above $u$ in the forward or backward direction with as small a downward excursion as possible. The maximum ${M}_{i}$ and the minimum ${m}_{i}^{+}$ (which represents the smallest deviation from the maximum) form a peak-valley pair. function [tp,ind] = findTurningPts(x,threshold) % FINDTURNINGPTS finds turning points in signal x % Reference: % I. Rychlik, "Simulation of load sequences from rainflow matrices: Markov % method", Int. J. Fatigue, vol. 18, no. 7m, pp. 429-438, 1996. % Copyright 2022 The MathWorks, Inc. 
xLen = length(x); % Find minimum/maximum [~,~,zcm] = zerocrossrate(diff(x),Method="comparison",Threshold=0); index = (1:xLen)'; zci = index(zcm); % Make sure that there are at least two crossing points if (length(zci) < 2) tp = []; % Add end points if (x(zci(1)) > x(zci(2))) ind = [1;zci;xLen]; ind = [zci;xLen]; % Apply hysteresis and peak-valley filtering (a.k.a. rainflow filtering) pvInd = hpvfilter(x(ind),threshold); ind = ind(pvInd); % Extract turning points tp = x(ind); function index = hpvfilter(x,h) % HPVFILTER performs hysteresis and peak-valley filtering % Initialization index = []; tStart = 1; % Ignore the first maximum if (x(1) > x(2)) x(1) = []; tStart = 2; Ntp = length(x); Nc = floor(Ntp/2); % Make sure that there is at least one cycle if (Nc < 1) % Make sure the input sequence is a sequence of turning points dtp = diff(x); if any(dtp(1:end-1).*dtp(2:end) >= 0) error('Not a sequence of turning points.') % Loop over elements of sequence count = 0; index = zeros(size(x)); for i = 0:Nc-2 tiMinus = tStart+2*i; tiPlus = tStart+2*i+2; miMinus = x(2*i+1); miPlus = x(2*i+2+1); if (i ~= 0) j = i-1; while ((j >= 0) && (x(2*j+2) <= x(2*i+2))) if (x(2*j+1) < miMinus) miMinus = x(2*j+1); tiMinus = tStart+2*j; j = j-1; if (miMinus >= miPlus) if (x(2*i+2) >= h+miMinus) count = count+1; index(count) = tiMinus; count = count+1; index(count) = tStart+2*i+1; j = i+1; tfFlag = false; while (j < Nc-1) tfFlag = (x(2*j+2) >= x(2*i+2)); if tfFlag if (x(2*j+2+1) <= miPlus) miPlus = x(2*j+2+1); tiPlus = tStart+2*j+2; j = j+1; if tfFlag if (miPlus <= miMinus) if (x(2*i+2) >= h+miMinus) count = count+1; index(count) = tiMinus; count = count+1; index(count) = tStart+2*i+1; elseif (x(2*i+2) >= h+miPlus) count = count+1; index(count) = tStart+2*i+1; count = count+1; index(count) = tiPlus; elseif (x(2*i+2) >= h+miMinus) count = count+1; index(count) = tiMinus; count = count+1; index(count) = tStart+2*i+1; index = sort(index(1:count)); The plotStress function plots the stress history. 
function plotStress(t,s)
plot(t,s)
title("Stress History")
xlabel("Time (sec)")
ylabel("Stress (kpsi)")
end

The plotStressAndTurningPts function plots the stress history and the turning points found by the function findTurningPts. The plot focuses on a time interval starting at time ts and ending at time te.

function plotStressAndTurningPts(t,s,ind,turningpts,ts,te)
ind1 = (t >= ts) & (t <= te);
ttpts = t(ind); % time stamps of turning points
ind2 = (ttpts >= ts) & (ttpts <= te);
plot(t(ind1),s(ind1))
hold on
plot(ttpts(ind2),turningpts(ind2),"o-")
hold off
xlabel("Time (sec)")
ylabel("Stress (kpsi)")
legend(["Stress history","Hysteresis & P-V filtering output"])
end

The plotWohlerCurve function plots the Wohler curve (S-N curve).

function plotWohlerCurve(Nf,S)
loglog(Nf,S,"o")
title("Wohler Curve")
xlabel("Fatigue life")
ylabel("Stress (kpsi)")
end

The piecewiseLinearFit function fits a piecewise linear model to the S-N curve in log-log domain based on the Basquin relationship. It divides the fatigue life into three regions: low cycle fatigue with $N_f \le 10^3$, high cycle fatigue with $10^3 < N_f \le 10^6$, and the infinite life region with $N_f > 10^6$. The function returns the linear models and the region limits.

function plModel = piecewiseLinearFit(Nf,S)
x = log10(2*Nf);
y = log10(S);
% Fit piecewise linear models to three regions
% 1. Low cycle fatigue (LCF)
lcfi = Nf <= 1e3;
xlcf = x(lcfi);
ylcf = y(lcfi);
plcf = polyfit(xlcf,ylcf,1);
% 2. High cycle fatigue (HCF)
hcfi = (Nf > 1e3) & (Nf <= 1e6);
xhcf = x(hcfi);
yhcf = y(hcfi);
phcf = polyfit(xhcf,yhcf,1);
% 3. Infinite life (IL)
ili = (Nf > 1e6);
xil = x(ili);
yil = y(ili);
pil = polyfit(xil,yil,0);
% Find ending points.
Nflcf = 10^((phcf(2)-plcf(2))/(plcf(1)-phcf(1)))/2;
Nfhcf = 10^((pil(1)-phcf(2))/phcf(1))/2;
Nfil = 1e8;
% Create the model struct
plModel.plcf = plcf;
plModel.Nflcf = Nflcf;
plModel.phcf = phcf;
plModel.Nfhcf = Nfhcf;
plModel.pil = pil;
plModel.Nfil = Nfil;
% Compute stress for a range of fatigue life based on model for the sake of
% illustration
testNf = [logspace(0,log10(Nflcf),1e3),...
% low cycle fatigue region
logspace(log10(Nflcf),log10(Nfhcf),1e4),... % high cycle fatigue region
logspace(log10(Nfhcf),log10(Nfil),1e3)]; % infinite life region
testS = computeStress(plModel,testNf);
% Fatigue life corresponding to a sample stress amplitude
Si = 80;
Nfi = estimateFatigueLife(plModel,Si);
% Plot data and piece-wise model based on Basquin relation
h1 = loglog(Nf,S,"o");
hold on
h2 = loglog(testNf,testS,"-k","LineWidth",2);
h3 = loglog(Nfi,Si,"h","MarkerSize",15);
h3.MarkerFaceColor = h3.Color;
xLim = get(gca,"XLim");
yLim = get(gca,"YLim");
loglog([xLim(1) Nfi],[Si Si],"--","Color",0.3*ones(1,3),"LineWidth",1)
loglog([Nfi Nfi],[yLim(1) Si],"--","Color",0.3*ones(1,3),"LineWidth",1)
title("Fit Model to Wohler Curve")
xlabel("Fatigue life")
ylabel("Stress (kpsi)")
end

The computeStress function computes stress corresponding to a fatigue life given a stress-life model.

function Si = computeStress(plModel,Nfi)
Si = zeros(size(Nfi));
for i = 1:length(Nfi)
    if (Nfi(i) < plModel.Nflcf)
        % Low cycle fatigue
        Si(i) = 10.^(polyval(plModel.plcf,log10(2*Nfi(i))));
    elseif (Nfi(i) >= plModel.Nflcf && Nfi(i) < plModel.Nfhcf)
        % High cycle fatigue
        Si(i) = 10.^(polyval(plModel.phcf,log10(2*Nfi(i))));
    else
        % Infinite life
        Si(i) = 10.^(polyval(plModel.phcf,log10(2*plModel.Nfhcf)));
    end
end
end

The estimateFatigueLife function estimates the fatigue life corresponding to a stress amplitude given a stress-life model.
function Nfi = estimateFatigueLife(plModel,Si)
plcf = plModel.plcf;
Nflcf = plModel.Nflcf;
phcf = plModel.phcf;
Nfhcf = plModel.Nfhcf;
% Transform the stress amplitude to the log domain
logSi = log10(Si);
% Preallocate fatigue life
Nfi = NaN(size(Si));
% Loop over the stress history
for i = 1:length(Si)
    Nfi1 = 10^((logSi(i)-plcf(2))/plcf(1))/2; % low cycle fatigue
    Nfi2 = 10^((logSi(i)-phcf(2))/phcf(1))/2; % high cycle fatigue
    Nfi3 = Inf; % infinite life
    if (Nfi1 < Nflcf)
        Nfi(i) = Nfi1;
    elseif (Nfi2 < Nfhcf)
        Nfi(i) = Nfi2;
    else
        Nfi(i) = Nfi3;
    end
end
end

The cumulativeDamageStemPlot function shows the cumulative damage at each cycle using the stem function. The color of each stem is set according to the value of the damage accumulated up to the corresponding cycle. For the low-value damage, the stem color is set to a shade of green while for the high-value damage, it is set to a shade of red.

function cumulativeDamageStemPlot(ni,Nfi)
L = length(ni);
damage = sum(ni./Nfi);
stem(0,NaN,"Color",[0 1 0])
title("Cumulative Damage from Palmgren-Miner Rule")
xlabel("Cycle $i$","Interpreter","latex")
ylabel("Cum. damage $D_{i} = \sum_{j=1}^{i}n_{j}/N_{f,j}$","Interpreter","latex")
set(gca,"XLim",[0 L],"YLim",[0 damage])
iter = unique([1:round(L/100):L,L]);
for i = iter
    cdi = sum(ni(1:i)./Nfi(1:i)); % cumulative damage up to cycle i
    plt = stem(i,cdi,"filled");
end
end

The setStemColor function sets the color of the stem based on the value of the cumulative damage.

function setStemColor(hplt,cumulativeDamage,gamma)
c = lines(5);
c = c([2,3,5],:);
if (cumulativeDamage > 1)
    color = c(1,:);
else
    if (cumulativeDamage > gamma)
        c1 = c(1,:);
        c2 = c(2,:);
    else
        c1 = c(3,:);
        c2 = c(2,:);
    end
    color = zeros(1,3);
    for i = 1:3
        color(i) = c1(i)+(c2(i)-c1(i))*cumulativeDamage;
    end
end
hplt.Color = color;
end

See Also rainflow | zerocrossrate
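The Palmgren-Miner sum plotted by cumulativeDamageStemPlot, $D_{i} = \sum_{j=1}^{i}n_{j}/N_{f,j}$, is simple enough to cross-check outside MATLAB. A minimal Python sketch (the cycle counts and fatigue lives below are made-up illustration values, not data from this example):

```python
# Palmgren-Miner cumulative damage: D = sum over i of n_i / Nf_i.
# Failure is predicted when D reaches 1.
def cumulative_damage(cycle_counts, fatigue_lives):
    return sum(n / nf for n, nf in zip(cycle_counts, fatigue_lives))

# Hypothetical rainflow output: n_i cycles at amplitudes whose
# S-N fatigue lives are Nf_i cycles.
n = [200, 50, 5]
nf = [1e6, 1e5, 1e4]
D = cumulative_damage(n, nf)
print(D)  # 0.0002 + 0.0005 + 0.0005 = 0.0012
```

Here the loading block consumes about 0.12% of the life, so roughly 1/0.0012, about 830 repetitions of the block, would be needed before failure is predicted.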
How do you integrate 10/[(x-1)(x^2+9)] dx? | HIX Tutor

How do you integrate $\frac{10}{(x-1)(x^2+9)}\,dx$?

Answer

Decompose the integrand into partial fractions:

$\frac{10}{(x-1)(x^2+9)} = \frac{A}{x-1} + \frac{Bx+C}{x^2+9}$

Multiplying through by $(x-1)(x^2+9)$, we want:

$Ax^2 + 9A + Bx^2 - Bx + Cx - C = 10$

Hence:

$A + B = 0$
$-B + C = 0$
$9A - C = 10$

The second equation gives us $B = C$, so the first becomes $A + C = 0$. Adding this to $9A - C = 10$ gives $10A = 10$, so $A = 1$ and $B = C = -1$.

Therefore:

$\frac{10}{(x-1)(x^2+9)} = \frac{1}{x-1} - \frac{x}{x^2+9} - \frac{1}{x^2+9}$

and

$\int \frac{10}{(x-1)(x^2+9)}\,dx = \int \frac{dx}{x-1} - \int \frac{x\,dx}{x^2+9} - \int \frac{dx}{x^2+9} = \ln|x-1| - \frac{1}{2}\ln(x^2+9) - \frac{1}{3}\tan^{-1}\left(\frac{x}{3}\right) + C$
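A quick way to check a decomposition like the one above is to compare both sides numerically at a few points — if the partial fractions are right, the two sides agree everywhere they are defined. (This check is an addition, not part of the original answer.)

```python
# Numeric check of 10/((x-1)(x^2+9)) = 1/(x-1) - x/(x^2+9) - 1/(x^2+9)
def lhs(x):
    return 10 / ((x - 1) * (x**2 + 9))

def rhs(x):
    return 1 / (x - 1) - x / (x**2 + 9) - 1 / (x**2 + 9)

for x in [-2.0, 0.5, 3.0, 10.0]:   # avoid x = 1, where both sides blow up
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("decomposition verified")
```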
Excel Troubleshooting | How to Troubleshoot Excel Formulas? (Examples)

Updated June 8, 2023

What is Troubleshooting in Excel?

Sometimes we get errors such as #N/A or #VALUE! in Excel, even though we have applied the formulae correctly (at least, we feel we have applied them correctly). These issues can have different causes, such as using the same formula multiple times as a reference, getting dates in number format, or getting #N/A or #VALUE! errors. In these cases, finding what is going wrong with the formula is difficult for the naked eye. However, if we follow some checkpoints for such errors, there is a good chance we will be able to eliminate them. Excel troubleshooting is the process of identifying the errors in a formula and rectifying them. We will see some methods to troubleshoot formula errors in Excel.

Examples of Troubleshooting in Excel

Let us discuss some examples of troubleshooting in Excel.

Example #1 – Troubleshooting a Formula that Gives a #N/A Error

#N/A errors are very common; you might have encountered this specific error many times while working with spreadsheet formulas. It is widely associated with lookup formulae such as VLOOKUP, HLOOKUP, or simply the LOOKUP formula in Excel. Consider the example below, where we have each person's name, age, and work location. See the screenshot below:

We want to check the Age details for a person named "Kanchan". We will use VLOOKUP to check whether any age details for Kanchan are available.

Step 1: In cell F2, use the VLOOKUP formula to get the details for Kanchan.

We've used E2 as the lookup_value parameter under VLOOKUP, as it is the cell where the name "Kanchan" (whose age we want to find) is stored. A2:C4 is the range for table_array, since we want to check the Age value for Kanchan within this table. Col_index_num should be 2, as Age is the second column of the table.
Finally, we want an exact match for lookup_value; therefore, we use FALSE (or zero) as the [range_lookup] parameter. Now, if you press the Enter key, you'll get a #N/A error, as shown below:

This happens because the lookup_value we are searching for in the given table, "Kanchan", is not present there, and hence no age details for "Kanchan" can be found. This type of error can be handled with the help of the IFERROR function, which can be used in combination with any function.

Step 2: Use IFERROR with VLOOKUP to return "No Value" if there is any error with the VLOOKUP formula. See the screenshot below:

The IFERROR function lets us display a friendly message in Excel for any formula that produces an error value such as #N/A, #VALUE!, #DIV/0!, etc.

Example #2 – Troubleshooting the "Numbers Stored as Dates" Error

Sometimes we try to input a number value but get a strange date value instead. We may think something is going wrong with Excel, or that it has gone mad. That is not the case, however, and we can troubleshoot this error with minimal hassle. This is called the "Numbers Stored as Dates" error, and it can be eliminated by following the steps below:

Step 1: Consider the same data as in the previous example. This time we would like to add the Salary details for the three employees. However, as soon as we enter the salary value 15000 for Patrick, we get a strange date value in the cell, as shown in the series of screenshots below:

We get the same issue for the other cells if we add the salary details of 12000 and 10000 for Martha and Amanda, respectively. This is puzzling, as we may not understand why dates appear instead of the number values 15K, 12K, and 10K. It happens because the number format of column C (Salary) has, unfortunately, been set to Date.
Because of this, whatever value you input will be displayed as a date. The issue can be resolved by changing the number formatting for the cells.

Step 2: Select column C and navigate to the Number group under the Home tab, where we can change the number formatting for cells. You can see that the value format for the selected column is Date.

Step 3: Select Number as the cell value format from the dropdown list to convert all those date values into number values. Now you can see that the Salary column contains numeric values, as expected.

Example #3 – Troubleshooting a Formula Auto-Calculation Error

Sometimes we may face a situation where we have applied a formula, but it is not recalculated across cells. Suppose we want to compute an incentive as 10% of the current salary; this can be done with the formula shown below in cell D2. If we drag this formula across the cells down to D4, the incentive value stays at $1500, when ideally it should be $1200 and $1000 for the other two rows. This is because the formula calculation option in your Excel file is set to Manual instead of Automatic.

You can select each cell, press F2 to edit it, and press the Enter key to calculate the incentive values manually. Alternatively, you can set the calculation option to Automatic so the formulae are calculated automatically. Navigate to the Formulas tab. Under the Calculation group, you will see a Calculation Options dropdown, as shown in the screenshot below:

Under the Calculation Options dropdown, you'll see three options: Automatic, Automatic Except for Data Tables, and Manual (this one will be ticked). Click Automatic, and you will see that the formulae are calculated automatically.

These methods can be used to troubleshoot problems with Excel formulae. Let's wrap things up with some points to remember.
Things to Remember

• Troubleshooting cannot be performed by a fixed recipe, as the issues we face are ad hoc. Any issue can occur at any point and hence has no single step-by-step solution.
• Most of the time, the error itself tells you what is wrong with the formula and what can be done to resolve it. For example, #N/A tells you that the looked-up value is not present in the given table range; #VALUE! tells you that a cell that should hold a numeric value contains text; #DIV/0! tells you that you tried dividing by zero; and so on.
• Troubleshooting may take a few minutes, whereas some cases might take an entire day to resolve.

Recommended Articles

This is a guide to Excel Troubleshooting. Here we discuss how to troubleshoot Excel formulas, with practical examples and a downloadable Excel template.
What is the probability of flipping a coin 3 times and getting at least 1 heads?

If you flip a coin three times, the chance of getting at least one head is 7/8, or 87.5%.

What are the odds of flipping heads 3 times in a row?

Answer: If a coin is tossed three times, the likelihood of obtaining three heads in a row is 1/8.

What is the probability of getting at least one tail in three tosses of a coin?

Each toss can only come up heads or tails, so you have a 1/2 chance of getting either. We'll simplify tails to T and heads to H. Of the eight possible outcomes, only HHH contains no tails, so you have a 7/8 chance of getting at least one tail.

What are the odds of flipping a coin three times and getting heads at least twice?

Answer: If you flip a coin 3 times, the probability of getting at least 2 heads is 1/2.

What is the probability of flipping a coin 3 times and getting heads 3 times?

Answer: If you flip a coin 3 times, the probability of getting 3 heads is 0.125.

What is the probability of getting 3 heads out of 10 tries flipping a coin?

The probability of exactly 3 heads in 10 tosses is 120/1024. Remark: the idea can be substantially generalized.

What is the probability of flipping 4 heads in a row?

The probability of getting a head first is 1/2. The probability of getting 2 heads in a row is 1/2 of that, or 1/4. The probability of getting 3 heads in a row is 1/2 of that, or 1/8. The probability of getting 4 heads in a row is 1/2 of that, or 1/16.

What are the odds of flipping 2 heads in a row?

The probability of the occurrence of two independent events is the product of their individual probabilities, so the probability of getting two heads on two coin tosses is 0.5 x 0.5, or 0.25.

What is the probability of getting 3 heads in 3 tosses?

1/8 — the same as three heads in a row.

What is the probability of getting at most 3 tails?

In three tosses, getting at most 3 tails is certain (probability 1); the probability of getting exactly 3 tails is 1/8.
What is the probability of getting all heads if you flip 3 coins?

1 Expert Answer: There are 8 possibilities when flipping three coins, and the probability of getting all heads is 1 out of 8.

What is the probability of getting exactly 2 heads in a single throw of 3 coins?

Summary: The probability of getting two heads and one tail when tossing three coins simultaneously is 3/8, or 0.375.
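All of the three-coin answers above can be confirmed by brute-force enumeration of the 2^3 = 8 equally likely outcomes — for example, in Python:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product("HT", repeat=3))   # 2^3 = 8 equally likely outcomes

def prob(event):
    # fraction of outcomes satisfying the event
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

print(prob(lambda o: o.count("H") >= 1))   # 7/8  (at least one head)
print(prob(lambda o: o.count("H") >= 2))   # 1/2  (at least two heads)
print(prob(lambda o: o.count("H") == 3))   # 1/8  (three heads)
print(prob(lambda o: o.count("H") == 2))   # 3/8  (exactly two heads, one tail)
```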
How do you implement multithreading in a program? | Hire Someone To Do Assignment

How do you implement multithreading in a program?

I know that threads are defined in the programming language (C++, for example), and I have used several ways of defining them. My question is: what is the general idea behind multithreading a program? Should there be one thread per task, with one thread handing work to another, and is that actually more efficient? Can the design be improved?

A: By design, you create a thread for each purpose within the program. What the threads usually need is some feedback (logging, messages, line numbers, and so on), or some kind of help or suggestions for an improved design. C++ provides mechanisms for configuring the threading used in a program: thread-local variables, event propagation, caching, thread counting, thread maps, and so on. Changing the behaviour of one thread can change the resources, and eventually the code, that other threads depend on, so avoid restructuring threads casually. When debugging, do not switch the program between a single-thread and a two-thread configuration in the middle of a session; debug one configuration at a time.

A: A common concrete case is searching text. If you want to search the full text of a file, you can use text (or map) queries, which are simple and efficient implementations of a text search: you obtain a fairly precise representation of the file contents and query it by passing values in a query string. To query all names from the output across a large file, split the work so that the data can be processed by several workers without disturbing one another, then merge and update the results afterwards.

A: Another example: a web page serving a group of users who make requests against a database. Each user request can be handled by its own thread, while the shared metadata of the program is managed with appropriate synchronization so that requests do not interfere with each other.
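The answers above stay fairly abstract, so here is a minimal runnable sketch of the shared-state idea (in Python rather than C++, purely for illustration): several worker threads increment a shared counter, with a lock serializing access to the shared state.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    # Each worker increments the shared counter n times.
    global counter
    for _ in range(n):
        with lock:            # serialize access to shared state
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for all workers to finish

print(counter)  # 40000 — without the lock, lost updates could corrupt this total
```

The same pattern (spawn, protect shared state, join) carries over to C++ with std::thread and std::mutex.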
How come the improbable is so commonplace? | Aeon Essays Statisticians tell us that the chances of winning the lottery are incredibly small – for the UK National Lottery, for example, around one in 14 million per ticket. That’s about the same probability as seeing a flipped fair coin come up 24 heads in a row, and far less than your chances of being killed by a meteorite. And yet, week after week, people do win, supplying newspapers with a constant flow of personal-interest stories into the bargain. What’s going on? How can something that has such an incredibly small chance keep on happening? The explanation is, of course, straightforward. The chance that your ticket is the winner is indeed small. But you’re not the only person entering the draw. In fact, many people buy lottery tickets each week. Often, they buy more than one. So, overall, a very large number of tickets are bought. And while each ticket might have a very small chance of winning, if we add up that very large number of very small chances, it soon amounts to something sizeable. With enough people buying enough tickets, we should actively expect to see someone win. This distinction – between the chance that you (or, indeed, any other particular person) will win the lottery and that someone will win – is a manifestation of what I call the law of truly large numbers. If a large enough number of people each buy a lottery ticket, then the probability that someone will win becomes substantial. It grows so large, indeed, that someone wins almost every week. This law is part of what I have called ‘the improbability principle’. The principle states that extremely improbable occurrences are, in spite of the odds against them, actually quite common. It says we should expect to see events that we might regard as incredibly unlikely – such as someone winning the lottery. The improbability principle consists of five elements, of which the law of truly large numbers is just one. 
Allow me to introduce its partners in – not crime exactly, but… Well. You’ll see. The law of inevitability says that some outcome must occur – one of the 14 million sets of six numbers from one to 49 must be chosen when the lottery balls drop. So, if you bought all possible combinations, you’d be certain to hold the jackpot-winning ticket. That sounds trivial but, of course, people have still found a way to make money out of it. The law of selection says, in effect, that while prediction might be hard, postdiction is easy. It’s easy to look back and see the causal chain that led inexorably to disaster. It’s not so easy to choose among the multitude of possible chains that lead into the future. The law of near enough says that you can dramatically increase the chances of a coincidence if you broaden what you mean by a coincidence. You would be surprised to encounter an old friend in a strange town, perhaps, but you might be almost as surprised if you met a friend of a friend, even though friends of friends heavily outnumber friends. Finally, the law of the probability lever says that slight changes can make highly improbable events almost certain. Thus we encounter financial crashes, positive results in ESP experiments, people being repeatedly struck by lightning and so on. Or take the RMS Titanic. This flagship of the White Star Line had a double-bottom hull to make sure that the chances of water flooding in it were very small. Furthermore, it was divided into 16 different compartments using bulkheads with remotely operated watertight doors. For the ship to sink, several of the compartments would have to flood simultaneously. And if the probability of one compartment flooding was very small, then surely the probability of several doing so would be a great deal smaller. For these reasons, many people regarded the ship as unsinkable. 
And the basic line of reasoning seems, on the face of it, pretty compelling: just as the probability of you winning the lottery is very small, so the probability of you winning it several times is much smaller. If you buy one ticket on the UK National Lottery, your chance of winning is around one in 14 million. If you buy two tickets on consecutive weeks, your chance of winning both times is about 1 in 2×10^14, or roughly the same chance as tossing a fair coin and seeing 48 heads in a row. In other words, don’t hold your breath. And yet the Titanic did sink. Why?

Well, there’s nothing wrong with the lottery calculations. If you win the lottery one week with a one-in-14-million chance per ticket, then your chances of winning it the next week are unaltered. Statisticians say that the two events are independent, but another way to put it is that the lottery numbers don’t remember who has won previously: the outcome of one draw doesn’t affect the following one. It follows that the probability that you will win the lottery in both weeks is just the two separate probabilities multiplied together: the 1 in 2×10^14 mentioned earlier.

The same does not hold for the Titanic. For if one compartment is damaged so that it floods, what does that say about the probability that a neighbouring compartment might also be damaged? Well, clearly our answer depends how the damage occurs. As it happens, the Titanic’s maiden voyage was through iceberg-infested waters. If an iceberg were to strike the side of the ship penetrating the double hull, isn’t there a good chance that it would also damage neighbouring compartments? Icebergs can be very large – especially the part hidden beneath the water – and the ship would be moving past them. This means that the two events – damage to one compartment and damage to another – are not independent. And this, of course, is exactly what happened.
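The multiplication rule for independent events is easy to verify: squaring the one-in-14-million odds gives about 1 in 2×10^14, the same order of magnitude as 2^48, the odds against 48 heads in a row.

```python
p = 1 / 14_000_000     # single-ticket, single-week odds
p_both = p * p         # independent events: probabilities multiply
print(1 / p_both)      # ~1.96e14, i.e. about 1 in 2 x 10^14
print(2 ** 48)         # 281474976710656, ~2.8e14 — same order of magnitude
```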
The iceberg didn’t simply puncture one compartment and then bounce off. Rather, it cut into the side of the ship at several points, flooding six compartments. What we find is that the appropriate way of thinking about what happened to the Titanic is different from the appropriate way of thinking about the lottery. We have to change our model slightly, relaxing the assumption that the events in question (different compartments flooding) are independent. The result is that what the ship’s owners and passengers believed was a highly improbable event in fact becomes quite likely. I chose the Titanic example because it’s very clear and straightforward: it’s easy to see why the independence assumption is unjustified. In many cases, however, it’s not obvious which assumptions are incorrect – and even very slightly incorrect assumptions can have huge consequences, especially when they interact with other strands of the improbability principle, such as the law of truly large numbers. We live in a complex world, and the different components of a system are often locked in a web of interconnections that are difficult to tease apart. When trying to make sense of them, it is common to assume independence as a first approximation. But this can lead to major miscalculations. The Yale sociologist Charles Perrow has developed an entire theory of what he calls ‘normal accidents’, based on the observation that complex systems should be expected to have complex, undetected, interactions. A frightening thought. But I should note that the probability lever doesn’t just have adverse consequences. Consider Joan Ginther, a woman in her sixties from Texas, who has won some $20 million from lotteries in four separate wins: $5.4 million in 1993 (though this ticket was bought by her father rather than Ginther herself); $2 million in 2006; $3 million in 2008; and most recently, $10 million in 2010. 
The first win was from a standard lottery, where you had to pick six numbers, but the other three were from scratchcards. Now, any strand of the improbability principle can increase your chances of winning a lottery – and therefore your chances of winning more than once. Buying multiple tickets, for example, can bring the law of truly large numbers into play. Ginther is reported to have bought some 3,000 scratchcards per year, spending perhaps $1 million in total on them. A lot of tickets – a lot of chances of winning. But that’s still not enough to make her multiple wins very likely. We need to bring the probability lever into play.

The most familiar kind of lottery is known as an ‘r/s’ lottery, so called because each ticket consists of r numbers chosen from a list of possible s numbers. They are simple and well-understood. Scratchcard lotteries, on the other hand, are more complex – and this complexity provides a crack for the lever to enter.

Suppose the Texas lottery operators just sent out all 3 million scratchcard tickets in one go. That would mean that all the winning tickets might end up getting bought very quickly, leaving no incentive for players to buy the remaining tickets. This, of course, would leave the lottery operators out of pocket. And so, instead of spreading the prize money at random over the tickets as they are printed and released, the operators try to make sure it is fairly uniformly distributed. In fact, the 3 million tickets are released in six consecutive tranches of half a million each, with each tranche containing one-sixth of the prize money. The next tranche is released only when most of the tickets for the preceding one have been bought, which encourages people to keep playing.
Analysis of data from the Texas Lottery even suggests that the algorithm keeps at least some of the big prizes for the later tranches, to keep the game interesting. If this is true, it means that the probability of winning one of the big prizes is not uniform – it isn’t the same whichever ticket you buy. And so we find an opening that the probability lever can exploit. Knowing when large jackpot tickets are likely to be sold is half of the battle. But to be practically useful, you also have to have an idea of where – so that you can buy tickets there. Ginther bought three of her winning tickets in a small town called Bishop in Texas, where she was born, not far from the Mexican border. Although she moved to Las Vegas, she periodically returned to Bishop and bought large numbers of tickets in one go: it’s as if she had cracked the routing algorithm that the shipping company used to deliver the tickets. (For the record, it’s probably worth mentioning that Ginther has a PhD in mathematics from Stanford University and spent some years as a college lecturer in California.) The story of Joan Ginther shows us one way in which the probability lever can gain purchase on the seemingly impregnable bulwarks of chance. But there are plenty of others. In fact, even some standard r/s lotteries have concealed structures that can serve as a pivot point. Lotteries, of course, seek to make money for the people or organisations that run them. Their basic mechanism for doing this is to return only a percentage of the total amount paid for the tickets. This means that the expected return for a regular player is less than $1 for every dollar they bet. Players should expect, on average, to lose money. But lottery draws are repeated, week after week, and if the jackpot is not won in one week, it is typically ‘rolled over’ to the next.
By buying lottery tickets only when the rollover jackpot has built up substantially, you can boost your expected winnings to more than your stake: then you can expect, on average, to make money. That’s all very well, but the amount you expect to win in the long term if you keep playing and the chance that you will win it in the short term are two very different things. It is unrealistic to take a long-term perspective lasting thousands of years. What else can you do? You might draw inspiration from several groups in Massachusetts. The Massachusetts Cash WinFall was a 6/46 lottery, meaning that one had to choose six numbers from one to 46, with tickets being drawn twice a week. The jackpot began at $500,000 but there were other prizes – of $4000, $150, and $5 – for matching five, four, or three numbers respectively. Whereas many lotteries roll an unclaimed jackpot forward, adding it to the jackpot of the next draw, this lottery rolled it down if it exceeded $2 million without being won: so the lesser prizes, awarded for matching fewer than all six numbers, went up in value. The Massachusetts groups spotted that if the rolldown exceeded a certain amount, then the total they would expect to win on the cumulated rolldown prizes would exceed what they would have to spend on tickets. Spotting this, and buying tickets only when things were in their favour, several groups made significant sums of money. So much so, in fact, that Cash WinFall was terminated at the start of 2012. That’s the trouble with the probability lever: sometimes it moves mountains, and sometimes it breaks off in your hand. And of course, when it does, the law of selection means it’s difficult to find another one. Then again, the improbability principle just says that the improbable is commonplace: it’s another matter entirely whether or not you can summon it at will.
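The groups' calculation can be sketched using the hypergeometric probabilities of a 6/46 draw and the prize values quoted above. The sevenfold rolldown boost below is a hypothetical illustration, not the actual Cash WinFall prize schedule:

```python
from math import comb

def p_match(k: int, r: int = 6, s: int = 46) -> float:
    """Probability of matching exactly k of the r drawn numbers
    in an r/s lottery (hypergeometric distribution)."""
    return comb(r, k) * comb(s - r, r - k) / comb(s, r)

# Lesser prizes for matching 5, 4 or 3 numbers, as described above.
base_prizes = {5: 4000, 4: 150, 3: 5}

ev_normal = sum(p_match(k) * v for k, v in base_prizes.items())

# During a rolldown the unclaimed jackpot inflates the lesser prizes.
# Suppose, hypothetically, each prize becomes worth seven times as much:
ev_rolldown = sum(p_match(k) * 7 * v for k, v in base_prizes.items())

print(f"normal draw:   ${ev_normal:.2f} expected per ticket")
print(f"rolldown draw: ${ev_rolldown:.2f} expected per ticket")
```

In a normal draw the lesser prizes return only about 40 cents per ticket, but under the hypothetical sevenfold boost the expected return climbs to roughly $2.77 – more than the cost of a ticket if tickets sell for, say, $2. A positive-expectation draw of that kind is exactly what the groups waited for.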
K-Means Interactive Demo in your browser

In this blog post, I introduce a new interactive tool for demonstrating the K-Means algorithm to students (for teaching purposes). The K-Means clustering demo tool can be accessed here:

The K-Means demo first lets you enter a list of 2-dimensional data points in the range [0,10], or generate 100 random data points. Then the user can choose the value of K, adjust other settings, and run the K-Means algorithm. The result is displayed for each iteration, step by step. Each cluster is represented by a different color, the SSE (Sum of Squared Error) is displayed, and the centroids of the clusters are marked with the + symbol. For example, this is the result on the provided example dataset:

Because K-Means is a randomized algorithm, running it again may give a different result:

Now, let me show you the feature for generating random points. If I click the button to generate a random dataset and run K-Means, the result may look like this:

And again, because K-Means is randomized, I may execute it again on the same random dataset and get a different result:

I think that this simple tool can be useful for illustrating to students how the K-Means algorithm works. You may try it: it is simple to use and lets you visualize the result and the clustering process. Hope that it will be useful!

Philippe Fournier-Viger is a distinguished professor working in China and founder of the SPMF open source data mining software.
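The algorithm the demo animates can be sketched in a few lines. This is not the code behind the demo (which runs in JavaScript in the browser) but a minimal plain-Python illustration of the same steps: random initialization, the assignment and update iterations, and the SSE the tool displays:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Basic K-Means (Lloyd's algorithm) on 2-D points.
    Returns (centroids, labels, SSE)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial centroids
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        labels = [min(range(k),
                      key=lambda j: (x - centroids[j][0]) ** 2 +
                                    (y - centroids[j][1]) ** 2)
                  for x, y in points]
        # Update step: move each centroid to the mean of its cluster.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:  # guard against an empty cluster
                centroids[j] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    # SSE: total squared distance of each point to its centroid.
    sse = sum((x - centroids[l][0]) ** 2 + (y - centroids[l][1]) ** 2
              for (x, y), l in zip(points, labels))
    return centroids, labels, sse

# Two well-separated groups, in the spirit of the demo's example dataset:
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, labels, sse = kmeans(pts, k=2)
print(round(sse, 3))
```

Changing the seed mimics the demo's behaviour of producing a different clustering on each run, and points drawn uniformly from [0,10] × [0,10] reproduce its random-dataset mode.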