content
stringlengths
86
994k
meta
stringlengths
288
619
Beyond Special Relativity There are two different ways in which one can go beyond the kinematics of Special Relativity (SR). One can consider adding to the Standard Model (SM) Lagrangian new terms that violate Lorentz Invariance (LIV). In case one wants to preserve the relativistic invariance, one should modify the transformations between inertial frames and accordingly modify the special relativistic kinematics; this is what is called Doubly/Deformed Special Relativity (DSR). 1. Lorentz Invariance Violation A deviation from SR whose effects increase with the energy can be incorporated in the framework of Effective Field Theory. This is achieved by adding to the fields and symmetries that define the SM of particle physics, terms of dimension higher than four which are not invariant under boosts (neglecting a possible deviation in the rotational symmetry). This is known as the Standard Model Extension (SME) The most important effect of this extension is contained in the free part of the Lagrangian density, i.e., in the part which is quadratic in fields. This leads to a modification of the SR energy–momentum relation of a free particle (modified dispersion relation) $E\phantom{\rule{0.166667em}{0ex}}\approx \phantom{\rule{0.166667em}{0ex}}p+\frac{{m}^{2}}{2\phantom{\rule{0.166667em}{0ex}}p}+\alpha \phantom{\rule{0.166667em}{0ex}}\frac{{p}^{n+1}}{{\mathsf{\Lambda }}^{n}}\phantom{\rule{1.em}{0ex}}\mathrm{when}\phantom{\rule{1.em}{0ex}}m\ll p\ll \mathsf{\Lambda }\phantom{\rule{0.166667em}{0ex}},$ where E and p are the energy and the modulus of the momentum of a particle with mass m, respectively, Λ and n are the energy scale and order of correction which parametrize the deviations from SR, respectively, and α is a dimensionless constant which parametrizes the particle’s dependence of the LIV effects. The dimension of the first quadratic term in the SME which violates Lorentz invariance is . There are two alternative values, (linear case) or (quadratic case), considered in the studies of LIV. It is also possible to consider a minimal SME , where there are only operators of dimension four or less. A modified dispersion relation implies a modification of the expression of the velocity of a particle in terms of the energy, which can lead to observable consequences from transient astrophysical phenomena (energy-dependent photon time delays), even if the energies of the observed particles are much smaller than the energy scale parametrizing the LIV. Another observable consequence of modified dispersion relations is the modification of the SR kinematics in the different particle processes which are relevant in high-energy astrophysics. The thresholds and the separation of kinematically allowed/forbidden processes (with respect to SR) are affected by the modified energy–momentum relation when the mass-dependent and the LIV terms in ( ) become comparable, i.e., when (m^2/E^2)∼(E/Λ)^n ^[4]^[5]^[6] . This happens for , and then, one can have observable consequences of the LIV in high-energy astrophysics at energies much lower than the energy scale of LIV. 2. Doubly Special Relativity It was seen that considering an LIV scenario entails a loss of the relativity principle and the acceptance of a preferred reference frame, which is usually identified with the one defined by the homogeneity and isotropy of the Cosmic Microwave Background (CMB). If one wants to maintain a relativity principle when going beyond SR, one has to consider a deformation of the transformations relating the inertial reference frames. The deformation of SR, usually called DSR , is assumed to be parametrized by a new energy scale , which does not usually affect the rotational symmetry, as in the case of LIV. A necessary ingredient of this departure from SR at the kinematical level is a nontrivial characterization of a multi-particle system with a total energy and momentum differing from the sum of the energies and momenta of the particles. One then has a composition of energy and momentum which is non-symmetric under the exchange of the particles . One arrives at this conclusion from different perspectives of DSR. The starting point of this proposal is the attempt to make compatible the relativistic invariance with the presence of a minimal , which seems to be a characteristic of a quantum theory which incorporates consistently the gravitational interaction . Such minimal length can be understood as a consequence of a non-commutativity in a generalization of the classical spacetime, which requires us to go beyond the usual implementation of continuous symmetries by Lie algebras. The new algebraic structure is a Hopf algebra with a non-trivial co-product, which leads to a deformed kinematics with a non-symmetric composition of momenta . An alternative way to arrive at the same conclusion is to identify the non-commutativity of spacetime with a non-commutativity of translations in a curved momentum space, which can also be related with to composition of momenta . This composition law is therefore a crucial ingredient differentiating DSR and LIV. Together with the non-linear composition of momenta, the invariance under deformed Lorentz transformations will lead in many cases to a modification of the dispersion relation. As a consequence, in the kinematic analysis of a process in DSR, one has to consider both a possible modification of the energy–momentum relation of the particles participating in it and a modification of the energy–momentum conservation law. The compatibility with the relativity principle, in comparison with the case of LIV, can be shown to produce a cancellation of the effects of the two modifications. Therefore, in order to have an observable consequence of the deformation of the kinematics in a process, one has to consider energies comparable to the energy scale of the deformation . This means that in order to have a signal of DSR in the particle processes which are relevant in high-energy astrophysics, it is necessary to consider an energy scale parametrizing the deformation of the kinematics of the order of the energy involved in those processes. At the same time, many of the constraints to the high-energy scale in the case of LIV do not apply in the DSR scenario. The two previous kinematic ingredients of DSR raise several problems and apparent contradictions on the physical interpretation of the theory. On the one hand, a modification of the composition of momenta in a particle system (independently of the distance between the particles) implies a departure from the notion of absolute locality in spacetime . The corresponding loss of the crucial property of cluster , which is at the basis of the formulation of special-relativistic quantum field theory, originates the so-called spectator problem . On the other hand, a modification of the dispersion relation with the associated modification of the velocity of a particle raises an apparent inconsistency of DSR when one applies the deformed kinematics to any system, including a macroscopic system (soccer ball problem
{"url":"https://encyclopedia.pub/entry/25064","timestamp":"2024-11-04T02:00:18Z","content_type":"text/html","content_length":"131143","record_id":"<urn:uuid:68d41af1-765a-44b2-ab88-01e228d497e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00894.warc.gz"}
Form Factor Calculator - Calculator Wow Form Factor Calculator The Form Factor Calculator is a valuable tool used to determine the form factor of different geometric shapes and structures. This metric is essential in various fields such as engineering, architecture, and manufacturing. By calculating the ratio of the surface area to the volume of an object, the form factor provides insights into the object’s efficiency and performance. Understanding and using a form factor calculator can aid in optimizing designs and improving functionality. The importance of the Form Factor Calculator cannot be overstated. Here are several reasons why it is a crucial tool: 1. Optimization: Helps in optimizing the design of structures and components for better performance and efficiency. 2. Cost-Efficiency: Enables the creation of cost-effective designs by minimizing material usage while maintaining structural integrity. 3. Energy Efficiency: Informs decisions that lead to energy-efficient designs, especially in thermal management and aerodynamics. 4. Material Science: Assists in understanding the properties of materials and how they interact with their environment. 5. Engineering Applications: Widely used in various engineering applications to ensure that components meet required specifications and standards. How to Use the Form Factor Calculator Using the Form Factor Calculator is straightforward. Follow these steps: 1. Input Surface Area: Enter the surface area of the object in the designated field. 2. Input Volume: Enter the volume of the object in the respective field. 3. Calculate Form Factor: Click the calculate button to determine the form factor. 4. Interpret Results: The calculator will display the form factor, indicating the efficiency of the object in terms of its surface area and volume. 10 FAQs and Answers 1. What is a Form Factor Calculator? A Form Factor Calculator is a tool used to calculate the form factor of an object, which is the ratio of its surface area to its volume raised to the power of 2. Why is the form factor important? The form factor is important because it provides insights into the efficiency and performance of geometric shapes and structures in various applications. 3. How do I calculate the form factor? You can calculate the form factor by dividing the surface area of an object by its volume raised to the power of two-thirds. 4. Can I use the Form Factor Calculator for any shape? Yes, the calculator can be used for any geometric shape as long as you know its surface area and volume. 5. What units should I use in the Form Factor Calculator? You can use any units for surface area and volume, but they must be consistent (e.g., square units for surface area and cubic units for 6. How accurate is the Form Factor Calculator? The accuracy of the calculator depends on the precision of the input values for surface area and volume. 7. Can the Form Factor Calculator help in reducing material costs? Yes, by optimizing designs based on form factor, you can minimize material usage and reduce costs. 8. Is the Form Factor Calculator used in thermal management? Yes, it is used to optimize designs for better thermal management by considering the surface area to volume ratio. 9. How often should I use the Form Factor Calculator? It should be used whenever you need to evaluate or optimize the efficiency of a design, especially during the planning and development stages. 10. Is the Form Factor Calculator useful in aerodynamics? Yes, it helps in designing aerodynamic structures by optimizing the surface area to volume ratio for better performance. The Form Factor Calculator is an essential tool in various fields, providing valuable insights into the efficiency and performance of geometric shapes and structures. By understanding and utilizing this calculator, engineers, architects, and designers can optimize their designs for better cost-efficiency, energy efficiency, and overall performance. Whether you are working on a small component or a large structure, the form factor can play a crucial role in achieving your design objectives. Embrace the power of the Form Factor Calculator to enhance your projects and drive innovation.
{"url":"https://calculatorwow.com/form-factor-calculator/","timestamp":"2024-11-06T23:45:55Z","content_type":"text/html","content_length":"64946","record_id":"<urn:uuid:57997384-b2e1-47d5-826b-051574564033>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00281.warc.gz"}
NCERT Solutions Archives NCERT Solutions for Class 8 Social Science Geography Chapter 2 Land, Soil, Water, Natural Vegetation and Wildlife Resources NCERT Solutions for Class 8 Social Science Geography Chapter 2 Land, Soil, Water, Natural Vegetation and Wildlife Resources Class 8 Geography Chapter 2 Land, Soil, Water, Natural Vegetation and Wildlife Resources Ncert Textbook Questions Solved Question … [Read more...]
{"url":"https://www.learncbse.in/tag/ncert-solutions/","timestamp":"2024-11-08T06:09:15Z","content_type":"text/html","content_length":"140764","record_id":"<urn:uuid:e42c1b63-20f9-4b48-a78e-c0af58c84d12>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00487.warc.gz"}
Modeling the Atom by Andrew Boyd Today, guest scientist Andrew Boyd models the atom. The University of Houston presents this series about the machines that make our civilization run, and the people whose ingenuity created them. I remember my introduction to the atom in elementary school. I learned that atoms were made of protons, neutrons, and electrons; that protons and neutrons were situated in the middle, or nucleus, of the atom, and that electrons orbited around the nucleus. I made atoms out of Styrofoam balls decorated with paint and glitter. Toothpicks held the nucleus together, and circular strands of copper wire kept the electrons in their orbits. My model of the atom looked a lot like a picture of our solar system. The earliest scientific model of the atom didn't have protons, neutrons, and electrons. Instead, atoms were modeled as small, indivisible bits of matter. The word atom derives from the Greek word tomos, which means "to cut," and the prefix a, which means "not." The atomos proposed by Democritus in the fifth century B.C. were literally "uncuttable" bits of matter. This early model of the atom lay dormant until the middle of the eighteenth century, when scientific experimentation gave new life to atomic theory. Atoms could actually help explain things people were observing, a vital prerequisite for anyone to care about a model. However, it was not until 1897 when J. J. Thomson discovered the electron that atoms became more than uncuttable bits of matter. Thomson's new model had electrons held together by a soupy goo of positive charge to balance the negative charge of the electrons, and it came to be known as the plum pudding model of the atom. In 1910, Ernest Rutherford discovered there was an awful lot of empty space in atoms, leading him to postulate yet a new model of the atom, in which electrons orbit protons and neutrons in the nucleus. In 1913, Neils Bohr changed the model again. Bohr's model looked a lot like Rutherford's, but only allowed the electrons to circle in very special orbits. This was the model we were taught as children. But the biggest change came with the work of Erwin Schrödinger, who did away with the notion that electrons actually move in orbits around the nucleus. With Schrödinger's model, we stopped thinking about how electrons move and resigned ourselves to thinking about where we'd expect to find them if we went looking. Orbits were replaced by "orbitals," a name that pays due respect to the history of atomic models, but is actually somewhat misleading. Orbitals have nothing to do with orbits, they're just mathematical statements about the probability of finding an electron in a particular place. In fact, in Schrödinger's world, and in the world of quantum mechanics more generally, the whole question of what it means for an electron to get from point A to point B becomes considerably more Schrödinger's model may not be as easy for us to picture as earlier models, but we use it because it explains many experimental observations that the earlier models simply couldn't. Today's model of the atom is typical of an underlying trend in science and engineering. More and more, we find transparent physical models yielding to mathematical abstraction. I'm just having trouble trying to explain our modern view of the atom to my children using Styrofoam, glitter, and copper wire. I'm Andy Boyd, at the University of Houston, where we're interested in the way inventive minds work. (Theme music) Dr. Andrew Boyd is Chief Scientist and Senior Vice President at PROS, a provider of provider of pricing and revenue optimization solutions. Dr. Boyd received his A.B. with Honors at Oberlin College with majors in Mathematics and Economics in 1981, and his Ph.D. in Operations Research from MIT in 1987. Prior to joining PROS, he enjoyed a successful ten year career as a university professor. His new book, The Future of Pricing: How Airline Ticket Pricing Has Inspired a Revolution, (New York: Palgrave-MacMillan), is to come out in October, 2007. R. E. Dickerson, H. B. Gray, and G. P. Haight, Chemical Principles. 2nd ed. (The Philippines: W. A. Benjamin, 1974). For more on the quantum atom and its history, see, C-l. Tien and J. H. Lienhard, Statistical Thermodynamics. (New York: Hemisphere Pub. Co., 1976): Chapters 4 through 7. (Schrödinger equation and diatomic molecule model from this source.) Model of a diatomic molecule suggesting different modes of energy storage, each of which contributes 1/2kT to the molecule's energy (k is Boltzman's constant and T is absolute temperature.) Modern helium atom model. The gray cloud suggests the probability density of the 1s electron. (Image courtesy of Wikipedia)
{"url":"https://engines.egr.uh.edu/episode/2237","timestamp":"2024-11-04T17:44:04Z","content_type":"text/html","content_length":"33168","record_id":"<urn:uuid:ebf7cc1e-e10b-42e0-8610-b5232d831fd1>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00545.warc.gz"}
Density and co-density of the solution set of an evolution inclusion with maximal monotone operators An evolution inclusion defined on a separable Hilbert space and containing a time-dependent maximal monotone operator and a perturbation is considered in the paper. The perturbation is given by the sum of two terms. The first term is a demicontinuous single-valued operator with a time-dependent domain. It is measurable along a continuous function valued in the domain of the maximal monotone operator and satisfies nonlinear growth conditions. The sum of this operator with the identity operator multiplied by a square integrable nonnegative function is a monotone operator. The second term is a measurable multivalued mapping with closed, nonconvex values satisfying conventional Lipschitz conditions and linear growth conditions. Along with this (original) inclusion we introduce an alternative (relaxed) inclusion by convexifying the original multivalued perturbation. We prove the existence of solutions for the original inclusion and establish the density (relaxation theorem) and co-density of the solution set of the original inclusion in the solution set of the relaxed inclusion. Also, we give necessary and sufficient conditions for the closedness of the solution set of the original inclusion in the case when the values of the perturbation are closed nonconvex sets. For the class of perturbations we consider, all our results are completely new. • Co-density • Density • Maximal monotone operator • Nonconvex-valued and convexified perturbations • Weak norm Dive into the research topics of 'Density and co-density of the solution set of an evolution inclusion with maximal monotone operators'. Together they form a unique fingerprint.
{"url":"https://scholar.xjtlu.edu.cn/en/publications/density-and-co-density-of-the-solution-set-of-an-evolution-inclus","timestamp":"2024-11-14T11:54:21Z","content_type":"text/html","content_length":"54740","record_id":"<urn:uuid:65e99faf-0ed9-4dc4-8965-c7de0fc97975>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00697.warc.gz"}
Sort a linked list of 0s, 1s, and 2s by changing links Last Updated on November 18, 2022 by Prepbytes In this article, we will learn to sort a linked list of 0s 1s and 2s. Sorting an array implies that the elements present in the array must be present in a particular order, whether in ascending order or in descending order. Let’s try to understand to sort linked list of 0 1 2 Problem Statement In this problem, we are given a linked list containing 0s, 1s, and 2s, and we are required to sort this linked list. Problem Statement Understanding to sort a linked list of 0s 1s and 2s Let’s learn programming languages online and try to understand to sort linked of 01 2 Suppose we are given a linked list 1 → 2 → 0 → 1 → 0 → NULL, now the problem is demanding that we have to sort the linked list such that our original linked list after sorting should look like: Final resultant linked list after sorting:- 0 → 0 → 1 → 1 → 2 → NULL Explanation: So basically we have to sort the linked list such that in the final sorted linked list all the nodes with value 0 come before the nodes with value 1 and all the nodes with value 1 comes before the nodes with value 2. Say if the input linked list is now in this case after sorting the input linked list our final linked list will look like: Final resultant linked list after sorting:- So now I think from the above examples it is clear what the problem is demanding to sort a linked list of 0 1 2. Let’s think how we can approach to sort a linked list of 0s 1s and 2s. Approach 1 to sort linked list of 0 1 2 As we can see in the problem our linked list only contains nodes with values 0, 1, and 2, so one simple solution will be to count the number of 0’s, 1’s, and 2’s. • After counting the number of 0’s, 1’s, and 2’s, fill the first P (where P is count of 0’s) nodes with 0, then next Q (where Q is count of 1’s) nodes with 1 and last R (where R is count of 2’s) nodes with 2. • After filling the nodes with 0’s, 1’s, and 2’s, our final linked list will be sorted, and we will get our desired result. But there is one problem, that this solution does not work when these values have associated data with them. • For example, If these three 0’s, 1’s, and 2’s represent three different colors and with these colors different types of objects have been associated and have to sort the objects based on colors. So now we will see how can we solve this problem and finally sort the list. Approach 2 to sort linked list of 0 1 2 The above problem could be solved by changing the links: • If we can change the links of nodes such that all the nodes having value of 0 gets together and forms a separate linked list containing all the nodes with value 0. Similarly, the nodes with values 1 and 2 also get together and form their separate linked list. • After separate linked lists for 0’s, 1’s, and 2’s have been formed: 1) We will make the head of the linked list containing 0’s the head of the final sorted linked list. 2) We will make the tail of the linked list of 0’s point to the head of the linked list of 1’s and the tail of the linked list of 1’s point to the head of the linked list of 2’s. 3) Also, we will make the tail of the linked list of 2’s point to NULL. • Finally, we can return our final sorted linked list by returning the head of a linked list of 0’s. Algorithm to sort a linked list of 0s 1s and 2s • Traverse through the list one by one. • Maintain three pointers designated ptr0, ptr1, and ptr2 that refer to the current ending nodes of linked lists with 0, 1, and 2 elements, respectively. • We append each explored Node to the end of its associated list: 1) Node with value 0 will be appended to the end of linked list of 0’s. 2) Node with value 1 will be appended to end of linked list of 1’s. 3) Node with value 2 will be appended to the end of linked lists of 2’s. • Finally, we join the three lists together. For joining the three lists together we will utilize three dummy pointers temp0, temp1, and temp2 that act as dummy headers for the three lists to avoid multiple null tests. • Finally, we will return the head of the linked list of 0’s. Dry Run to sort a linked list of 0s 1s and 2s Code Implementation to sort a linked list of 0 1 2 #include <stdio.h> struct Node { int data; struct Node* next; struct Node* newNode(int data); // Sort a linked list of 0s, 1s and 2s // by changing pointers. struct Node* sortList(struct Node* head) if (!head || !(head->next)) return head; // Create three dummy nodes to point to // beginning of three linked lists. These // dummy nodes are created to avoid many // null checks. struct Node* zeroD = newNode(0); struct Node* oneD = newNode(0); struct Node* twoD = newNode(0); // Initialize current pointers for three // lists and whole list. struct Node* zero = zeroD, *one = oneD, *two = twoD; // Traverse list struct Node* curr = head; while (curr) { if (curr->data == 0) { zero->next = curr; zero = zero->next; curr = curr->next; } else if (curr->data == 1) { one->next = curr; one = one->next; curr = curr->next; } else { two->next = curr; two = two->next; curr = curr->next; // Attach three lists zero->next = (oneD->next) ? (oneD->next) : (twoD->next); one->next = twoD->next; two->next = NULL; // Updated head head = zeroD->next; // Delete dummy nodes return head; // Function to create and return a node struct Node* newNode(int data) // allocating space struct Node* newNode = (struct Node*)malloc(sizeof(struct Node)); // inserting the required data newNode->data = data; newNode->next = NULL; /* Function to print linked list */ void printList(struct Node* node) while (node != NULL) { printf("%d ", node->data); node = node->next; /* Driver program to test above function*/ int main(void) // Creating the list 1->2->4->5 struct Node* head = newNode(1); head->next = newNode(2); head->next->next = newNode(0); head->next->next->next = newNode(1); printf("Linked List Before Sorting\n"); head = sortList(head); printf("Linked List After Sorting\n"); return 0; #include <bits/stdc++.h> using namespace std; /* node definition */ struct Node { int val; struct Node* next; Node* newNode(int val); // Sorting 0s, 1s and 2s in a linked list Node* sortingLL(Node* head) if (!head || !(head->next)) return head; // Creating three dummy nodes inorder to point // to the starting of the 3 lists. // Their motive is to help in avoiding null checks. Node* temp0 = newNode(0); Node* temp1 = newNode(0); Node* temp2 = newNode(0); // Initialize current pointers Node* ptr0 = temp0, *ptr1 = temp1, *ptr2 = temp2; // Traversing through the list Node* current = head; while (current) { if (current->val == 0) { ptr0->next = current; ptr0 = ptr0->next; current = current->next; } else if (current->val == 1) { ptr1->next = current; ptr1 = ptr1->next; current = current->next; } else { ptr2->next = current; ptr2 = ptr2->next; current = current->next; // connect the 2 linked lists. ptr0->next = (temp1->next) ? (temp1->next) : (temp2->next); ptr1->next = temp2->next; ptr2->next = NULL; // Updating the head head = temp0->next; // Deletion of dummy nodes delete temp0; delete temp1; delete temp2; return head; // Creating and returning a node. Node* newNode(int val) Node* newNode = new Node; newNode->val = val; newNode->next = NULL; // Function to display the Linked List void displayList(struct Node* node) while (node != NULL) { cout<<node->val<<” “; node = node->next; // Driver function int main() // Creation of list Node* head = newNode(1); head->next = newNode(2); head->next->next = newNode(0); head->next->next->next = newNode(1); head->next->next->next->next = newNode(0); cout<<"Original Linked List:”<<endl; head = sortingLL(head); cout<<"Sorted Linked List:”<<endl; return 0; class Node int data; Node next; Node(int data) class SortIt public static Node sortList(Node head) if(head==null || head.next==null) return head; Node zeroD = new Node(0); Node oneD = new Node(0); Node twoD = new Node(0); Node zero = zeroD, one = oneD, two = twoD; Node curr = head; while (curr!=null) if (curr.data == 0) zero.next = curr; zero = zero.next; curr = curr.next; else if (curr.data == 1) one.next = curr; one = one.next; curr = curr.next; two.next = curr; two = two.next; curr = curr.next; zero.next = (oneD.next!=null)? (oneD.next) : (twoD.next); one.next = twoD.next; two.next = null; head = zeroD.next; return head; // function to create and return a node public static Node newNode(int data) // allocating space Node newNode = new Node(data); newNode.next = null; return newNode; /* Function to print linked list */ public static void printList(Node node) while (node != null) System.out.print(node.data+" "); node = node.next; public static void main(String args[]) Node head = new Node(1); head.next = new Node(2); head.next.next = new Node(0); head.next.next.next = new Node(1); System.out.println("Linked List Before Sorting"); head = sortList(head); System.out.println("Linked List After Sorting"); # Link list node class Node: def __init__(self, data): self.data = data self.next = None # Sort a linked list of 0s, 1s and 2s # by changing poers. def sortList(head): if (head == None or head.next == None): return head # Create three dummy nodes to point to # beginning of three linked lists. # These dummy nodes are created to # avoid many None checks. zeroD = Node(0) oneD = Node(0) twoD = Node(0) # Initialize current pointers for three # lists and whole list. zero = zeroD one = oneD two = twoD # Traverse list curr = head while (curr): if (curr.data == 0): zero.next = curr zero = zero.next curr = curr.next elif(curr.data == 1): one.next = curr one = one.next curr = curr.next two.next = curr two = two.next curr = curr.next # Attach three lists zero.next = (oneD.next) if (oneD.next ) \ else (twoD.next) one.next = twoD.next two.next = None # Updated head head = zeroD.next # Delete dummy nodes return head # function to create and return a node def newNode(data): newNode = Node(data) newNode.data = data newNode.next = None return newNode # Function to print linked list def printList(node): while (node != None): print(node.data, end = " ") node = node.next if __name__=='__main__': # Creating the list 1.2.4.5 head = newNode(1) head.next = newNode(2) head.next.next = newNode(0) head.next.next.next = newNode(1) head.next.next.next.next = newNode(0) print("Linked List Before Sorting") head = sortList(head) print("\nLinked List After Sorting") Original Linked List: Sorted Linked List: Time Complexity: O(n), where n is the number of nodes in the linked list. In this article, we have explained how to sort a linked list of 0s 1s and 2s. Different approaches have been explained to sort linked list of 0 1 2 with pictorial dry run and code implementation with all time and space complexities. To explore more on the linked list you can follow this link Linked List, which is curated by our expert mentors at PrepBytes Footer FAQs to sort a linked list of 0s 1s and 2s 1. Can we reverse a linked list in less than O(N)? It is not possible to reverse a simple singly linked list in less than O(n). 2. What is a linked list? A linked list is a sequence of data structures, which are connected together via links and each node points to the next node. 3. Is the linked list allow null values? The linked list allows any number of null values while LinkedHashSet also allows a maximum of one null element. Leave a Reply Cancel reply
{"url":"https://www.prepbytes.com/blog/linked-list/sort-a-linked-list-of-0s-1s-and-2s-by-changing-links/","timestamp":"2024-11-14T17:38:13Z","content_type":"text/html","content_length":"159071","record_id":"<urn:uuid:9be84315-7b77-4650-ace9-a284f61b19c4>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00352.warc.gz"}
Single Sample T-Test Calculator A single sample t-test (or one sample t-test) is used to compare the mean of a single sample of scores to a known or hypothetical population mean. So, for example, it could be used to determine whether the mean diastolic blood pressure of a particular group differs from 85, a value determined by a previous study. • The data is normally distributed • Scale of measurement should be interval or ratio • A randomized sample from a defined population Null Hypothesis H[0]: M - μ = 0, where M is the sample mean and μ is the population or hypothesized mean. As above, the null hypothesis is that there is no difference between the sample mean and the known or hypothesized population mean.
{"url":"https://www.socscistatistics.com/tests/tsinglesample/default.aspx","timestamp":"2024-11-11T07:53:09Z","content_type":"text/html","content_length":"13181","record_id":"<urn:uuid:bd28aa04-f270-4bec-aa1d-2db919bb8c9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00158.warc.gz"}
Minimalist Coding Guidelines This article presents a set of language agnostic coding guidelines. Code that is produced using these guidelines will be more maintainable than code written without using these guidelines. This article presents a set of language agnostic coding guidelines. I have no doubt that some developers will take issue with one or more of these guidelines. But each has a rationale that may help lessen the angst. However, if adopted in toto, I assure that code that is produced will be much more maintainable than code written without using these guidelines. I know. I know. Most programmers view coding standards and guidelines as intrusions upon their creative and artistic talents. But consider for a moment that most probably these programmers are developing software for someone else (e.g., a company, a client, etc.) as a work for hire. If that is the case, the software that is developed is not an artistic work in the sense that the programmer can copyright it. Furthermore, the programmer does not own the software. And lastly, the programmer is under pressure (aka a fast-paced environment) wherein thoughts of maintainability are put aside for another day that usually never comes. Author's Background and Perspective I believe that most programmers try to produce maintainable code. But if they are not using coding standards, they generally fail in their attempt. Maybe it's because today's programmers enter the work force differently than in the past. As recently as the late 1990's, entry level programmers were assigned to maintenance tasks. Seldom were they given responsibilities for the development or design of original software. The result? Programmers learned from the mistakes of others - most importantly as how not to code software. And in most instances, these maintenance programmers vowed that they would never do to others as others had done to them. Software that is not being developed is being maintained. Software is maintained because it contains an error. I define an error as either a failure on the part of a programmer to correctly implement a requirement or a failure on the part of the architect or designer to correctly state a requirement. In programmer parlance, the former is simply a "bug"; the latter is euphemistically called an So what do coding standards do for a programmer? Note that there must be some payoff or else there is really no reason for a programmer to spend the time to comply. The payoff is simply readability. And readability increases maintainability. Regardless of whether you are the original developer or a programmer assigned to repair or enhance software, you must understand what the code does before you can modify it. Many years ago, as a graduate student, I wrote a rather complex piece of software that would fit a high-speed rail transportation guideway into a right of way composed of multiple parcels. The path of the guideway had to minimize lateral acceleration. The problem, illustrated in the following figure, is that it is normally impossible to purchase parcels that allow a straight-line guideway (the dotted line). Rather we must resort to purchasing a number of parcels that allow a guideway to connect the two ends (the solid line). I was pleased with the software; it did what it was supposed to do. A few years later, a problem came up that could be solved using this software with minor modifications. I reopened the software and found to my horror that it was unusable. At the time I was coding the original software I had no consideration for style, especially style that aided following programmers (including me) to understand the solution. The result was a truly unnecessary "reinvention of the wheel." About the same time that I found myself frustrated by my own coding style, a book named The Elements of Programming Style was published. The authors (Brian Kernighan and P. J. Plauger), both respected computer scientists, wrote a set of guidelines that, had I followed them, would have allowed me to recover my earlier code. To me, the greatest lesson in the book was: "write software as if you were writing for someone else - for in six months you will be someone else!" What does it mean to write for someone else? I suggest that it means to use a clear writing style and consistent formatting scheme. That's what these guidelines are all about. The Guidelines What follows are coding guidelines that are, hopefully, language agnostic. Specific examples may use C or C# as their exemplar languages. But the guidelines themselves are not tied to either C or C#. I wish to acknowledge the contribution of Derek M. Jones of Knowledge Software whose web pages located at These pages provided the basis for much of the rationale for these guidelines. The Prime Objective Consistency is the most important guideline in the clear writing of computer programs. If the original programmer mixes styles, the following maintenance programmer is more easily distracted from repair. This raises the cost of maintenance. Identifier Spelling I don't think that there is any disagreement that the identifiers should be self-describing. The major disagreements appear to come when identifier spelling is the issue. Camel-case. Pascal-case. Uppercase. Lowercase. Underscores. Hungarian notation. All are methods that appeal to one or another segment. But it also appears that the evangelists for one method over another are merely espousing some personal preference and not stating some fact but stating an opinion. I believe that this is inconsistent with a rational approach to the issue. • Create identifiers from complete English Words. • Limit the use of abbreviations to an authorized set. Pronounceability is an easy-to-apply method of gauging the extent to which a spelling matches the characteristics of character sequences found in a developer's native language. Given a choice, character sequences that are easy to pronounce are preferred to those that are difficult to pronounce. • Separate English words with underscores. This form of separation distinguishes programmer defined identifiers from system defined identifiers that usually represent entry points into, say, APIs. • Use lowercase letters for variable identifiers. • Use title case (first-letter capitalization) for enum, struct, class, interface, delegate, namespace, etc. identifiers. • Use uppercase letters for enum value identifiers and const variable identifiers. • Use uppercase letters for abbreviations. Written English separates words with white space. When an identifier spelling is composed of several distinct subcomponents, using an underscore character between the subcomponents is the closest available approximation to a reader’s experience with prose (i.e., separation by spaces). Some developers capitalize the first letter of each subcomponent. Such usage creates character sequences whose visual appearances are unlike those on which readers have been trained. For this reason, additional effort will be needed to process them. In some cases, the use of one or more additional characters may increase the effort needed to comprehend constructs containing the identifier (perhaps because of line breaks needed to organize the visible source). • Choose identifiers that are self-contained and meaningful. There are benefits to readers of identifier spellings that evoke semantic associations. However, reliably evoking the desired semantic associations in different readers is very difficult to achieve. Given a choice, an identifier spelling that evokes, in many people, semantic associations related to what the identifier denotes is preferred to spellings that evoke them in fewer people or commonly evokes semantic associations unrelated to what the identifier denotes. • Distinguish identifiers by their initial letters. The start of English words is more significant than the other parts for a number of reasons. The mental lexicon appears to store words by their beginnings and spoken English appears to be optimized for recognizing words from their beginnings. This suggests that it is better to have differences in identifier spelling at the beginning (e.g., cat, bat, mat, and rat) than at the end (e.g., cat, cab, can, and cad). In any context, a word should have a single meaning. For instance, it is not necessary to know the meaning (after preprocessing) of a, b and c, to comprehend a=b+c. This statement is not necessarily true in computer languages that support overloading. Indentation (See Visual Studio, below) • Use a consistent indentation scheme that enhances edge detection. The visual receiving area of the brain responds selectively to the orientation of edges. In one theory of perceptual organization, edge detection is the first operation performed on the signal that appears as input to the human visual system. Source code is read from left to right, top to bottom. It is common practice to precede the first non-whitespace character on a sequence of lines to start at the same horizontal position. This usage has been found to reduce the effort needed to visually process lines of code that share something in common; for instance, statement indentation is usually used to indicate block nesting. Edge detection would appear to be an operation that people can perform with no apparent effort. An edge can also be used to speed up the search for an item if it occurs along an edge. In the following two sequences of declarations, less effort is required to find a particular identifier in the second block of declarations. In the first block, the reader first has to scan a sequence of tokens to locate the identifier being declared. In the other block, the locations of the identifiers are readily apparent. Block Declaration 1 private List < Color > known_colors; private Panel [ ] panels = null; private Sort_By sort_by = Sort_By.HSL; private ToolTip tooltip = new ToolTip ( ); 2 private List < Color > known_colors; private Panel [ ] panels = null; private Sort_By sort_by = Sort_By.HSL; private ToolTip tooltip = new ToolTip ( ); Edge detection also improves comprehension in reading method declarations. In the following two declarations, less effort is required to understand the declaration in the second declaration. In the first, the reader first has to scan a sequence of tokens to locate an identifier. In the second, the identifiers are readily apparent. Block Declaration 1 [ DllImport ( "gdi32.dll", EntryPoint = "BitBlt" ) ] public static extern bool BitBlt ( IntPtr hdcDest, int nXDest, int nYDest, int nWidth, int nHeight, IntPtr hdcSrc, int nXSrc, int nYSrc, int dwRop ); 2 [ DllImport ( "gdi32.dll", EntryPoint = "BitBlt" ) ] public static extern bool BitBlt ( IntPtr hdcDest, int nXDest, int nYDest, int nWidth, int nHeight, IntPtr hdcSrc, int nXSrc, int nYSrc, int dwRop ); • Use a consistent scheme for statement indentation. There are two common statement indentation schemes. They take the following forms: if ( x ) if ( x ) { { if ( y ) if ( y ) { { F ( ) ; F ( ); } } else else { { G ( ) ; G ( ); } } } } One of these two forms must be chosen to be used to indent statement bodies. There is no preferred way to indent statement bodies. Rather, the developer's preference is the determining factor. Again, the single rule is simply to be consistent. White Space (See Visual Studio, below) • Use only the space character as white space. One of the ISO standards defines white space as the space character, the horizontal tab character, the vertical tab character, and the form feed character and suggests that they may be used to separate tokens. Using any of these characters, other than the space character, may cause the loss of a consistent indentation scheme. There was a historic reason for using horizontal tabs in source code - the limited amount of available rotating mass storage. That reason no longer exists. Most computers are attached to giga- and even tetra-byte mass storage devices. As a result, source code no longer needs to conserve disk space. Additionally, using spaces has a significant advantage. Indentation using spaces creates a consistent indentation scheme, no matter what the display device may be (e.g., monitor, printed page, etc.). For example, say that tabs are defined every eight character positions (the normal tab setting for a laser printer). A maintenance programmer makes a modification but uses spaces rather than tabs for indentation. It is quite possible that the new or modified lines will have an inconsistent indentation with respect to the rest of the code. Another difficulty with tabs is how the display device interprets them. Let's say that a programmer sets the source code editor tabs to four character positions. Coding proceeds as normal. The programmer prints copies of the source code for a design review. Most laser printers define tabs every eight character positions. The result of printing the code is a significant rightward indentation, to the point that the programmer is faced with two options: print the code in landscape mode or reformat the source code. Neither is a particularly welcome task. What's worse, the problem could have been avoided by using spaces for indentation. Source Code Line Length • Limit source code lines to 70 characters. The "standard" letter size paper measures 8-1/2 x 11 inches and "standard" page margins are 1 inch on a side. A recommended minimum font size is 11 points. When source code is printed, the printed page is normally rendered in Courier New. When these provisions are met, the maximum width of an unbroken printed line is 70 Courier New characters. On a monitor, requiring following programmers to scroll to the right to read a source code line is as arrogant as requiring a web page reader to scroll to the right. We don't do this to our web site visitors. Why do it to fellow programmers? Breaking Lines • When a source code line must be broken, to meet the preceding Source Code Line Length guideline, break the line after one of the following operators and punctuators: { [ ( . , : ; + - * / % & | ^ ! ~ = < > ? ?? :: ++ -- && || -> == != <= >= += -= *= /= %= &= |= ^= << <<= • or after one of the keywords that indicate that the statement is not complete: as in is new When source is being read, the reader gains a valuable visual clue that the line has been broken if one of these operators, punctuators, or keywords is the last token on the line. Token Separation (See Visual Studio, below) • Separate following tokens from a comma by a space character. • Separate binary operators from their operands by a space. • Separate all other tokens from each other by a space character. Code Folding Rationale (used with permission) What I only realized afterwards was that code folding was encouraging me to write bigger and bigger methods, and not bother to break them up into smaller bite-sized methods. The result was that I too often ended up with "write-once-maintain-never" programs with big monolithic methods. Code folding has begun to reappear in modern IDEs. This is odd, because the problems that code folding originally addressed have since been eradicated in other, much neater, less transient ways - namely, object-oriented design. If you're staring at your program and can't see the wood for the trees, code folding is the wrong answer. The answer is to structure your program better; encapsulate the details into different classes, use interfaces, small methods, and so on. The other thing about code folding is that you end up wasting a lot of time folding methods, unfolding them, when this isn't really getting you anywhere. It feels like you're doing work because you're actively clicking away; but you're not actually making any progress. It's like trying to rearrange the contents of a cupboard by constantly opening and closing the cupboard doors. Visual Studio Earlier, in these guidelines, I've referred to this paragraph. Visual Studio provides assistance in meeting some of these guidelines. The white space and indentation guidelines can be automated within the Microsoft Visual Studio IDE. Under Tools→Options→Text Editor are a myriad of settings that control the formatting of source code. I strongly recommend that programmers take time to review these settings. One of the advantages is simply if you do not like the formatting of code given to you, you can simple cut all (Ctrl-A; Ctrl-X) and paste (Ctrl-V) the code. If your text editor settings are what you want, then Visual Studio IDE reformats the code to your liking (this can also be accomplished from the Edit→Advanced menu item). As said earlier, there is no "right way" to format computer programs, no matter what the evangelists say. The only important rule is consistency. That rule, if violated, will result in unreadable code. And unreadable code is unmaintainable code. I have been programming computers for decades. As I indicated earlier, developing usable guidelines was a defense against myself mechanism. But something has happened a number of times that increases my confidence in these guidelines. A few years after I left one project, written in Ada, I received a call from the programmer who was maintaining the driver I had written. He had recognized my coding signature and called to say "thank you." Although the driver was many hundreds of lines long, he was able to maintain the code easily. • 08/04/2011 - Original Article
{"url":"https://codeproject.global.ssl.fastly.net/Articles/236430/Minimalist-Coding-Guidelines?fid=1644061&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=3982884&fr=1","timestamp":"2024-11-13T13:06:04Z","content_type":"text/html","content_length":"46960","record_id":"<urn:uuid:8fc780c8-b24f-4273-9e1f-608d57f71c8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00229.warc.gz"}
The Bunny Kit Top Posters In This Topic That camo makes me cry :/ Someone probably put a lot of work into it, but it's... not camo. It wouldn't work anywhere. There's also no such thing as a "Water" type camoflauge pattern. that was some good work :(( hi just trying out the spoiler ime new ) that must of taken ages maby days That camo makes me cry :( Someone probably put a lot of work into it, but it's... not camo. It wouldn't work anywhere. There's also no such thing as a "Water" type camoflauge pattern. HAVE you played metal gear solid 3? *Yuck* Fast food place like macdonalds would only atract chavs. You could do a airhostes or some job people That camo makes me cry :( Someone probably put a lot of work into it, but it's... not camo. It wouldn't work anywhere. There's also no such thing as a "Water" type camoflauge pattern. It's MGS camo. From Snake Eater. If you read the posts were he posts them, he actually says that. Besides, real men wear this camo JNr J, do you actually have a ps2/ps3 turned on WHILE you do those Camos!? You play MGS as much as I do and you get them drilled into your head. Meat, do you know that alot of those camoflauge patterns were used alot in history? Maybe you should do some research before you critise something? Edited by Razor Sharpshooter On first page the buddy kit has came up with a white box with a red cross in. Also to go with the 9mm i may do a sniper rifle. kingy, try re-posting the link. It won't work in a new window either, and i don't think it's imageshack Edited by Stringaling works fine for me, ill post it again. EDIT: Image shack isnt working for me. Edited by Lord Kingy heres a boyscout outfit On the shorts it looks like you have ended them where the boots start. PLEASE make runescape stuff, i cant add stuff like the scout outfit cos there is no section to put it in. kingy, put the list of all the runescape stuff people can make back on again. If u don't have a copy,i still have the original
{"url":"https://runescape.salmoneus.net/forums/topic/132765-the-bunny-kit/page/5/","timestamp":"2024-11-03T21:41:00Z","content_type":"text/html","content_length":"417555","record_id":"<urn:uuid:afd92bb4-4597-4e6f-afaf-a1d507bc0ac9>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00307.warc.gz"}
A long time at the till Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem? This problem involves considering, comparing and assessing different ways to solve a very difficult 'background' problem. There are three different parts to this unusual and thought-provoking task. Background problem Note: solving the background problem is very involved and not the main focus of this task! A mathematician goes into a supermarket and buys four items. It has been a while since she has used a calculator and she multiplies the cost (in pounds, using the decimal point for the pence) instead of adding them. At the checkout she says, "So that's £7.11" and the checkout man, correctly adding the items, agrees. The mathematician very, very slowly puts the items into her bag whilst thinking and tapping away on her calculator. She eventually says "I believe that the prices of four items with this property is Spend a few minutes trying this problem yourself to get a feel for its mathematical structure. Please note: Although it involves only the basic properties of numbers, the background problem is very difficult and time-consuming to solve directly. Now there's a challenge .... Main problem Read carefully the two solutions provided in the hints tab. How do your attempts at the first part compare to, or differ from, these two solutions? Which of the two solutions do you prefer? Why? Follow up task If you were now to be given related problems with £7.11 replaced by £7.12 or £7.13 or £7.14 how would you now choose to proceed? Can you assess in advance which of these problems will probably be harder or easier? Can you efficiently solve any of these problems with the benefit of hindsight? This problem has been adapted from the book "Sums for Smart Kids" by Laurie Buxton, published by BEAM Education. This book is out of print but can still be found on Amazon. Getting Started Here are some hints: Prime factorisation will probably be useful, as will working in pence rather than pounds and pence. There are lots of different possibilities to consider, so you will need to be clear with your recording system. To get a feel for the complexity, it is very unlikely that you will be able to solve the first part of the problem in under two hours. View full solutions Teachers' Resources Why do this problem? This problem provides an introduction to advanced mathematical behaviour which might not typically be encountered until university. The content level is secondary, but the thinking is sophisticated and will benefit the mathematical development of school-aged mathematicians. It will be of particular interest to students who want to learn to think like mathematicians and can be used at any point in the curriculum. It will need to be used with students who are already used to engaging with sustained mathematical tasks. Essentially the task involves carefully reading and then reflecting upon the merits of two very different solutions to a 'difficult-to-solve-but-easy-to-understand' problem. This is of value because mathematicians don't simply stop once an answer is found; reflecting on the method of solution is a key part of advanced mathematical activity. It will help train school students in the art of assessing their own solutions, which will inevitably lead to better performance in exams. Note: The Full Solutions are to be found here - just click on the 'View full solution' link. Possible approaches This task ideally requires at least two students to work together so that ideas arising can be discussed. We suggest two different ways of using the problem: 1. Filling time for early-finishers/mathematics club Print out a few copies of the problem and solutions to have to hand. Give them out to groups of keen early-finishers to consider in 'spare' lesson time over the course of a week. Give them space to discuss the two solutions, help each other to understand the subtleties and then to discuss the relative merits of the solutions. The problem will automatically generate discussion amongst students, but you might like them to 'report' back to you or others with things that they have discovered or explored. 2. Whole-class activity Set the background task itself as a homework problem with a fixed time-limit, stressing that only a partial solution is expected. Students should come prepared to report on the ways that they tried to solve the problem and the things that they have discovered about the problem. Back in the lesson, group students into pairs or fours. Hand out printed copies of the solutions. Give the groups half an hour or so to try to understand the solutions with the explicit task of writing down 5 short bullet points which explain the key aspects of the solution method. Some students will prefer to discuss solutions together as they work through them whereas others will prefer to work alone. Both approaches are fine, so you might wish to group students according to their preferred style. Next spend 10 minutes sharing the different lists of bullet points to create a 'shared' list for each problem on the board. Spend the remanining time back in groups considering the suggested variations on the background problem. Note that some of these are significantly easier problems to solve because of their simplified prime factorisation. As a focus for the activity, set the explicit task: "which of the variation problems would you choose to solve, and why?" Key questions What are the 'key steps' in the solutions, and what are the 'details'. Can you follow the overall 'strategy' of the two solutions? Which of the two solutions seems more 'reusable' for similar variants on the background problem? Which of the two solutions do you prefer? Why? Of the suggested variants, which seems likely to be the easiest to analyse? Why would you think that? Possible extension A simple-to-set extension is to ask students to solve one or more of the suggested variations on the background problems Another more sophisticated extension is to ask: what would make a variation of the background problem difficult or easy to solve? Can you create a much simpler problem which has a unique solution? Possible support Recall that we only recommend that you use this task with students already used to sustained mathematical engagement with tasks. To help students to get started with thinking about the background task, suggest that they work in pence and convert the two conditions into equations involving whole numbers. Stress that the sum will be $711$ but the product will be $711,000,000$ due to multiplying by $100$ four times. Suggest also that prime factorisation will be useful and a clear recording system will be necessary to keep track of calculations. In assessing the solutions encourage students to go through the solutions carefully line by line and to ask for clarification when there is a line that they do not understand.
{"url":"https://nrich.maths.org/problems/long-time-till","timestamp":"2024-11-12T03:49:56Z","content_type":"text/html","content_length":"65187","record_id":"<urn:uuid:48d0512b-efbc-4afb-b4a7-9128320222e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00525.warc.gz"}
1 Home team wins 2 Away team wins Total Points Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Home team total points Under Home team scores less points than given limit Over Home team scores more points than given limit Away team total points Under Away team scores less points than given limit Over Away team scores more points than given limit Even / Odd Even Even Odd Odd H 1 Home team wins with handicap H 2 Away team wins with handicap Yes Will be extratimes No Will be no extratimes More points I > More points in first half (void if equall) II > More points in second half (void if equall) Halftime / Fulltime 1-1 Home team won at the half time and at the end of the match 1-2 Home team won at half time, away team won at the end of the match X-1 Draw at the half time and home team won X-2 Draw at the half time, away team won 2-1 Away team won at the half time ,home team won at the end of the match 2-2 Away team won at half time and won at the end Total Points First Half Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Total points first quarter Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Total points second quarter Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Total points third quarter Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Total points fourth quarter Under Less number of points than given limit (Under) Over More number of points than given limit (Over) First Half result I 1 Home team wins first half I X Draw on halftime I 2 Away team wins first half First quarter result I 1 Home team wins first quarter I X Draw first quarter I 2 Away team wins first quarter Second quarter result II 1 Home team wins second quarter II X Draw second quarter II 2 Away team wins second quarter Third quarter result III 1 Home team wins third quarter III X Draw third quarter III 2 Away team wins third quarter Fourth quarter result IV 1 Home team wins fourth quarter IV X Draw fourth quarter IV 2 Away team wins fourth quarter Handicap First Half HI 1 Home team wins first half with handicap HI 2 Away team wins first half with handicap Handicap First quarter H 1 Home team wins first quarter with handicap H 2 Away team wins first quarter with handicap Handicap second quarter H 1 Home team wins second quarter with handicap H 2 Away team wins second quarter with handicap Handicap third quarter H 1 Home team wins third quarter with handicap H 2 Away team wins third quarter with handicap Handicap fourth quarter H 1 Home team wins fourth quarter with handicap H 2 Away team wins fourth quarter with handicap Home team points first half Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Away team points first half Under Less number of points than given limit (Under) Over More number of points than given limit (Over) Double bet 1&-P Home team win and under points 1&+P Home team win and over points 2&-P Away team win and under points 2&+P Away team win and under points Points diference 1-3 Less then 4 points difference 1-5 Less then 6 points difference 1-9 Less then 10 points difference 1-12 Less then 13 points difference 3-7 Between 3 and 7 points difference 6-11 Between 6 and 11 points difference 10+ More then 9 points difference 12+ More then 11 points difference 15+ More then 14 points difference 20+ More then 19 points difference
{"url":"https://help.meridianbet.ke/en/category/2618/page/26134","timestamp":"2024-11-05T22:13:19Z","content_type":"text/html","content_length":"32344","record_id":"<urn:uuid:74345501-4961-4841-a5f4-19fa9901b5ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00637.warc.gz"}
Along with OpenFOAM, the SU2 (Stanford University Unstructured) code is one of the more popular academic research and open-source Computational Fluid Dynamics (CFD) solvers. SU2 was initially developed for compressible flows and aerodynamic shape optimization but has since been extended to handle a wide range of flow problems [1]. The FEATool Multiphysics distribution includes the SU2 CFD solver with easy and convenient GUI and CLI interfaces allowing it to be used for simulation of laminar, turbulent, incompressible, and compressible types of flow problems. Basic Use The FEATool-SU2 external solver integration supports single physics models created with either the incompressible Navier-Stokes Equations or compressible inviscid Euler Equations. To use SU2 for a model with one of these physics modes, press the Solve Mode, instead of the default solve button. This opens the SU2 solver settings and control dialog box. Note that SU2 currently does not support models with multiple subdomains, or non-constant material parameters such as for example temperature dependent density and viscosity, the built-in and FEniCS multiphysics solvers can be used for these types of problems instead. Control Panel The SU2 solver settings dialog box and control panel allows one to automatically use SU2 to solve CFD problems. In the lower control button panel the Solve button will start the automatic solution process with the chosen settings. This means that the following steps are performed in sequence 1. Export - converts and exports the defined FEATool model and mesh to compatible SU2 mesh and configuration files 2. Solve - performs a system/subprocess call to the selected SU2 solver and starts a monitoring process 3. Import - interpolates and imports the computed solution back into FEATool for postprocessing and visualization While the solution process is running the Stop button will halt/pause the solver and plot the current solution state, while the Cancel button terminates the solution process and discards the current solution (note that it can take some time for the solver register a halt event and to stop). If the Close automatically checkbox is marked, the SU2 control panel will be automatically closed after the solution process has finished, and FEATool will switch to Postprocessing Mode. During the solution process one can also switch between the Log and Convergence tabs to see and monitor the solver output log and convergence plots in real-time. In the Convergence tab the error norm for the solution variables such as velocities, pressure, and turbulence quantities are plotted after each iteration and time step. Export allows for exporting the SU2 configuration .cfg and mesh .su2 files for external manual processing and editing. Solver Settings The following solver parameters and settings can be modified and set through the SU2 dialog box and control panel. Solver Type Firstly, the Time discretization scheme and main solver type can be selected according to the given problem in the drop-down box. The following time schemes are available • Steady State • Time Stepping • Dual Time Stepping (1st order) • Dual Time Stepping (2nd order) Initial Condition Initial conditions can be specified as either constant or subdomain expressions for the solution variables using the Expression dialog box (equivalent to specifying the corresponding fields in the Subdomain Settings dialog box). Alternatively, if a previously computed Solution exists it can be used instead. Simulation Settings • Iterations - specifies the maximum number of iterations (or time steps for time dependent simulations). • Time step - specifies the time step size. • End time - prescribes maximum time of the simulation. • Stopping criteria - specifies the stopping criteria for steady state simulations (log10). • Discretization - select finite volume (FVM) discretization scheme for convective fluxes. • Number of processors - selects the number of concurrent processes when running computations in parallel (defaults to the number of CPU cores/2) Turbulence Settings The Turbulence model drop-down box allow for selecting between the Spalart-Allmaras one-equation, k-Omega (SST) two-equation RANS turbulence models, as well as the default Laminar flow model. Note that SU2 currently do not include nor support wall functions or prescribing turbulence levels, if this is required the OpenFOAM solver can be used instead. Command Line Use The su2 function can be used instead of solvestat and solvetime to solve CFD problems with SU2 on the MATLAB command line (CLI). The following is an example of laminar steady flow in a channel solved with SU2 fea.sdim = {'x','y'}; fea.grid = rectgrid(50,10,[0,2.5;0,0.5]); fea = addphys(fea,@navierstokes); fea.phys.ns.eqn.coef{1,end} = { 1 }; fea.phys.ns.eqn.coef{2,end} = { 1e-3 }; fea.phys.ns.bdr.sel(4) = 2; fea.phys.ns.bdr.coef{2,end}{1,4} = '2/3*0.3'; fea.phys.ns.bdr.sel(2) = 4; fea = parsephys(fea); fea = parseprob(fea); % Alternative to calling: fea.sol.u = solvestat( fea ); fea.sol.u = su2( fea ); The model parameters used here are taken from the ex_navierstokes1 example script model. The m-script examples listed in the SU2 tutorials section similarly allow for using the SU2 solver instead of the default solver. Furthermore, the su2 function can also be embedded in user-defined custom m-scripts, which can use all other MATLAB functions and toolboxes. Advanced Use The SU2 solver is capable of performing large scale parallel simulations on HPC clusters, and although technically possible to use FEATool Multiphysics to do this, for memory and stability reasons it is not advised to do this from within MATLAB. For larger simulations it is recommended to export CFD models with the su2 export command or Export button in the SU2 Settings dialog box. This will generate SU2 configuration .cfg and mesh .su2 files. Then one can manually launch the SU2 solver the system command line, and import the solution back into the FEATool GUI when the solution has finished. The majority of the fluid dynamics tutorials available in the File > Model Examples and Tutorials... > Fluid Dynamics menu also allow for using SU2 instead of the default CFD solver. The following m-script models, found in the in the examples directory of the FEATool installation folder, feature a 'solver', su2' input parameter which can be used to directly enable the SU2 solver The SU2 solver binaries are included with the FEATool multiphysics distribution. However, for parallel computations one must install the Message Passing Interface (MPI) separately. It is recommended to use Microsoft MPI for Windows systems, and MPICH for Linux and MacOS systems. Further Information Further information about SU2 and its usage can be found on the official SU2 homepage and SU2 documentation. [1] SU2: An Open-Source Suite for Multiphysics Simulation and Design, AIAA Journal, 54(3):828-846, 2016. [2] SU2 Official Source Code Repository, GitHub, 2020.
{"url":"https://www.featool.com/doc/su2","timestamp":"2024-11-05T08:56:34Z","content_type":"application/xhtml+xml","content_length":"17218","record_id":"<urn:uuid:6602d47b-23b5-4ac0-8ec6-29bdd27b734e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00684.warc.gz"}
IBPS Clerk Pre Dec. 9 analysis - Daily GK & Current Affair IBPS Clerk Pre Dec. 9 analysisआईबीपीएस क्लर्क प्री विश्लेषण शिफ्ट 1 Discuss Dec. 9 IBPS Clerk Pre exam: 346, 345,337,310,246 ? 7.5,9.13.5.27.67.5 ? 4,2.5,3.5,9,40 ? 4.5,3.5,6,17,67 ? Do candidates who have their exam later, have an advantage? Update: We have posted some QA questions of the first shift. Do you know these answers? In a while, from now the first shift of IBPS Clerk will be over. 1. Share some questions 2. Level of the exam 3. Cut off as per you. 1--Ratio of the speed of boat downstream and speed of the stream is 9:1, the speed of the current is 3 km per hr, find distance travelled upstream in 5 hours. 2--Sum of 4 consecutive even nos are greater than three consecutive odd nos by 81. If the sum of lost odd and even number is 59, then find the sum of largest odd and even number. 3--Sum of Money invested in two schemes, in scheme A, X principle with 8 % per annum and in scheme B with X+1400 principle for two years and difference is 189, then find the value of X? 4--Average age of A and B, 2 years ago was 26. If the age of A 5 years hence is 40 yrs, and B is 5 years younger to C, then find the difference between the age of A and C? 5--Average of X, Y, Z is 24, X:Y = 2:3, X+Y = 60, then find X-Y=? 6--Cost price of two articles is same, trade man got a profit of 40% on the first article, selling price of the second article is 25% less than the first article, then find overall profit percent. 7--Length of a rectangle is 80% of diagonal of a square of area 1225, then find the area of the rectangle if it's perimeter is 94√2. 8-- the Annual salary of Arun is 7.68 lac. If he spends 12000 on his children, 1/13th of rest in food, 8000 in mutual funds, then find the monthly saving he is left with. 9--A can do a work in 24 days, B is 20% more efficient than A, if C can do the work in 10 more days than B, find days taken by A and C together to complete the work. 10--The ratio of Milk to water is 5:4, if two liters of water is added, the ratio becomes 10:9, then find the new amount of water in the mixture. थोड़ी देर में, अब से आईबीपीएस क्लर्क की पहली पारी खत्म हो जाएगी। 1. कुछ सवाल साझा करें 2. परीक्षा का स्तर 3. आप के अनुसार काट दें
{"url":"https://www.dailygk.co.in/current-affairs/monthly-gk-one-liners/ibps-clerk-pre-dec-9-analysis/","timestamp":"2024-11-08T06:18:29Z","content_type":"text/html","content_length":"86559","record_id":"<urn:uuid:ae52c3f3-2be5-4f95-accc-b95265042ac8>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00860.warc.gz"}
Methods Based on Chemical Kinetics: Theory and Practice Theory and Practice Every chemical reaction occurs at a finite rate and, therefore, can potentially serve as the basis for a chemical kinetic method of analysis. To be effective, however, the chemical reaction must meet three conditions. First, the rate of the chemical reac- tion must be fast enough that the analysis can be conducted in a reasonable time, but slow enough that the reaction does not approach its equilibrium position while the reagents are mixing. As a practical limit, reactions reaching equilibrium within 1 s are not easily studied without the aid of specialized equipment allowing for the rapid mixing of reactants. A second requirement is that the rate law for the chemical reaction must be known for the period in which measurements are made. In addition, the rate law should allow the kinetic parameters of interest, such as rate constants and concen- trations, to be easily estimated. For example, the rate law for a reaction that is first order in the concentration of the analyte, A, is expressed as where k is the reaction’s rate constant. As shown in Appendix 5,* the integrated form of this rate law ln [A][t] = ln [A][0] – kt [A][t] = [A][0]e–kt ………..13.2 provides a simple mathematical relationship between the rate constant, the reac- tion’s elapsed time, t, the initial concentration of analyte, [A][0], and the analyte’s concentration at time t, [A] Unfortunately, most reactions of analytical interest do not follow the simple rate laws shown in equations 13.1 and 13.2. Consider, for example, the following re- action between an analyte, A, and a reagent, R, to form a product, P where k[f] is the rate constant for the forward reaction, and k[b] is the rate constant for the reverse reaction. If the forward and reverse reactions occur in single steps, then the rate law is Rate = k[f][A][R] – k[b][P] 13.3 Although the rate law for the reaction is known, there is no simple integrated form. We can simplify the rate law for the reaction by restricting measurements to the beginning of the reaction when the product’s concentration is negligible. Under these conditions, the second term in equation 13.3 can be ignored; thus Rate = k[f][A][R] t ………..13.4 The integrated form of the rate law for equation 13.4, however, is still too compli- cated to be analytically useful. We can simplify the kinetics, however, by carefully adjusting the reaction conditions.4 For example, pseudo-first-order kinetics can be achieved by using a large excess of R (i.e. [R][0] >> [A][0]), such that its concentration remains essentially constant. Under these It may even be possible to adjust conditions such that measurements are made under pseudo-zero-order conditions where A final requirement for a chemical kinetic method of analysis is that it must be possible to monitor the reaction’s progress by following the change in concentra- tion for one of the reactants or products as a function of time. Which species is used is not important; thus, in a quantitative analysis the rate can be measured by moni- toring the analyte, a reagent reacting with the analyte, or a product. For example, the concentration of phosphate can be determined by monitoring its reaction with Mo(VI) to form 12-molybdophosphoric acid (12-MPA). H[3]PO[4] + 6Mo(VI) + 9H[2]O → 12-MPA + 9H[3]O+ t ………..13.9 We can monitor the progress of this reaction by coupling it to a second reaction in which 12-MPA is reduced to form heteropolyphosphomolybdenum blue, PMB, 12-MPA + nRed → PMB + nOx where Red is a suitable reducing agent, and Ox is its conjugate form.5,6 The rate of formation of PMB is measured spectrophotometrically and is proportional to the concentration of 12-MPA. The concentration of 12-MPA, in turn, is proportional to the concentration of phosphate. Reaction 13.9 also can be followed spectropho- tometrically by monitoring the formation of 12-MPA. Classifying Chemical Kinetic Methods A useful scheme for classifying chemical ki- netic methods of analysis is shown in Figure 13.3. Methods are divided into two main categories. For those methods identified as direct-computation methods, the concentration of analyte, [A][0], is calculated using the appropriate rate law. Thus, for a first-order reaction in A, equation 13.2 is used to determine [A][0], provided that values for k, t, and [A][t] are known. With a curve-fitting method, regression is used to find the best fit between the data (e.g., [A][t] as a function of time) and the known mathematical model for the rate law. In this case, kinetic parameters, such as k and [A][0], are adjusted to find the best fit. Both categories are further subdivided into rate methods and integral methods. Direct-Computation Integral Methods Integral methods for analyzing kinetic data make use of the integrated form of the rate law. In the one-point fixed-time integral method, the concentration of analyte is determined at a single time. The initial con- centration of analyte, [A][0], is calculated using equation 13.2, 13.6, or 13.8, depend- ing on whether the reaction follows first-order, pseudo-first-order, or pseudo-zero- order kinetics. The rate constant for the reaction is determined in a separate experiment using a standard solution of analyte. Alternatively, the analyte’s initial concentration can be determined using a calibration curve consisting of a plot of [A][t] for several standard solutions of known [A][0]. In Example 13.1 the initial concentration of analyte is determined by measur- ing the amount of unreacted analyte at a fixed time. Sometimes it is more conven- ient to measure the concentration of a reagent reacting with the analyte or the con- centration of one of the reaction’s products. The one-point fixed-time integral method can still be applied if the stoichiometry is known between the analyte and the species being monitored. For example, if the concentration of the product in the reaction A+R → P is monitored, then the concentration of the analyte at time t is [A][t] = [A][0] – [P][t] ………..13.10 since the stoichiometry between the analyte and product is 1:1. Substituting equa- tion 13.10 into equation 13.6 gives ln([A][0] – [P][t])= ln [A][0] – k’t t ………..13.11 which is simplified by writing in exponential form [A][0] – [P][t] = [A][0]e–k’t The one-point fixed-time integral method has the advantage of simplicity since only a single measurement is needed to determine the analyte’s initial con- centration. As with any method relying on a single determination, however, a one-point fixed-time integral method cannot compensate for constant sources of determinate error. Such corrections can be made by making measurements at two points in time and using the difference between the measurements to determine the analyte’s initial concentration. Constant sources of error affect both measurements equally, thus, the difference between the measurements is independent of these er- rors. For a two-point fixed-time integral method, in which the concentration of an- alyte for a pseudo-first-order reaction is measured at times t[1] and t [2], we can write [A][t]1 = [A][0]e–k’tl t ………..13.13 [A][t]2 = [A][0]e–k’t2 Subtracting the second equation from the first equation and solving for [A][0] gives The rate constant for the reaction can be calculated from equation 13.14 by measur- ing [A][t]1 and [A][t]2 for a standard solution of analyte. The analyte’s initial concentra- tion also can be found using a calibration curve consisting of a plot of ([A][t][1] – [A][t][2]) versus [A][0]. Fixed-time integral methods are advantageous for systems in which the signal is a linear function of concentration. In this case it is not necessary to determine the concentration of the analyte or product at times t[1] or t[2], because the relevant con- centration terms can be replaced by the appropriate signal. For example, when a pseudo-first-order reaction is followed spectrophotometrically, when Beer’s law (Abs)[t] = εb[A][t] is valid, equations 13.6 and 13.14 can be rewritten as (Abs)[t] = [A][0](e–k’t)εb = c[A][0] where (Abs)[t] is the absorbance at time t, and c is a constant. An alternative to a fixed-time method is a variable-time method, in which we measure the time required for a reaction to proceed by a fixed amount. In this case the analyte’s initial concentration is determined by the elapsed time, ∆t, with a higher concentration of analyte producing a smaller ∆t. For this reason variable- time integral methods are appropriate when the relationship between the detector’s response and the concentration of analyte is not linear or is unknown. In the one- point variable-time integral method, the time needed to cause a desired change in concentration is measured from the start of the reaction. With the two-point vari- able-time integral method, the time required to effect a change in concentration is measured. One important application of the variable-time integral method is the quantita- tive analysis of catalysts, which is based on the catalyst’s ability to increase the rate of a reaction. As the initial concentration of catalyst is increased, the time needed to reach the desired extent of reaction decreases. For many catalytic systems the rela tionship between the elapsed time, ∆t, and the initial concentration of analyte is where F[cat] and F[uncat] are constants that are functions of the rate constants for the catalyzed and uncatalyzed reactions, and the extent of the reaction during the time span ∆t. Direct-Computation Rate Methods Rate methods for analyzing kinetic data are based on the differential form of the rate law. The rate of a reaction at time t, (rate)[t], is determined from the slope of a curve showing the change in concentration for a reactant or product as a function of time (Figure 13.5). For a reaction that is first- order, or pseudo-first-order in analyte, the rate at time t is given as Substituting an equation similar to 13.13 into the preceding equation gives the fol- lowing relationship between the rate at time t and the analyte’s initial concentration. (rate)[t] = k[A][0]e–kt If the rate is measured at a fixed time, then both k and e–kt are constant, and a cali- bration curve of (rate)[t] versus [A][0] can be used for the quantitative analysis of the analyte. The use of the initial rate (t = 0) has the advantage that the rate is at its maxi- mum, providing an improvement in sensitivity. Furthermore, the initial rate is measured under pseudo-zero-order conditions, in which the change in concentra- tion with time is effectively linear, making the determination of slope easier. Finally, when using the initial rate, complications due to competing reactions are avoided. One disadvantage of the initial rate method is that there may be insufficient time for a complete mixing of the reactants. This problem is avoided by using a rate mea- sured at an intermediate time (t > 0). Curve-Fitting Methods In the direct-computation methods discussed earlier, the analyte’s concentration is determined by solving the appropriate rate equa- tion at one or two discrete times. The relationship between the analyte’s concen- tration and the measured response is a function of the rate constant, which must be measured in a separate experiment. This may be accomplished using a single external standard (as in Example 13.2) or with a calibration curve (as in Example 13.4). In a curve-fitting method the concentration of a reactant or product is moni- tored continuously as a function of time, and a regression analysis is used to fit an appropriate differential or integral rate equation to the data. For example, the initial concentration of analyte for a pseudo-first-order reaction, in which the concentra- tion of a product is followed as a function of time, can be determined by fitting a re- arranged form of equation 13.12 [P][t] = [A][0](1 – e–k’t) to the kinetic data using both [A][0] and k’ as adjustable parameters. By using data from more than one or two discrete times, curve-fitting methods are cap- able of producing more reliable results. Although curve-fitting methods are computationally more demanding, the calculations are easily handled by computer. Miscellaneous Methods At the beginning of this section we noted that kinetic methods are susceptible to significant errors when experimental variables affecting the reaction’s rate are difficult to control. Many variables, such as temperature, can be controlled with proper instrumentation. Other variables, such as interferents in the sample matrix, are more difficult to control and may lead to significant errors. Although not discussed in this text, direct-computation and curve-fitting methods have been developed that compensate for these sources of error. Representative Method Although each chemical kinetic method has its own unique considerations, the determination of creatinine in urine based on the ki- netics of its reaction with picrate provides an instructive example of a typical procedure.
{"url":"https://www.brainkart.com/article/Methods-Based-on-Chemical-Kinetics--Theory-and-Practice_29783/","timestamp":"2024-11-03T10:23:53Z","content_type":"text/html","content_length":"149146","record_id":"<urn:uuid:ce366dfb-9efa-4c45-98c6-799c3df0d08a>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00416.warc.gz"}
American Mathematical Society On the convergence of difference approximations to nonlinear contraction semigroups in Hilbert spaces HTML articles powered by AMS MathViewer by Olavi Nevanlinna PDF Math. Comp. 32 (1978), 321-334 Request permission Convergence properties of the difference schemes (S) \[ {h^{ - 1}}\sum \limits _{j = 0}^k {{\alpha _j}{u_{n + j}}} + \sum \limits _{j = 0}^k {{\beta _j}A{u_{n + j}}} = 0,\quad n \geqslant 0,\], for evolution equations (E) \[ \frac {{du(t)}}{{dt}} + Au(t) = 0,\quad t \geqslant 0;\quad u(0) = {u_0} \in \overline {D(A)} \] are studied. Here A is a nonlinear, maximally monotone operator in a real Hilbert space. It is shown, in particular, that if the scheme (S) is consistent and stable for the test equation $x\prime = \lambda x$ for $\lambda \in {\text {C}} - K$, where K is a compact subset of the right half-plane, then (S) is convergent as $h \downarrow 0$, with suitable initial values, for (E), on compact intervals [0, T]. Moreover, the convergence is uniform on the half-axis $t \ geqslant 0$, if the solution $u(t)$ tends strongly to a constant as $t \to \infty$. We also show that under weaker stability conditions one can construct conditionally convergent methods. References G. DAHLQUIST, On the Relation of G-Stability to Other Stability Concepts for Linear Multistep Methods, Report TRITA-NA-7618, Dept. of Comput. Sci., Royal Inst. of Tech., 1976. O. NEVANLINNA, On Multistep Methods for Nonlinear Initial Value Problems with an Application to Minimization of Convex Functionals, Report HTKK-MAT-A76, Inst. of Math., Helsinki Univ. of Tech., 1976. Similar Articles • Retrieve articles in Mathematics of Computation with MSC: 47H15, 65J05 • Retrieve articles in all journals with MSC: 47H15, 65J05 Additional Information • © Copyright 1978 American Mathematical Society • Journal: Math. Comp. 32 (1978), 321-334 • MSC: Primary 47H15; Secondary 65J05 • DOI: https://doi.org/10.1090/S0025-5718-1978-0513203-9 • MathSciNet review: 0513203
{"url":"https://www.ams.org/journals/mcom/1978-32-142/S0025-5718-1978-0513203-9/?active=current","timestamp":"2024-11-13T02:16:43Z","content_type":"text/html","content_length":"62924","record_id":"<urn:uuid:1ffebd2d-3c40-4c2b-a336-5800e76641a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00182.warc.gz"}
How to check student answers in table that involve fractions with variables? I’m seeing lots of examples and questions about checking student answers in tables when the answers are or are equivalent to numeric fractions. However, I need to check student answers of the form 1/ (4.5r) and 1/r when they type these into a table. The code I’m using for other examples where the fractions are indeed numeric goes something like this: cellErrorMessage(3,4): when this.cellNumericValue(3,4) = numericValue(“\frac{1}{18}”) “” otherwise “Divide 1 room by the time they would take to paint that room together. Leave your answer as a How do I change this up to use variables in the fraction, or do I need a completely different strategy? Two ways: 1. Use this.cellContent(3,4) = “\frac{1}{r}” instead of cellNumericValue(3,4) = numericValue(“\frac{1}{18}”) 2. or define a function in “r” like : Ans = simpleFunction(“\frac{1}{4r}”,“r”) f= simpleFunction(“${this.cellContent(1,1)}”,“r”) cellSuffix(1,1): when f.evaluateAt(3) = Ans.evaluateAt(3) “Correct” otherwise “” 1 Like A few notes… We’ve often not recommended method 1 (latex matching) because it was prone to errors. I feel like most of these, like the presence of extra spaces, have been fixed. For the second method, just make sure you’re evaluating at multiple values. For example, (r-2)/12 would validate as correct in the above sample. There is a third method, pattern matching: p = patterns #define the pattern you want to match pAnswer = p.fraction(p.integer.satisfies(`x=1`), p.product(p.integer.satisfies(`x=4`), p.literal(`r`) )) check = pAnswer.matches(this.cellContent(1,1)) You could potentially combine more generic patterns (to accept) and evaluating a function. Patterns can be tricky to use, but can really help hone in on the forms you want to accept or reject. 1 Like
{"url":"https://cl.desmos.com/t/how-to-check-student-answers-in-table-that-involve-fractions-with-variables/7360","timestamp":"2024-11-13T21:34:42Z","content_type":"text/html","content_length":"28305","record_id":"<urn:uuid:f55ade08-9499-4040-83e8-43f95a0d2a6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00249.warc.gz"}
Exercise: Two-population hypothesis testing concepts This was our exercise on the first day covering hypothesis testing for two independent populations. I wanted students to come up on their own with the form of a two-population hypothesis test, and with a basic idea of the test statistic. Everyone nailed $H_0: \mu_S = \mu_T$. Wording of the second and third questions was intentionally vague to make the students think. It was partially successful - several groups originally wrote that tunder the null hypothesis, the stiffness of the samples is the same, even though the data contradicts that statement. After some leading questions they all got to the statement that the difference in sample means should tend to be small under the null. Some groups rushed through and tried to calculate t-statistics for the third question, which was specifically against my wishes (and they didn’t yet know what they were doing). However, a couple of groups did well and suggested either a t-test somehow based on ${\bar S} - {\bar T}$ (which was exactly what I wanted to hear), or a permutation test (which was beyond what I hoped they would come up We have talked several times about using data to test whether the expectation of a population is equal to some value. We tended to write the null hypothesis like this: Now, imagine that you work at a bicycle factory where you’d like to replace steel tubes in the company’s bicycle frames with titanium (because it’s lighter). You have carefully selected random samples of the steel and titanium tubes from your suppliers. You measure the stiffness of each tube in a testing rig. The data are (measured in pounds per square inch, psi): • In order for the bicycles to perform the same after the switch, you would like to have tubes with identical stiffness. How might you write down the null hypothesis that the expected stiffness is the same between the two groups? • If the null hypothesis is true, then what can we say about the samples of steel and titanium tubes? • Can you construct a test statistic that would be useful for testing these hypotheses? Brainstorm as many ideas as you can, but don’t bother with calculations. Leave a Comment Markdown is allowed. Email addresses will not be published.
{"url":"https://somesquares.org/blog/2015/7/two-pop-concepts/","timestamp":"2024-11-06T14:02:01Z","content_type":"text/html","content_length":"12555","record_id":"<urn:uuid:c660655e-2950-43c1-a404-87ab9866386c>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00856.warc.gz"}
Colonial America Skip Main Navigation Teacher Notes Overview of Coordinate Grids (45 minutes) > Print/View All Notes On this page students will learn about coordinate grids. They will use their new knowledge during their simulation as they place pieces of their settlement on the map. You will want to check for understanding and reteach if necessary before moving to the next page. Students will practice using a coordinate grid by completing the activity on pages 10 – 11 in their Engineering Portfolios. Note that page 11 should be printed for students so that they can mark points on the grid. On page 10 of the Engineering Portfolio, students are asked to plot points on a coordinate grid. An alternate to this activity is to set up your own online graph using the Create a Graph Directions for the Create a Graph website: To set up a blank grid, select the XY graph. On the next page, use the side tabs and go to Data. At the bottom of the data page are fields for Minimum and Maximum values for x and y. Put 0 for the minimum and 10 for the maximum for both x and y. Use the Preview tab to see the blank graph. Working with students, help them use the Data page to plot points on the grid. Toggle between Data and Preview to see the grid. On page 11 of the Engineering Portfolio, students are asked to identify coordinates of symbols on a map. For this activity, consider partnering students or work with students to provide verbal descriptions of the map. You may also wish to link lessons in your math curriculum on coordinate grids to this activity for further reinforcement and practice. Standards Addressed: MP.4, 5.G.2
{"url":"https://colonialamerica.thinkport.org/tn-overview-of-coordinate-grids.html","timestamp":"2024-11-03T19:02:20Z","content_type":"text/html","content_length":"15030","record_id":"<urn:uuid:b4018321-c41b-4f63-9a30-67116edf8cfe>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00111.warc.gz"}
Day 9 Advent of Code 2015 - Day 9 Useful Links Concepts and Packages Demonstrated Travelling Salesman ProblemGraphsregexpermutationsNetworkX Page Navigation Problem Intro We’re told that Santa needs to visit every location on his list exactly once. The distances between all pairs of locations have been provided, in the form of data that looks like this: London to Dublin = 464 London to Belfast = 518 Dublin to Belfast = 141 Part 1 Starting at any location and ending at any location, what is the shortest distance that Santa must travel to visit every location exactly once? This is actually a fairly standard problem, known as the Travelling Salesman Problem. My strategy is as follows: • Use regex to create a (location_1, location_2):distance dictionary entry for each distance in the input data. We’ll also store the pair of locations in reverse, so we can lookup either way. • We then create a set to store all unique locations. Iterate through each location in the location pairs, and build our set of unique locations. Using a set makes it easy to automatically throw away locations we’ve seen before. • Then we use itertools.permutations() to obtain all possible location permutations for all the locations we’ve stored in our set. For example, imagine we had just three locations, called A, B, and C. The itertools.permutations() function would return the following permutations for these three locations: ABC, ACB, BAC, BCA, CAB, CBA. I.e. with 3 locations, we’ll end up with 3! = 6 permutations. With 4 locations, we’ll end up with 4! = 24 permutations. And so on. • For efficiency, we’ll filter out any permutations which are simply the reverse of an existing permutation. For example, if know the total distance for ABC, then we have no need to determine the distance for CBA, since it is the same. • For each unique permutation, we find the distances between each pair of locations in the permutation. For example, if we have a permutation ABC, then we need the distance for A -> B, and the distance from B -> C. • We add up these distances, which gives us a total journey distance for this permutation. We store this total journey distance. After we’ve obtained the total distance for each permutation, we simply need to find the shortest distance. This is trivial, since we can just use the min() function and pass in our list of The code looks like this: from pathlib import Path import time import re from itertools import permutations SCRIPT_DIR = Path(__file__).parent INPUT_FILE = Path(SCRIPT_DIR, "input/input.txt") # INPUT_FILE = Path(SCRIPT_DIR, "input/sample_input.txt") def main(): with open(INPUT_FILE, mode="rt") as f: data = f.read().splitlines() locs_to_distances = get_distances(data) # E.g. (A, B) = n # build our set of unique locations locations = set() for loc_pair in locs_to_distances: locations.add(loc_pair[0]) # place_a locations.add(loc_pair[1]) # place_b journey_distances = [] # store total journey distances # Create permutations of all possible combinations of locations # I.e. all possible ways of ordering the locations we must visit. # E.g. if we have to visit places A, B and C, there would be 3! perms: # ABC, ACB, BAC, BCA, CAB, CBA for loc_perm in permutations(locations): # For efficiency: filter out inverse routes. E.g. we want ABC, but not CBA; they are the same if loc_perm[0] < loc_perm[-1]: journey_dist = 0 for i in range(len(loc_perm)-1): # iterate through location pairs i, i+1, for all locations in this permutation # E.g. for A, B, C, we would have pairs: A-B, and B-C. pair_a = loc_perm[i] pair_b = loc_perm[i+1] dist = locs_to_distances[(pair_a, pair_b)] journey_dist += dist # Just store the total distance for this journey. # If we cared about the order of places, we could use a dict and store those too print(f"Shortest journey: {min(journey_distances)}") def get_distances(data) -> dict: """ Read list of distances between place_a and place_b. Return dict that maps (A,B)->dist x, and (B,A)->dist x. data (list[str]): distances, in the form "London to Dublin = 464" dict: (start, end) = distance distances = {} distance_match = re.compile(r"^(\w+) to (\w+) = (\d+)") for line in data: start, end, dist = distance_match.findall(line)[0] dist = int(dist) # create a distance record in the form: [(loc_1, loc_2), dist] # And also store it in reverse, so that when we look it up, # it doesn't matter which order the locations come in the journey. distances[(start, end)] = dist distances[(end, start)] = dist return distances if __name__ == "__main__": t1 = time.perf_counter() t2 = time.perf_counter() print(f"Execution time: {t2 - t1:0.4f} seconds") Part 2 I love it when we get a Part 2 like this! We’re told that Santa wants to show off and take the route with the longest distance. Our code only needs one extra line of code! We just add this: print(f"Longest journey: {max(journey_distances)}") The final output is: Shortest journey: 207 Longest journey: 804 Execution time: 0.0460 seconds That’s pretty swift! Solving with NetworkX NetworkX is a cool library that allows us to build a graph, and then solve problems with that graph, e.g. shortest and longest path between two points. It can also be used to visualise our graph Here’s a solution using NetworkX… from itertools import permutations from pathlib import Path import time import re import networkx as nx import matplotlib.pyplot as plt SCRIPT_DIR = Path(__file__).parent INPUT_FILE = Path(SCRIPT_DIR, "input/input.txt") # INPUT_FILE = Path(SCRIPT_DIR, "input/sample_input.txt") DISTANCE = "distance" SHOW_GRAPH = True def main(): with open(INPUT_FILE, mode="rt") as f: data = f.read().splitlines() graph = build_graph(data) locations = graph.nodes journey_distances = {} for route in permutations(locations): # E.g. for route ABC # Use path_weight to get the total of all the edges that make up the route route_distance = nx.path_weight(graph, route, weight=DISTANCE) journey_distances[route] = route_distance # Get (journey, distance) tuples min_journey = min(journey_distances.items(), key=lambda x: x[1]) max_journey = max(journey_distances.items(), key=lambda x: x[1]) print(f"Shortest journey: {min_journey}") print(f"Longest journey: {max_journey}") if SHOW_GRAPH: draw_graph(graph, min_journey[0]) draw_graph(graph, max_journey[0]) def draw_graph(graph, route): start_node = route[0] end_node = route[-1] pos = nx.spring_layout(graph) # create a layout for our graph # Draw all nodes in the graph nx.draw_networkx_nodes(graph, pos, nodelist=route[1:-1], # exclude start and end # Draw all the node labels nx.draw_networkx_labels(graph, pos, font_size=11) # Draw start and end nodes nx.draw_networkx_nodes(graph, pos, nodelist=[start_node], node_color="white", edgecolors="green") nx.draw_networkx_nodes(graph, pos, nodelist=[end_node], node_color="orange", edgecolors="green") # Draw closest edges for each node only - with thin lines nx.draw_networkx_edges(graph, pos, edge_color="green", width=0.5) # Draw all the edge labels - i.e. the distances nx.draw_networkx_edge_labels(graph, pos, nx.get_edge_attributes(graph, DISTANCE)) # Draw the edges that make up this particular route route_edges = list(nx.utils.pairwise(route)) nx.draw_networkx_edges(graph, pos, edgelist=route_edges, edge_color="red", width=3, arrows=True) ax = plt.gca() def build_graph(data) -> nx.Graph: """ Read list of distances between place_a and place_b. Return dict that maps (A,B)->dist x, and (B,A)->dist x. data (list[str]): distances, in the form "London to Dublin = 464" dict: (start, end) = distance graph = nx.Graph() distance_match = re.compile(r"^(\w+) to (\w+) = (\d+)") # Add each edge, in the form of a location pair for line in data: start, end, distance = distance_match.findall(line)[0] distance = int(distance) graph.add_edge(start, end, distance=distance) return graph if __name__ == "__main__": t1 = time.perf_counter() t2 = time.perf_counter() print(f"Execution time: {t2 - t1:0.4f} seconds") Some things to note about this code: • If it were not for the function that creates a visual image of our routes, this code would be quite a bit shorter than my first solution. This is because the NetworkX package contains a number of utility methods which save us from doing many things manually. • We’ve replaced the get_distances() function with the build_graph() function. □ It still uses regex to retrieve all the edges, i.e. as a pair of locations and the distance between them. □ But here we create a complete NetworkX Graph object by simply adding each new edge to the graph directly. We then return the graph object. • As before, we then need to iterate through all permutations of the locations. This is easy to do, since we can retrieve all the locations (nodes) by simply using the graph.nodes attribute. Recall that each permutation of locations is a route. • For each route we then use nx.path_weight() to determine the overall distance for all the edges in this route. This saves us having to get all the edges, and from having to then get the distance for each edge. Quite a few lines of code saved here! • I’ve then stored the resulting distance in a dict, where the key is the route itself. • Finally - for Part 1 - we use min() to get the shortest distance of all our routes in the dictionary. Note how I’ve used a lambda function to tell min() to use the values of the dict as a key. Our call to min() returns the both the route, and the route’s distance. • Part 2 is solved in exactly the same way, except using max() instead of min(). The output looks like this: Shortest journey: (('Norrath', 'Straylight', 'Arbre', 'Faerun', 'AlphaCentauri', 'Snowdin', 'Tambi', 'Tristram'), 207) Longest journey: (('Tambi', 'Faerun', 'Norrath', 'Tristram', 'AlphaCentauri', 'Arbre', 'Snowdin', 'Straylight'), 804) Execution time: 0.4862 seconds This clearly runs a lot slower than my first solution. But it saved us some code, and makes it really easy to draw the graph… Drawing the Graph One cool thing about NetworkX is that it’s really easy to render a visual representation of our graph. In the code above you’ll see that I’ve explicitly done the following things: • Drawn all the nodes, except start and end. • Drawn the start and end nodes, in a different colour. • Drawn all the edges between pairs and label them. • Drawn all the edges that make up the specified route, with arrows and different colour. The resulting graphs look like this: Shortest Path Longest Path Cool, right?
{"url":"https://aoc.just2good.co.uk/2015/9","timestamp":"2024-11-14T10:08:10Z","content_type":"text/html","content_length":"38853","record_id":"<urn:uuid:98c0e856-e662-4057-b175-b104cbcb1b4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00279.warc.gz"}
The Durbin's ANOVA (missing data) Durbin's analysis of variance of repeated measurements for ranks was proposed by Durbin (1951)1). This test is used when measurements of the variable under study are made several times – a similar situation in which Friedman'sANOVA is used. The original Durbin test and the Friedman test give the same result when we have a complete data set. However, Durbin's test has an advantage – it can also be calculated for an incomplete data set. At the same time, data deficiencies cannot be located arbitrarily, but the data must form a so-called balanced and incomplete block: Hypotheses involve equality of the sum of ranks for successive measurements ( The p-value, designated on the basis of the test statistic, is compared with the significance level Introduction to the contrasts and the POST-HOC tests was performed in the unit, which relates to the one-way analysis of variance. Used for simple comparisons (the counts in each measurement are always the same). Example - simple comparisons (comparing 2 selected medians / rank sums between each other): [ii] The value of critical difference is calculated by using the following formula: The settings window with the Durbin's ANOVA can be opened in Statistics menu→NonParametric tests →Friedman ANOVA, trend test or in ''Wizard'' For records with missing data to be taken into account, you must check the Accept missing data option. Empty cells and cells with non-numeric values are treated as missing data. Only records with more than one numeric value will be analyzed. An experiment was conducted among 20 patients in a psychiatric hospital (Ogilvie 1965)3). This experiment involved drawing straight lines according to a presented pattern. The pattern represented 5 lines drawn at different angles ( We want to see if the time taken to draw each line is completely random, or if there are lines that took more or less time to draw. The graph shows homogeneous groups indicated by the post-hoc test. Conover W. J. (1999), Practical nonparametric statistics (3rd ed). John Wiley and Sons, New York Durbin J. (1951), Incomplete blocks in ranking experiments. British Journal of Statistical Psychology, 4: 85–90 Ogilvie J. C. (1965), Paired comparison models with tests for interaction. Biometrics 21(3): 651-64
{"url":"http://manuals.pqstat.pl/en:statpqpl:porown3grpl:nparpl:durbinpl","timestamp":"2024-11-06T23:44:05Z","content_type":"text/html","content_length":"66364","record_id":"<urn:uuid:f615df90-2e13-4caa-b8f2-4af57e6bca3f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00842.warc.gz"}
Spherical Harmonics #include <boost/math/special_functions/spherical_harmonic.hpp> namespace boost{ namespace math{ template <class T1, class T2> std::complex<calculated-result-type> spherical_harmonic(unsigned n, int m, T1 theta, T2 phi); template <class T1, class T2, class Policy> std::complex<calculated-result-type> spherical_harmonic(unsigned n, int m, T1 theta, T2 phi, const Policy&); template <class T1, class T2> calculated-result-type spherical_harmonic_r(unsigned n, int m, T1 theta, T2 phi); template <class T1, class T2, class Policy> calculated-result-type spherical_harmonic_r(unsigned n, int m, T1 theta, T2 phi, const Policy&); template <class T1, class T2> calculated-result-type spherical_harmonic_i(unsigned n, int m, T1 theta, T2 phi); template <class T1, class T2, class Policy> calculated-result-type spherical_harmonic_i(unsigned n, int m, T1 theta, T2 phi, const Policy&); }} // namespaces The return type of these functions is computed using the result type calculation rules when T1 and T2 are different types. The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use etc. Refer to the policy documentation for more template <class T1, class T2> std::complex<calculated-result-type> spherical_harmonic(unsigned n, int m, T1 theta, T2 phi); template <class T1, class T2, class Policy> std::complex<calculated-result-type> spherical_harmonic(unsigned n, int m, T1 theta, T2 phi, const Policy&); Returns the value of the Spherical Harmonic Y[n]^m(theta, phi): The spherical harmonics Y[n]^m(theta, phi) are the angular portion of the solution to Laplace's equation in spherical coordinates where azimuthal symmetry is not present. Care must be taken in correctly identifying the arguments to this function: θ is taken as the polar (colatitudinal) coordinate with θ in [0, π], and φas the azimuthal (longitudinal) coordinate with φin [0,2π). This is the convention used in Physics, and matches the definition used by Mathematica in the function SpericalHarmonicY, but is opposite to the usual mathematical conventions. Some other sources include an additional Condon-Shortley phase term of (-1)^m in the definition of this function: note however that our definition of the associated Legendre polynomial already includes this term. This implementation returns zero for m > n For θ outside [0, π] and φ outside [0, 2π] this implementation follows the convention used by Mathematica: the function is periodic with period π in θ and 2π in φ. Please note that this is not the behaviour one would get from a casual application of the function's definition. Cautious users should keep θ and φ to the range [0, π] and [0, 2π] respectively. See: Weisstein, Eric W. "Spherical Harmonic." From MathWorld--A Wolfram Web Resource. template <class T1, class T2> calculated-result-type spherical_harmonic_r(unsigned n, int m, T1 theta, T2 phi); template <class T1, class T2, class Policy> calculated-result-type spherical_harmonic_r(unsigned n, int m, T1 theta, T2 phi, const Policy&); Returns the real part of Y[n]^m(theta, phi): template <class T1, class T2> calculated-result-type spherical_harmonic_i(unsigned n, int m, T1 theta, T2 phi); template <class T1, class T2, class Policy> calculated-result-type spherical_harmonic_i(unsigned n, int m, T1 theta, T2 phi, const Policy&); Returns the imaginary part of Y[n]^m(theta, phi): The following table shows peak errors for various domains of input arguments. Note that only results for the widest floating point type on the system are given as narrower types have effectively zero error. Peak errors are the same for both the real and imaginary parts, as the error is dominated by calculation of the associated Legendre polynomials: especially near the roots of the associated Legendre function. All values are in units of epsilon. Table 8.38. Error rates for spherical_harmonic_r GNU C++ version 7.1.0 GNU C++ version 7.1.0 Sun compiler version 0x5150 Microsoft Visual C++ version 14.1 linux linux Sun Solaris Win32 double long double long double double Spherical Harmonics Max = 1.58ε (Mean = 0.0707ε) Max = 2.89e+03ε (Mean = 108ε) Max = 1.03e+04ε (Mean = 327ε) Max = 2.27e+04ε (Mean = 725ε) Table 8.39. Error rates for spherical_harmonic_i GNU C++ version 7.1.0 GNU C++ version 7.1.0 Sun compiler version 0x5150 Microsoft Visual C++ version 14.1 linux linux Sun Solaris Win32 double long double long double double Spherical Harmonics Max = 1.36ε (Mean = 0.0765ε) Max = 2.89e+03ε (Mean = 108ε) Max = 1.03e+04ε (Mean = 327ε) Max = 2.27e+04ε (Mean = 725ε) Note that the worst errors occur when the degree increases, values greater than ~120 are very unlikely to produce sensible results, especially when the order is also large. Further the relative errors are likely to grow arbitrarily large when the function is very close to a root. A mixture of spot tests of values calculated using functions.wolfram.com, and randomly generated test data are used: the test data was computed using NTL::RR at 1000-bit precision. These functions are implemented fairly naively using the formulae given above. Some extra care is taken to prevent roundoff error when converting from polar coordinates (so for example the 1-x^2 term used by the associated Legendre functions is calculated without roundoff error using x = cos(theta), and 1-x^2 = sin^2(theta)). The limiting factor in the error rates for these functions is the need to calculate values near the roots of the associated Legendre functions.
{"url":"https://live.boost.org/doc/libs/1_83_0/libs/math/doc/html/math_toolkit/sf_poly/sph_harm.html","timestamp":"2024-11-09T02:56:52Z","content_type":"text/html","content_length":"30000","record_id":"<urn:uuid:6efb6268-d4f0-41ac-8222-488ab8944fc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00550.warc.gz"}
Chart Js Multiple Lines With Different Labels 2024 - Multiplication Chart Printable Chart Js Multiple Lines With Different Labels Chart Js Multiple Lines With Different Labels – The Multiplication Chart Series will help your students creatively represent a variety of early arithmetic methods. However, it must be used as a teaching aid only and should not be confused with the Multiplication Table. The graph or chart can be purchased in about three versions: the colored version is helpful whenever your student is concentrating on a single times desk at a time. The horizontal and vertical versions are compatible with kids who definitely are nevertheless studying their times tables. In addition to the colored version, you can also purchase a blank multiplication chart if you prefer. Chart Js Multiple Lines With Different Labels. Multiples of 4 are 4 away from one another The design for determining multiples of 4 is usually to add more every single amount to by itself and discover its other numerous. As an illustration, the 1st 5 various multiples of 4 are: 12, 16, 4 and 8 and 20. And they are four away from each other on the multiplication chart line, this trick works because all multiples of a number are even. In addition, multiples of 4 are even amounts in Multiples of 5 are even If they end in or 5, You’ll find multiples of 5 on the multiplication chart line only. In other words, you can’t multiply a quantity by two or three to have a much amount. If the number ends in five or , you can only find a multiple of five! Thankfully, there are actually tricks which make finding multiples of 5 even simpler, like using the multiplication chart line to get the a number of of 5. Multiples of 8 are 8 from one another The design is obvious: all multiples of 8 are two-digit amounts and multiples of four-digit amounts are two-digit amounts. Each and every array of 10 contains a several of eight. Eight is even, so all its multiples are two-digit phone numbers. Its routine carries on as much as 119. When the thing is a amount, be sure you look for a multiple of eight in the first place. Multiples of 12 are 12 far from each other The quantity a dozen has unlimited multiples, and you could increase any whole variety by it to create any amount, such as on its own. All multiples of a dozen are even numbers. Here are a few examples. David wants to purchase writing instruments and organizes them into seven packages of 12. He presently has 96 writing instruments. James has among each kind of pen. In their office, he arranges them in the multiplication chart series. Multiples of 20 are 20 from each other Inside the multiplication graph or chart, multiples of 20 are typical even. The multiple will be also even if you multiply one by another. Multiply both numbers by each other to find the factor if you have more than one factor. For example, if Oliver has 2000 notebooks, then he can group them equally. Exactly the same relates to pencils and erasers. You can get one out of a package of a few or possibly a load of six. Multiples of 30 are 30 away from the other person In multiplication, the expression “component combine” identifies a group of numbers that kind an obvious amount. If the number ’30’ is written as a product of five and six, that number is 30 away from each other on a multiplication chart line, for example. The same is true to get a amount from the collection ‘1’ to ’10’. Quite simply, any number can be written as the product or service of 1 and on its own. Multiples of 40 are 40 from the other person You may know that there are multiples of 40 on a multiplication chart line, but do you know how to find them? To do this, you can include externally-in. For example, 10 12 14 = 40, and the like. Similarly, 15 8-10 = 20. In this case, the telephone number about the left of 10 is definitely an even amount, even though the 1 in the right is an unusual variety. Multiples of 50 are 50 far from each other Making use of the multiplication graph or chart series to discover the sum of two numbers, multiples of 50 are similar distance away from each other on the multiplication chart. They have two excellent 80, 50 and factors. Most of the time, each term is different by 50. The other aspect is 50 by itself. Listed below are the typical multiples of 50. A frequent several is the several of any given quantity by 50. Multiples of 100 are 100 clear of the other person Listed here are the different figures which are multiples of 100. A good combine is actually a multiple of merely one one hundred, whilst a poor set is actually a a number of of ten. These two kinds of figures are not the same in several ways. The initial technique is to separate the number by subsequent integers. In cases like this, the quantity of multiples is one, ten, twenty and thirty and Gallery of Chart Js Multiple Lines With Different Labels Colors X axis Multiple Colored Label For Bar Chart Using Chart js Javascript 2 Line Chart With Different Labels Chart js Stack Overflow Chart js Displaying Labels For Grouped Datasets In ChartJS Clustered Leave a Comment
{"url":"https://www.multiplicationchartprintable.com/chart-js-multiple-lines-with-different-labels/","timestamp":"2024-11-02T12:34:09Z","content_type":"text/html","content_length":"54473","record_id":"<urn:uuid:fb5cf4f4-c1fe-4228-858f-5e8732329a12>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00477.warc.gz"}
Entropy calculation beyond the harmonic approximation: Application to diffusion by concerted exchange in Si for Physical Review Letters Physical Review Letters Entropy calculation beyond the harmonic approximation: Application to diffusion by concerted exchange in Si View publication We present a formulation for calculating entropy based on the application of classical transition-rate theory to quantum-mechanical energy surfaces. Using this approach, which avoids difficulties due to anharmonicity and large energy barriers, we calculate the entropy of concerted exchange (CE) in Si and find it to be 3.3k in the high-temperature regime. The relatively high entropy of CE is traced to multiple equivalent exchange paths and to a a combination of a stiff mode at the equilibrium and a soft mode at the saddle-point configurations. Comparison to harmonic-approximation results shows substantial differences, in both the low- and in the high-temperature limits. © 1991 The American Physical Society.
{"url":"https://research.ibm.com/publications/entropy-calculation-beyond-the-harmonic-approximation-application-to-diffusion-by-concerted-exchange-in-si","timestamp":"2024-11-13T14:35:27Z","content_type":"text/html","content_length":"66416","record_id":"<urn:uuid:d64c3989-863d-4ea5-98ea-f09159a49bb4>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00250.warc.gz"}
Abstract. In natural language, it is easy to say and understand a phrase like “the sum of the numbers in a set”. Defining and working with such functions in a formal settings is more work. The problem has to do with how a recursively defined function picks the next element from a set. This note describes a representative example and describes how to make the formal mumbo-jumbo work out. The solution can be applied to any commutative and associative operation on a set. 0.Summing the elements of a set Suppose we have a function that returns the sum of the integers in a set: function Sum(s: set<int>): int If we add an element y to a set, we expect its sum to go up by y. That is, we expect that the following method is correctly implemented: method AddElement(s: set<int>, a: int, y: int) returns (t: set<int>, b: int) requires a == Sum(s) && y !in s ensures t == s + {y} && b == Sum(t) t := s + {y}; b := a + y; It turns out, the proof is not automatic. Let's look at the details and fill in the proof. 1.Recursive definition of Sum Function Sum is defined recursively. The sum of the empty set is 0. If the set is nonempty, pick one of its elements, say x. Then, add x to the recursively computed sum of the remaining elements. function Sum(s: set<int>): int { if s == {} then 0 else var x := Pick(s); x + Sum(s - {x}) This definition uses a function Pick, which returns an arbitrary element from a given set. Here is its definition: function Pick(s: set<int>): int requires s != {} var x :| x in s; x I'll come back to Pick later. All you need to understand at this time is that the caller of Pick has no control over which element of s is returned. 2.The proof that fails To prove AddElement, we need to show b == Sum(t) holds in its final state. Working backwards over the assignments, this means we need to show a + y == Sum(s + {y}) in the initial state. Since a is Sum(s), our proof obligation comes down to Sum(s) + y == Sum(s + {y}) where we are given that y is not in s. Suppose Pick(s + {y}) returns y. Then, we have Sum(s + {y}); == // def. Sum var x := Pick(s + {y}); x + Sum(s + {y} - {x}); == // using the assumption Pick(s + {y}) == y y + Sum(s + {y} - {y}); == // sets, since y !in s y + Sum(s); That was easy and straightforward. But for this proof, we assumed that the relevant call to Pick returned y. What if Pick returns a different element from s? 3.Picking something else Before you realize Pick can choose a different element than the one you have in mind, the clouds start to clear. What we need is a lemma that says the choice is immaterial. That is, the lemma will let us treat Sum as if it picks, when doing its recursive call, an element that we specify. Here is that lemma. The proof is also a little tricky at first. It comes down to letting Pick choose whatever element it chooses, and then applying the induction hypothesis on the smaller set that Sum recurses on. lemma SumMyWay(s: set<int>, y: int) requires y in s ensures Sum(s) == y + Sum(s - {y}) var x := Pick(s); if y == x { } else { calc { == // def. Sum x + Sum(s - {x}); == { SumMyWay(s - {x}, y); } x + y + Sum(s - {x} - {y}); == { assert s - {x} - {y} == s - {y} - {x}; } y + x + Sum(s - {y} - {x}); == { SumMyWay(s - {y}, x); } y + Sum(s - {y}); I stated the lemma to look like the expressions in the body of Sum, so the two arguments to Sum are s and s - {y}. Alternatively, we can state the property in terms of calls to Sum with the arguments s + {y} and s. This alternative is a simple corollary of the lemma above: lemma AddToSum(s: set<int>, y: int) requires y !in s ensures Sum(s + {y}) == Sum(s) + y SumMyWay(s + {y}, y); Using the lemma Equipped with the useful lemma, it's easy to get the proof of AddElement go through: change its body to t := s + {y}; b := a + y; AddToSum(s, y); 4.Inlining Pick In the development above, I define Pick as a separate function. Reading the word “pick” in the program text may help understand what Sum and SumMyWay do. But it's such a small function, so why not just inline it in the two places where it's used. Let's try it: function Sum(s: set<int>): int { if s == {} then 0 else var x :| x in s; // this line takes the place of a call to Pick x + Sum(s - {x}) lemma SumMyWay(s: set<int>, y: int) requires y in s ensures Sum(s) == y + Sum(s - {y}) var x :| x in s; // this line takes the place of a call to Pick if y == x { // error: postcondition might not hold on this path } else { calc { == // def. Sum // error: this step might not hold x + Sum(s - {x}); == { SumMyWay(s - {x}, y); } x + y + Sum(s - {x} - {y}); == { assert s - {x} - {y} == s - {y} - {x}; } y + x + Sum(s - {y} - {x}); == { SumMyWay(s - {y}, x); } y + Sum(s - {y}); We now get two errors! To explain what's going on, let me say a little more about :| and what makes it unusual. 5.Let such that The let-such-that construct in Dafny has the form var x :| P; E It evaluates to E, where x is bound to some value satisfying P. For example, var x :| 7 <= x < 10; 2 * x evaluates to 14, 16, or 18. As the programmer, you have no control over which value of x is chosen. But you do get to know two important things. One is that x will be chosen to be a value that satisfies P. (The Dafny verifier gives an error if it cannot prove such a value to exist.) The other is that you will get the same value every time you evaluate the expression with the same inputs. In other words, the operator is deterministic. Here is another example to illustrate the point about determinism: var x :| x in {2, 3, 5}; x This expression chooses x to be one of the three smallest primes (2, 3, or 5) and then returns it. You don't know which of the three values you get, but you are guaranteed that every time this expression is evaluated within one run of a program, you will get the same value. Let's be more precise about what I mean by “this expression”. In Dafny, every textual occurrence of a let-such-that expression gets to make its own choices. One way to think about this is to go through the text of your program and to color each :| operator with a unique color. Then, you can rely on choices being the same only if they are performed by the same-color :|. Here is an illustrative example. lemma Choices(s: set<int>) requires s != {} var a := Pick(s); var b := Pick(s); assert a == b; // this is provable a := var x :| x in s; x; b := var x :| x in s; x; assert a == b; // error: not provable The first values assigned to a and b originate from the same :| operator. They are the results of choices of the same color. Therefore, they are known to be the same. In contrast, the next values assigned to a and b originate from different :| operators—ones of different colors. Therefore, you cannot be sure a and b are equal. Actually, if you think about it a little more (or, maybe, a little less), then you realize that we know the first values assigned to a and b to be equal even without knowing anything about the body of Pick. After all, Pick is a function, and if you call a function twice on the same arguments, it will give you back the same value. Mathematics guarantees this, and so does Dafny. So, then what about the second assignments to a and b; aren't the :| operators in those expressions also functions? Yes, they are, but they are different functions. They are functions of different colors, to follow that analogy. As long as you think of every occurrence of :| in your program as being a different function, then all mathematics work out as you'd expect. This is why it was easier for me to describe the Sum situation if I could use just one :|. To reuse that same :|, I placed it in a function, which I named Pick. I recommend you do the same if you're working with ghost functions that involve choices that you want to prove properties about. 6.Different choices If you tried to define Sum and use it in AddElement before understanding these issues, you would be perplexed. Now, you know that it is easier to put :| into a function by itself, and you know that you'll need to write a lemma like SumMyWay. You may be curious if it's possible to do without the Pick function. That is, you may wonder if there's any way to use one :| operator in Sum and another : | operator in SumMyWay. Yes, it is possible. Let me show you how. Suppose we inline Pick in function Sum. That is, suppose we define Sum as in Section4 above. In that section, I mentioned that you'll get a couple of errors if you also inline Pick in SumMyWay. Both of those errors stem from the fact that Sum and SumMyWay make different choices. But we can be more specific in the lemma, to force it to choose the same element as the one chosen in Sum. You can do that by saying you want x not just to be in s, but to be a value that makes Sum(s) == x + Sum(s - {x}) hold true. Only one such x exists, and it's the one that Sum chooses. So, if you write lemma as follows: lemma SumMyWay(s: set<int>, y: int) requires y in s ensures Sum(s) == y + Sum(s - {y}) var x :| x in s && Sum(s) == x + Sum(s - {x}); if y == x { } else { // same calc statement as before... then it verifies! This is good to know, but it seems cleaner to introduce the function Pick around your :|. Beware that every textual occurrence of :| in your program is a different function. You'll have a simpler time working with :| if you roll it into a function that you name, because then you reduce the chance of becoming confused because of different kinds (different “colors”) of choices. Also, beware that the choice made by :| may not be the choice you need. You'll probably want to prove a lemma that says any choice gives the same result in the end. Use lemma SumMyWay above as a template for your proof.
{"url":"https://leino.science/papers/krml274.html","timestamp":"2024-11-09T03:50:38Z","content_type":"text/html","content_length":"45911","record_id":"<urn:uuid:f6d2f8d2-8c1d-4c47-bd22-df81c97aa86d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00080.warc.gz"}
wu :: forums - Interesting Limit wu :: forums putnam exam (pure math) (Moderators: Grimbal, Eigenray, Icarus, william wu, towr, SMQ) « Previous topic | Next topic » Pages: 1 Reply Notify of replies Send Topic Print Author Topic: Interesting Limit (Read 7621 times) Barukh Interesting Limit Uberpuzzler « on: Sep 2^nd, 2011, 1:06am » Quote Modify Find the limit of the following sum when n -> n [k = 1...n] (n^2 + k^2)^-1 Posts: 2276 pex Re: Interesting Limit Uberpuzzler « Reply #1 on: Sep 2^nd, 2011, 4:20am » Quote Modify Isn't that just the Riemann sum for the integral of (1+x^2)^-1 over 0..1? That would make the limit equal to pi divided by four. Posts: 880 Grimbal Re: Interesting Limit wu::riddles Moderator « Reply #2 on: Sep 2^nd, 2011, 5:07am » Quote Modify Here is as formal as a proof as I could get in the short time I worked on this: I computed the sum for n=1000. I got 0.7866. pi/4 = 0.7854. Between an extraordinary coincidence and a very plausible pex being correct, the second option is much more probable. Posts: 7527 Barukh Re: Interesting Limit Uberpuzzler « Reply #3 on: Sep 2^nd, 2011, 11:40am » Quote Modify pex, you are right, and you probably know a much more elegant proof than that of Grimbal's Posts: 2276 pex Re: Interesting Limit Uberpuzzler « Reply #4 on: Sep 3^rd, 2011, 2:01am » Quote Modify Multiply and divide by n^2 to get lim[n to inf] (1/n) sum[k=1..n] (1 + (k/n)^2)^-1, which is by definition int[0]^1 (1 + x^2)^-1 dx = arctan(1) - arctan(0) = pi/4. Posts: 880 Pages: 1 Reply Notify of replies Send Topic Print « Previous topic | Next topic »
{"url":"https://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_putnam;action=display;num=1314950769","timestamp":"2024-11-10T06:41:20Z","content_type":"text/html","content_length":"37146","record_id":"<urn:uuid:7b7b39e2-9190-476b-beda-f51c170ecb10>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00461.warc.gz"}
SBV Regularity of Entropy Solutions for Hyperbolic Systems of Balance Laws: Analysis and Counterexamples Core Concepts This paper investigates the regularity of solutions to hyperbolic systems of balance laws, demonstrating that under specific conditions (genuinely nonlinear or non-degenerate fluxes), solutions exhibit special bounded variation (SBV) regularity, meaning their derivatives lack a Cantor part. However, this regularity can fail for linearly degenerate systems, highlighting the crucial role of characteristic field properties in determining solution smoothness. How do the findings of this paper impact the design of numerical methods for solving hyperbolic systems of balance laws, particularly in the presence of linearly degenerate fields? This paper sheds light on the regularity of solutions to hyperbolic systems of balance laws, particularly distinguishing between genuinely nonlinear and linearly degenerate fields. This understanding has significant implications for designing effective numerical methods for these systems: Adaptive Methods: The paper demonstrates that solutions can exhibit different regularity properties in different regions depending on the nature of the characteristic fields. This suggests that adaptive numerical methods, which adjust the mesh or the order of accuracy based on the local smoothness of the solution, could be particularly effective. For instance, finer meshes or higher-order schemes could be employed in regions where the solution is expected to be less regular due to linear degeneracy, while coarser meshes or lower-order schemes might suffice in regions with genuinely nonlinear fields where SBV regularity holds. Treatment of Linear Degeneracy: Standard numerical methods may experience difficulties in accurately capturing the behavior of solutions near linearly degenerate fields, where SBV regularity fails. This paper motivates the development of specialized numerical techniques specifically designed to handle the challenges posed by linear degeneracy. These techniques might involve carefully designed flux limiters, slope reconstructions, or Riemann solvers that account for the potential loss of regularity. Convergence and Stability Analysis: The regularity results provided in the paper can inform the convergence and stability analysis of numerical methods. For example, the lack of SBV regularity near linearly degenerate fields might impose limitations on the achievable order of convergence. Understanding these limitations is crucial for selecting appropriate numerical methods and for correctly interpreting the results of numerical simulations. Could there be alternative regularity concepts, beyond SBV, that might be applicable or more appropriate for characterizing the smoothness of solutions to linearly degenerate systems? While the paper demonstrates that SBV regularity is not generally achievable for linearly degenerate systems, it hints at the possibility of exploring alternative regularity concepts that could provide a more refined characterization of solution smoothness in these cases. Some potential avenues for exploration include: Fractional BV Spaces: Instead of requiring bounded variation, one could consider spaces of functions with bounded fractional variation. These spaces allow for a more nuanced description of regularity, capturing functions with a certain degree of singularity. Investigating whether solutions to linearly degenerate systems possess some form of fractional BV regularity could provide valuable insights. Weighted BV Spaces: Another possibility is to introduce weights in the definition of BV norms to account for the specific structure of linearly degenerate fields. By choosing appropriate weight functions that depend on the eigenvalues and eigenvectors of the system, one might be able to define weighted BV spaces where solutions exhibit better regularity properties. Kinetic Regularity: Drawing inspiration from kinetic theory, one could explore kinetic formulations of hyperbolic balance laws and investigate regularity properties in the kinetic framework. This approach has been successful in characterizing the regularity of solutions to certain classes of nonlinear PDEs and could potentially offer new perspectives on linearly degenerate systems. What are the implications of these findings for understanding the long-term behavior and stability of solutions to hyperbolic balance laws in real-world physical systems, such as those arising in fluid dynamics or elasticity? The regularity results presented in the paper have important implications for understanding the long-term behavior and stability of solutions to hyperbolic balance laws in various physical applications: Formation of Singularities: The lack of SBV regularity near linearly degenerate fields suggests that solutions to these systems might develop more complex singularities compared to genuinely nonlinear systems. These singularities could manifest as concentrations, oscillations, or other irregular behaviors, potentially influencing the long-term dynamics of the physical system. Dissipative Mechanisms: In real-world systems, dissipative mechanisms, such as viscosity or friction, often play a role in regularizing solutions and preventing the formation of sharp discontinuities. However, the paper indicates that even with vanishing viscosity, SBV regularity might not be fully restored in the presence of linear degeneracy. This highlights the limitations of viscosity in controlling the regularity of solutions and suggests that other mechanisms might be responsible for smoothing out singularities in linearly degenerate systems. Numerical Simulations: When using numerical methods to study the long-term behavior of physical systems modeled by hyperbolic balance laws, it is crucial to be aware of the potential for reduced regularity near linearly degenerate fields. Failure to adequately resolve these regions numerically could lead to inaccurate predictions about the stability and long-term evolution of the system.
{"url":"https://linnk.ai/insight/scientific-computing/sbv-regularity-of-entropy-solutions-for-hyperbolic-systems-of-balance-laws-analysis-and-counterexamples-EqHPvKNq/","timestamp":"2024-11-02T22:13:47Z","content_type":"text/html","content_length":"375518","record_id":"<urn:uuid:64ec658a-a26d-4f64-8779-d44b8584a668>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00537.warc.gz"}
Sylvie's Connection This post is the third of a series on connections on foliated manifolds. 1. The Bott connection on foliated manifolds, 2. Tanno’s connection on contact manifolds, 3. The equivalence of Bott and Tanno’s connections on \(K\)-contact manifolds with the Reeb foliation, 4. Connections on codimension 3 sub-Riemannian manifolds. In the last two posts, we have discussed basic properties of the Bott connection on general foliated manifolds and Tanno’s connection on contact manifolds. Here we will show that the two notions are equivalent under a certain condition on the contact structure. Throughout this post, all manifolds will be smooth. 3. Bott and Tanno’s connections on \(K\)-contact manifolds The key property we want on a contact manifold is the following: Definition 3.1 Let \((\mathbb{M},\theta,g)\) be a contact manifold with compatible metric \(g\). We call \(\mathbb{M}\) a \(K\)-contact manifold if the associated Reeb field \(\xi\) is a Killing field, that is if \[\mathcal{L}_\xi g = 0\] We are interested in \(K\)-contact manifolds because of the following Proposition 3.2 Let \((\mathbb{M},\theta,g,\mathcal{F}_\xi)\) be a contact manifold equipped with Reeb foliation \(\mathcal{F}_\xi\). Then the following are equivalent: 1. \((\mathbb{M},\theta,g)\) is a \(K\)-contact manifold, 2. \((\mathbb{M},g,\mathcal{F}_\xi)\) is a totally-geodesic foliation with bundle-like metric \(g\). Remark: Boyer and Galicki indicate that they prefer the name bundle-like contact metric manifold to \(K\)-contact manifold, as it is more descriptive and equivalent by the above. I’m not sure of the history of the name, but this makes sense to me. I’ll probably use the two interchangeably in future posts. The equivalence of the \(K\)-contact condition and \((\mathbb{M},g,\mathcal{F}_\xi)\) being having a bundle-like metric \(g\) is by essentially definition since this is equivalent to \[\mathcal{L}_Zg(X,X) = 0\] for \(X \in \Gamma(\mathcal{H}), Z \in \Gamma(\mathcal{V})\). To see that \(K\) contact manifolds are totally-geodesic foliations, observe that \mathcal{L}_Xg(Z,Z) &= X\cdot g(Z,Z) – 2g([X,Z],Z) \\ &= 2\theta(Z) \iota_X d\theta(Z) + 2g([Z,X],Z) \\ &= -\mathcal{L}_Z g(X,Z) + Z \cdot g(X,Z) \\ &= 0 \\ completing the proof. Remark: I think there must be a nicer way to show that \(K\)-contact manifolds are totally-geodesic, I may update this. Now we can state the main claim: Theorem 3.3 Let \((\mathbb{M}, \theta, g)\) be a \(K\)-contact manifold with Reeb foliation \(\mathcal{F}_\xi\). Then the Bott connection \(\nabla^B\) on \((\mathbb{M},g,\mathcal{F}_\xi)\) and Tanno’s connection \(\nabla^T\) on \((\mathbb{M},\theta,g)\) coincide. By Proposition 3.2 the Bott connection is well-defined, and both the Bott and Tanno’s connections are unique by definition. To see that they are equivalent, we need to show that one satisfies the conditions of the other. We will proceed by showing that Tanno’s connection satisfies the conditions of Theorem 1.1 defining the Bott connection. 1. (\(\nabla^B\) is metric) By definition, Tanno’s connection is metric. 2. (If \(Y \in \Gamma(\mathcal{H})\) then \(\nabla^B_XY \in \Gamma(\mathcal{H})\)) We have that \nabla^T_XY &= -\nabla^T_X(J^2Y) \\ &= -(\nabla^T_X J)(JY) + J(\nabla^T_X(JY)) \\ &= -Q(JY,X) + J(\nabla^T_X(JY)) \\ &= -\left( (\nabla^g_XJ)(JY) – [(\nabla^g_X\theta)(J^2Y)]\xi +\theta(JY)J(\nabla^g_X\xi) \right) + J(\nabla^T_X(JY)) \\ &= -\left( \nabla^g_X(J^2Y) – J(\nabla^g_X(JY)) – \nabla^g_X(\theta Y) + \theta(\nabla^g_XY)\xi \right) + J(\nabla^T_X(JY)) \\ &= -\left( – \nabla^g_XY – J(\nabla^g_X(JY)) + \theta(\nabla^g_XY)\xi \right) + J(\nabla^T_X(JY)) \\ &= – J(\nabla^g_XY) + J(\nabla^g_X(JY)) + J(\nabla^T_X(JY)) \in \Gamma(\mathcal{H}) 3. (If \(Z \in \Gamma(\mathcal{V})\) then \(\nabla^B_XZ \in \Gamma(\mathcal{V})\)) By property 2 of Tanno’s connection, \[\nabla^T_XZ = \nabla^T_X(\theta(Z)\xi) = \nabla^T_X(\theta(Z))\xi \in \Gamma(\mathcal{V})\] 4. (For \(X_1,X_2 \in \Gamma(\mathcal{H})\) and \(Z_1,Z_2 \in \Gamma(\mathcal{V})\) it holds that \(T^B(X_1,X_2) \in \Gamma(\mathcal{V})\) and \(T^B(Z_1,X_1) = T^B(Z_1,Z_2) = 0\)) For the first claim, we see that by property 4 of Tanno’s connection, \[T^T(X_1,X_2) = d\theta(X_1,X_2)\xi \in \Gamma(\mathcal{V}).\]For the second, T^T(Z_1,X_1) &= -T^T(Z_1,J^2X_1) = JT^T(\xi,JX_1) \\ &= – J^2T^T(Z_1,X_1) \\ using the fact that \(J^2X_1 = -X_1\) for horizontal vector fields and property 5 of Tanno’s connection. This implies that \(T^T(Z_1,X_1)\) is horizontal. By the definition of the torsion tensor we see that \[T^T(Z_1,X_1) = \nabla^T_{Z_1}X_1 – \nabla^T_{X_1}Z_1 – [Z_1,X_1] = \nabla^T_{Z_1}X_1\] since \(\nabla^T_{X_1}Z_1\) is vertical by 3, and the bracket vanishes by assuming \(X_1\) to be basic. However, the right hand side of this expression is not tensorial in \(X_1\), and so we conclude that \[T^T(Z_1,X_1) = 0\] \[T^T(Z_1,Z_2) = \theta(Z_1) \theta(Z_2) T^T(\xi, \xi) = 0\] completing the proof.
{"url":"http://vega-molino.com/tag/tannos-connection/","timestamp":"2024-11-04T10:13:06Z","content_type":"text/html","content_length":"60863","record_id":"<urn:uuid:fa4898ce-64ab-4eef-b904-5e4db6c41930>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00103.warc.gz"}
Re: Recursive definitions of higher-order operators [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: Recursive definitions of higher-order operators There is no plan to allow recursive definitions of higher-order operators. I believe that Georges Gonthier believes that allowing them is not a problem. I think that supporting them would just require a simple change to the parser and no change to TLC, though I'm not positive about that. However, I found it hard enough to understand the meaning of a recursive definition of an ordinary operator, and I was not able to understand recursive definitions of higher-order operators well enough to be convinced of their soundness. And recursive definitions of first-order operators are still not supported by TLAPS. On Sunday, December 6, 2015 at 7:34:46 AM UTC-8, Y2i wrote: Just wanted to ask if there are plans to support recursive definitions of higher-order operators in the future? Thank you,
{"url":"https://discuss.tlapl.us/msg00674.html","timestamp":"2024-11-12T23:39:45Z","content_type":"text/html","content_length":"4422","record_id":"<urn:uuid:54d78463-53d6-4baf-bfee-345baca62713>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00709.warc.gz"}
This article is to provide the Mooring and berthing load calculation on Bollards and Fenders. Assuming Bollards and Fenders are installed at Piles, Forces can be transferred to Piles for Pile design load calculation Wharf shall moor the barges and tug boats of various sizes on both sides of wharfs. This Wharf structure are designed with steel mono-pile, with steel pile head platforms installed. The wharf will be fitted with Cone Type or typical tyre fenders on the Pile to protect against impact and facilitate landing of s Mooring system to wharf side shall comprise of Braided Nylon Ropes and mooring bits. Mooring system shall be designed for 10 yrs return period environmental loads including storms in the particular location. The maximum approach velocity and angle for berthing shall be specified based on Wharf structural design. During the design of wharf System, the following 2 design conditions shall be considered: (1) Ship Berthing Condition During Ship berthing on site, there will be berthing loads acting on Wharf through fenders. (2) Ship Wharf Mooring Condition For Ship permanently mooring on site, 10 years return period shall be considered for wind. In order to design the wharf System, berthing loads, mooring loads from TUG/BARGE shall be assessed first. The berthing loads for Berthing Condition will be assessed based on the berthing energy calculated as per BS 6349-Part 4: Code of practice for design of fendering and mooring systems. Mooring loads for ship Mooring will be calculated based on hydrodynamic analysis using appropriate software. Based on these berthing loads and mooring loads, the piles of Wharf, fenders of Wharf, mooring lines and relative equipment shall be designed. Water depth near jetty shall be considered in the design, and the following sea states are to be considered for different conditions: (1) Sea state for ship Berthing Condition The sea state is considered as benign, and as per the berthing conditions specified by BS 6349- Part 4, “Sheltered berthing, difficult” shall be chosen. (2) Sea state for ship Permanent Mooring Condition For ship mooring on site, 10 years return period of wind shall be considered for environmental loads. • Wave Height • Wind Speed • Current Speed The total amount of energy E (kNm) to be absorbed, by the fender system either alone or by a combination of the fender system and the structure itself with some flexibility, may be calculated from the following energy formulae: E=0.5*Cm*Mv *Vb2 *Ce *Cs *Cc where Cm is the hydrodynamic coefficient. Mv is the displacement of the vessel (t). Vb is the velocity of the vessel normal to the berth (m/s). Ce is the eccentricity coefficient. Cs is the softness coefficient. Cc is the berth configuration coefficient. This energy depends on the velocity of the vessel normal to the berth and a number of factors that modify the vessel’s kinetic energy to be absorbed by the fender system and the structure. (1) Berthing Velocity, Vb The berthing velocity of the vessel normal to the berth depends on the vessel size and type, frequency of arrival, possible constraints on movement approaching the berth, and wave, current and wind conditions likely to be encountered at berthing. The velocity with which a ship closes with a berth is the most significant of all factors in the calculation of the energy to be absorbed by the fendering system. In more difficult conditions velocities may be estimated from below Figure on which five curves are given corresponding to the following navigation conditions. a) Good berthing, sheltered; b) Difficult berthing, sheltered; c) Good berthing, exposed to waves and/or currents; d) Difficult berthing, exposed to waves and/or currents; e) Adverse berthing, exposed to waves and/or currents. (2) Hydrodynamic Mass Coefficient, CM The hydrodynamic mass coefficient allows the movement of water around the ship to be taken into account when calculating the total energy of the vessel by increasing the mass of the system. The hydrodynamic mass coefficient CM may be calculated from the following equation (BSI, 2014): CM = 1 + 2T/B = 1.5 Where T is the draft of the vessel (m). B is the beam of the vessel (m). (3) Eccentricity Coefficient, CE A vessel will usually berth at a certain angle and hence it turns simultaneously at the time of first impact. During this process, some of the kinetic energy of the ship is converted to turning energy and the remaining energy is transferred to the berth. The eccentricity coefficient represents the proportion of the remaining energy to the kinetic energy of the vessel at berthing. The formula for calculating the coefficient is given as follows (BSI, 2014): Where K is the radius of gyration of the ship. K = (0.19Cb + 0.11) L L is the length of the hull between perpendiculars (m). Cb is the block coefficient Cb = displacement (kg)/(L(m) x beam(m) x draft(m) x density of water(kg/m3)) R is the distance of the point of contact from the centre of mass (m). γ is the angle between the line joining the point of contact to the centre of mass and the velocity vector. 4) Softness Coefficient, CS The softness coefficient allows for the portion of the impact energy that is absorbed by the ship’s hull. Little research into energy absorption by ship hulls has taken place, but it has been generally accepted that the value of CS lies between 0.9 and 1.0. For ships, which are fitted with continuous rubber fendering, CS may be taken to be 0.9. For all other vessels CS = 1. (5) Berth Configuration Coefficient / Water Cushion Effect, CC The berth configuration coefficient allows for the portion of the ship’s energy, which is absorbed by the cushioning effect of water trapped between the ship’s hull and Wharf wall. The value Cc is between the ship’s hull and Wharf wall. The value of CC is influenced by the type of Wharf construction, and its distance from the side of the vessel, the berthing angle, the shape of the ship’s hull, and it’s under keel clearance. A value of 1.0 for CC should be used for open piled Wharf structures, and a value of Cc of between 0.8 and 1.0 is recommended for use with a solid Wharf wall. 4.5.2 Berthing Energy Distribution It shall be assumed that the first berthing fender will absorb 100% of total berthing energy. Berthing Load Estimation Super cone fender or other fenders shall be used to absorb berthing energy. Characteristic of fender compression and reaction force shall be used to determine compression of fenders. According to the calculated berthing energy, the berthing load can be extracted from the generic curve of super cone fender. In below fender characteristic graph, two curves are included. Lower curve represents percentage of maximum energy absorbed by the fender vs percentage of maximum deflection of fender. Higher curve reflects percentage of maximum reaction of fender vs percentage of maximum deflection. Assuming Berthing load absorbed by one fender, we can calculate this energy in terms of fender maximum energy absorption capacity (red line). Corresponding deflection of fender can be calculated from lower graph (Blue Line). For same percentage of deflection, corresponding reaction produced can be calculated from higher curve. Hydrodynamic Analysis Diffraction Radiation calculation are performed with software. This classical problem of diffraction radiation leads to the evaluation of the hydrodynamic loads on a structure, submitted to regular waves and enable to get accurate RAOs operation of the vessel. The effect of shallow water on drift loads is considered in the analyses and provides proper drift forces on the hull (QTF, Newman Approximation). Ship hydrodynamic model shall be used which gives resultant forces bit higher than original barge and tug boat. 30 x8x3 m barge hull is modelled with windage area of 10x76x4 This arrangement represents hull with cargo windage area or tug boat with accommodation windage area. Time Domain Mooring Analysis The analysis is an assessment of the loads made at the Mooring bit, fenders and the mooring lines based on a numerical model using available data. The simplified configuration of the Barge/Tug boat is modelled in Moses as per design data previously presented and based on the following assumptions Barge/Tug boat are moored to the mooring bit using mooring ropes and resting on fenders. Barge/Tug boat is modelled with 4 mooring lines. Hull is a moving body under the action of wind wave and current. Mooring ropes are modelled as nonlinear spring. First, incidences of wind, wave and current are decided based on available metocean data. Then, time domain analyses are run for the independent wind wave and current cases for 1 hour duration to be able to decide the worst headings to study the combinations of wind wave and current. Second, the time domain analyses for the combinations of maximum individual wind wave and current are run. The duration of the simulations are 3 hours at real time ensuring the capture of sufficient number of low frequency motions cycles.
{"url":"https://www.mermaid-consultants.com/mooring-and-berthing-load-calculation.html","timestamp":"2024-11-04T16:40:14Z","content_type":"text/html","content_length":"106066","record_id":"<urn:uuid:51e01638-cf42-4a62-87bf-91b82d57eae3>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00706.warc.gz"}
The Kalman filter on a moving target In a recent series I’ve been discussing the lighthouse problem: trying to identify the coordinates of a lighthouse at sea based on only sporadic recordings of its light hitting the shoreline. First, I walked through the powerful, but computationally expensive, purely Bayesian approach. Then I introduced the Kalman filter as an efficient alternative, and we saw how, even though it should be theoretically incapable of tackling that problem, it turned out to be easy to make it work very well with just a few small tweaks. Even so, I’ve only demonstrated half of the Kalman filter’s power: the update step, which processes new data. The Kalman filter also includes, as part of its machinery, a prediction step, which takes the probability distribution as of timestep k and extrapolates a new distribution for timestep k+1. Because a lighthouse by its nature doesn’t move, that prediction step changed absolutely nothing, so I got away with neglecting it until now. Today we’re going to replace the lighthouse with a moving ship. We’ll need to add prediction into our Kalman filter to keep up, and we’ll see how this, too, follows from the Bayesian probability By the way… I’m not actually obsessed with tracking down maritime objects! My motivation for this series is to familiarize you with how the Kalman filter approximates Bayesian reasoning, and the lighthouse problem is a testbed that is challenging-enough to keep it fun. But the problem is also more practical than it might seem. For example, it’s isomorphic to the challenge of locating the position of a radioactive tracer inside a body by using a detector array on the surface.1 So let’s reformulate the lighthouse problem to make a “ship” problem: A ship is somewhere off a piece of straight coastline at position α(t) along the shore and a distance β out at sea. It is moving parallel to the shore at a constant, but unknown, velocity v m/s. It emits short highly collimated flashes at 1-second intervals but at random azimuths. These pulses are intercepted on the coast by photo-detectors that only record the time that a flash has occurred, but not the angle from which it came. Flashes so far have been recorded at positions {x₁…xₖ}. Where is the ship currently? We could certainly make this problem even more complicated — the ship could move along a random vector; it could also accelerate; its acceleration could vary with time; we could have partial control of its acceleration; it could steer; bounce off obstacles; rise into the air; teleport unpredictably, and so on. But this already should be enough to introduce the idea of prediction, and then we can consider further mixups next time. Here are the equations of motion for this ship: • α(t) = α₀ + vt — Moving at speed v along the shoreline • dv/dt = 0 — Speed doesn’t change • dβ/dt = 0 — Distance from shore doesn’t change Let’s quickly go through the Bayesian approach, so that we can compare how well the Kalman filter measures up to it. The computational difficulty is heating up as the new unknown parameter, v, introduces yet an additional dimension to consider. But as usual, we just need to start with a prior, then figure out what the parameters [α(tₖ), β, v] say about the likelihood of observing data xₖ, and then rely on Bayes’ rule to do the hard work of flipping that around to get what the data implies about α(tₖ), β, v. Let’s start with Bayes’ theorem as always. Here is how it guides us to update on the latest measurement xₖ₊₁ to learn about the latest state y = [α(tₖ₊₁), β, v]: And again, for those who don’t love to read math, here’s the English phrasing: Or in even fewer words: posterior = prior × likelihood. So to get our posterior describing the ship’s location, we need a prior and a likelihood. Let’s do it. The once and future prior Let’s keep the non-informative priors we used earlier: uniform for α₀, and log(β). Now we need a prior probability for v. I don’t think that the information in the problem gives us one unique choice, but remember, the most important thing about an uninformative prior is just to be sure we’re not prematurely ruling out possible true values. A quick search suggests the world maritime speed record was set in 1978 by Ken Warby’s Spirit of Australia at about 276 knots, or 511 km/hr, powered by a J34 fighter jet engine. At risk of understatement, our ship in question is unlikely to be going faster than that. So for argument’s sake let’s make our prior for v Gaussian with a mean of 0 and a standard deviation of, say, ~250 km/hr (70 m/s). Combined with our priors for α₀ and β, we have: This is a good initial prior to start out with for updating on our very first data point x₁. But now here’s the important part: we need a prior for every update that we do, on every new data point, not just t₀. What about for t₁? For t₂? For tₖ₊₁? Back when we were tracking a lighthouse, the only thing that was changing with time was our knowledge of the parameters, not their actual values. If a minute passed by with no new data, we would have no reason to change our beliefs about the lighthouse’s coordinates. So we were able to just recycle our posterior after updating on x₁ to serve as our prior when updating on x₂. This gave us a very nice recurrence relationship. With a moving ship, our posterior for α(t₁) no longer can double as a prior for α(t₂), because we know the ship’s position will have moved over that time interval. Even if our sensors lost power for a while and we had to extrapolate the ship’s position with no new data, we could do better than just guessing it stays still. Fortunately, the system’s equations of motion show us how to make a much better guess: α(t₂) = α(t₁) + vΔt. Plugging this into our prior allows us to look forward and adjust our probability distribution in accordance with the way the system moves: To put this equation into words: whatever probability distribution we’d found as our posterior for α(t₁), we just shuffle it over sideways by vΔt to get the prior for α(t₂). (We don’t know v for certain, so we’ll make up for that by brute force computation: iterating this process over all plausible velocities.2) The likelihood The direction of the light flash isn’t directly influenced by the ship’s speed, so supposing the ship’s next position [α(tₖ₊₁), β] is already enough to get the corresponding probability distribution of the next data point xₖ₊₁. Conditional on the next-timestep state, the likelihood is still just our old Cauchy equation. And finally… lots of turning the crank We’ve seen how Bayes’ rule splits up into two terms, representing the repetitive application of two procedures: 1. Generate a new prior probability by extrapolating forward from our old posterior, and 2. Generate a new posterior by updating on the new data xₖ₊₁ Now getting the full Bayesian solution is just a matter of running that computation many, many, many times over a humongous grid of plausible positions and velocities. I won’t easily be able to plot the entire 3D joint distribution, so I’ll collapse it down to just the distribution for the ship’s coordinates [α(tₖ), β], and display v by only its marginal I’ve added a colour gradient to the measurement histogram so that the recency of each measurement is visible. I don’t know about you, but I find it impressive that the algorithm can extract such a confident, accurate picture of the ship’s motion from such scattered data! Kalman’s turn It took about two hours for my Thinkpad to compute the above Bayesian solution. The Kalman filter reduces that to milliseconds. The Kalman filter advances in two steps: prediction, then update. Last time, we saw how the Kalman filter’s update process comes directly from approximating Bayesian probabilities with a Gaussian. By now it should be no surprise that the prediction step does, too. In order to get the prior probability for Bayes’ equation, we had to use the system dynamics to shift our old posterior distribution into a new prior for the next timestep. Since we weren’t sure what those dynamics are exactly, we just iterated the process over every possibility and turned the crank. The Kalman filter’s prediction step does this job while replacing brute force with finesse. With the Kalman filter we’re representing our beliefs by a Gaussian, so we just need to keep track of a mean and an uncertainty. Working out what happens with the mean is pretty straightforward: our best guess for α(t+Δt) would be to just take our guess for α(t) and shift it by vΔt, using our best guess for v. So what I want to write is: μₐ,ₖ₊₁ = μₐ,ₖ + μᵥ,ₖΔt. And that would be correct, except that I was already using the symbol μₐ,ₖ₊₁ for “the mean of α after the update on the k+1th data point”. We need a different symbol to represent the intermediate state that comes after the prediction, but before the update on new data. I’ll use ͞μₐ,ₖ₊₁ to refer to “the mean value of α that is predicted for tₖ₊₁ ”. So, the state prediction for α is ͞μₐ,ₖ₊₁ = μₐ,ₖ + μᵥ,ₖΔt. We also need to apply this process to the covariance — the uncertainty. If you think about it, any uncertainty we have about v should cause a gradually-increasing uncertainty about the position α. Even if we knew α(t₁) with pinpoint accuracy, by time t₂ our estimate would smear to a cloud if we didn’t know the speed of travel. As with all Gaussians, that cloud will be an ellipsoid shape. But it will be tilted in the α-v plane, because the errors should correlate: if the real v is higher than our best guess μᵥ, then the real α(tₖ₊₁) will be higher than our best guess μₐ,ₖ₊₁ and Enter the matrix The Kalman filter has us write our system dynamics in terms of a state transition matrix, F, which expresses how the present state relates to the future state. This is part of what’s called a state-space model.3 4 When we transform our probability distribution with F, the new (predicted) mean evolves directly from the previous mean, via that state transition matrix:5 ͞μₖ₊₁ = Fμₖ. Meanwhile the covariance undergoes the same transition, but it applies twice, basically because variance is a squared quantity.6 7 The equation for the covariance of the prediction is ͞Sₖ₊₁ = FSₖFᵀ. In other words, the Kalman filter’s prediction is nothing magical or surprising; we just evolve our beliefs about the system state by following the very same dynamics that we believe govern the system itself. Because how would you make any better guess than that? Then, after making the prediction, we update on the measurement just like we did in the previous article. With each timestep, the prediction grows the covariance slightly (as we’re always a bit more uncertain about the future than the present), and then the measurement shrinks it again. Here’s the Kalman filter at work. The prediction step is shown in red, and the update step is in pink. As it did last time, the Kalman filter tracks extremely closely to the ideal of direct Bayesian estimation. It’s tempting to think it’s just curve-fitting the Bayes output, but you have to remember it doesn’t actually have access to that data; both solutions are being computed independently. This might be especially surprising when you remember that we’re still voiding the warranty, with noise that is about as non-Gaussian as it gets.8 Source code is available on gitlab if you’d like to follow along. Anyway, that’s basically the prediction step of the Kalman filter in a nutshell. There are still a few more features up its sleeve to handle more complicated systems with control inputs and process noise, but they don’t change the idea very much. If the system is strongly nonlinear, then one matrix multiplication alone won’t be enough to pull a new prior from our old posterior, and the Kalman filter won’t give good results. But we can usually fix such limitations without changing the Bayesian spirit at the heart of the technique. Any transformation — linear or otherwise — that can forecast our knowledge from t₁ to represent our most honest prior for the system state at t₂ is fair game. Techniques like the Extended Kalman Filter and Unscented Kalman Filter extend the core ideas of the Kalman filter for the case of nonlinear Final thoughts I know these short articles aren’t enough to offer a detailed mathematical understanding of how to implement a Kalman filter well for an arbitrary system, and they aren’t intended as tutorials. Fortunately, there are many other authors who have done much more complete, and much more respectable, jobs of that.9 I hope only to have planted some seeds for the appreciation of the Kalman filter’s usefulness, its power, its flexibility, and its deep connection to Bayesian probability, with as little math as I could get away with (though still quite a bit, sorry). I hope that this helps connect the dots for those who have struggled with more in-depth tutorials, or perhaps inspired you to “void the warranty” yourself, and experiment with pushing beyond a technique’s conventional limitations.10 And even if you never have any need to filter a noisy signal or estimate a parameter, even if you never have cause to use a Kalman filter for anything, I think that it’s still a concept worth knowing about. When an idea is so simple and so powerful, it tends to show up everywhere — especially in nature. (Is the hippocampus a Kalman filter?) The Kalman filter, of course, only captures a small part of the depth of Bayesian reasoning. But in contrast to pure Bayesian logic, which tends to spiral off into infinite loops, halting problems, and other calculations that outlast the lifespan of the sun, the Kalman filter captures reality with incredible efficiency. That, in my opinion, has earned it the right to be respected as a basic building block of system theory, prediction, and even perception in general. This post isn’t intended as professional engineering advice. If you are looking for professional engineering advice, please contact me with your requirements. Applications of this technique that I’ve seen in the literature tend to use detectors on all sides. However by using the Cauchy distribution in the style of the lighthouse problem, a single flat detector plane should be sufficient to localize a radioactive particle in 3D space; depending on the source intensity, this could be accomplished pretty cheaply with a camera and a plastic scintillator sheet. This requires discretizing the possible velocities into a grid, just as we did with position. As with any discretization process, there is a tradeoff to be made in terms of resolution vs. memory & computation time. There are ways to save some time and memory, such as refining the grid density to be denser in areas of high probability and sparser in areas of low probability. State-space form is, basically, taking a system described by differential equations and, at least if that system is linear, organizing those equations in the following traditional format: where x is called the state vector and contains all the data about the internal state of the system; u is the input vector, containing whatever inputs might be applied to the system; y is the output vector, containing whatever important properties we care about extracting from the system. (Note that in my articles I have been using ‘x’ to denote sensor measurements, not state, following the phrasing of the lighthouse problem). A, B, C, and D are matrices that describe all the connections between the state variables, their rates of change, and the inputs and outputs. If A, B, C, and D do not vary with time, the system is called time-invariant. This ship system has no inputs, so B and D are empty. If we’re interested in displaying the ship’s full state [α(t), β, v], then y = x, so C is a 3x3 identity matrix. And last but not least, the system matrix A contains our system’s equations of motion: Representing differential equations in state-space form is popular, compact, and versatile format. It serves as a common “user interface” to a lot of techniques and computer algorithms, including the Kalman filter. So the first step in many controls and modelling problems is to get a state-space representation of the system. That said, there are some systems that are difficult to express this way. Nonlinear systems require relaxing the formatting constraints a little bit. Fractional-order systems, even when linear and time-invariant, sort of break the entire idea behind this approach, and are easier to just express as a list of transfer functions. The state transition matrix F is essentially a discrete-time version of A. Rather than represent the relationship between x(t) and its rate-of-change, it represents the relationship between x(tₖ) and its subsequent value x(tₖ₊₁). For time-invariant systems it can be calculated from A by the matrix exponential: F = expm(A Δt). If we had data about any control inputs to our system uₖ, we could incorporate that data into our prediction by expanding the equation a little bit: μₖ₊₁ = Fμₖ + Guₖ. The effect of the inputs uₖ on the state variables are quantified by G, the input transition matrix. G serves the same role in the discrete-time model that B serves in the continuous-time model, just like F relates to A. To elaborate, if X is a random variable (such as our state vector) over which we have some probability distribution, and F is a linear transformation applied to X, and E(X) denotes the “expectation of X” (which is another way of saying the mean of X, which we can also write μ), then: • E(FX) = FE(X) = Fμ • cov(FX) = E( (FX - Fμ) (FX - Fμ)ᵀ) = F E( (X - μ) (X - μ)ᵀ) Fᵀ = F cov(X) Fᵀ But I think “the transformation applies twice, because variance is a squared quantity” is a bit more intuitive and much easier to remember. By the way, we can also add in an fudge factor for additional uncertainty, called process noise, representing both unmeasured external disturbances as well as any further doubts we might have in the model itself. This just adds a little extra to the covariance with each prediction step. For example, we could use this if we thought the ship velocity might be actually varying a bit with time. Then even if we’ve pinpointed the velocity at time t₁ with our measurements, we will lose some confidence in our forecast of that velocity for time t₂, as we ought to. When a model is built on strong mathematical foundations, it might not make much sense to include additional process noise. For example, adding a process noise term to the covariance of α would suggest that (somehow?!) we aren’t confident that the velocity is the only thing that could contribute to α changing over time. Since the change in α over time is the velocity by definition, that seems like a mistake. If we wanted to represent a little extra uncertainty in the ship’s position for whatever reason, the measurement noise term might be a more sensible place to put it. In fact it’s so far from Gaussian that even the central limit theorem itself doesn’t save us; while most random variables will eventually converge to Gaussian when you average them all together, averaging together any number of Cauchy-distributed variables still just leaves you with a Cauchy distribution no matter how much data you collect. It’s actually surprising how much literature exists about the Kalman filter specifically, especially tutorials and derivations, and a lot of it is exceptionally well-presented and approachable. I really hesitated to write a series about it for that reason, because I didn’t want to add noise to an already clear signal. But I plan to refer to the Kalman filter a fair bit in future articles, and it seemed negligent to not at least provide a basic and intuitive, if slightly incomplete, foundation. This is why I haven’t gone deep into the weeds on control inputs, process noise, how to appropriately derive the measurement covariance matrix, and so on. What I really want to cover here is not rote tutorials or dusty proofs, but rather less-trodden ground. Interesting questions that lurk in the night, ready to strike curious-minded engineers, whose answers are frustratingly hard to find on the internet or in literature. (Rather like attaching a fighter jet engine to a boat in order to claim the world speed record.) I'm wondering if in some situations, a Kalman filter simplifies to equations that are easy to implement? What do degenerate forms of it look like? Expand full comment 1 more comment...
{"url":"https://thesearesystems.substack.com/p/the-kalman-filter-on-a-moving-target","timestamp":"2024-11-02T21:52:12Z","content_type":"text/html","content_length":"327023","record_id":"<urn:uuid:0969f08a-0b96-4752-9f67-58ce02fda671>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00370.warc.gz"}
Manipulate Python Sets All Course > Python > Python Sets Nov 17, 2023 Manipulate Python Sets In Python programming, sets are a versatile data structure that allow you to work with unique collections of items. Whether you're performing basic set operations or advanced manipulation techniques, understanding how to effectively use sets can greatly enhance your Python coding skills. In this article, we'll explore various methods and functions for manipulating sets in Python, covering everything from basic operations to advanced techniques. Understanding Python Sets Python sets are unordered collections of unique elements. This means that sets do not allow duplicate values, and the elements are not stored in any particular order. Sets are commonly used for tasks such as removing duplicates from a list, testing membership of elements, and performing mathematical set operations. Basic Set Operations Python provides several built-in methods for performing basic set operations, including: • Union (union()): The union of two sets combines all unique elements from both sets into a new set. In other words, it creates a set containing all elements that are present in either set, without • Intersection (intersection()): The intersection of two sets returns a new set containing only the elements that are common to both sets. In simpler terms, it produces a set with elements that exist in both sets. • Difference (difference()): The difference between two sets returns a new set containing the elements that are present in the first set but not in the second set. It essentially removes the elements of one set from another. • Symmetric Difference (symmetric_difference()): The symmetric difference between two sets returns a new set containing elements that are present in either of the sets, but not in both. In essence, it removes the common elements from both sets and retains the unique ones. Let’s illustrate these operations with some examples. set1 = {1, 2, 3} set2 = {3, 4, 5} # Union print(set1.union(set2)) # Output: {1, 2, 3, 4, 5} # Intersection print(set1.intersection(set2)) # Output: {3} # Difference print(set1.difference(set2)) # Output: {1, 2} # Symmetric Difference print(set1.symmetric_difference(set2)) # Output: {1, 2, 4, 5} Advanced Set Manipulation Techniques In addition to basic operations, Python offers powerful techniques for advanced set manipulation. These include methods like add(), remove(), clear(), and update(), among others. Adding Elements (add() method) The add() method in Python sets is used to add a single element to a set. If the element is already present in the set, it will not be added again, as sets do not allow duplicates. This method is particularly useful when you want to insert a new unique element into an existing set without worrying about duplicates. my_set = {1, 2, 3} # Adding an element print(my_set) # Output: {1, 2, 3, 4} # Adding an existing element (no change) print(my_set) # Output: {1, 2, 3, 4} Removing Elements (remove() method) The remove() method is used to remove a specific element from a set. If the element is not present in the set, it will raise a KeyError. This method is handy when you want to precisely delete a particular element from a set. my_set = {1, 2, 3} # Removing an element print(my_set) # Output: {1, 3} # Attempting to remove a non-existent element (raises KeyError) # my_set.remove(4) Clearing the Set (clear() method) The clear() method removes all elements from a set, leaving it empty. This is useful when you need to reset a set or free up memory occupied by its elements without deleting the set itself. my_set = {1, 2, 3} # Clearing the set print(my_set) # Output: set() Updating the Set (update() method) The update() method adds elements from another iterable (such as a list, tuple, or another set) to the set. This operation effectively performs a union of the two sets, adding any unique elements from the iterable to the set. Duplicate elements are automatically removed. my_set = {1, 2, 3} another_set = {3, 4, 5} # Updating the set print(my_set) # Output: {1, 2, 3, 4, 5} # Updating with a list my_set.update([5, 6, 7]) print(my_set) # Output: {1, 2, 3, 4, 5, 6, 7} These advanced set manipulation techniques provide you with powerful tools to manage and manipulate sets efficiently in Python. Whether you need to add or remove elements, clear a set, or update it with new elements, Python’s set methods offer straightforward solutions to your programming needs. In conclusion, mastering set manipulation in Python is essential for any programmer looking to work efficiently with collections of unique elements. By understanding basic set operations and advanced manipulation techniques, you can streamline your code and solve various computational problems more effectively. With the knowledge gained from this article, you’re well-equipped to tackle set-related tasks in your Python projects. Q: Can sets contain duplicate elements in Python? A: No, sets in Python do not allow duplicate elements. Each element in a set must be unique. Q: What is the difference between add() and update() methods in Python sets? A: The add() method is used to add a single element to a set, while the update() method can be used to add multiple elements from another iterable (like a list or another set) to the set. Q: How can I check if a specific element is present in a set in Python? A: You can use the in keyword to check for membership. For example: if element in my_set:. There are no comments yet.
{"url":"https://technetzz.com/manipulate-python-sets.html","timestamp":"2024-11-02T02:09:25Z","content_type":"text/html","content_length":"29584","record_id":"<urn:uuid:f2468029-25e1-4de0-be6e-ef8808791e83>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00009.warc.gz"}
Cryptography is all about keys, and it is usually assumed that a large random number would make a good key. Your cryptographic security depends on how hard it is for an enemy to guess this key. If you consider a password as a cryptographic key, we all know that large and random is best - the National Cyber Security Centre (NCSC) recommends using 3 random words. Clearly an obvious class of weak keys would be keys that are simply too short. For this reason the commonly used AES (Advanced Encryption Standard) uses at a minimum 128-bit keys, far too big to be guessed. And the AES has no weak keys, each possible key is as good as any other. However its’ predecessor DES - which used 56-bit keys - did have a small number of keys that while 56-bits long were particularly “weak”, in the sense that they resonated with the cipher’s structure to render it completely ineffective and unable to encrypt securely. An example of a weak DES key would be FFFFFFF0000000 in hex. Clearly a randomly generated key would be very unlikely to be a weak key, but nonetheless it was often recommended during key generation that a check be made to exclude the remote possibility of a weak key being generated. Sometimes the possibility of weak keys cannot be avoided. The famous RSA encryption method uses a public key N which is the product of two secret and randomly chosen prime numbers p and q, so N=pq. The strength of the system depends on the difficulty of factoring N into p and q. Now normally if p and q were random 1024-bit primes this would indeed be a hard problem to solve. But if some idiot were to choose p=q, then factoring N becomes a simple matter of finding the square root of N – an easy problem. Also if p or q is re-used between multiple public keys, then they all become weak. And this does happen, as p and q are typically generated using random number generators, and if they are not truly random they may well produce the same p or q more than once. Indeed some studies have shown that from random samples of RSA public keys plucked from the internet some 0.5% may be weak. Elliptic curve cryptography is becoming more popular, driven in part by its use to protect Bitcoin wallets. Here a generator point G on an elliptic curve is denoted by its integer coordinates (x,y). When multiplied by a large secret key s this produces another point Q=sG on the curve. There are q points on the curve, whereqis a large prime. The strength of the system depends on the difficulty of determining the key s given G and Q. Since a NIST standard curve is used, the standard specifies and fixes the values of G and q. Consider the choice of: s=6482687712184010168252362946267496770293767958036933 4126295633893540044112329 Which results in: Q=(100760202697161893004335214126591116800117319792545 458764085267675326325395621, 75193444318165031146359304 621062797862272142296678797285916994295833810377664) The key s surely looks big and random enough. And there is nothing obviously suspicious about Q. But given just Q it’s trivially easy to calculate s, because in fact s is a weak key. We omit the details, except to point out why s is weak. It is weak because s4 mod q = 1 (that is the remainder you get when you divide s to the power of 4 by q, equals 1).But you wouldn’t know that unless you explicitly tested for it. This weak key is just one of many highlighted in a recent research paper - https://eprint.iacr.org/2020/1436.pdf Now the chances of a randomly generated s being weak in this way are vanishingly small. However a clever insider attack would be to modify the random number generator to only generate weak keys. These weak keys are hard to spot! The Bitcoin curve is particularly poor in this regard. The good news is that it’s not hard to test for these weak keys, and it’s not hard to modify the key generation process to avoid them. It’s also quite easy to determine if a public key Q is generated from a weak key. Intriguingly, shortly after the original article was published, a Bitcoin wallet was found with the exact same weak key as specified above. The account had seen multiple transactions. So clearly bad actors are already considering the possibilities for exploiting these weak keys! So generate your cryptographic keys with care. Either assure yourself that they are truly randomly generated (remember weak keys, unless deliberately introduced, are vanishingly rare), or put in extra tests to detect and eliminate them. Find out more about MIRACL’s zero knowledge proof authentication, using military grade cryptography, by visiting MIRACL or follow us on social media: Twitter @MIRACL | LinkedIn MIRACL
{"url":"https://miracl.com/blog/generate-cryptographic-keys-with-care/","timestamp":"2024-11-02T14:51:29Z","content_type":"text/html","content_length":"448952","record_id":"<urn:uuid:1b27761d-808f-4dec-8b37-cf7ffb6446dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00011.warc.gz"}
Change the DTD description in Section 7.3 Entity Declarations to reference the Combined HTML MathML entity set rather than the legacy ISO entity sets. This does not change any existing definition, but adds the following 38 entity definitions: &QUOT; (U+0022), &AMP; (U+0026), &LT; (U+003C), &GT; (U+003E), &COPY; (U+00A9), &REG; (U+00AE), &Alpha; (U+0391), &Beta; (U+0392), &Epsilon; (U+0395), & Zeta; (U+0396), &Eta; (U+0397), &Iota; (U+0399), &Kappa; (U+039A), &Mu; (U+039C), &Nu; (U+039D), &Omicron; (U+039F), &Rho; (U+03A1), &Tau; (U+03A4), &Chi; (U+03A7), &epsilon; (U+03B5), &omicron; (U+03BF), &sigmaf; (U+03C2), &thetasym; (U+03D1), &upsih; (U+03D2), &zwnj; (U+200C), &zwj; (U+200D), &lrm; (U+200E), &rlm; (U+200F), &sbquo; (U+201A), &bdquo; (U+201E), &lsaquo; (U+2039), &rsaquo; (U+203A), &oline; (U+203E), &frasl; (U+2044), &euro; (U+20AC), &TRADE; (U+2122), &alefsym; (U+2135), &crarr; (U+21B5).
{"url":"https://www.w3.org/TR/MathML3/appendixf.html","timestamp":"2024-11-01T19:51:19Z","content_type":"text/html","content_length":"46523","record_id":"<urn:uuid:858c8a1d-4fb4-4934-8c7a-adafded2ebf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00882.warc.gz"}
Question about solve function for system of non linear equations 06-04-2014, 09:37 PM (This post was last modified: 06-04-2014 10:18 PM by Rich.) Post: #1 Rich Posts: 47 Junior Member Joined: Jan 2014 Question about solve function for system of non linear equations Is there a better way to solve a system of equations on the Prime. When entering: solve([1=a*b 1=a*(-b^2+1)^.5], [a b]) I am getting an error: "[b, SQRT(-b^2+1)] is not rational w.r.t. b Error Bad Argument Value" (the SQRT was an actual square root symbol) I am able to do the example in the user manual without any issues but the above system of equations is giving me an error. Of course doing it by hand i can get {[a = 0.7071067812, b= 1.414213562]} 06-04-2014, 09:39 PM (This post was last modified: 06-04-2014 09:42 PM by Rich.) Post: #2 Rich Posts: 47 Junior Member Joined: Jan 2014 RE: Question about solve function for system of non linear equations ... actually testing out and using fsolve instead of solve gave me the right answer. nSolve gave me the hour glass for a while... : ( I keep on forgetting that fsolve is for numeric results, it would be nice to have that in the drop down list in the CAS menu for those of us who are absent minded : ( 06-05-2014, 12:15 AM (This post was last modified: 06-05-2014 12:15 AM by CR Haeger.) Post: #3 CR Haeger Posts: 275 Member Joined: Dec 2013 RE: Question about solve function for system of non linear equations (06-04-2014 09:39 PM)Rich Wrote: ... actually testing out and using fsolve instead of solve gave me the right answer. nSolve gave me the hour glass for a while... : ( I keep on forgetting that fsolve is for numeric results, it would be nice to have that in the drop down list in the CAS menu for those of us who are absent minded : ( Hmm can you show the fsolve() equation that worked? I get an hourglass then reboot with: fsolve([a*b=1,a*(1-b^2)^0.5=1],[a,b],[1,1]) I do use fsolve() often enough that I made it a USER key. 06-05-2014, 12:35 AM (This post was last modified: 06-05-2014 01:09 AM by Rich.) Post: #4 Rich Posts: 47 Junior Member Joined: Jan 2014 RE: Question about solve function for system of non linear equations Oh sure, no problem, fsolve([1=a*b 1=a*(-b^2+1)^.5], [a b]) Where [] indicates a matrix with separated elements by pressing , (comma), so the commas are not shown in my equation above. Image attached. I want to setup user keys, but i really wish it was available to assign to "soft menu" keys like in the 50g, otherwise i might forget what key is what. This is probably one of my top request to have in the Prime, that an undo key, and maybe a better way to store equations. 06-05-2014, 05:44 AM Post: #5 parisse Posts: 1,337 Senior Member Joined: Dec 2013 RE: Question about solve function for system of non linear equations This was implemented recently in giac (exact solving I mean, fsolve works), so you'll have to wait for an update on the Prime... User(s) browsing this thread: 1 Guest(s)
{"url":"https://www.hpmuseum.org/forum/thread-1537-post-13422.html#pid13422","timestamp":"2024-11-11T04:15:17Z","content_type":"application/xhtml+xml","content_length":"28227","record_id":"<urn:uuid:1d123de4-42be-4d15-817e-c97496e58198>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00786.warc.gz"}
Specific damping capacity calculation of composite plates with delamination based on higher-order Zig-Zag theory Damping is focused in a mass of engineering applications at present. In this paper a new laminate element rests on the higher-order zig-zag theory for composite plates were presented. And then, viscoelasticity damping and frictional damping models in delaminated composites were established. The damping changes of delaminated composites with different boundary conditions were investigated via the laminate element, the effects of area and location on damping were also researched. The results revealed that viscoelasticity damping and frictional damping are in the same order of magnitude even delamination area is small, and frictional damping increase significantly when delamination area enlarged, frictional damping is needed to be considered in the damping research of delaminated 1. Introduction Composite materials has widely used during the last decades. We could find the application of composite materials from the aviation industry to sports equipment. Since their first appearance and application, most researchers have focused on static characteristics and anisotropy properties, an extensive literature is available in this filed. Beside stiffness and strength, their dynamic responses, e.g. vibration and damping, also need deep understanding in engineering design for purpose of vibration and noise controlling. The damping of the structure is possibly caused by the resistance of external, e.g. air drag and support friction. It also possibly comes from the interior energy dissipation of structure, including the viscidity of material, friction of interior contact surfaces, heat and sound production and damage evolution [1, 2]. Polymer matrix composites are generally recognized possessing better damping capacity, several orders of magnitude higher, than traditional metal materials because of their visco-elastic matrix. Additionally, various defects often found in composites can dissipate energy during cyclic load and elevate damping furthermore [3-5]. For the widely used laminate composite, defects include matrix cracks, delamination, imperfect fiber/matrix bounding and fiber breakage and these defects are considered inevitable during manufacture and service [6]. For fiber composites, Friction of crack surfaces and damage development consume energy. For instance, Cho C. [7] presented an estimation of interfacial friction in fiber-reinforced ceramics via increasing temperature during cyclic loading. David B. [8] used an indentation method to obtain the fiber/matrix interfacial frictional sliding stress and debone energy of a SiC/ glass-ceramic composite. One of the macro phenomena of these energy dissipations manifests of damping under cycle loads. Birman V. [9, 10] analytically modeled the relation between damping and micro matrix crack of ceramic matrix unidirectional and cross-ply composites. Damping is also used to figure out damage level of structure. Relations between damping and damage have been extensively studied theoretically and experimentally. Damping is found sensitive to damage, thus it is recommended as an effective representation of damage evaluation [11-16]. Among damages, delamination is most concerned for composite laminate because of it might lead fatal consequence without obviously visual mark. The gradients of in-plane displacements along thickness are discontinuous in delaminated interface. Various approaches have been proposed to represent this discontinuity. For example, Jim K. S. build a novel transition element based on first-order theory [17], Cheng Z. Q .established a spring-layer model to simulate vibration of multilayered laminate with weak interface [18], Marco D. S. develop a nonlinear theory of multilayered composites with interface slips based on higher-order shear model [19], and a higher-order zig-zag theory established by Cho M was employed for the natural frequency analysis of delaminated plates [20]. To adapt the complicated configurations of structure and irregular shaped multi plies delamination, FEM algorithms have been developed based on the various laminate plate theories which consider the inter-laminar displacement discontinuity introduced by delamination [20, 25]. Among them, FEM models founded on layer-wise plate theory [21] and higher-order zig-zag theory [20] exhibit satisfying accuracy. However, the former defines degree of freedom (DOF) upon each individual ply, thus it is not computationally efficient. Whereas, the latter theory only need additional DOF when delamination occurs. In this study, a new four-node plane element was developed to represent damping of delaminated plates based on the higher-order zig-zag theory. 2. Higher-order zig-zag theory of delaminated composites 2.1. Displacement model An abridged general view of composite plates with multi-delaminations is shown in Fig. 1. The discontinued displacement field predicted by higher-order zig-zag theory [20] includes Heaviside functions to adapt the displacement discontinuity between the delaminated laminas. Meanwhile, the transverse shear stresses of this theory are continuous through the thickness and vanish on the top and bottom surface as well as on the interior surfaces of delamination. The discontinued displacement field for a plate composite with multi-delaminations can be written as follows [20]: $\begin{array}{l}{u}_{\alpha }\left({x}_{\alpha },z;t\right)={u}_{\alpha }^{0}\left({x}_{\alpha };t\right)+{\psi }_{\alpha }\left({x}_{\alpha };t\right)z+{\xi }_{\alpha }\left({x}_{\alpha };t\right) {z}^{2}+{\varphi }_{\alpha }\left({x}_{\alpha };t\right){z}^{3}\\ \mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}+\sum _{k=1}^{N-1}{S}_{\alpha }^{k}\left({x}_{\alpha };t\right)\left(z-{z}_{k}\right)H\ left(z-{z}_{k}\right)+\sum _{k=1}^{N-1}{\stackrel{-}{u}}_{\alpha }^{k}\left({x}_{\alpha };t\right)H\left(z-{z}_{k}\right),\end{array}$ ${u}_{3}\left({x}_{\alpha },z;t\right)=w\left({x}_{\alpha };t\right)+\sum _{k=1}^{N-1}{\stackrel{-}{w}}^{k}\left({x}_{\alpha };t\right)H\left(z-{z}_{k}\right).$ The subscript $\alpha$ denotes the two in plane directions of ${x}_{1}$ and ${x}_{2}$, as illustrated in Fig. 1. the first term in right side of Eq. (1) is the in-plane displacement on the reference plane, the three terms following are those of linear, square and cubic terms along thickness, the forth term characterize slope variation between neighbor plies caused by stiffness jump and delamination, the last term represent the relative shear displacement on the interfacial crack surfaces. ${\psi }_{\alpha }$ are the rotations in the normal direction of the reference plane about the ${x}_{\alpha }$ coordinate, ${\xi }_{\alpha }$ and ${\varphi }_{\alpha }$ are the square and cubic displacement coefficients respectively. The first and second terms in right side of Eq. (2) are deflection on the reference plane and opening of delamination. $N$is the number of ply, the terms ${\stackrel{-}{u}}_{\alpha }^{k}$ and ${\stackrel{-}{w}}^{k}$ represent possible jumps in the slipping and opening displacements, ${z}_{k}$ means the distance between the kth interlaminar to the bottom reference plane, $H\left(\mathrm{z}-{\mathrm{z}}_{k}\right)$ is the Heaviside function. The deformed schematic configuration is shown in Fig. 2. In the well bonded interface composites plate, transverse shear stress and displacement should be continuous, in the delaminated interface, transverse stress vanished but it is still continuous. Applied the shear stress continuity conditions of the interface, the slope change ${S}_{\alpha }^{k}$ could be written as: ${S}_{\alpha }^{k}={a}_{\alpha \gamma }^{k}{\varphi }_{\gamma }-{\stackrel{-}{w}}_{,\alpha }^{k}.$ The details of coefficients meaning can be found in reference [20]. Fig. 1A composite laminate with multiple delamination Fig. 2Schematic deformations of multiply delaminated composite 2.2. Constitutive equations From Eq. (1) and Eq. (2), in-plane and transverse strains are derived as: $\begin{array}{l}{\epsilon }_{\alpha \beta }=\frac{1}{2}\left({u}_{\alpha ,\beta }+{u}_{\beta ,\alpha }\right)=\frac{1}{2}\left\{{u}_{\alpha ,\beta }^{0}+{u}_{\beta ,\alpha }^{0}-\left({w}_{,\alpha \ beta }+{w}_{,\beta \alpha }\right)z-\frac{1}{2h}\sum _{k=1}^{N-1}\left({a}_{\alpha \gamma }^{k}{\varphi }_{\gamma ,\beta }+{a}_{\beta \omega }^{k}{\varphi }_{\omega ,\alpha }\right){z}^{2}\right\\\ \ \ \mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}+\sum _{k=1}^{N-1}\left({a}_{\alpha \gamma }^{k}{\varphi }_{\gamma ,\beta }+{a}_{\beta \omega }^{k}{\varphi }_{\omega ,\alpha }-2{\stackrel{-}{w}}_{,\ alpha \beta }^{k}\right)\left(z-{z}_{k}\right)+\sum _{d=1}^{D}\left({\stackrel{-}{u}}_{\alpha ,\beta }^{d}+{\stackrel{-}{u}}_{\beta ,\alpha }^{d}\right)H\left(z-{z}_{k}\right)},\end{array}$ ${\gamma }_{\alpha 3}={u}_{\alpha ,3}+{u}_{3,\alpha }=-\left(3h{\varphi }_{\alpha }+\frac{1}{h}\sum _{k=1}^{N-1}{a}_{\alpha \gamma }^{k}{\varphi }_{\gamma }\right)z+3{\varphi }_{\alpha }{z}^{2}+\sum _{k=1}^{N-1}{a}_{\alpha \gamma }^{k}{\varphi }_{\gamma }.$ The constitutions an individual ply in the global coordinate system are expressed as: $\begin{array}{c}{\sigma }_{\alpha \beta }^{\left(k\right)}={\stackrel{-}{Q}}_{\alpha \beta \gamma \omega }^{\left(k\right)}{\epsilon }_{\gamma \omega }^{\left(k\right)},\end{array}{\sigma }_{\alpha 3}^{\left(k\right)}={\stackrel{-}{Q}}_{\alpha 3\gamma 3}^{\left(k\right)}{\epsilon }_{\gamma 3}^{\left(k\right)},$ where ${\stackrel{-}{Q}}_{\alpha \beta \gamma \omega }^{\left(k\right)}$ denotes the transformed stiffness of the $k$th lamina. Through integrating stiffness of each ply, the resultant constitutive relations for the laminate are obtained, as shown in matrix form below: $\left\{\begin{array}{l}{N}_{\alpha \beta }\\ {M}_{\alpha \beta }\\ {R}_{\alpha \beta }^{\left(2\right)}\\ {R}_{\alpha \beta }^{\left(3\right)}\\ {\stackrel{-}{N}}_{\alpha \beta }^{i}\\ {\stackrel{-} {M}}_{\alpha \beta }^{i}\end{array}\right\}=\left[\begin{array}{llllll}{A}_{\alpha \beta \gamma \omega }^{\left(0\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(1\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(2\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(3\right)}& {B}_{\alpha \beta \gamma \omega }^{j\left(0\right)}& {E}_{\alpha \beta \gamma \omega }^{j\left(0\right)}\\ {A}_{\alpha \ beta \gamma \omega }^{\left(1\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(2\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(3\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(4\right)}& {B}_ {\alpha \beta \gamma \omega }^{j\left(1\right)}& {E}_{\alpha \beta \gamma \omega }^{j\left(1\right)}\\ {A}_{\alpha \beta \gamma \omega }^{\left(2\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(3\ right)}& {A}_{\alpha \beta \gamma \omega }^{\left(4\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(5\right)}& {B}_{\alpha \beta \gamma \omega }^{j\left(2\right)}& {E}_{\alpha \beta \gamma \omega } ^{j\left(2\right)}\\ {A}_{\alpha \beta \gamma \omega }^{\left(3\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(4\right)}& {A}_{\alpha \beta \gamma \omega }^{\left(5\right)}& {A}_{\alpha \beta \ gamma \omega }^{\left(6\right)}& {B}_{\alpha \beta \gamma \omega }^{j\left(3\right)}& {E}_{\alpha \beta \gamma \omega }^{j\left(3\right)}\\ {B}_{\alpha \beta \gamma \omega }^{i\left(0\right)}& {B}_{\ alpha \beta \gamma \omega }^{i\left(1\right)}& {B}_{\alpha \beta \gamma \omega }^{i\left(2\right)}& {B}_{\alpha \beta \gamma \omega }^{i\left(3\right)}& {D}_{\alpha \beta \gamma \omega }^{ij}& {F}_{\ alpha \beta \gamma \omega }^{ij}\\ {E}_{\alpha \beta \gamma \omega }^{i\left(0\right)}& {E}_{\alpha \beta \gamma \omega }^{i\left(1\right)}& {E}_{\alpha \beta \gamma \omega }^{i\left(2\right)}& {E}_ {\alpha \beta \gamma \omega }^{i\left(3\right)}& {F}_{\alpha \beta \gamma \omega }^{ji}& {E}_{\alpha \beta \gamma \omega }^{ij}\end{array}\right]\left\{\begin{array}{l}{\epsilon }_{\gamma \omega }^{\ left(0\right)}\\ {\epsilon }_{\gamma \omega }^{\left(1\right)}\\ {\epsilon }_{\gamma \omega }^{\left(2\right)}\\ {\epsilon }_{\gamma \omega }^{\left(3\right)}\\ {\epsilon }_{\gamma \omega }^{j}\\ {\ stackrel{-}{\epsilon }}_{\gamma \omega }^{j}\end{array}\right\},$ $\left\{\begin{array}{l}{V}_{\alpha }^{\left(1\right)}\\ {V}_{\alpha }^{\left(2\right)}\\ {Q}_{\alpha }^{i}\end{array}\right\}=\left[\begin{array}{lll}{A}_{\alpha 3\beta 3}^{\left(2\right)}& {A}_{\ alpha 3\beta 3}^{\left(3\right)}& {E}_{\alpha 3\beta 3}^{j\left(1\right)}\\ {A}_{\alpha 3\beta 3}^{\left(3\right)}& {A}_{\alpha 3\beta 3}^{\left(4\right)}& {E}_{\alpha 3\beta 3}^{j\left(2\right)}\\ {E}_{\alpha 3\beta 3}^{i\left(1\right)}& {E}_{\alpha 3\beta 3}^{i\left(2\right)}& {E}_{\alpha 3\beta 3}^{ij}\end{array}\right]\left\{\begin{array}{l}{\gamma }_{\beta 3}^{\left(1\right)}\\ {\gamma }_ {\beta 3}^{\left(2\right)}\\ {\gamma }_{\beta 3}^{k}\end{array}\right\}.$ The formula derivation and variable specific meaning in Eq. (7) and Eq. (8) could be found in reference [20]. 3. Finite element algorithm A four-node plane element is developed on the foundation of the proposed theory. The primary displacement of the plate is interpolated in terms of nodal displacements via shape functions, as below: $\begin{array}{l}\left({u}_{\alpha }^{0},{\varphi }_{\alpha },{\stackrel{-}{u}}_{\alpha }^{i}\right)=\sum _{m=1}^{n}{N}_{m}\left[{\left\{{u}_{\alpha }^{0}\right\}}_{m},{\left\{{\varphi }_{\alpha }\ right\}}_{m},{\left\{{\stackrel{-}{u}}_{\alpha }^{i}\right\}}_{m}\right],\\ w=\sum _{m=1}^{n}{P}_{m}{\left\{w\right\}}_{m}+{H}_{xm}{\left\{{w}_{,x}\right\}}_{m}+{H}_{ym}{\left\{{w}_{,y}\right\}}_{m}, \\ {\stackrel{-}{w}}^{j}=\sum _{m=1}^{n}{P}_{m}{\left\{{\stackrel{-}{w}}^{j}\right\}}_{m}+{H}_{xm}{\left\{{\stackrel{-}{w}}_{,x}^{j}\right\}}_{m}+{H}_{ym}{\left\{{\stackrel{-}{w}}_{,\mathrm{y}}^{j}\ where $n$ is the node number in a plate unit, ${N}_{m}$ denotes a Lagrange shape function and ${P}_{m}$, ${H}_{xm}$, ${H}_{ym}$ are Hermite interpolation functions. Strains relate to nodal displacement in expression as: $\left\{{\epsilon }_{\alpha \beta }\right\}={\left[B\right]}_{b}{\left\{u\right\}}_{n},\left\{{\gamma }_{\alpha 3}\right\}={\left[B\right]}_{s}{\left\{u\right\}}_{n},$ $\begin{array}{l}\left\{{\epsilon }_{\alpha \beta }\right\}=\left\{{\epsilon }_{\alpha \beta }^{\left(0\right)},{\epsilon }_{\alpha \beta }^{\left(1\right)},{\epsilon }_{\alpha \beta }^{\left(2\ right)},{\epsilon }_{\alpha \beta }^{\left(3\right)},{\epsilon }_{\alpha \beta }^{j},{\stackrel{-}{\epsilon }}_{\alpha \beta }^{j}\right\},\left\{{\gamma }_{\alpha 3}\right\}=\left\{{\gamma }_{\alpha 3}^{\left(1\right)},{\gamma }_{\alpha 3}^{\left(2\right)},{\gamma }_{\alpha 3}^{k}\right\},\\ \left\{{u}_{n}\right\}=\left\{{u}_{\alpha }^{0},w,{w}_{,\alpha },{\varphi }_{\alpha },{\stackrel{-}{u}}_ {\alpha }^{j},{\stackrel{-}{w}}^{j},{{\stackrel{-}{w}}^{j}}_{,\alpha }\right\},\end{array}$ ${\left[B\right]}_{b}=\left[\begin{array}{l}\begin{array}{l}{B}_{b111}\\ {B}_{b122}\end{array}\\ {B}_{b112}\end{array}\begin{array}{l}\begin{array}{l}\cdots \\ \cdots \end{array}\\ \cdots \end{array} \begin{array}{l}\begin{array}{l}{B}_{bn11}\\ {B}_{bn22}\end{array}\\ {B}_{bn12}\end{array}\right],{\left[B\right]}_{s}=\left[\begin{array}{l}{B}_{s113}\\ {B}_{s123}\end{array}\begin{array}{l}\cdots \\ \cdots \end{array}\begin{array}{l}{B}_{sn13}\\ {B}_{sn23}\end{array}\right],$ where ${\left[B\right]}_{b}$ is the in-plane geometry matrix, ${\left[B\right]}_{s}$ denotes the transverse shear geometry matrix, these two items are deviated from shape function, and $\left\{{u}_ {n}\right\}$ are node displacements The DOF of each node on the element will increase with number of delamination, it is expressed as $7+5D$. In the other word, DOF is 7 for an un-delaminated plate and it increases 5 for each delamination. Correspondingly, sub matric of ${\left[B\right]}_{b}$ and ${\left[B\right]}_{s}$ are partitioned to two blocks: ${B}_{bi\alpha \beta }=\left[\begin{array}{ll}{B}_{bi\alpha \beta ud}& {B}_{bi\alpha \beta d}\end{array}\right],\mathrm{}\mathrm{}\mathrm{}\mathrm{}{B}_{si\alpha \beta }=\left[\begin{array}{ll}{B}_ {si\alpha \beta ud}& {B}_{si\alpha \beta d}\end{array}\right].$ The first block $\left[{B}_{bi\alpha \beta ud}\right]$ (and $\left[{B}_{si\alpha \beta ud}\right]$) relates to un-delamination DOF, while the second block $\left[{B}_{bi\alpha \beta d}\right]$ (and $ \left[{B}_{si\alpha \beta d}\right]$) to delamination one. The detailed description of $\left[{B}_{bi\alpha \beta ud}\right]$ and $\left[{B}_{bi\alpha \beta d}\right]$ can be found in the appendix. 4. Damping model of delaminated plates Damping properties of a structure can be characterized by specific damping capacity (SDC) [26-28]. SDC, denoted ${\psi }_{lam}$, is expressed by the following formula: ${\psi }_{lam}=\frac{{E}_{diss}}{{E}_{stra}},$ where ${E}_{stra}$ and ${E}_{diss}$ denote the dissipated energy and largest strain energy in one load cycle, respectively. If interior delamination surfaces of a composite laminate contact and have relative slide, the frictional energy dissipation also contribute to damping. Together with damping caused by materials viscoelasticity, SDC of delaminated plates can be rewritten as: ${\psi }_{lam}=\frac{{E}_{vis}+{E}_{fri}}{{E}_{stra}}={\psi }_{vis}+{\psi }_{fri},$ where ${E}_{vis}$ is viscoelastic damping dissipated energy, ${E}_{fri}$ is frictional damping dissipated energy, ${\psi }_{vis}$ and ${\psi }_{fri}$ denote viscoelastic specific damping capacity (VSDC) and frictional specific damping capacity (FSDC), respectively. Note that, other damping resources such as damage evolution are not taken account here. 4.1. Viscoelastic damping model of delaminated plates ${E}_{vis}$ and ${E}_{stra}$ can be expressed as a summation of energies corresponding to six strain components. SDC corresponding to each strain component, named as ${\psi }_{ij}$, can be obtained through unidirectional damping experiment or FEM analysis [29]. For a linear elastic material, the deformation energy stored in an element is: $\begin{array}{l}{E}_{stra}=\frac{1}{2}{\int }_{V}{\sigma }_{11}{\epsilon }_{11}+{\sigma }_{22}{\epsilon }_{22}+{\sigma }_{33}{\epsilon }_{33}+{\sigma }_{23}{\gamma }_{23}+{\sigma }_{13}{\gamma }_ {13}+{\sigma }_{12}{\gamma }_{12}dV\\ =\frac{1}{2}{\int }_{V}{\sigma }_{11}{\epsilon }_{11}dV+\frac{1}{2}{\int }_{V}{\sigma }_{22}{\epsilon }_{22}dV+\frac{1}{2}{\int }_{V}{\sigma }_{33}{\epsilon }_ {33}dV+\frac{1}{2}{\int }_{V}{\sigma }_{23}{\gamma }_{23}dV\\ +\frac{1}{2}{\int }_{V}{\sigma }_{13}{\gamma }_{13}dV+\frac{1}{2}{\int }_{V}{\sigma }_{12}{\gamma }_{12}dV\\ ={E}_{11}+{E}_{22}+{E}_{33}+ {E}_{23}+{E}_{13}+{E}_{12}=\sum {E}_{ij},\end{array}$ where ${E}_{ij}$ is the energy component. The corresponding viscoelastic damping energy dissipation of the element can be written in terms of the specific damping capacity along each direction as: $\begin{array}{l}{E}_{vis}=\frac{1}{2}{\int }_{V}{\psi }_{11}{\sigma }_{11}{\epsilon }_{11}+{\psi }_{22}{\sigma }_{22}{\epsilon }_{22}+\cdots +{\psi }_{12}{\sigma }_{12}{\gamma }_{12}dV\\ =\frac{1} {2}{\psi }_{11}{\int }_{V}{\sigma }_{11}{\epsilon }_{11}dV+\frac{1}{2}{\psi }_{22}{\int }_{V}{\sigma }_{22}{\epsilon }_{22}dV+\cdots +\frac{1}{2}{\psi }_{12}{\int }_{V}{\sigma }_{12}{\gamma }_{12}dV\ \ ={\psi }_{11}{E}_{11}+{\psi }_{22}{E}_{22}+\cdots +{\psi }_{12}{E}_{12}=\sum {\psi }_{ij}{E}_{ij}.\end{array}$ Therefore, SDC of laminate composite ${\psi }_{vis}$ can be expressed as: ${\psi }_{vis}=\frac{{E}_{vis}}{{E}_{stra}}=\frac{\sum {\psi }_{ij}{E}_{ij}}{\sum {E}_{ij}}.$ For a plate model, the off-plane normal strain is neglected, therefore, strain energy has three in-plane components and two transverse shear components. In the higher-order zig-zag theory, in-plane strain and transverse strain are all higher order function of $z$ coordinates and contains a step-function at the interface of the lamina. It is difficult to calculate strain energy and viscoelastic damping via integration layer by layer directly. Here, numerical integration is adopted, and the strain energy corresponding to one strain component, named as ${E}_{ij}$, is accumulated as: ${E}_{ij}=\frac{1}{2}\sum _{k=1}^{{N}_{}}\sum _{l=1}^{{N}_{p}}\left(\frac{{\sigma }_{ij}^{k,l}{\epsilon }_{ij}^{k,l}{h}_{ply}}{{N}_{p}{A}_{ele}}\right),$ where, $N$ and is the ply number, ${N}_{p}$ is the integrate point number of each ply, ${h}_{ply}$ is the thickness of each ply, ${A}_{ele}$ is the plate element area, ${\sigma }_{ij}^{k,l}$ and ${\ epsilon }_{ij}^{k,l}$ denote the stress and strain at $l$th integrate point locating $k$th ply, respectively. 4.2. Frictional damping model of delaminated plates Normal stress ${\sigma }_{z}$ is usually neglected in the thin plate theory for it is much smaller than the in-plane normal stress. However, it is essential for calculating frictional energy dissipation introduced by delaminated interface. Here, ${\sigma }_{z}$ between plies is estimated through force equilibrium condition in thickness. As illustrated in Fig. 3, the inter-ply normal stress between $k$th ply and $k+1$th ply, named as ${\sigma }_{z}^{k}$, is determined by following equation: $\sum _{i=1}^{k}\left(\mathrm{\Delta }{\tau }_{yz}^{i}{h}_{ply}^{i}{l}_{x}+\mathrm{\Delta }{\tau }_{xz}^{i}{h}_{ply}^{i}{l}_{y}\right)+{\sigma }_{z}^{k}{l}_{x}{l}_{y}+{f}_{z}=0,$ where, $\mathrm{\Delta }{\tau }_{yz}^{i}$ and $\mathrm{\Delta }{\tau }_{xz}^{i}$ denote the shear stress increments on the two lateral sides face of $i$th ply, ${h}_{ply}^{i}$ is the thickness of $i$ th ply, ${l}_{x}$ and ${l}_{y}$ are the element length of the $x$ and $y$ directions, ${f}_{z}$ is the transverse external resultant force applied (below above) the $k$th ply and it is postulated going through the central point of the element. To delaminated interface ${\sigma }_{z}^{k}$ will be null when interface is open. So, the contact pressure on the delamination surface is expressed with one-side condition as: ${\sigma }_{N}=⟨-{\sigma }_{z}⟩.$ The symbol of ‘〈〉’ means it takes the value of variable inside if the variable is positive otherwise it equals zero. The contact force is further supposed to be evenly distributed for simplification. If the structure is supposed experience periodic load, the compressive stress on crack surface also alternates with the same frequency. If the shear stress on delaminated (but compressed) surface has overcome the static frictional resistance, the two delamination surfaces will have relative shear displacement and frictional force does work. The frictional force is the product of compression ${\sigma }_{N}$ and slide friction coefficient $\mu$. In order to correspond to the relative displacement, the transverse shear stress on the stratified plane is also divided into two directions of $X$-axis and $Y$-axis. Therefore, sliding friction conditions of $X$ direction and $Y$ direction can be written as: $\mu {\sigma }_{N}\le \sqrt{{\stackrel{-}{\tau }}_{xz}^{2}+{\stackrel{-}{\tau }}_{yz}^{2}}.$ When relative displacement occurs on delamination surface, the friction energy dissipation of a delamination in one load cycle can be expressed as: ${E}_{fri}={\oint }_{T}\left|\mu {\stackrel{-}{\sigma }}_{N}\left(t\right)\sqrt{\left({\stackrel{-}{u}}_{x}^{k}\left(t\right){\right)}^{2}+\left({\stackrel{-}{u}}_{y}^{k}\left(t\right){\right)}^{2}}\ where, ${\stackrel{-}{\sigma }}_{N}$ is the peak magnitude of average contact pressure, ${\stackrel{-}{u}}_{x}^{k}$ and ${\stackrel{-}{u}}_{y}^{k}$ denote the shear relative displacement of the upper and lower surfaces of the $k$th delamination in one element. ${A}_{d}$ is the delamination area of one element, when delamination occurred in an element, delamination area ${A}_{d}$ equals the element area ${A}_{ele}$, otherwise ${A}_{d}$ equals 0. It is worth noting that, without consideration of the interaction between the contacted surfaces, ${\stackrel{-}{\sigma }}_{N}$, ${\stackrel{-} {u}}_{x}^{k}$ and ${\stackrel{-}{u}}_{y}^{k}$ may have a little difference comparing with the real situation. The frictional energy dissipation of a composite plate can be obtained by summarized frictional energy dissipation of all elements. 5. Numerical examples In order to validate the application of theoretical model developed above in structure, a plate finite element based this theory was implemented on platform of ABAQUS. The effects of the delamination area, delamination location and boundary conditions on the viscoelastic damping and friction damping of laminated plates are studied. The material parameters of the fiber and the matrix of the laminated plates used in this paper are derived from the reference [29]. When the fiber volume fraction of this composite takes 60 %, the engineering constants and the SDC of single ply are calculated by mixture theory and employed from reference [29] respectively, as presented in Table 1 and Table 2. Damping properties of the single layer plate are show in Table 2. Table 1Engineering constants of single layer plate ${E}_{1}$ / GPa ${E}_{2}$ / GPa ${G}_{12}$ / GPa ${G}_{23}$ / GPa ${u }_{12}$ ${u }_{23}$ 136.28 7.83 2.84 1.5 0.28 0.35 Table 2Damping properties of single layer ${\psi }_{11}$ ${\psi }_{22}$ ${\psi }_{23}$ ${\psi }_{12}$ ${\psi }_{13}$ ${\psi }_{11}$ 0.00123 0.01256 0.01688 0.02164 0.02164 0.00123 Fig. 3FE model and ply sequences A fictitious square composite laminate with dimension of 100 mm×100 mm×1 mm was used as example. Its stacking sequences are [0°, 90°, 90°, 0°] and each ply layer is 0.25 mm in thickness. 400 uniform square elements were discretized for the FE model and each element involves 4 plies, as demonstrated in Fig. 3. Three boundary conditions were considered here, i.e. (1) One side was clamped ($x=$ 0 mm) and the other three sides were free; (2) One side was clamped ($y=$ 0 mm) and the other three sides were free; (3) All the four sides are clamped. Symmetrical cycle sinusoid uniform pressure (or traction) 0.004 MPa is applied on upper surface of the plate. Note that, during half of one load cycle the upper surface is in traction and delaminated surfaces separate and thus no frictional dissipation takes place. 5.1. Delamination area influence on special damping capacity A serious square shape delaminations were preset between ply 2 and ply 3 ($z=$ 0.5 mm) in the laminated plate. Their areas are 10 mm×10 mm, 20 mm×20 mm, 30 mm×30 mm, 40 mm×40 mm, 50 mm×50 mm, 60 mm×60 mm, 70 mm×70 mm and 80 mm×80 mm respectively and all of them locate in the middle region of the plate. Fig. 4 shows the relationship between the viscoelastic damping capacity (VSDC), friction damping capacity (FSDC) and the delamination area under the three boundary conditions. Fig. 4Delamination area influence on SDC As see from the Fig. 4, VSDC and FSDC both increase with the increase of the delamination area under the three kinds of boundary conditions, and the damping capacity with four edges clamped boundary condition grows quicker than that with two kinds of one side clamped boundary condition when delamination area increases. In the boundary conditions (1) and (3), FSDC and VSDC are numerically equal with each other when the delamination area is up to 30 mm×30 mm (less than 10 % of the total area), ${\psi }_{fr}$ is greater than ${\psi }_{vis}$ when the delamination area enlarge continually, and ${\psi }_{fr}$ is two orders of magnitude more than ${\psi }_{vis}$ when delamination area reaches 64 % of the laminated plate. In the boundary condition (2), FSDC and VSDC are numerically close to each other when the delamination area is up to 40 mm×40 mm (about 16 % of the total area), and ${\psi }_{fr}$ increases more significant than ${\psi }_{vis}$ when the delamination area expand continually. Compared with the three lines of differ boundary conditions in Fig. 4(a) and Fig. 4(b), we can also know that boundary conditions significant influences VSDC and FSDC. 5.2. Delamination location influence on special damping capacity In order to study the influence of the delamination position on damping of the composite laminated plate, the laminated plate models with different delamination positions are established respectively. All delamination areas are 40 mm×40 mm and occur between layer 2 and layer 3. In boundary condition (1) and (3), the distances to one side ($x=$ 0 mm) are 30 mm, 40 mm, 50 mm and 60 mm and 70 mm in turn, and they keep at middle of the plate in $y$-axis direction. In boundary condition (2), the delamination location changes similar to boundary condition (1) but in $y$-axis direction. As show in Fig. 5, in boundary condition (1) and (3), both VSDC and FSDC with four edges clamped boundary condition are higher than those with one side clamped boundary condition and ${\ psi }_{fr}$ is one order of magnitude more than ${\psi }_{vis}$. In boundary condition (1) and (2), both VSDC and FSDC small decrease with the delamination location away from the fixed boundary. In boundary condition (2), VSDC is closed to FSDC during the delamination area is 16 % of total plate area. Considered the cycle load condition applied on the plate, during half of one load cycle the upper surface is in traction thus no frictional dissipation takes place, but viscoelastic dissipation appears in the whole load cycle, that is to say, in boundary condition (2), frictional dissipation is also considerable. Fig. 5Delamination location influence on special damping capacity Actually, ${\psi }_{fr}$ may be smaller than the calculated value, because of this paper does not consider the friction influence when calculate the deformation. However, the friction energy dissipation on layer surface will be one of the main sources of damping without doubt when the delamination damage area is large enough. 6. Conclusions In this paper, a four-node finite element was constructed based on higher-order shear deformation theory to calculate damping capacity of composite plates with delamination. The effects of area and location of delamination on viscoelasticity damping and frictional damping of a four layer laminate composite are researched. The results reveal that viscoelasticity damping and frictional damping will rise as delamination area increased in all cases and the friction energy dissipation will increase significantly, it shows the close relationship between the friction energy dissipation and the area of delamination. Friction damping will become one of the main sources of damping in laminates when the delamination area is large enough, it needed to be considered in engineering design. • Melo Jose Daniel D. Time and temperature dependence of the viscoelastic properties of CFRP by dynamic mechanical analysis. Composite Structures, Vol. 70, 2005, p. 240-253. • Kumar Rabindra Patel, Bishakh Bhattacharya, Sumit Basu A finite element based investigation on obtaining high material damping over a large frequency range in viscoelastic composites. Journal of Sound and Vibration, Vol. 303, 2007, p. 753-766. • Chandra R., Singh S. P., Gupta K. Damping studies in fiber-reinforced composites – a review. Composite Structures, Vol. 46, 1999, p. 41-51. • Zhang P. Q. Influence of some factors on the damping property of fiber-reinforced epoxy composites at low temperature. Cryogenics, Vol. 41, 2001, p. 245-251. • Kubat J., Rigdahl M., Welander M. Characterization of interfacial interactions in high density polyethylene filled with glass spheres using dynamic-mechanical analysis. Journal of Applied Polymer Science, Vol. 39, Issue 7, 1990, p. 527-1539. • Hassan N. M., Batrar R. C. Modeling damage in polymeric composites. Composites, Part B, Vol. 39, 2008, p. 66-82. • Cho Chongdu, Holmes John W., Barber James R. Estimation of interfacial shear in ceramic composites from frictional heating measurements. Journal of the American Ceramic Society, Vol. 74, Issue 11, 1991, p. 2802-2808. • Marshall D. B., Oliver W. C. Measurement of interfacial mechanical properties in fiber-reinforced ceramic composites. Journal of the American Ceramic Society, Vol. 70, Issue 8, 1987, p. 542-548. • Birman Victor, Byrd Larry W. Effect of matrix cracks on damping in unidirectional and cross-ply ceramic matrix composites. Journal of Composite Materials, Vol. 36, 2002, p. 1858-1878. • Birman Victor, Byrd Larry W. Damping in ceramic matrix composites with matrix cracks. International Journal of Solids and Structures, Vol. 40, 2003, p. 4239-4256. • Saravanos D. A., Hopkins D. A. Effects of delaminations on the damped dynamic characteristics of composite laminates: analysis and experiments. Journal of Sound and Vibration, Vol. 192, Issue 5, 1996, p. 977-993. • Echtermeyer A., Engh B., Buene L. Lifetime and Young’s modulus changes of glass/phenolic and glass/polyester composites under fatigue. Composites, Vol. 26, Issue 1, 1995, p. 10-16. • Balasubramaniam K., Alluri S., Nidumolu P., et al. Ultrasonic and vibration methods for the characterization of pultruded composites. Composites Engineering, Vol. 5, Issue 12, 1995, p. 1433-1451. • Kyriazoglou C., Le Page B. H., Guild F. J. Vibration damping for crack detection in composite laminates. Composites: Part A, Vol. 35, 2004, p. 945-953. • Zhang Z., Hartwig G. Relation of damping and fatigue damage of unidirectional fiber composites. International Journal of Fatigue, Vol. 24, 2002, p. 713-718. • Cho M., Parmerter R. Efficient higher order composite plate theory for general lamination configurations. AIAA Journal, Vol. 31, Issue 7, 1993, p. 1299-1306. • Kim J. S., Cho M. Post buckling of delaminated composites under compressive loads using global-local approach. AIAA Journal, Vol. 31, 1999, p. 774-777. • Cheng Z. Q., Jemah A. K., Williams F. W. Theory for multilayered anisotropic plates with weakened interfaces. Journal of Applied Mechanics, Vol. 63, 1996, p. 1019-1026. • Sciuva M. D. Geometrically nonlinear theory of multilayered plates with interlayer slips. AIAA Journal, Vol. 35, Issue 11, 1997, p. 1753-1759. • Cho M., Kim J. S. Higher order zig-zag theory of laminated composites with multiple delaminations. Journal of Applied Mechanics, Vol. 68, 2001, p. 869-877. • Lee J., Gurdal Z., Griffin O. Layer-wise approach for the bifurcation problem in laminated composites with delaminations. AIAA Journal, Vol. 31, Issue 31, 1993, p. 331-338. • Dimitris I., Nikos A., Dimitris S., et al. A damping mechanics model and a beam finite element for the free vibration of laminated composite strips under in-plane loading. Journal of Sound and Vibration, Vol. 330, 2011, p. 5660-5677. • Kenan Y., Koruk H. A new triangular composite shell element with damping capability. Composite Structures, Vol. 118, 2014, p. 322-327. • Wang Y., Daniel J. Finite element analysis and experimental study on dynamic properties of a composite beam with viscoelastic damping. Journal of Sound and Vibration, Vol. 332, 2013, p. • Niyari A. Nonlinear finite element modelling investigation of flexural damping behaviour of triple core composite sandwich panels. Materials and Design, Vol. 46, 2013, p. 842-848. • Adams R. D., Fox M. A. O., Flood R. J. L., et al. The dynamic properties of unidirectional carbon and glass fibre reinforced plastics in torsion and flexure. Journal of Composite Materials, Vol. 3, Issue 4, 1969, p. 594-603. • Adams R. D., Bacon D. G. C. Measurement of the flexural damping capacity and dynamic Young’s modulus of metals and reinforced plastics. Journal of Physics D: Applied Physics, Vol. 6, Issue 1, 1973, p. 27-41. • Ni R. G., Adams R. D. The damping and dynamic moduli of symmetric laminated composite beams: theoretical and experimental results. Journal of Composite Materials, Vol. 18, Issue 2, 1984, p. • Tsai J. L., Chi Y. K. Effect of fiber array on damping behaviors of fibre composites. Composites Part B: Engineering, Vol. 39, 2008, p. 1196-1204. About this article Mechanical vibrations and applications viscoelasticity damping frictional damping laminate element This work was supported by the National Natural Science Foundation of China (Grant No. 11272147, 10772078), Chinese Aviation Science Fund (2013ZF52074), Fund of State Key Laboratory of Mechanical Structural Mechanics and Control (0214G02), SKL Open Fund (IZD13001-1353, IZD150021556), Priority Academic Program Development of Jiangsu Higher Education Institutions. Copyright © 2018 Chaogan Gao, et al. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/19508","timestamp":"2024-11-05T12:36:12Z","content_type":"text/html","content_length":"193618","record_id":"<urn:uuid:bfba609c-998a-4960-bb8f-924fce74f432>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00243.warc.gz"}
Internet Engineering Task Force (IETF) Request for Comments: 6412 Category: S. Poretsky Allot Communications B. Imhoff F5 Networks K. Michielsen Cisco Systems November 2011 Terminology for Benchmarking Link-State IGP Data-Plane Route Convergence This document describes the terminology for benchmarking link-state Interior Gateway Protocol (IGP) route convergence. The terminology is to be used for benchmarking IGP convergence time through externally observable (black-box) data-plane measurements. The terminology can be applied to any link-state IGP, such as IS-IS and OSPF. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6412. Copyright Notice Copyright © 2011 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English. Table of Contents 1. Introduction and Scope . . . . . . . . . . . . . . . . . . . . 4 2. Existing Definitions . . . . . . . . . . . . . . . . . . . . . 4 3. Term Definitions . . . . . . . . . . . . . . . . . . . . . . . 5 3.1. Convergence Types . . . . . . . . . . . . . . . . . . . . 5 3.1.1. Route Convergence . . . . . . . . . . . . . . . . . . 5 3.1.2. Full Convergence . . . . . . . . . . . . . . . . . . . 5 3.2. Instants . . . . . . . . . . . . . . . . . . . . . . . . . 6 3.2.1. Traffic Start Instant . . . . . . . . . . . . . . . . 6 3.2.2. Convergence Event Instant . . . . . . . . . . . . . . 6 3.2.3. Convergence Recovery Instant . . . . . . . . . . . . . 7 3.2.4. First Route Convergence Instant . . . . . . . . . . . 8 3.3. Transitions . . . . . . . . . . . . . . . . . . . . . . . 8 3.3.1. Convergence Event Transition . . . . . . . . . . . . . 8 3.3.2. Convergence Recovery Transition . . . . . . . . . . . 9 3.4. Interfaces . . . . . . . . . . . . . . . . . . . . . . . . 10 3.4.1. Local Interface . . . . . . . . . . . . . . . . . . . 10 3.4.2. Remote Interface . . . . . . . . . . . . . . . . . . . 10 3.4.3. Preferred Egress Interface . . . . . . . . . . . . . . 10 3.4.4. Next-Best Egress Interface . . . . . . . . . . . . . . 11 3.5. Benchmarking Methods . . . . . . . . . . . . . . . . . . . 11 3.5.1. Rate-Derived Method . . . . . . . . . . . . . . . . . 11 3.5.2. Loss-Derived Method . . . . . . . . . . . . . . . . . 14 3.5.3. Route-Specific Loss-Derived Method . . . . . . . . . . 15 3.6. Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . 17 3.6.1. Full Convergence Time . . . . . . . . . . . . . . . . 17 3.6.2. First Route Convergence Time . . . . . . . . . . . . . 18 3.6.3. Route-Specific Convergence Time . . . . . . . . . . . 18 3.6.4. Loss-Derived Convergence Time . . . . . . . . . . . . 20 3.6.5. Route Loss of Connectivity Period . . . . . . . . . . 21 3.6.6. Loss-Derived Loss of Connectivity Period . . . . . . . 22 3.7. Measurement Terms . . . . . . . . . . . . . . . . . . . . 23 3.7.1. Convergence Event . . . . . . . . . . . . . . . . . . 23 3.7.2. Convergence Packet Loss . . . . . . . . . . . . . . . 23 3.7.3. Connectivity Packet Loss . . . . . . . . . . . . . . . 24 3.7.4. Packet Sampling Interval . . . . . . . . . . . . . . . 24 3.7.5. Sustained Convergence Validation Time . . . . . . . . 25 3.7.6. Forwarding Delay Threshold . . . . . . . . . . . . . . 26 3.8. Miscellaneous Terms . . . . . . . . . . . . . . . . . . . 26 3.8.1. Impaired Packet . . . . . . . . . . . . . . . . . . . 26 4. Security Considerations . . . . . . . . . . . . . . . . . . . 27 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 27 6. Normative References . . . . . . . . . . . . . . . . . . . . . 27 1. Introduction and Scope This document is a companion to [Po11m], which contains the methodology to be used for benchmarking link-state Interior Gateway Protocol (IGP) convergence by observing the data plane. The purpose of this document is to introduce new terms required to complete execution of the Link-State IGP Data-Plane Route Convergence methodology [Po11m]. IGP convergence time is measured by observing the data plane through the Device Under Test (DUT) at the Tester. The methodology and terminology to be used for benchmarking IGP convergence can be applied to IPv4 and IPv6 traffic and link-state IGPs such as Intermediate System to Intermediate System (IS-IS) [Ca90][Ho08], Open Shortest Path First (OSPF) [Mo98] [Co08], and others. 2. Existing Definitions This document uses existing terminology defined in other IETF documents. Examples include, but are not limited to: Throughput [Br91], Section 3.17 Offered Load [Ma98], Section 3.5.2 Forwarding Rate [Ma98], Section 3.6.1 Device Under Test (DUT) [Ma98], Section 3.1.1 System Under Test (SUT) [Ma98], Section 3.1.2 Out-of-Order Packet [Po06], Section 3.3.4 Duplicate Packet [Po06], Section 3.3.5 Stream [Po06], Section 3.3.2 Forwarding Delay [Po06], Section 3.2.4 IP Packet Delay Variation (IPDV) [De02], Section 1.2 Loss Period [Ko02], Section 4 The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14, RFC 2119 [Br97]. RFC 2119 defines the use of these keywords to help make the intent of Standards Track documents as clear as possible. While this document uses these keywords, this document is not a Standards Track document. 3. Term Definitions 3.1. Convergence Types 3.1.1. Route Convergence The process of updating all components of the router, including the Routing Information Base (RIB) and Forwarding Information Base (FIB), along with software and hardware tables, with the most recent route change(s) such that forwarding for a route entry is successful on the Next-Best Egress Interface (Section 3.4.4). In general, IGP convergence does not necessarily result in a change in forwarding. But the test cases in [Po11m] are specified such that the IGP convergence results in a change of egress interface for the measurement data-plane traffic. Due to this property of the test case specifications, Route Convergence can be observed externally by the rerouting of the measurement data-plane traffic to the Next-Best Egress Interface (Section 3.4.4). Measurement Units: See Also: Next-Best Egress Interface, Full Convergence 3.1.2. Full Convergence Route Convergence for all routes in the Forwarding Information Base (FIB). In general, IGP convergence does not necessarily result in a change in forwarding. But the test cases in [Po11m] are specified such that the IGP convergence results in a change of egress interface for the measurement data-plane traffic. Due to this property of the test cases specifications, Full Convergence can be observed externally by the rerouting of the measurement data-plane traffic to the Next-Best Egress Interface (Section 3.4.4). Measurement Units: See Also: Next-Best Egress Interface, Route Convergence 3.2. Instants 3.2.1. Traffic Start Instant The time instant the Tester sends out the first data packet to the DUT. If using the Loss-Derived Method (Section 3.5.2) or the Route- Specific Loss-Derived Method (Section 3.5.3) to benchmark IGP convergence time, and the applied Convergence Event (Section 3.7.1) does not cause instantaneous traffic loss for all routes at the Convergence Event Instant (Section 3.2.2), then the Tester SHOULD collect a timestamp on the Traffic Start Instant in order to measure the period of time between the Traffic Start Instant and Convergence Event Instant. Measurement Units: seconds (and fractions), reported with resolution sufficient to distinguish between different instants See Also: Loss-Derived Method, Route-Specific Loss-Derived Method, Convergence Event, Convergence Event Instant 3.2.2. Convergence Event Instant The time instant that a Convergence Event (Section 3.7.1) occurs. If the Convergence Event (Section 3.7.1) causes instantaneous traffic loss on the Preferred Egress Interface (Section 3.4.3), the Convergence Event Instant is observable from the data plane as the instant that no more packets are received on the Preferred Egress Interface. The Tester SHOULD collect a timestamp on the Convergence Event Instant if the Convergence Event does not cause instantaneous traffic loss on the Preferred Egress Interface (Section 3.4.3). Measurement Units: seconds (and fractions), reported with resolution sufficient to distinguish between different instants See Also: Convergence Event, Preferred Egress Interface 3.2.3. Convergence Recovery Instant The time instant that Full Convergence (Section 3.1.2) has completed. The Full Convergence completed state MUST be maintained for an interval of duration equal to the Sustained Convergence Validation Time (Section 3.7.5) in order to validate the Convergence Recovery Instant. The Convergence Recovery Instant is observable from the data plane as the instant the DUT forwards traffic to all destinations over the Next-Best Egress Interface (Section 3.4.4) without Measurement Units: seconds (and fractions), reported with resolution sufficient to distinguish between different instants See Also: Sustained Convergence Validation Time, Full Convergence, Next-Best Egress Interface 3.2.4. First Route Convergence Instant The time instant the first route entry completes Route Convergence (Section 3.1.1) Any route may be the first to complete Route Convergence. The First Route Convergence Instant is observable from the data plane as the instant that the first packet that is not an Impaired Packet (Section 3.8.1) is received from the Next-Best Egress Interface (Section 3.4.4) or, for the test cases with Equal Cost Multi-Path (ECMP) or Parallel Links, the instant that the Forwarding Rate on the Next-Best Egress Interface (Section 3.4.4) starts to increase. Measurement Units: seconds (and fractions), reported with resolution sufficient to distinguish between different instants See Also: Route Convergence, Impaired Packet, Next-Best Egress Interface 3.3. Transitions 3.3.1. Convergence Event Transition A time interval following a Convergence Event (Section 3.7.1) in which the Forwarding Rate on the Preferred Egress Interface (Section 3.4.3) gradually reduces to zero. The Forwarding Rate during a Convergence Event Transition may or may not decrease linearly. The Forwarding Rate observed on the DUT egress interface(s) may or may not decrease to zero. The Offered Load, the number of routes, and the Packet Sampling Interval (Section 3.7.4) influence the observations of the Convergence Event Transition using the Rate-Derived Method (Section Measurement Units: seconds (and fractions) See Also: Convergence Event, Preferred Egress Interface, Packet Sampling Interval, Rate-Derived Method 3.3.2. Convergence Recovery Transition A time interval following the First Route Convergence Instant (Section 3.4.4) in which the Forwarding Rate on the DUT egress interface(s) gradually increases to equal to the Offered Load. The Forwarding Rate observed during a Convergence Recovery Transition may or may not increase linearly. The Offered Load, the number of routes, and the Packet Sampling Interval (Section 3.7.4) influence the observations of the Convergence Recovery Transition using the Rate-Derived Method (Section 3.5.1). Measurement Units: seconds (and fractions) See Also: First Route Convergence Instant, Packet Sampling Interval, Rate- Derived Method 3.4. Interfaces 3.4.1. Local Interface An interface on the DUT. A failure of a Local Interface indicates that the failure occurred directly on the DUT. Measurement Units: See Also: Remote Interface 3.4.2. Remote Interface An interface on a neighboring router that is not directly connected to any interface on the DUT. A failure of a Remote Interface indicates that the failure occurred on a neighbor router's interface that is not directly connected to the DUT. Measurement Units: See Also: Local Interface 3.4.3. Preferred Egress Interface The outbound interface from the DUT for traffic routed to the preferred next-hop. The Preferred Egress Interface is the egress interface prior to a Convergence Event (Section 3.7.1). Measurement Units: See Also: Convergence Event, Next-Best Egress Interface 3.4.4. Next-Best Egress Interface The outbound interface or set of outbound interfaces in an Equal Cost Multipath (ECMP) set or parallel link set of the Device Under Test (DUT) for traffic routed to the second-best next-hop. The Next-Best Egress Interface becomes the egress interface after a Convergence Event (Section 3.4.4). For the test cases in [Po11m] using test topologies with an ECMP set or parallel link set, the term Preferred Egress Interface refers to all members of the link set. Measurement Units: See Also: Convergence Event, Preferred Egress Interface 3.5. Benchmarking Methods 3.5.1. Rate-Derived Method The method to calculate convergence time benchmarks from observing the Forwarding Rate each Packet Sampling Interval (Section 3.7.4). Figure 1 shows an example of the Forwarding Rate change in time during convergence as observed when using the Rate-Derived Method. ^ Traffic Convergence Fwd | Start Recovery Rate | Instant Instant | Offered ^ ^ | Load --> ----------\ /----------- | \ /<--- Convergence | \ Packet / Recovery | Convergence --->\ Loss / Transition | Event \ / | Transition \---------/ <-- Max Packet Loss ^ ^ time Convergence First Route Event Instant Convergence Instant Figure 1: Rate-Derived Convergence Graph To enable collecting statistics of Out-of-Order Packets per flow (see [Th00], Section 3), the Offered Load SHOULD consist of multiple Streams [Po06], and each Stream SHOULD consist of a single flow . If sending multiple Streams, the measured traffic statistics for all Streams MUST be added together. The destination addresses for the Offered Load MUST be distributed such that all routes or a statistically representative subset of all routes are matched and each of these routes is offered an equal share of the Offered Load. It is RECOMMENDED to send traffic to all routes, but a statistically representative subset of all routes can be used if required. At least one packet per route for all routes matched in the Offered Load MUST be offered to the DUT within each Packet Sampling Interval. For maximum accuracy, the value of the Packet Sampling Interval SHOULD be as small as possible, but the presence of IP Packet Delay Variation (IPDV) [De02] may require that a larger Packet Sampling Interval be used. The Offered Load, IPDV, the number of routes, and the Packet Sampling Interval influence the observations for the Rate-Derived Method. It may be difficult to identify the different convergence time instants in the Rate-Derived Convergence Graph. For example, it is possible that a Convergence Event causes the Forwarding Rate to drop to zero, while this may not be observed in the Forwarding Rate measurements if the Packet Sampling Interval is too large. IPDV causes fluctuations in the number of received packets during each Packet Sampling Interval. To account for the presence of IPDV in determining if a convergence instant has been reached, Forwarding Delay SHOULD be observed during each Packet Sampling Interval. The minimum and maximum number of packets expected in a Packet Sampling Interval in presence of IPDV can be calculated with Equation 1. number of packets expected in a Packet Sampling Interval in presence of IP Packet Delay Variation = expected number of packets without IP Packet Delay Variation +/-( (maxDelay - minDelay) * Offered Load) where minDelay and maxDelay indicate (respectively) the minimum and maximum Forwarding Delay of packets received during the Packet Sampling Interval Equation 1 To determine if a convergence instant has been reached, the number of packets received in a Packet Sampling Interval is compared with the range of expected number of packets calculated in Equation 1. If packets are going over multiple ECMP members and one or more of the members has failed, then the number of received packets during each Packet Sampling Interval may vary, even excluding presence of IPDV. To prevent fluctuation of the number of received packets during each Packet Sampling Interval for this reason, the Packet Sampling Interval duration SHOULD be a whole multiple of the time between two consecutive packets sent to the same destination. Metrics measured at the Packet Sampling Interval MUST include Forwarding Rate and Impaired Packet count. To measure convergence time benchmarks for Convergence Events (Section 3.7.1) that do not cause instantaneous traffic loss for all routes at the Convergence Event Instant, the Tester SHOULD collect a timestamp of the Convergence Event Instant (Section 3.2.2), and the Tester SHOULD observe Forwarding Rate separately on the Next-Best Egress Interface. Since the Rate-Derived Method does not distinguish between individual traffic destinations, it SHOULD NOT be used for any route specific measurements. Therefore, the Rate-Derived Method SHOULD NOT be used to benchmark Route Loss of Connectivity Period (Section 3.6.5). Measurement Units: See Also: Packet Sampling Interval, Convergence Event, Convergence Event Instant, Next-Best Egress Interface, Route Loss of Connectivity Period 3.5.2. Loss-Derived Method The method to calculate the Loss-Derived Convergence Time (Section 3.6.4) and Loss-Derived Loss of Connectivity Period (Section 3.6.6) benchmarks from the amount of Impaired Packets (Section To enable collecting statistics of Out-of-Order Packets per flow (see [Th00], Section 3), the Offered Load SHOULD consist of multiple Streams [Po06], and each Stream SHOULD consist of a single flow . If sending multiple Streams, the measured traffic statistics for all Streams MUST be added together. The destination addresses for the Offered Load MUST be distributed such that all routes or a statistically representative subset of all routes are matched and each of these routes is offered an equal share of the Offered Load. It is RECOMMENDED to send traffic to all routes, but a statistically representative subset of all routes can be used if required. Loss-Derived Method SHOULD always be combined with the Rate- Derived Method in order to observe Full Convergence completion. The total amount of Convergence Packet Loss is collected after Full Convergence completion. To measure convergence time and loss of connectivity benchmarks for Convergence Events that cause instantaneous traffic loss for all routes at the Convergence Event Instant, the Tester SHOULD observe the Impaired Packet count on all DUT egress interfaces (see Connectivity Packet Loss (Section 3.7.3)). To measure convergence time benchmarks for Convergence Events that do not cause instantaneous traffic loss for all routes at the Convergence Event Instant, the Tester SHOULD collect timestamps of the Start Traffic Instant and of the Convergence Event Instant, and the Tester SHOULD observe Impaired Packet count separately on the Next-Best Egress Interface (see Convergence Packet Loss (Section 3.7.2)). Since Loss-Derived Method does not distinguish between traffic destinations and the Impaired Packet statistics are only collected after Full Convergence completion, this method can only be used to measure average values over all routes. For these reasons, Loss- Derived Method can only be used to benchmark Loss-Derived Convergence Time (Section 3.6.4) and Loss-Derived Loss of Connectivity Period (Section 3.6.6). Note that the Loss-Derived Method measures an average over all routes, including the routes that may not be impacted by the Convergence Event, such as routes via non-impacted members of ECMP or parallel links. Measurement Units: See Also: Loss-Derived Convergence Time, Loss-Derived Loss of Connectivity Period, Connectivity Packet Loss, Convergence Packet Loss 3.5.3. Route-Specific Loss-Derived Method The method to calculate the Route-Specific Convergence Time (Section 3.6.3) benchmark from the amount of Impaired Packets (Section 3.8.1) during convergence for a specific route entry. To benchmark Route-Specific Convergence Time, the Tester provides an Offered Load that consists of multiple Streams [Po06]. Each Stream has a single destination address matching a different route entry, for all routes or a statistically representative subset of all routes. Each Stream SHOULD consist of a single flow (see [Th00], Section 3). Convergence Packet Loss is measured for each Stream separately. Route-Specific Loss-Derived Method SHOULD always be combined with the Rate-Derived Method in order to observe Full Convergence completion. The total amount of Convergence Packet Loss (Section 3.7.2) for each Stream is collected after Full Convergence completion. Route-Specific Loss-Derived Method is the RECOMMENDED method to measure convergence time benchmarks. To measure convergence time and loss of connectivity benchmarks for Convergence Events that cause instantaneous traffic loss for all routes at the Convergence Event Instant, the Tester SHOULD observe Impaired Packet count on all DUT egress interfaces (see Connectivity Packet Loss (Section 3.7.3)). To measure convergence time benchmarks for Convergence Events that do not cause instantaneous traffic loss for all routes at the Convergence Event Instant, the Tester SHOULD collect timestamps of the Start Traffic Instant and of the Convergence Event Instant, and the Tester SHOULD observe packet loss separately on the Next- Best Egress Interface (see Convergence Packet Loss (Section 3.7.2)). Since Route-Specific Loss-Derived Method uses traffic streams to individual routes, it observes Impaired Packet count as it would be experienced by a network user. For this reason, Route-Specific Loss-Derived Method is RECOMMENDED to measure Route-Specific Convergence Time benchmarks and Route Loss of Connectivity Period benchmarks. Measurement Units: See Also: Route-Specific Convergence Time, Route Loss of Connectivity Period, Connectivity Packet Loss, Convergence Packet Loss 3.6. Benchmarks 3.6.1. Full Convergence Time The time duration of the period between the Convergence Event Instant and the Convergence Recovery Instant as observed using the Rate-Derived Method. Using the Rate-Derived Method, Full Convergence Time can be calculated as the time difference between the Convergence Event Instant and the Convergence Recovery Instant, as shown in Equation Full Convergence Time = Convergence Recovery Instant - Convergence Event Instant Equation 2 The Convergence Event Instant can be derived from the Forwarding Rate observation or from a timestamp collected by the Tester. For the test cases described in [Po11m], it is expected that Full Convergence Time equals the maximum Route-Specific Convergence Time when benchmarking all routes in the FIB using the Route- Specific Loss-Derived Method. It is not possible to measure Full Convergence Time using the Loss-Derived Method. Measurement Units: seconds (and fractions) See Also: Full Convergence, Rate-Derived Method, Route-Specific Loss-Derived Method, Convergence Event Instant, Convergence Recovery Instant 3.6.2. First Route Convergence Time The duration of the period between the Convergence Event Instant and the First Route Convergence Instant as observed using the Rate-Derived Method. Using the Rate-Derived Method, First Route Convergence Time can be calculated as the time difference between the Convergence Event Instant and the First Route Convergence Instant, as shown with Equation 3. First Route Convergence Time = First Route Convergence Instant - Convergence Event Instant Equation 3 The Convergence Event Instant can be derived from the Forwarding Rate observation or from a timestamp collected by the Tester. For the test cases described in [Po11m], it is expected that First Route Convergence Time equals the minimum Route-Specific Convergence Time when benchmarking all routes in the FIB using the Route-Specific Loss-Derived Method. It is not possible to measure First Route Convergence Time using the Loss-Derived Method. Measurement Units: seconds (and fractions) See Also: Rate-Derived Method, Route-Specific Loss-Derived Method, Convergence Event Instant, First Route Convergence Instant 3.6.3. Route-Specific Convergence Time The amount of time it takes for Route Convergence to be completed for a specific route, as calculated from the amount of Impaired Packets (Section 3.8.1) during convergence for a single route Route-Specific Convergence Time can only be measured using the Route-Specific Loss-Derived Method. If the applied Convergence Event causes instantaneous traffic loss for all routes at the Convergence Event Instant, Connectivity Packet Loss should be observed. Connectivity Packet Loss is the combined Impaired Packet count observed on Preferred Egress Interface and Next-Best Egress Interface. When benchmarking Route-Specific Convergence Time, Connectivity Packet Loss is measured, and Equation 4 is applied for each measured route. The calculation is equal to Equation 8 in Section 3.6.5. Route-Specific Convergence Time = Connectivity Packet Loss for specific route / Offered Load per route Equation 4 If the applied Convergence Event does not cause instantaneous traffic loss for all routes at the Convergence Event Instant, then the Tester SHOULD collect timestamps of the Traffic Start Instant and of the Convergence Event Instant, and the Tester SHOULD observe Convergence Packet Loss separately on the Next-Best Egress Interface. When benchmarking Route-Specific Convergence Time, Convergence Packet Loss is measured, and Equation 5 is applied for each measured route. Route-Specific Convergence Time = Convergence Packet Loss for specific route / Offered Load per route - (Convergence Event Instant - Traffic Start Instant) Equation 5 The Route-Specific Convergence Time benchmarks enable minimum, maximum, average, and median convergence time measurements to be reported by comparing the results for the different route entries. It also enables benchmarking of convergence time when configuring a priority value for the route entry or entries. Since multiple Route-Specific Convergence Times can be measured, it is possible to have an array of results. The format for reporting Route- Specific Convergence Time is provided in [Po11m]. Measurement Units: seconds (and fractions) See Also: Route-Specific Loss-Derived Method, Convergence Event, Convergence Event Instant, Convergence Packet Loss, Connectivity Packet Loss, Route Convergence 3.6.4. Loss-Derived Convergence Time The average Route Convergence time for all routes in the Forwarding Information Base (FIB), as calculated from the amount of Impaired Packets (Section 3.8.1) during convergence. Loss-Derived Convergence Time is measured using the Loss-Derived Method. If the applied Convergence Event causes instantaneous traffic loss for all routes at the Convergence Event Instant, Connectivity Packet Loss (Section 3.7.3) should be observed. Connectivity Packet Loss is the combined Impaired Packet count observed on Preferred Egress Interface and Next-Best Egress Interface. When benchmarking Loss-Derived Convergence Time, Connectivity Packet Loss is measured, and Equation 6 is applied. Loss-Derived Convergence Time = Connectivity Packet Loss / Offered Load Equation 6 If the applied Convergence Event does not cause instantaneous traffic loss for all routes at the Convergence Event Instant, then the Tester SHOULD collect timestamps of the Start Traffic Instant and of the Convergence Event Instant, and the Tester SHOULD observe Convergence Packet Loss (Section 3.7.2) separately on the Next-Best Egress Interface. When benchmarking Loss-Derived Convergence Time, Convergence Packet Loss is measured and Equation 7 is applied. Loss-Derived Convergence Time = Convergence Packet Loss / Offered Load - (Convergence Event Instant - Traffic Start Instant) Equation 7 Measurement Units: seconds (and fractions) See Also: Convergence Packet Loss, Connectivity Packet Loss, Route Convergence, Loss-Derived Method 3.6.5. Route Loss of Connectivity Period The time duration of packet impairments for a specific route entry following a Convergence Event until Full Convergence completion, as observed using the Route-Specific Loss-Derived Method. In general, the Route Loss of Connectivity Period is not equal to the Route-Specific Convergence Time. If the DUT continues to forward traffic to the Preferred Egress Interface after the Convergence Event is applied, then the Route Loss of Connectivity Period will be smaller than the Route-Specific Convergence Time. This is also specifically the case after reversing a failure The Route Loss of Connectivity Period may be equal to the Route- Specific Convergence Time if, as a characteristic of the Convergence Event, traffic for all routes starts dropping instantaneously on the Convergence Event Instant. See discussion in [Po11m]. For the test cases described in [Po11m], the Route Loss of Connectivity Period is expected to be a single Loss Period [Ko02]. When benchmarking the Route Loss of Connectivity Period, Connectivity Packet Loss is measured for each route, and Equation 8 is applied for each measured route entry. The calculation is equal to Equation 4 in Section 3.6.3. Route Loss of Connectivity Period = Connectivity Packet Loss for specific route / Offered Load per route Equation 8 Route Loss of Connectivity Period SHOULD be measured using Route- Specific Loss-Derived Method. Measurement Units: seconds (and fractions) See Also: Route-Specific Convergence Time, Route-Specific Loss-Derived Method, Connectivity Packet Loss 3.6.6. Loss-Derived Loss of Connectivity Period The average time duration of packet impairments for all routes following a Convergence Event until Full Convergence completion, as observed using the Loss-Derived Method. In general, the Loss-Derived Loss of Connectivity Period is not equal to the Loss-Derived Convergence Time. If the DUT continues to forward traffic to the Preferred Egress Interface after the Convergence Event is applied, then the Loss-Derived Loss of Connectivity Period will be smaller than the Loss-Derived Convergence Time. This is also specifically the case after reversing a failure event. The Loss-Derived Loss of Connectivity Period may be equal to the Loss-Derived Convergence Time if, as a characteristic of the Convergence Event, traffic for all routes starts dropping instantaneously on the Convergence Event Instant. See discussion in [Po11m]. For the test cases described in [Po11m], each route's Route Loss of Connectivity Period is expected to be a single Loss Period [Ko02]. When benchmarking the Loss-Derived Loss of Connectivity Period, Connectivity Packet Loss is measured for all routes, and Equation 9 is applied. The calculation is equal to Equation 6 in Section 3.6.4. Loss-Derived Loss of Connectivity Period = Connectivity Packet Loss for all routes / Offered Load Equation 9 The Loss-Derived Loss of Connectivity Period SHOULD be measured using the Loss-Derived Method. Measurement Units: seconds (and fractions) See Also: Loss-Derived Convergence Time, Loss-Derived Method, Connectivity Packet Loss 3.7. Measurement Terms 3.7.1. Convergence Event The occurrence of an event in the network that will result in a change in the egress interface of the DUT for routed packets. All test cases in [Po11m] are defined such that a Convergence Event results in a change of egress interface of the DUT. Local or remote triggers that cause a route calculation that does not result in a change in forwarding are not considered. Measurement Units: See Also: Convergence Event Instant 3.7.2. Convergence Packet Loss The number of Impaired Packets (Section 3.8.1) as observed on the Next-Best Egress Interface of the DUT during convergence. An Impaired Packet is considered as a lost packet. Measurement Units: number of packets See Also: Connectivity Packet Loss 3.7.3. Connectivity Packet Loss The number of Impaired Packets observed on all DUT egress interfaces during convergence. An Impaired Packet is considered as a lost packet. Connectivity Packet Loss is equal to Convergence Packet Loss if the Convergence Event causes instantaneous traffic loss for all egress interfaces of the DUT except for the Next-Best Egress Interface. Measurement Units: number of packets See Also: Convergence Packet Loss 3.7.4. Packet Sampling Interval The interval at which the Tester (test equipment) polls to make measurements for arriving packets. At least one packet per route for all routes matched in the Offered Load MUST be offered to the DUT within the Packet Sampling Interval. Metrics measured at the Packet Sampling Interval MUST include Forwarding Rate and received packets. Packet Sampling Interval can influence the convergence graph as observed with the Rate-Derived Method. This is particularly true when implementations complete Full Convergence in less time than the Packet Sampling Interval. The Convergence Event Instant and First Route Convergence Instant may not be easily identifiable, and the Rate-Derived Method may produce a larger than actual convergence time. Using a small Packet Sampling Interval in the presence of IPDV [De02] may cause fluctuations of the Forwarding Rate observation and can prevent correct observation of the different convergence time instants. The value of the Packet Sampling Interval only contributes to the measurement accuracy of the Rate-Derived Method. For maximum accuracy, the value for the Packet Sampling Interval SHOULD be as small as possible, but the presence of IPDV may enforce using a larger Packet Sampling Interval. Measurement Units: seconds (and fractions) See Also: Rate-Derived Method 3.7.5. Sustained Convergence Validation Time The amount of time for which the completion of Full Convergence is maintained without additional Impaired Packets being observed. The purpose of the Sustained Convergence Validation Time is to produce convergence benchmarks protected against fluctuation in Forwarding Rate after the completion of Full Convergence is observed. The RECOMMENDED Sustained Convergence Validation Time to be used is the time to send 5 consecutive packets to each destination with a minimum of 5 seconds. The Benchmarking Methodology Working Group (BMWG) selected 5 seconds based upon [Br99], which recommends waiting 2 seconds for residual frames to arrive (this is the Forwarding Delay Threshold for the last packet sent) and 5 seconds for DUT restabilization. Measurement Units: seconds (and fractions) See Also: Full Convergence, Convergence Recovery Instant 3.7.6. Forwarding Delay Threshold The maximum waiting time threshold used to distinguish between packets with very long delay and lost packets that will never arrive. Applying a Forwarding Delay Threshold allows packets with a too large Forwarding Delay to be considered lost, as is required for some applications (e.g. voice, video, etc.). The Forwarding Delay Threshold is a parameter of the methodology, and it MUST be reported. [Br99] recommends waiting 2 seconds for residual frames to arrive. Measurement Units: seconds (and fractions) See Also: Convergence Packet Loss, Connectivity Packet Loss 3.8. Miscellaneous Terms 3.8.1. Impaired Packet A packet that experienced at least one of the following impairments: loss, excessive Forwarding Delay, corruption, duplication, reordering. A lost packet, a packet with a Forwarding Delay exceeding the Forwarding Delay Threshold, a corrupted packet, a Duplicate Packet [Po06], and an Out-of-Order Packet [Po06] are Impaired Packet ordering is observed for each individual flow (see [Th00], Section 3) of the Offered Load. Measurement Units: See Also: Forwarding Delay Threshold 4. Security Considerations Benchmarking activities as described in this memo are limited to technology characterization using controlled stimuli in a laboratory environment, with dedicated address space and the constraints specified in the sections above. The benchmarking network topology will be an independent test setup and MUST NOT be connected to devices that may forward the test traffic into a production network or misroute traffic to the test management network. Further, benchmarking is performed on a "black-box" basis, relying solely on measurements observable external to the DUT/SUT. Special capabilities SHOULD NOT exist in the DUT/SUT specifically for benchmarking purposes. Any implications for network security arising from the DUT/SUT SHOULD be identical in the lab and in production networks. 5. Acknowledgements Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David Ward, Peter De Vriendt, Anuj Dewagan, Adrian Farrel, Stewart Bryant, Francis Dupont, and the Benchmarking Methodology Working Group for their contributions to this work. 6. Normative References [Br91] Bradner, S., "Benchmarking terminology for network interconnection devices", RFC 1242, July 1991. [Br97] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997. [Br99] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999. [Ca90] Callon, R., "Use of OSI IS-IS for routing in TCP/IP and dual environments", RFC 1195, December 1990. [Co08] Coltun, R., Ferguson, D., Moy, J., and A. Lindem, "OSPF for IPv6", RFC 5340, July 2008. [De02] Demichelis, C. and P. Chimento, "IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)", RFC 3393, November 2002. [Ho08] Hopps, C., "Routing IPv6 with IS-IS", RFC 5308, October 2008. [Ko02] Koodli, R. and R. Ravikanth, "One-way Loss Pattern Sample Metrics", RFC 3357, August 2002. [Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, February 1998. [Mo98] Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998. [Po06] Poretsky, S., Perser, J., Erramilli, S., and S. Khurana, "Terminology for Benchmarking Network-layer Traffic Control Mechanisms", RFC 4689, October 2006. [Po11m] Poretsky, S., Imhoff, B., and K. Michielsen, "Benchmarking Methodology for Link-State IGP Data-Plane Route Convergence", RFC 6413, November 2011. [Th00] Thaler, D. and C. Hopps, "Multipath Issues in Unicast and Multicast Next-Hop Selection", RFC 2991, November 2000. Authors' Addresses Scott Poretsky Allot Communications 300 TradeCenter Woburn, MA 01801 Phone: + 1 508 309 2179 EMail: sporetsky@allot.com Brent Imhoff F5 Networks 401 Elliott Avenue West Seattle, WA 98119 Phone: + 1 314 378 2571 EMail: bimhoff@planetspork.com Kris Michielsen Cisco Systems 6A De Kleetlaan Diegem, BRABANT 1831
{"url":"https://pike.lysator.liu.se/docs/ietf/rfc/64/rfc6412.xml","timestamp":"2024-11-02T21:18:53Z","content_type":"text/html","content_length":"60153","record_id":"<urn:uuid:a56c159e-2877-4b4a-9045-7af1d83c8eb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00481.warc.gz"}
24,443 research outputs found A sum-of-squares is a polynomial that can be expressed as a sum of squares of other polynomials. Determining if a sum-of-squares decomposition exists for a given polynomial is equivalent to a linear matrix inequality feasibility problem. The computation required to solve the feasibility problem depends on the number of monomials used in the decomposition. The Newton polytope is a method to prune unnecessary monomials from the decomposition. This method requires the construction of a convex hull and this can be time consuming for polynomials with many terms. This paper presents a new algorithm for removing monomials based on a simple property of positive semidefinite matrices. It returns a set of monomials that is never larger than the set returned by the Newton polytope method and, for some polynomials, is a strictly smaller set. Moreover, the algorithm takes significantly less computation than the convex hull construction. This algorithm is then extended to a more general simplification method for sum-of-squares programming.Comment: 6 pages, 2 figure The mixing of $K^0-\bar{K^0}$, $D^0-\bar{D^0}$ and $B_{(s)}^0-\bar{B^0_{(s)}}$ provides a sensitive probe to explore new physics beyond the Standard Model. The scale invariant unparticle physics recently proposed by Georgi can induce flavor-changing neutral current and contribute to the mixing at tree level. We investigate the unparticle effects on $B^0-\bar{B^0}$ and $D^0-\bar{D^0}$ mixing. Especially, the newly observed $D^0-\bar{D^0}$ mixing sets the most stringent constraints on the coupling of the unparticle to quarks.Comment: 9 pages, some errors corrected, published versio This dissertation investigates the applicability and usefulness of applying Fractal mathematics and to the fracture of brittle particulates in Fluid Energy Mill devices, and in particular quantifying the resulting power law particle size distributions, examining the Surface Fractal Dimension of milled particulates, and relating the Izod Impact Strength values of composites of polypropylene and Calcium carbonate particulates which are large un-milled, small milled, as well as small and produced by the simultaneous milling and coating with nano-silica to the Surface Fractal Dimension of the impact fracture surfaces. First, the dissertation examines the behavior of un-coated and micron-sized wax pre-coated particulates in a specially designed Single-event Fluid Mill (SEFM), which is utilized to represent (for each pass) the Elementary Breakage Events in the Fluid Energy Milling process, and analyze the results in terms of the Fractal Theory. The results establish that brittle milled particulates have self-similar shape to the original particulates, which points to the self-similarity property of fractals. Particle size distribution (PSD) of milled particulates obeys Power Law expression. This allows the analysis of size reduction efficiency and specific kinetic energy of particulates during SEFM milling using fractal methods. For modeling the surface structure of particles by a fractal surface at various scales, Atomic Force Microscopy and the Gwyddion 2.25 software are used to measure the surface fractal dimension (Ds) of raw and ground particles. The results show that the surface fractal dimensions of CaCO3 and KCl particles are independent of scale or grinding. This is a strong indication that the fracture process is self-similar. The surfaces of CaCO3 and KCl particles are modeled very well by fractal surfaces. For the materials of CaCO3 and KCl, a relationship between the macro-mechanical property and the micro-structure is built. The fractal dimension of the fracture surface increases with energy per unit surface area for fracture. The dissertation also investigates the fractal behavior of the following Polypropylene (PP) based polymer composites performance during impact testing and establishes a quantitative relationship between the evolution of microstructure and fracture macro-mechanical properties by fractal theory. The results show that the Izod impact strength increases, as the fractal dimension of composite\u27s impact-fractured surface increases. PP is compounded with large un-milled , small milled, as well as small and produced by the simultaneous milling and coating with nano-silica Calcium carbonate at the 10 and 20 wt% levels. The Izod impact strengths of the composites are obtained and their values are related to their Surface Fractal Dimension. The results establish an excellent relationship, strongly indicating that increasing fracture surface roughness shows more inter-particle ligaments in the composites resulting tougher materials Calorific values of plants are important indices for evaluating and reflecting material cycle and energy conversion in forest ecosystems. Based on the data of Masson Pine (Pinus massoniana) in southern China, the calorific values (CVs) and ash contents (ACs) of different plant organs were analyzed systematically using hypothesis test and regression analysis in this paper. The results show: (i) the CVs and ACs of different plant organs are almost significantly different, and the order by AFCV (ash-free calorific value) from the largest to the smallest is foliage (23.55 kJ/g), branches (22.25 kJ/g), stem bark (21.71 kJ/g), root (21.52 kJ/g) and stem wood (21.35 kJ/g); and the order by AC is foliage (2.35%), stem bark (1.44%), root (1.42%), branches (1.08%) and stem wood (0.33%); (ii) the CVs and ACs of stem woods on top, middle and lower sections are significantly different, and the CVs are increasing from top to lower sections of trunk while the ACs are decreasing; (iii) the mean GCV (gross calorific value) and AFCV of aboveground part are larger than those of belowground part (roots), and the differences are also statistically significant; (iv) the CVs and ACs of different organs are related, to some extent, to diameter, height and origin of the tree, but the influence degrees of the factors on CVs and ACs are not the same Assuming the newly observed $Z_c(3900)$ to be a molecular state of $D\bar D^*(D^{*} \bar D)$, we calculate the partial widths of $Z_c(3900)\to J/\psi+\pi;\; \psi'+\pi;\; \eta_c+\rho$ and $D\bar D^*$ within the light front model (LFM). $Z_c(3900)\to J/\psi+\pi$ is the channel by which $Z_c(3900)$ was observed, our calculation indicates that it is indeed one of the dominant modes whose width can be in the range of a few MeV depending on the model parameters. Similar to $Z_b$ and $Z_b'$, Voloshin suggested that there should be a resonance $Z_c'$ at 4030 MeV which can be a molecular state of $D^*\bar D^*$. Then we go on calculating its decay rates to all the aforementioned final states and as well the $D^*\bar D^*$. It is found that if $Z_c(3900)$ is a molecular state of ${1\over\sqrt 2} (D\bar D^*+D^*\bar D)$, the partial width of $Z_c(3900)\to D\bar D^*$ is rather small, but the rate of $Z_c(3900)\to\psi(2s)\pi$ is even larger than $Z_c(3900)\to J/\psi\pi$. The implications are discussed and it is indicated that with the luminosity of BES and BELLE, the experiments may finally determine if $Z_c(3900)$ is a molecular state or a tetraquark.Comment: 17 pages, 6 figures, 3
{"url":"https://core.ac.uk/search/?q=author%3A(Zheng%2C%20Qian)","timestamp":"2024-11-05T01:38:46Z","content_type":"text/html","content_length":"161723","record_id":"<urn:uuid:9a8032e9-2edd-4c20-873f-3fd8d1ae148d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00703.warc.gz"}
Brain Teasers Have a brain teaser to share? Tell us. 1. Reverse [Difficulty Level: Easy] 2. Chickens, eggs, bananas [Difficulty Level: Medium] 3. Match Sticks What is the largest number you can make by moving just two matches? [Difficulty Level: Medium] 4. Ages of Children Two brunettes (!) are talking about the children. One says that she has three daughters. The product of their ages equals 36 and the sum of the ages coincides with the number of the house across the street. The second brunette replies that this information is not enough to figure out the age of each child. The first agrees and adds that the oldest daughter has the beautiful blue eyes. Then the second solves the puzzle. What are the children’s ages? [Difficulty level: Medium] [Difficulty Level: Medium] 5. 12 Balls You have 12 balls and a balance. One of the balls differs in weight from the others (it is either lighter or heavier). Find the different ball by using the balance only three times. [Difficulty Level: Hard] [Difficulty Level: Difficult] Have a brain teaser to share? Tell us.
{"url":"https://remm.net/brain-teasers/","timestamp":"2024-11-04T21:39:05Z","content_type":"text/html","content_length":"83376","record_id":"<urn:uuid:218028a8-d648-4913-b2cb-a542c115322a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00452.warc.gz"}
Tutorial LDABiplots Luis Pilacuan-Bonete, Purificación Galindo-Villardón, Javier De La Hoz-Maestre y Francisco Javier Delgado Álvarez LDABiplots it is an extraction, analysis, and visualization tool for the exploratory analysis of news published on the web by digital newspapers, which, by extracting data from the web (Bradley et al. 2019), allows the implementation of the Latent Dirichlet Allocation probabilistic model(LDA) (Blei, Ng y Jordán, 2003) and the generation of Biplot (Gabriel K.R, 1971) and HJ-Biplot (Galindo-Villardón P, 1986) visualizations of the main topics of the headlines of the news published on the web. LDABiplots allows for optimizing the data extraction from the web, the LDA modeling routine, and the generation of Biplot visualizations in an interactive way for users who are not adapted to the use of R. Download & installation To download install the stable version of Comprehensive R Archive Network (CRAN) Once the library is loaded, to use the web interface, type in the R console Import or Load of Data allows us to extract data from the web page , the data belongs to the news section in the GOOGLE search engine. For users using a different extraction page also allows the loading of files in Excel format. Importing Data from File The data can be imported from a file in the directory, by selecting the Import or Load Data tab the Import excel file option, and selecting the file to upload from and the work tab where the data is located Worksheet Name . The data to be uploaded must have the header and format according to figure 1 Video 1. Importing Data
{"url":"https://cran.r-project.org/web/packages/LDABiplots/vignettes/Tutorial_LDABiplots_English.html","timestamp":"2024-11-14T14:24:44Z","content_type":"text/html","content_length":"688242","record_id":"<urn:uuid:f0bec154-2711-4578-ac72-7d5018bf13ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00078.warc.gz"}
Brown Sharpie on March 28, 2012 at 4:42 pm Posted In: Uncategorized Good news! Your generosity means that Brown Sharpie will continue to plague the Internet for a few years and I will continue to doodle uses for that big heavy [S:brick:S] book we younger generations know as Lang’s Algebra. Many folks requested new comics, so I’m going to try to come up with some new horrible puns to inflict upon you, perhaps at a rate of 1 every 2 weeks or so. Also, thanks to your donations, I can buy the fancy toilet paper this week when I go grocery shopping instead of thieving it from the math department (just kidding, Math Department! I only steal toilet paper from other buildings on campus…). If you have enjoyed Brown Sharpie, didn’t get a chance to kick in a few bucks, and want to support my fancy toilet paper habit, you are welcome to donate via the link in the last news post. Or go buy yourself a treat from the shop! All sarcasm aside, I am deeply touched by how many of you contributed to the hosting fund. You guys are the best. I mean it. If I were grading your papers you would get 10/10 with smiley faces in the 0s. And bonus points. It’s not a comic, but here’s one for you: What do you call half the circumference of a bashful unit circle? Humble π!
{"url":"https://brownsharpie.courtneygibbons.org/page/2/","timestamp":"2024-11-10T22:20:49Z","content_type":"application/xhtml+xml","content_length":"37564","record_id":"<urn:uuid:8c12ff44-12cb-40db-94ab-5d65e11c4cf1>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00265.warc.gz"}
6.889: Algorithms for Planar Graphs and Beyond (Fall 2011) [Home] [Problem Sets] [Project] [Lectures] [Problem Session Notes] [Klein's Book] [Accessibility] Lecture 25 Video [previous] [+] Shortest paths with negative lengths in minor-free graphs. We revisit the shortest paths problem, considering the case where the input is a directed minor-free graph with negative arc lengths (but no negative-length cycles). In Lecture 14, we saw almost-linear-time algorithms for the case of planar and bounded-genus graphs. Currently, comparable bounds for minor-free graphs are not known. We shall discuss Goldberg's algorithm, a shortest-path algorithm for general graphs with integer lengths, whose running time depends logarithmically on the magnitude of the largest negative arc length. By exploiting separators (Lecture 6), it runs faster on minor-free graphs than on general graphs, but it still requires superlinear time. Download Video: 360p, 720p Lecture notes, page 1/7 • [previous page] • [next page] • [PDF] Lecture notes, page 1/7 • [previous page] • [next page] • [PDF] The video above should play if your web browser supports either modern Flash or HTML5 video with H.264 or WebM codec. The lecture notes should advance automatically. If you have any trouble with playback, email Erik.
{"url":"https://courses.csail.mit.edu/6.889/fall11/lectures/L25.html","timestamp":"2024-11-04T07:58:18Z","content_type":"text/html","content_length":"5646","record_id":"<urn:uuid:51e421d8-6c0a-488a-a171-f29548e00eea>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00153.warc.gz"}
245 research outputs found We analyze the long-time asymptotics for the Degasperis--Procesi equation on the half-line. By applying nonlinear steepest descent techniques to an associated $3 \times 3$-matrix valued Riemann--Hilbert problem, we find an explicit formula for the leading order asymptotics of the solution in the similarity region in terms of the initial and boundary values.Comment: 61 pages, 11 We consider continuum random Schr\"odinger operators of the type $H_{\omega} = -\Delta + V_0 + V_{\omega}$ with a deterministic background potential $V_0$. We establish criteria for the absence of continuous and absolutely continuous spectrum, respectively, outside the spectrum of $-\Delta +V_0$. The models we treat include random surface potentials as well as sparse or slowly decaying random potentials. In particular, we establish absence of absolutely continuous surface spectrum for random potentials supported near a one-dimensional surface (``random tube'') in arbitrary dimension.Comment: 14 pages, 2 figure We investigate the asymptotic behaviour of large eigenvalues for a class of finite difference self-adjoint operators with compact resolvent in $l^2$ Boundary value problems for integrable nonlinear evolution PDEs formulated on the half-line can be analyzed by the unified method introduced by one of the authors and used extensively in the literature. The implementation of this general method to this particular class of problems yields the solution in terms of the unique solution of a matrix Riemann-Hilbert problem formulated in the complex $k$-plane (the Fourier plane), which has a jump matrix with explicit $(x,t)$-dependence involving four scalar functions of $k$, called spectral functions. Two of these functions depend on the initial data, whereas the other two depend on all boundary values. The most difficult step of the new method is the characterization of the latter two spectral functions in terms of the given initial and boundary data, i.e. the elimination of the unknown boundary values. For certain boundary conditions, called linearizable, this can be achieved simply using algebraic manipulations. Here, we first present an effective characterization of the spectral functions in terms of the given initial and boundary data for the general case of non-linearizable boundary conditions. This characterization is based on the analysis of the so-called global relation and on the introduction of the so-called Gelfand-Levitan-Marchenko representations of the eigenfunctions defining the spectral functions. We then concentrate on the physically significant case of $t$-periodic Dirichlet boundary data. After presenting certain heuristic arguments which suggest that the Neumann boundary values become periodic as $t\to\infty$, we show that for the case of the NLS with a sine-wave as Dirichlet data, the asymptotics of the Neumann boundary values can be computed explicitly at least up to third order in a perturbative expansion and indeed at least up to this order are asymptotically periodic.Comment: 29 page For the two versions of the KdV equation on the positive half-line an initial-boundary value problem is well posed if one prescribes an initial condition plus either one boundary condition if $q_{t}$ and $q_{xxx}$ have the same sign (KdVI) or two boundary conditions if $q_{t}$ and $q_{xxx}$ have opposite sign (KdVII). Constructing the generalized Dirichlet to Neumann map for the above problems means characterizing the unknown boundary values in terms of the given initial and boundary conditions. For example, if $\{q(x,0),q(0,t) \}$ and $\{q(x,0),q(0,t),q_{x}(0,t) \}$ are given for the KdVI and KdVII equations, respectively, then one must construct the unknown boundary values $\{q_{x}(0,t),q_{xx}(0,t) \}$ and $\{q_{xx}(0,t) \}$, respectively. We show that this can be achieved without solving for $q(x,t)$ by analysing a certain ``global relation'' which couples the given initial and boundary conditions with the unknown boundary values, as well as with the function $\Phi^{(t)}(t,k) $, where $\Phi^{(t)}$ satisifies the $t$-part of the associated Lax pair evaluated at $x=0$. Indeed, by employing a Gelfand--Levitan--Marchenko triangular representation for $\Phi^{(t)}$, the global relation can be solved \emph{explicitly} for the unknown boundary values in terms of the given initial and boundary conditions and the function $\Phi^{(t)}$. This yields the unknown boundary values in terms of a nonlinear Volterra integral equation.Comment: 21 pages, 3 figure We apply the method of nonlinear steepest descent to compute the long-time asymptotics of the Camassa-Holm equation for decaying initial data, completing previous results by A. Boutet de Monvel and D. Shepelsky.Comment: 30 page The traveling salesman problem (TSP) consists of finding the length of the shortest closed tour visiting N ``cities''. We consider the Euclidean TSP where the cities are distributed randomly and independently in a d-dimensional unit hypercube. Working with periodic boundary conditions and inspired by a remarkable universality in the kth nearest neighbor distribution, we find for the average optimum tour length = beta_E(d) N^{1-1/d} [1+O(1/N)] with beta_E(2) = 0.7120 +- 0.0002 and beta_E(3) = 0.6979 +- 0.0002. We then derive analytical predictions for these quantities using the random link approximation, where the lengths between cities are taken as independent random variables. From the ``cavity'' equations developed by Krauth, Mezard and Parisi, we calculate the associated random link values beta_RL(d). For d=1,2,3, numerical results show that the random link approximation is a good one, with a discrepancy of less than 2.1% between beta_E(d) and beta_RL(d). For large d, we argue that the approximation is exact up to O(1/d^2) and give a conjecture for beta_E(d), in terms of a power series in 1/d, specifying both leading and subleading coefficients.Comment: 29 pages, 6 figures; formatting and typos correcte
{"url":"https://core.ac.uk/search/?q=authors%3A(Boutet%20de%20Monvel%20A)","timestamp":"2024-11-02T19:16:10Z","content_type":"text/html","content_length":"152864","record_id":"<urn:uuid:d394cec4-7092-43ec-be52-d3a44829b399>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00127.warc.gz"}
Math Fun A+ Math Visit the Game Room, test your math knowledge with Flash Cards, and get help from the Homework Helper. Math Advantage Divided by grade level (k-8). Build dinosaurs and join Dr. Gee in the 3-D lab. This site is filled with Shockwave games that will make math exciting and fun. Dr. Math Need help finding that formula for geometry, or assistance with adding and subtracting fractions? Dr. Math is here to help. Basket Math Interactive The ball's in your court, the heat is on, the shot is yours. Solve the math problem to shoot and score. Explore Your Knowledge Test your knowledge in math and science and see how you score compared to other students from around the world. Create a Graph Use this site to quickly and easily create an area, bar, line, or pie graph. Just simply type in the numbers and titles and you have it. Math League Help Topics A great resource for finding math equations and operations and provides examples to see how they work. Volume and Shape Predict how high the juice will go when poured from one tank to another. Math Dictionary How exactly do you use an abacus? How long is a fortnight? What exactly is a leap year? … this and more. Times-Tables Practice Tests Tables Practice Tests - Practice your timetables. Choose from 2's to 10's. Math Puzzles Read Dr. Math's answers to students' tough story problems or ask him one of your own. Mad Math You choose the type of problems you want to do (add, subtract, multiply, divide). Many options make this site a helpful tool in practicing math.
{"url":"https://www.muensterlibrary.org/kids-teens/kid-s-corner/math-fun.html","timestamp":"2024-11-08T21:26:28Z","content_type":"application/xhtml+xml","content_length":"32572","record_id":"<urn:uuid:c15bd480-90f6-4db4-ab26-66540b76902d>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00016.warc.gz"}
[Solved] Use a graphing utility to verify any five | SolutionInn Use a graphing utility to verify any five of the graphs that you drew by hand in Use a graphing utility to verify any five of the graphs that you drew by hand in Exercises 1–26. Data from exercise 1-26 Transcribed Image Text: 1. x + 2y = 8 3. x2y> 10 2. 3x6y≤ 12 4. 2xy > 4 Fantastic news! We've Found the answer you've been seeking! Step by Step Answer: Answer rating: 75% (4 reviews) Remember that to graph an inequality treat the or sign as an sign and gra...View the full answer Answered By Rishabh Ojha During my undergraduate i used to participate as TA (Teaching Assistant) in several electronics and computers subject. I'm passionate about learning Computer Science as my bachelors are in Electronics but i learnt most of the Computer Science subjects on my own which Machine Learning also. At Present, i'm a working professional pursuing my career as a Machine Learning Engineer and i want to help others learn during my free hours, that's all the motivation behind giving tuition. To be frank i have no prior experience of tutoring but i have solved problems on opensource platforms like StackOverflow and github. ~Thanks 4.90+ 3+ Reviews 10+ Question Solved Students also viewed these Mathematics questions Study smarter with the SolutionInn App
{"url":"https://www.solutioninn.com/study-help/college-algebra-graphs-and-models/use-a-graphing-utility-to-verify-any-five-of-the-1099440","timestamp":"2024-11-03T02:37:56Z","content_type":"text/html","content_length":"80091","record_id":"<urn:uuid:02a227f6-02f4-41b9-a5f5-95ef77208bfe>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00444.warc.gz"}
Electromagnetic Field - Maxwell's Equations Electromagnetic Field Maxwell's Equations A description of the field from a current which changes in time is much more complicated, but is calculable owing to James Clerk Maxwell (1831–1879). His equations, which have unified the laws of electricity and magnetism, are called Maxwell's equations. They are differential equations which completely describe the combined effects of electricity and magnetism, and are considered to be one of the crowning achievements of the nineteenth century. Maxwell's formulation of the theory of electromagnetic radiation allows us to understand the entire electromagnetic spectrum, from radio waves through visible light to gamma rays. Additional topics
{"url":"https://science.jrank.org/pages/2361/Electromagnetic-Field-Maxwell-s-equations.html","timestamp":"2024-11-10T01:53:04Z","content_type":"text/html","content_length":"8037","record_id":"<urn:uuid:8266aeb6-d7bd-4947-bcad-3100a4386436>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00226.warc.gz"}
Comparing rigid boundary condition and pressure release boundary condition for modeling ultrasound transducers This example illustrates the difference between the rigid boundary condition (baffled source) and pressure release boundary condition for modeling ultrasound transducers. Currently, mSOUND uses the pressure release boundary condition, in which the planar transducer surface is assigned with a certain pressure distribution and the pressure is zero everywhere else on the input plane. When modeling a baffled source, where the rigid boundary condition is assumed, the transducer is assigned with a certain particle velocity distribution and the particle velocity in the normal direction is zero everywhere else. These two boundary conditions normally would produce different results mainly in the near-field, but the difference becomes less significant in the far-field. In this example, we use mSOUND to model a planar transducer with the pressure release boundary condition. We also use mSOUND to model the same transducer with the rigid boundary condition. In this case, the input plane pressure is obtained from FOCUS. The flat transducer has a diameter of 20 mm. Generating the grid structure to define the computational domain We first need to define the temporal and spatial computational domains in 3D forward simulations. In FSMDM, any arbitrary values can be set for the time step and temporal domain size, since they are not being used. Here, we set them to 0. We also define the background acoustic medium in this section. medium.c0 = 1500; % speed of Sound [m/s] medium.rho0 = 1000; % density of medium [kg/m^3] medium.ca0 = 0; % attenuation coefficient [dB/(MHz^y cm)]; 1 dB/cm at 500 kHz medium.cb0 = 2.0; % power law exponent dx = lambda/6; % step size in the x direction [m] dy = lambda/6; % step size in the y direction [m] dz = lambda/6; % step size in the z direction [m] x_length = 70e-3*4+dx; % computational domain size in the x direction [m] y_length = 70e-3*4+dx; % computational domain size in the y direction [m] z_length = 120e-3-19*dz; % computational domain size in the z direction [m] mgrid = set_grid(0, 0, dx, x_length, dy, y_length, dz, z_length); Excitation signal We assume pressure release boundary in the first simulation. In the 2nd simulation, the input plane pressure is generated by FOCUS, which uses the rigid boundary condition. The corresponding FOCUS code (input_plane_FOCUS_unfocused_transducer_code.m) can be found in the example folder. % set up pressure release boundary condition source_p = p0*ones(mgrid.num_x,mgrid.num_y); % set the pressure to be zero outside the source surface source_p(RHO>TR_radius) = 0; % load FOCUS input plane pressure load FOCUS_input_plane_unfocused_lossless_0.5mm.mat source_p1 = p_amp.*exp(1i*p_phase); Defining the medium properties Define the homogeneous medium. It is the same as the background medium. medium.c = medium.c0; % speed of sound [m/s] medium.rho = medium.rho0; % density [kg/m^3] medium.ca = medium.ca0; % attenuation coefficient [dB/(MHz^y cm)] medium.cb = medium.cb0; % power law exponent % setting the non-reflecting boundary layer medium.NRL_gamma = 0.1; medium.NRL_alpha = 0.02; 3D forward simulation The pressure field is calculated with the 3D forward simulation function Forward3D_fund. fc = 0.5e6; % ultrasound frequency/ Hz omega_c = 2*pi*fc; % angular frequency % forward propagation of the wave at the fundamental frequency %for the two boundary conditions [P_fundamental] = Forward3D_fund(mgrid, medium, source_p, omegac, 0 ,[],'NRL'); [P_fundamental1] = Forward3D_fund(mgrid, medium, source_p1, omegac, 0 ,[],'NRL'); The two figures below show the pressure field distribution along the axial direction and transverse direction (at 60 mm depth). The results obtained from FOCUS alone (blue line) are also shown for comparison. As expected, the difference between the two different boundary conditions mainly exist in the near field. This difference must be considered when comparing mSOUND with other solvers. Other examples ⮞Forward TMDM · Simulation of a 2D homogeneous medium using the transient mixed domain method · Simulation of a 2D heterogeneous medium using the transient mixed domain method · Simulation of a strongly 2D heterogeneous medium using the transient mixed domain method · Simulation of a 3D homogeneous medium using the transient mixed domain method · Selecting the proper temporal domain size for the TMDM · Shock wave simulations with TMDM ⮞Forward FSMDM · Simulation of a 2D homogeneous medium using the frequency-specific mixed domain method · Simulation of a 2D heterogeneous medium using the frequency-specific mixed domain method · Simulation of a 3D homogeneous medium using the frequency-specific mixed domain method · Simulation of a 3D heterogeneous medium using the frequency-specific mixed domain method · Reducing the spatial aliasing error using the non-reflecting layer · Comparing pressure release and rigid boundary conditions ⮞Backward Propagation · Image reconstruction using backward projection · Reconstruction of the source pressure distribution with FSMDM in a 3D homogeneous medium ⮞Integration with Other Simulators · Integrating mSOUND with k-Wave for transducers of arbitrary shape · Integrating mSOUND with FOCUS for transducers of arbitrary shape · Integrating mSOUND with k-Wave for thermal simulations
{"url":"https://m-sound.github.io/mSOUND/rigid_vs_pressure_release","timestamp":"2024-11-07T23:40:48Z","content_type":"text/html","content_length":"12024","record_id":"<urn:uuid:40199b79-3a4b-4ad9-ab2c-6fd8d1e5aade>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00799.warc.gz"}
How to Find Average Rate of Change of a Function - Your Step-by-Step Guide To find the average rate of change of a function, you should first identify two distinct points on the function and note their coordinates. The average rate of change is essentially the slope of the secant line that intersects the graph of the function at these points. In calculus, this concept helps us understand how a function‘s output value changes in response to changes in the input value over a certain interval. You calculate it using the formula $\frac{f(b) – f(a)}{b – a}$, where $a$ and $b$ are the input values at the two points and $f(a)$, $f(b)$ are the corresponding output values from the function. This computation will give you the rate of change per unit, on average, over the interval from $a$ to $b$. Remember, digging into this idea opens the door to predicting how things change over time or space in various scientific and mathematical contexts. Stay with me, and let’s explore the intriguing world of change together. Calculating Average Rate of Change of a Function When I want to measure how a function’s output changes relative to its input, I calculate its average rate of change over a specific interval. This is akin to finding the average speed of a car over a road trip. Step-by-step Procedure 1. Identify the Interval: Select the range of x values (the input) over which you want to determine the average rate of change. These are often referred to as the endpoints, ( a ) and ( b ) 2. Calculate Change in Output $\Delta y $: Find the function values at these endpoints, ( f(a) ) and ( f(b) ). The change in output, $\Delta y$, is ( f(b) – f(a) ). 3. Calculate Change in Input $ \Delta x $: The change in input, $\Delta x$, is ( b – a ). 4. Use the Slope Formula: The average rate of change is analogous to the slope of the secant line that connects the endpoints (( a, f(a) )) and (( b, f(b) )) on the function’s graph. Calculate the slope with the formula $\frac{\Delta y}{\Delta x}$. 5. Evaluate the Result: Insert the values into the slope formula to get $\frac{f(b) – f(a)}{b – a}$. The resulting value is your function’s average rate of change. For a tangible example, if I’m looking at a population change over time, the average rate of change tells me how much the population grew or diminished per year on average over a specific time frame. Step Operation Example Calculation 1 Select ( a ) and ( b ) ( a = 2000 ), ( b = 2010 ) 2 Find ( f(a) ) and ( f(b) ) Population ( f(a) = 50,000 ), ( f(b) = 70,000 ) 3 Calculate $ \Delta y$ $ \Delta y = 70,000 – 50,000 = 20,000 $ people 4 Calculate $\Delta x$ $ \Delta x = 2010 – 2000 = 10 $ years 5 Apply formula Average rate $ = \frac{20,000}{10} = 2,000 $ people/year Remember, the sign of the average rate of change implies whether the function is increasing, decreasing, or remaining constant over the interval. A positive value signifies an increasing trend, a negative one indicates a decreasing trend and a zero means the output is constant, no matter the change in input. In mastering the concept of average rate of change, I have learned to view functions dynamically. The average rate of change is akin to measuring the slope between two points on a graph. Specifically, it quantifies how the output of a function changes concerning changes in the input over an interval. To calculate, I use the formula: $$ \text{Average rate of change} = \frac{f(x_2) – f(x_1)}{x_2 – x_1} $$ Remember, $x_1 $ and $ x_2 $ are the input values, while $f(x_1) $ and $f(x_2) $ are the respective outputs from the function ( f(x) ). This formula gives me a precise rate at which the function moves from one point to another. By applying this knowledge, I can predict future behavior of a function within a certain interval, assuming the rate remains consistent. This is particularly useful in fields like physics for velocity, or economics for growth rates. I make sure to interpret the result with context; a positive rate indicates an increasing function, while a negative rate suggests a decrease over the selected interval. Understanding the average rate of change provides a solid foundation for further exploration in calculus, such as approaching the concept of instantaneous rate of change and eventually the derivative, which give me insights into how a function behaves at any given point.
{"url":"https://www.storyofmathematics.com/how-to-find-average-rate-of-change-of-a-function/","timestamp":"2024-11-04T14:10:04Z","content_type":"text/html","content_length":"137636","record_id":"<urn:uuid:ef5090dd-db18-434f-88ba-ef0de69acfbc>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00779.warc.gz"}
24,201 research outputs found In the context of an application to superfluidity, it is elaborated how to do quantum mechanics of a system with a rotational velocity. Especially, in both the laboratory frame and the non-inertial co-rotating frame, the canonical momentum, which corresponds to the quantum mechanical momentum operator, contains a part due to the rotational velocity.Comment: 2 page, comment on cond-mat/010435 Some thermodynamic properties of weakly interacting Bose systems are derived from dimensional and heuristic arguments and thermodynamic relations, without resorting to statistical mechanics The possibility of direct observation of Nonlinear Landau-Zener tunnelling effect with a device consisting of two waveguide arrays connected with a tilted reduced refractive index barrier is discussed. Numerical simulations on this realistic setup are interpreted via simplified double well system and different asymmetric tunnelling scenarios were predicted just varying injected beam intensity.Comment: 5 pages, 6 figure The dynamic structure factor of a normal Fermi gas is investigated by using the moment method for the Boltzmann equation. We determine the spectral function at finite temperatures over the full range of crossover from the collisionless regime to the hydrodynamic regime. We find that the Brillouin peak in the dynamic structure factor exhibits a smooth crossover from zero to first sound as functions of temperature and interaction strength. The dynamic structure factor obtained using the moment method also exhibits a definite Rayleigh peak ($/omega /sim 0$), which is a characteristic of the hydrodynamic regime. We compare the dynamic structure factor obtained by the moment method with that obtained from the hydrodynamic equations.Comment: 19 pages, 9 figure The Landau-Pomeranchuk-Migdal effects on photon emission from the quark gluon plasma have been studied as a function of photon mass, at a fixed temperature of the plasma. The integral equations for the transverse vector function (${\bf \tilde{f}(\tilde{p}_\perp)}$) and the longitudinal function ($\tilde{g}({\bf \tilde{p}_\perp})$) consisting of multiple scattering effects are solved by the self consistent iterations method and also by the variational method for the variable set \{$p_0,q_0,Q^2$\}, considering the bremsstrahlung and the $\bf aws$ processes. We define four new dynamical scaling variables, $x^b_T$,$x^a_T$,$x^b_L$,$x^a_L$ for bremsstrahlung and {\bf aws} processes and analyse the transverse and longitudinal components as a function of \{$p_0,q_0,Q^2$\}. We generalize the concept of photon emission function and we define four new emission functions for massive photon emission represented by $g^b_T$, $g^a_T$, $g^b_L$, $g^a_L$. These have been constructed using the exact numerical solutions of the integral equations. These four emission functions have been parameterized by suitable simple empirical fits. In terms of these empirical emission functions, the virtual photon emission from quark gluon plasma reduces to one dimensional integrals that involve folding over the empirical $g^{b,a}_{T,L}$ functions with appropriate quark distribution functions and the kinematic factors. Using this empirical emission functions, we calculated the imaginary part of the photon polarization tensor as a function of photon mass and energy.Comment: In nuclear physics journals and arxiv listings, my name used to appear as S.V.S. Sastry. Hereafter, my name will appear as, S.V. Suryanarayan We describe a new paradox for ideal fluids. It arises in the accretion of an \textit{ideal} fluid onto a black hole, where, under suitable boundary conditions, the flow can violate the generalized second law of thermodynamics. The paradox indicates that there is in fact a lower bound to the correlation length of any \textit{real} fluid, the value of which is determined by the thermodynamic properties of that fluid. We observe that the universal bound on entropy, itself suggested by the generalized second law, puts a lower bound on the correlation length of any fluid in terms of its specific entropy. With the help of a new, efficient estimate for the viscosity of liquids, we argue that this also means that viscosity is bounded from below in a way reminiscent of the conjectured Kovtun-Son-Starinets lower bound on the ratio of viscosity to entropy density. We conclude that much light may be shed on the Kovtun-Son-Starinets bound by suitable arguments based on the generalized second law.Comment: 11 pages, 1 figure, published versio We use an almost model-independent analytical parameterization for $pp$ and $\bar{p}p$ elastic scattering data to analyze the eikonal, profile, and inelastic overlap functions in the impact parameter space. Error propagation in the fit parameters allows estimations of uncertainty regions, improving the geometrical description of the hadron-hadron interaction. Several predictions are shown and, in particular, the prediction for $pp$ inelastic overlap function at $\sqrt{s}=14$ TeV shows the saturation of the Froissart-Martin bound at LHC energies.Comment: 15 pages, 16 figure High-resolution magneto-optical technique was used to analyze flux patterns in the intermediate state of bulk Pb samples of various shapes - cones, hemispheres and discs. Combined with the measurements of macroscopic magnetization these results allowed studying the effect of bulk pinning and geometric barrier on the equilibrium structure of the intermediate state. Zero-bulk pinning discs and slabs show hysteretic behavior due to geometric barrier that results in a topological hysteresis -- flux tubes on penetration and lamellae on flux exit. (Hemi)spheres and cones do not have geometric barrier and show no hysteresis with flux tubes dominating the intermediate field region. It is concluded that flux tubes represent the equilibrium topology of the intermediate state in reversible samples, whereas laminar structure appears in samples with magnetic hysteresis (either bulk or geometric). Real-time video is available in http://www.cmpgroup.ameslab.gov/supermaglab/video /Pb.html NOTE: the submitted images were severely downsampled due to Arxiv's limitations of 1 Mb total size A sensitive polarization modulation technique uses photoelastic modulation and hetrodyne detection to simultaneously measure the Faraday rotation and induced ellipticity in light transmitted by semiconducting and metallic samples. The frequencies measured are in the mid-infrared and correspond to the spectral lines of a CO2 laser. The measured temperature range is continuous and extends from 35 to 330K. Measured samples include GaAs and Si substrates, gold and copper films, and YBCO and BSCCO high temperature superconductors.Comment: 12 pages of text, 6 figures, fixed typos in formulas, added figur A detailed study is carried out for the relativistic theory of viscoelasticity which was recently constructed on the basis of Onsager's linear nonequilibrium thermodynamics. After rederiving the theory using a local argument with the entropy current, we show that this theory universally reduces to the standard relativistic Navier-Stokes fluid mechanics in the long time limit. Since effects of elasticity are taken into account, the dynamics at short time scales is modified from that given by the Navier-Stokes equations, so that acausal problems intrinsic to relativistic Navier-Stokes fluids are significantly remedied. We in particular show that the wave equations for the propagation of disturbance around a hydrostatic equilibrium in Minkowski spacetime become symmetric hyperbolic for some range of parameters, so that the model is free of acausality problems. This observation suggests that the relativistic viscoelastic model with such parameters can be regarded as a causal completion of relativistic Navier-Stokes fluid mechanics. By adjusting parameters to various values, this theory can treat a wide variety of materials including elastic materials, Maxwell materials, Kelvin-Voigt materials, and (a nonlinearly generalized version of) simplified Israel-Stewart fluids, and thus we expect the theory to be the most universal description of single-component relativistic continuum materials. We also show that the presence of strains and the corresponding change in temperature are naturally unified through the Tolman law in a generally covariant description of continuum mechanics.Comment: 52pages, 11figures; v2: minor corrections; v3: minor corrections, to appear in Physical Review E; v4: minor change
{"url":"https://core.ac.uk/search/?q=authors%3A(L.%20D.%20Landau)","timestamp":"2024-11-11T14:43:06Z","content_type":"text/html","content_length":"180038","record_id":"<urn:uuid:4993991d-94b2-427f-8d77-e8f667a5cb6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00513.warc.gz"}
Algebra Homework Help: How to get Top Algebra Homework Help Today Struggling with algebra homework? Tutor Pace is ready to help you. Our algebra homework help is truly what you are looking for. Algebra homework help: 24/7 help from us Tell us an area in which you need help and we’ll provide you our algebra experts right away. Feel free to learn anytime you want. You can connect with our algebra experts 24/7. Whether late night or early morning, you’ll get an expert to help in your homework. Ask your algebra homework problems and get solutions as and when you want. Our algebra experts are the masters of algebra 1 and algebra 2. They’ll help you in every algebra problem. Algebra tutoring: Fully featured one-on-one private algebra tutoring Get an individual attention directly from the expert. Ask doubts, know solutions, and learn concepts in private tutoring sessions. Work on one-to-one basis with our algebra experts. Enjoy extra help in your homework and assignments from the tutors. Get academic project guidance at your ease. Learn with comfort in our highly interactive virtual classrooms. Use the interactive tool of whiteboard to work on your problems in real time. Utilize tutor chat to convey your messages and get feedback from the tutor. Cover each topic ranging from algebraic equations to expressions. Study better with the help of algebra worksheets. Practice algebra problems for your upcoming algebra exams. Online algebra tutor: Our expert online algebra tutor is a lot of help Our online algebra tutor teaches students from elementary level to college level. Be it your basics or advanced concepts, our experts got you covered. Tutor Pace’s algebra tutor helps you with: • Completing your algebra homework • Working on algebra problems • Preparing for algebra tests • Studying for entrance exams • Finishing your algebra assignments Start getting great algebra grades- Get algebra homework help from Tutor Pace now!
{"url":"https://freeonlinetutoring.edublogs.org/2013/12/11/algebra-homework-help-how-to-get-top-algebra-homework-help-today/","timestamp":"2024-11-12T02:12:45Z","content_type":"text/html","content_length":"46145","record_id":"<urn:uuid:7f51630a-2c01-4684-8d99-0e7fbed1af6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00533.warc.gz"}
\mathrm{\\{} P r o b a b i l i t y ~ T h e ~ m e d i a n ~ w a i t i n g ~ t i m e ~ ( i n ~ m i n u t e s ) ~ f o r ~ p e o p l e ~ waiting for service in a convenience store is given by the solution of the equation \(\int_{0}^{x} 0.3 e^{-0.3 t} d t=\frac{1}{2}\) Solve the equation. Short Answer Expert verified The solution to the equation, or the median waiting time, is \(x = \frac{ln(1/2)}{-0.3}\) minutes. Step by step solution Identify the Function The function given is the integral from 0 to x of \(0.3e^{-0.3t}\) dt, which needs to equal \(1/2\). This function is related to the cumulative distribution function (CDF) of an exponential Compute the Integral The integral of \(0.3e^{-0.3t}\) dt is given as -e^{-0.3t}. So, evaluate it from 0 to x: -e^{-0.3x} - (-e^{0}) = -e^{-0.3x} + 1. Set Up the Equation -e^{-0.3x} + 1 = 1/2. This equation resulted from setting the cumulative distribution function equal to 1/2, which gives us the median waiting time. Solve for x First, subtract 1 from both sides to get -e^{-0.3x} = -1/2. Since we have a negative on both sides, we can remove these negatives, so we get e^{-0.3x} = 1/2. Then to get rid of the exponential, we'll take the natural logarithm (ln) of both sides of our equation: ln(e^{-0.3x}) = ln(1/2), which simplifies to -0.3x = ln(1/2). Finally, divide both sides by -0.3: x = ln(1/2) / -0.3. Key Concepts These are the key concepts you need to understand to accurately answer the question. Cumulative Distribution Function The Cumulative Distribution Function, often abbreviated as CDF, is a vital concept in statistics and probability, particularly when dealing with random variables and their distributions. Essentially, the CDF of a random variable X gives the probability that X will take a value less than or equal to x. Mathematically, it is represented as \( F(x) = P(X \leq x) \). The CDF is designed to help us understand the distribution of data, providing a graphical representation that starts from 0 and progresses to 1. In our exercise, the function \( \int_{0}^{x} 0.3 e^{-0.3 t} dt = \frac{1}{2} \) represents the CDF of an exponential distribution. The purpose here is to find the value of x (in this case, the median waiting time) where the probability reaches 0.5, or 50%. This means there's an equal chance that the waiting time will be less or greater than this median value. The CDF is cumulative because it adds up probabilities from the start up to a point, providing a complete picture of how probabilities accrue over a range of values, which is essential in determining measures like the median. Exponential Distribution The Exponential Distribution is a continuous probability distribution that is often used to model waiting times or the time until the next event occurs. It is characterized by the rate parameter \( \ lambda \), which is a measure of how frequently events occur. In our problem, the parameter is 0.3, as seen in the expression \( 0.3e^{-0.3t} \). This function is pivotal as it describes the likelihood of waiting for a certain time before an event happens, such as being served in a store. One crucial property of the exponential distribution is that it is memoryless. This means the probability of an event occurring in the future is independent of past events. Another property is its relationship with the Poisson distribution, often used for counting processes, whereas the exponential distribution is concerned with timing. The median of an exponential distribution can be found through integration and involves setting the cumulative distribution function (CDF) to 0.5. This approach presents us with the median waiting time point, demonstrating the significance of understanding these key mathematical properties. Probability Integration Probability integration involves the integration of probability density functions (PDFs) to find cumulative probabilities. In simpler terms, it is the process of summing up all the little probabilities represented by the function, up to a certain point, to find total probability—a central concept in calculating values like medians, percentiles, and more. In our exercise, we integrated the function \( 0.3e^{-0.3t} \) over the interval from 0 to x. This integral provides a cumulative probability, which helps us find specific values such as the median by requiring that integral to equal 0.5 (as median is the midpoint of data in distributions). Applying probability integration requires understanding the properties of the functions involved as well as calculus. Solutions often involve crucial steps such as evaluating the integral limits and using tools like logarithms to solve for desired parameters. Through probability integration, we can derive significant statistical measures fundamental for decision-making processes and deeper
{"url":"https://www.vaia.com/en-us/textbooks/math/calculus-8-edition/chapter-5/problem-120-mathrm-p-r-o-b-a-b-i-l-i-t-y-t-h-e-m-e-d-i-a-n-w/","timestamp":"2024-11-14T14:37:28Z","content_type":"text/html","content_length":"252811","record_id":"<urn:uuid:0dbe7384-9216-43ce-a657-6d45128287dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00745.warc.gz"}
CGPA Calculator | An Effective Way to Calculate Your GPA CGPA Calculator The CGPA calculator is one of the most demanding tools among University students who are conscious and serious about their academic progress. I, being a graduate of a very famous university in the information technology sector, always used to calculate my cumulative grade point average in each semester. It helps to set our goal for the semester GPA. CGPA conversion table Calculating CGPA manually can be an inconvenient and confusing process for many students. Before getting into the university, we are familiar with marks, grades, or maybe percentages, but the point system and credit hours are confusing. So, I thought there should be a table or mapping between numbers, grades, and points. I will share a general mapping table between numbers, grades, and points in this sub-section. Number Grade Point 80% and above A+ 4.00 75% to less than 80% A 3.75 70% to less than 75% A- 3.50 65% to less than 70% B+ 3.25 60% to less than 65% B 3.00 55% to less than 60% B- 2.75 50% to less than 55% C+ 2.50 45% to less than 50% C 2.25 40% to less than 45% D 2.00 <40% Less than 40% F 0.00 In the table above, we have percentages in the first column, grades in the second column, and points in the third column. If we get 80% or above marks, then we’ll get an A+ grade, and our CGPA will be 4.00 points. Once you have your CGPA, you can use the CGPA to Percentage calculator to find your equivalent percentage. How to Use CGPA Calculator Using an online calculator is quite an easy task. We just need to enter the required information and let the calculator do magic for us. The calculator we provided above takes your GPAs and then calculates the CGPA. What is the formula to calculate CGPA? We can calculate CGPA from GPAs by taking the average of GPA earned in all semesters. Below is the generic formula for calculating CGPA from GPA: CGPA = Sum of GPAs/ Number of semesters studied How to calculate CGPA for engineering? Most engineering colleges use letter grades and CGPA to assess their students’ performance. You can calculate CGPA for engineering by summing the all semester GPAs and dividing them by the total number of semesters. How do Pakistani universities calculate CGPA for their students? Pakistani universities like GCUF, UET, Comsats, FAST (NUCES), GCU Lahore, GIKI, GIFT University, IIUI, Minhaj University, and UOL use 4 point grade scale and use the same formula that I have given above. You can use the calculator above to find sgpa to CGPA.
{"url":"https://thecgpatopercentage.com/cgpa-calculator/","timestamp":"2024-11-03T15:15:12Z","content_type":"text/html","content_length":"92844","record_id":"<urn:uuid:f3ed3209-2dca-4d9a-8935-b30199f6ff58>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00778.warc.gz"}
8 Bit Comparator Using 74LS85 8-Bit Magnitude Comparator - Comparators with three output terminals and checks for three conditions i.e greater than or less than or equal to is magnitude comparator. The Comparator is another very useful combinational logic circuit used to compare the value of two binary digits. A magnitude digital Comparator is a combinational circuit that compares two digital or binary numbers in order to find out whether one binary number is equal, less than or greater than the other binary number. We logically design a circuit for which we will have two inputs one for A and another for B and have three output terminals, one for A > B condition, one for A = B condition and one for A < B condition. Digital Magnitude Comparators are made up from standard AND, NOR and NOT gates that compare the digital signals present at their input terminals and produce an output depending upon the condition of those inputs. An 8-bit comparator compares the two 8-bit numbers by cascading of two 4-bit comparators. The circuit connection of this comparator is shown below in which the lower order comparator A<B, A=B, and A> B outputs are connected to the respective cascade inputs of the higher-order comparator. For the lower order comparator, the A=B cascade input must be connected High, while the other two cascading inputs A, B must be connected to LOW. The outputs of the higher-order comparator become the outputs of this eight-bit comparator.
{"url":"https://deldsim.com/study/material/41/8-bit-comparator-using-74ls85/","timestamp":"2024-11-06T01:42:31Z","content_type":"text/html","content_length":"18085","record_id":"<urn:uuid:49d3714a-4355-4f6f-a44e-decac98704b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00893.warc.gz"}
A Comprehensive Math Workbook for Grade 2 Original price was: $20.99.Current price is: $15.99. A Comprehensive Math Workbook for Grade 2 Embark on a mathematical adventure with the "A Comprehensive Math Workbook for Grade 2," an invaluable tool for parents and educators seeking to strengthen a child’s mathematical foundation. This meticulously designed workbook is tailored to address the pivotal mathematical concepts that form the cornerstone of a second grader’s academic success. +1K Downloads Teacher's Choice 100% Guaranteed Secure Checkout Lifetime Support There are no reviews yet. Effortless Math: We Help Students Learn to LOVE Mathematics - © 2024
{"url":"https://www.effortlessmath.com/product/a-comprehensive-math-workbook-for-grade-2/","timestamp":"2024-11-13T07:40:08Z","content_type":"text/html","content_length":"46388","record_id":"<urn:uuid:cea3552a-fb65-4d4b-a830-d1aa07237de2>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00846.warc.gz"}
NEET AIPMT Physics Chapter Wise Solutions - Motion in a Straight Line | Recruitment Topper NEET AIPMT Physics Chapter Wise Solutions – Motion in a Straight Line NEET AIPMT Physics Chapter Wise SolutionsChemistryBiology 1. A particle of unit mass undergoes one¬dimensional motion such that its velocity varies according to v(x) = βx^-2n, where β and n are constants and x is the position of the particle. The acceleration of the particle as a function of x, is given by (a) -2β^2 x ^2n+1 (b) -2nβ^2e^ -4n+1 (c) -2nβ^2 x ^2n-1 (d) -2nβ^2x^-4n-1 (AIPMT 2015, Cancelled) 2. A stone falls freely under gravity. It covers distances h[1], h[2 ]and h[3 ]in the first 5 seconds, the next 5 seconds and the next 5 seconds respectively. The relation between h[1], h[2 ]and h [3 ]is (a) h2 = 3h[1] and h3 = 3h2 (b) h[1]= h2= h3 (C) h[1] = 2h2 = 3h3 (d) h[1]= $\frac { h2}{ 3 }$ =$\frac { h3 }{ 5 }$ (NEET 2013) 3. The displacement ‘x’ (in meter) of a particle of mass ‘m’ (in kg) moving in one dimension under the action of a force, is related to time t(in sec) by t = $\sqrt { x }$ + 3 The displacement of the particle when its velocity is zero, will be (a) 4m (b) 0m(zero) (c) 6 m (d) 2 m (Karnataka NEET 2013) 4. The motion of a particle along a straight line is described by equation x = 8 + 12t – t^3 where x is in metre and / in second. The retardation of the particle when its velocity becomes zero is (a) 24 ms^-2 ^ (b) zero (c) 6ms^-2 (d) 12ms^-2 (Prelims 2012) 5. A boy standing at the top of a tower of 20 m height drops a stone. Assuming g= 10 m s^-2 , the velocity with which it hits the ground is (a) 10.0 m/s (b) 20.0 m/s (c) 40.0 m/s (d) 5.0 m/s(Prelims 2011) 6. A particle covers half of its total distance with speed v[1] and the rest half distance with speed v2. Its average speed during the complete journey is 7. A particle moves a distance x in time / according to equation x = (/ + 5)^-1. The acceleration of particle is proportional to (a) (velocity)^3/2 (b) (distance)2 (c) (distance) 2 (d) (velocity)^2/3 (Prelims 2010) 8. A ball is dropped from a high rise platform at t = 0 starting from rest. After 6 seconds another ball is thrown downwards from the same platform with a speed v. The two balls meet at t = 18 s. What is the value of v? (Takeg= 10 m/s^2) (a) 75 m/s (b) 55 m/s (c) 40 m/s (d) 60 m/s ( Prelims 2010) 9. A particle starts its motion from rest under the action of a constant force. If the distance covered in first 10 seconds is 5, and that covered in the first 20 seconds is S2 , then (a) S2 = 3s[1 ](b)S[2] = 4s[1] (c) S[2]=s[1 ](d) S[2] = 2s[1 ](Prelims 2009) 10. A bus is moving with a speed of 10 ms^-1 on a straight road. A scooterist wishes to overtake the bus in 100 s. If the bus is at a distance of 1 km from the scooterist, with what speed should the scooterist chase the bus? (a) 40 ms^-1 (b) 25 ms^-1 (c) W) ms^-1 (d) 20 ms^-1 (Prelims 2009) 11. A particle moves in a straight line with a constant acceleration. It changes its velocity from 10 ms^-1 to 20 ms^-1 while passing through a distance 135 m in t second. The value of t is (a) 12 (b) 9 (c) 10 (d) 1.8 (Prelims 2008) 12. The distance travelled by a particle starting from 4-2 rest and moving with an acceleration -j ms , in the third second is (a) Ym (b) Tm (c) 6m (d) 4m (Prelims 2008 ) 13. A particle moving along x-axis has acceleration f, at time t, given by f= fo($1-\frac { t }{ T }$) , where f0 and T are constants. The panicle at t = 0 has zero velocity. In the time interval between t = 0 and the instant when t= 0, the particle’s velocity (v[x]). 14. A car moves from X to Y with a uniform speed vu and returns to Y with a uniform speed vd. The average speed for this round trip IS 15. The position x of a particle with respect to time t along x-axis is given by x = 9^12 – t^3 where x is in metres and t in seconds. What will be the position of this particle when it achieves maximum speed along the +x direction? (a) 54 m (b) 81m (c) 24m (d) 32m. (2007) 16. Two bodies A (of mass 1 kg) and B (of mass 3 kg) are dropped from heights of 16 m and 25 m, respectively. The ratio of the time taken by them to reach the ground is (a) 4/5 (b) 5/4 (c) 12/5 (d) 5/12. (2006) 17. A car runs at a constant speed on a circular track of radius 100 m, taking 62.8 seconds for every circular lap. The average velocity and average speed for each circular lap respectively is (a) 10 m/s, 0 (b) 0, 0 (c) 0, 10 m/s (d) 10 m/s, 10 m/s. (2006) 18. A particle moves along a straight line OX. At a time t (in seconds) the distance x (in metres) of the particle from O is given by x = 40 + 12t^2— t^3. How long would the particle travel before coming to rest? (a) 16 m (b) 24 m (c) 40 m (d) 56 m. (2006) 19. A ball is thrown vertically upward. It has a speed of 10 m/sec when it has reached one half of its maximum height. How high does the ball rise? Take g = 10 m/s^2 (a) 10 m (b) 5 m (c) 15 m (d) 20 m. (2005) 20. The displacement x of a particle varies with time t as x =ae^-αt+ be^βt where a,b, a and P are positive constants. The velocity of the particle will (a) be independent of β (b) drop to zero when α = β (c) go on decreasing with time (d) go on increasing with time. (2005) 21. A man throws balls with the same speed vertically upwards one after the other at an interval of 2 seconds. What should be the speed of the throw so that more than two balls are in the sky at any time ? (Given g = 9.8 m/s^2) (a) more than 19.6 m/s (b) at least 9.8 m/s (c) any speed less than 19.6 m/s (d) only with speed 19.6 m/s. (2003) 22. If a ball is thrown vertically upwards with speed u, the distance covered during the last t seconds of its ascent is 23. A particle is thrown vertically upward. Its velocity at half of the height is 10 m/s, then the maximum height attained by it (g = 10 m/s^2) (a) 8 m (b) 20 m (c) 10 m (d) 16 m. (2001) 24. Motion of a particle is given by equation i = (3t^3 + 7t^2– + 14t + 8) m. The value of acceleration of the particle at / = 1 sec is (a) 10 m/s^2 (b) 32 m/s^2 (c) 23 m/s^2 (d) 16 m/s^2. (2000) 25. A car moving with a speed of 40 km/h can be stopped by applying brakes after at least 2 m. If the same car is moving with a speed of 80 km/h, what is the minimum stopping distance? (a) 4 m (b) 6 m (c) 8m (d) 2 m. (1998) 26. A rubber ball is dropped from a height of 5 m on a plane. On bouncing it rises to 1.8 m. The ball loses its velocity on bouncing by a factor of 27. The position x of a particle varies with time, (t) as x = at^2 – bt^3. The acceleration will be zero at time t is equal to 28. If a car at rest accelerates uniformly to a speed of 144 km/h in 20 sec, it covers a distance of (a) 1440 cm (b) 2980 cm (c) 20m (d) 400m. (1997) 29. A body dropped from a height h with initial velocity zero, strikes the ground with a velocity 3 m/s. Another body of same mass dropped from the same height h with an initial velocity of 4 m/s. The final velocity of second mass, with which it strikes the ground is (a) 5 m/s (b) 12 m/s (c) 3 m/s (d) 4 m/s. (1996) 30. The acceleration of a particle is increasing linearly with time t as bt. The particle starts from origin with an initial velocity V[0]. The distance travelled by the particle in time t will be 31. The water drop fells at regular intervals from a tap 5 m above the ground. The third drop is leaving the tap at instant the first drop touches the ground. How far above the ground is the second drop at that instant? (a) 3.75 m (b) 4.00 m (c) 1.25 m (d) 2.50m. (1995) 32. A car accelerates from rest at a constant rate a for some time after which it decelerates at a constant rate p and comes to rest. If total time elapsed is t, then maximum velocity acquired by car will be 33. A particle moves along a straight line such that its displacement at any time t is given by s = (t^3 – 6t^2 + 3t+ 4) metres. The velocity when the acceleration is zero is (a) 3 m/s (b) 42 m/s (c) – 9 m/s (d) – 15 m/s. (1994) 34. The velocity of train increases uniformly from 20 km/h to 60 km/h in 4 hours. The distance travelled by the train during this period is (a) 160km (b) 180km (c) 100km (d) 120km. (1994) 35. The displacement-time graph of a moving particle is shown below. The instantaneous velocity of the particle is negative at the point (a) E (b) F (c) C (d) D. (1994) 36. A body starts from rest, what is the ratio of the distance travelled by the during the 4^th and 3^rd second ? (a)$\frac { 7 }{ 5 }$ (b)$\frac { 5 }{ 7 }$ (c)$\frac { 7 }{ 3 }$ (d)$\frac { 3 }{ 7 }$(1993) 37. Which of the following curve does not represent motion in one dimension ? 38. A body dropped from top of a tower fall through 40 m during the last two seconds of its fall. The height of tower is (g = 10 m/s^2) (a) 60 m (b) 45 m (c) 80 m (d) 50 m. (1992) 39. A car moves a distance of 200 m. It covers the first half of the distance at speed 40 km/h and the second half of distance at speed v. The average speed is 48 km/h. The value of v is (a) 56 km/h (b) 60 km/h (c) 50 km/h (d) 48 km/h. (1991) 40. A bus travelling the first one-third distance at a speed of 10 km/h, the next one-third at 20 km/h and at last one-third at 60 km/h. The average speed of the bus is (a) 9 km/h (b) 16 km/h (c) 18 km/h (d) 48 km/h. (1991) 41. A car covers the first half of the distance between two places at 40 km/h and another half at 60 km/h. The average speed of the car is (a) 40 km/h (b) 48 km/h (c) 50 km/h (d) 60 km/h. (1990) 42. What will be the ratio of the distance moved by a freely falling body from rest in 4^th and 5^th seconds of journey ? (a) 4:5 (b) 7 : 9 (c) 16:25 (d) 1:1. (1989) 43. A car is moving along a straight road with a uniform acceleration. It passes through two points P and Q separated by a distance with velocity 30 km/ h and 40 km/h respectively. The velocity of the car midway between P and Q is (a) 33.3 km/h (b) 20$\sqrt { 2 }$ km/h (c) 25$\sqrt { 2 }$ km/h (d) 35 km/h. (1988)
{"url":"http://www.recruitmenttopper.com/neet-aipmt-physics-chapter-wise-solutions-motion-straight-line/10333/","timestamp":"2024-11-12T09:42:47Z","content_type":"text/html","content_length":"99232","record_id":"<urn:uuid:7c49622d-40af-4c40-9c3e-1e44c8531c33>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00333.warc.gz"}
As you noticed, the prices of financial assets are determined by expectations of investors. The generalizing formula is: when Pr – current and past prices; The forming of price expectations is both backward and forward looking process. Expectations formed by looking back are called adaptive expectations. This experience is measured as a weighted average of past values because the recent past (say, the last one to two years) is more significant than the more distant past. Thus, recent years are weighted more heavily than earlier years. For example, if the rate of inflation has been 10 % per year for 7 years and fell down to 8 % in the most recent 2 years, the expected inflation rate in the coming year will be closer to 8 than to 10 It is clearly that expectations of future prices can’t be based only on last experience. Expectations formed by looking both backward and forward using all available information, are called rational The theory of rational expectations states that expectations of financial prices on average are equal to the optimal forecast. The optimal forecast is the best guess possible arrived at by using all available information both from the past and about the future. But even if a forecast is rational, there is no guarantee that the forecast will be accu­rate. At first, there may be one or more additional key factors that are relevant but not available at the time the opti­mal forecast is made. If the information is not available, then the forecast may be inaccurate. However, it is still rational because the decision maker uses all available information. At second, there are lags between the time that information becomes available and when it is fully incorporated into expectations. For example, as soon as market participants have come to better understand the process of inflation thanks to publications of Central Banks, the lag in adjusting expectations has sufficiently shortened. Thus, in any given time period, it is possible to predict that the forecast error (or the difference between the actual value and the forecast) will exist. But on average it will be zero. The efficient markets hypothesis is built on the theory of rational expectations. Namely, when financial markets are in equilibrium, the prices of financial instruments reflect all readily available information. Financial markets are in equilibrium when the quantity de­manded of any security is equal to the quantity supplied of that security. Returns reflect only differences in risk and liquidity. In an efficient market, the optimal forecast of a security's price (made by using all available informa­tion) will be equal to the equilibrium price. Let's look once again at the share of stock from the previous example. Suppose that an equi­librium return is 8 percent after adjusting for risk and liquidity (6% expected return on bonds + 2% risk premium = 8%). The return in terms of money in a given time period is equal to any dividend payment made during the time period plus the price of the stock at the end of the time period minus the price at the begin­ning of the time period. To express this return as a percentage, we need to divide the total return by the price at the beginning of the period, as in equation (4.2): where D – dividend payments made during the period; P[t] – price at the beginning of the time period; P[t+1] –price at the end of time period. If at the beginning of the time period we know the price and dividend payment of the stock, then the only unknown variable is the price of the instrument at the end of the time period (P[t+1]). The efficient markets hypothesis assumes that the ex­pected or forecasted price of the stock at the end of the time period will be equal to the optimal forecast subject to using all available information. Thus, the exploring of the Equation (4.6) is suitable. Let's assume now that Company-issuer announces new profit numbers that raise the expected price of the instrument at the end of the time period, for example, up to 1200 ˆ. The question is how today's price is respond to the new higher expected price in the future. Because having added new data in a formula we will receive an inequality: Assuming that the risk and liquidity of the financial asset have not changed and that the equilibrium return (based on that risk and liquidity) of stock is 8 percent, the pre­sent price will adjust so that given the new expected price, the return will still be 8 percent. P[t] Þ Thus, the conclusion is that current price will rise to a level where the optimal forecast of an instrument's return is equal to the instrument's equilibrium return. Thus, the current price will immediately rise to 1185.19 ˆ, given the new higher expected price of 1200 ˆ. When the current price is 1185.19 ˆ, the expected return will be equal to 8 percent. At a price lower than 1185.19 ˆ, the expected return would be higher. For example, at the original price of 1000 ˆ, the expected return would be 28 per­cent. Funds would flow in this market by investors seeking the higher than equilibrium return of 8 percent based on risk and liq­uidity. As funds flowed in, the price of stock would rise. Funds would keep flowing into the market, pushing the price up until the market returned to equilibrium. This occurs at a price of 1185.19 because The efficient markets hypothesis states that the prices of all financial instruments are based on the optimal forecast obtained by using all available information. A stronger version of the efficient markets hypothesis states that the prices of all financial instru­ments reflect the true fundamental value of the instruments. Thus, not only do prices reflect all available information but also this information is accurate, complete, under­stood by all, and reflects the market fundamentals. Market fundamentals are factors that have a direct effect on future income streams of the instruments. These factors include the value of the assets and the expected income streams of those assets on which the fi­nancial instruments represent claims. Thus, if markets are efficient, prices are correct in that they represent underlying fundamentals. In the less-stringent version of the hypoth­esis, the prices of all financial instruments do not necessarily represent the fundamental value of the instrument. There have been extraordinary run-ups and collapses of stock or bond prices, known as bubbles, which do not seem to be related to market fundamentals. Such run-ups in stock prices have occurred in Japan in the late 1980s and more recently in the United States in the late 1990s. Some economists point out such bubbles in financial mar­kets can still be explained by rational expectations. It may be rational to buy a share of stock at a high price if it is thought that there will be other investors in the future who would be willing to pay inflated prices (prices that exceed those based on market funda­mentals) for the stock. This phenomenon is sometimes called "the greater fool" theory. Other economists suspect that financial market prices may overreact before reach­ing equilibrium when there is a change in either supply or demand. That is, prices may rise or fall (overshoot or undershoot) more than market fundamentals would justify be­fore setting down to the price based on fundamentals. In these cases, it may be possible for investors to earn above average returns or to experience above average losses. [1] Note that if you are not going to sell the stock the last summand n goes to infinity. To solve it for current price you need to set the limit of summation (for example it could be your expected age of surviving). [2] Remind that the coefficient of a variation (CV) is a standardized measure of risk per unit of return: i, i. [3] Payments on bonds are made before payments to shareholders. Date: 2014-12-22; view: 1299 <== previous page | next page ==> INFLUENCE OF EXPECTED RATE OF RETURN ON STOCK AND BOND PRICES | American Gift-Giving Customs
{"url":"https://doclecture.net/1-1284.html","timestamp":"2024-11-08T14:31:49Z","content_type":"text/html","content_length":"15662","record_id":"<urn:uuid:aa0f3ae7-ec6e-4693-b86e-956a57c37904>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00558.warc.gz"}
Title : Exact and Approximation Algorithms for Computing Connected f-Factors Speaker : Rahul C S (IITM) Details : Tue, 17 Feb, 2015 3:00 PM @ BSB 361 Abstract: : Given an undirected graph G=(V,E) and a function f:V->N, an f-factor H is a spanning subgraph such that dH(v)=f(v) for every v in V. The problem of computing an f-factor is polynomial time solvable. We consider the problem of computing a connected f-factor. The Hamiltonian cycle problem is a special case of connected f-factor problem. Even when f(v)=d for every v in V and some constant d, the problem is shown to be NP-hard. When f(v)>=|V|/2 for every v in V, the problem is polynomial time solvable. Motivated by the observation that the hardness of the problem vary along with change in f, we attempt to explore the spectrum of values f and come up with a dichotomy result on connected f-factors. Further we come up with exact and approximation algorithms for optimization version and special cases of the same. We use the techniques from the literature to show that the problem of computing connected f-factor is hard even when f(v)>=|V|^(1-ε) for some constant 0<ε<1. At the other end of the spectrum, we come up with algorithms for the problem of computing connected f-factor when f(v)>=|V|/2.5 for every v in V and for the case where f(v)>=|V|/3 for every v in V. Both these algorithms can be extended to solve the optimization versions of the same. As a special case, we consider the metric version of the problem and give a 3-approximation algorithm. For the case where f(v)>=|V|/c for every v and for some constant c, we give a (1+ε)-approximation algorithm.
{"url":"https://cse.iitm.ac.in/seminar_details.php?arg=MzU=","timestamp":"2024-11-07T17:19:56Z","content_type":"application/xhtml+xml","content_length":"14441","record_id":"<urn:uuid:82f56f49-e82a-493b-9a28-276c31264583>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00155.warc.gz"}
How to Find the Slope of a Regression Line in Excel (3 Easy Ways) - ExcelDemy What Is the Slope of a Regression Line? A regression line generally shows the connection between some scatter data points from a dataset. The equation for a regression line is: • m = Slope of the Regression Line. • B = Y-Intercept. You can also use the following formula to find the slope of a regression line: m = ∑(x-µ[x])*(y-µ[y])/∑(x-µ[x])² • µ[x]= Mean of known x values. • µ[y]= Mean of known y values. How to Find the Slope of a Regression Line in Excel: 3 Easy Ways We have the following dataset, containing the Month, Advertisement Cost, and Sales. Method 1 – Use an Excel Chart to Find the Slope of a Regression Line Step 1 – Insert a Scatter Chart • Select the data range with which you want to make the chart. • Go to the Insert tab from the Ribbon. • Select Insert Scatter or Bubble Chart. • A drop-down menu will appear. • Select Scatter. • This inserts a Scatter Chart for your selected data. • Change the Chart Title. • We have changed the Chart Title. Step 2 – Add a Trendline • Select the chart. • Select Chart Elements. • Check the Trendline option. Step 3 – Display the Trendline Equation on the Chart and Find the Slope • Right-click on the Trendline. • Select Format Trendline. • The Format Trendline task pane will appear on the right side of the screen. • Select the Trendline Options tab. • Check the Display Equation on chart option. • You will be able to see the equation for the Trendline on the chart. • Determine the Slope from the equation (the part before the x) and write it down in your preferred location. Read More: How to Find Instantaneous Slope on Excel Method 2 – Apply the SLOPE Function to Calculate the Slope of a Regression Line in Excel • Select the cell where you want the Slope. We selected Cell C12. • Insert the following formula. • Press Enter to get the result. In the SLOPE function, we selected cell range D5:D10 as known_ys, and C5:C10 as known_xs. The formula will return the slope of the regression line for these data points. Read More: How to Find the Slope of a Line in Excel Method 3 – Determine the Slope of a Regression Line Manually Using SUM and AVERAGE Functions • Select the cell where you want the Slope. • Insert the following formula in the selected cell: • Press Enter to get the result. How Does the Formula Work? • AVERAGE(C5:C10): The AVERAGE function returns the average of cell range C5:C10. • (C5:C10-AVERAGE(C5:C10)): The average is subtracted from the cell range C5:C10. • AVERAGE(D5:D10): The AVERAGE function returns the average of cell range D5:D10. • (D5:D10-AVERAGE(D5:D10): The average is subtracted from the cell range D5:D10. • (C5:C10-AVERAGE(C5:C10))*(D5:D10-AVERAGE(D5:D10)): The formula multiplies the results it got from the previous formulas. • SUM((C5:C10-AVERAGE(C5:C10))*(D5:D10-AVERAGE(D5:D10))): The SUM function returns the summation of these values. • (C5:C10-AVERAGE(C5:C10))^2: The average of cell range C5:C10 is subtracted from cell range C5:C10. And then raised to the power of 2. • SUM((C5:C10-AVERAGE(C5:C10))^2): The SUM function returns the summation of the values it got from the previous calculation. • SUM((C5:C10-AVERAGE(C5:C10))*(D5:D10-AVERAGE(D5:D10)))/SUM((C5:C10-AVERAGE(C5:C10))^2): The first summation is divided by the second summation. Read More: How to Find Slope of Logarithmic Graph in Excel Practice Section We have provided a practice sheet for you to practice how to find the slope of a regression line in Excel. Download the Practice Workbook Related Articles << Go Back to Excel SLOPE Function | Excel Functions | Learn Excel Get FREE Advanced Excel Exercises with Solutions! We will be happy to hear your thoughts Leave a reply
{"url":"https://www.exceldemy.com/how-to-find-the-slope-of-a-regression-line-in-excel/","timestamp":"2024-11-07T22:31:31Z","content_type":"text/html","content_length":"194870","record_id":"<urn:uuid:7fdd9e77-2a1c-476a-8c10-10fb59f7023f>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00681.warc.gz"}
facni experimental gismu x[1] is an n-ary operator/map which is distributive/linear/homomorphic in or over or from space/structure x[2], mapping to space or structure x[3], thereby producing a new space/structure x[4] which is the 'union' of x[2] and x[3] endowed with x[1]; x[1] distributes over/through all of the operators of x[2]. x[2] and x[3] cannot merely be sets; they must be structures/systems which each are a set/category endowed with at least one operator/relation/property (here, "operator" will refer to any of these options) each; the ith operator endowing one space corresponds to exactly the ith operator endowing the other space under mapping x[1]. For any operator of x[2], x[1] is commutative with it with respect to functional composition (fa'ai) when the (other) operator is 'translated' to the corresponding operator of x[3] appropriately. x[1] is linear/a linear operator; x[1] is a homomorphism; x[1] distributes. x[2] is homomorphic with x[3] under x[1]; they need not be identical (in fact, their respective operators need not even be identical, just 'homomorphically similar'). For "distributivity"/"distributive property", "linearity of operator", or "homomorphicity of operator", use "ka(m)( )facni" with x[1] filled with "ce'u"; for "homomorphicity of spaces", use the same thing, but with x[2] or x[3] filled with "ce'u". See also: "socni", "cajni", "sezni", "dukni"; "fa'ai"; "fatri". This is a structure-operator-preserving function, and thus is an example of a
{"url":"https://vlasisku.lojban.org/facni","timestamp":"2024-11-01T20:59:46Z","content_type":"text/html","content_length":"8271","record_id":"<urn:uuid:3e443d2d-6ff6-41cc-932f-993d4e8c857e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00080.warc.gz"}
Worksheet Order Of Operations With Exponents | Order of Operation Worksheets Worksheet Order Of Operations With Exponents BlueBonkers Free Printable Math Sheets Order Of Operations Exponents Worksheet Order Of Operations With Exponents Worksheet Order Of Operations With Exponents – You may have become aware of an Order Of Operations Worksheet, however just what is it? In this write-up, we’ll talk about what it is, why it’s essential, and just how to get a Worksheet Order Of Operations With Exponents Hopefully, this details will certainly be valuable for you. Your pupils are entitled to an enjoyable, effective means to examine the most crucial principles in maths. In addition, worksheets are a wonderful method for trainees to practice brand-new abilities and review old ones. What is the Order Of Operations Worksheet? An order of operations worksheet is a sort of mathematics worksheet that needs students to perform math operations. These worksheets are separated into 3 main sections: addition, multiplication, as well as subtraction. They likewise consist of the assessment of exponents and also parentheses. Students that are still discovering just how to do these tasks will certainly discover this kind of worksheet useful. The main purpose of an order of operations worksheet is to help trainees discover the appropriate way to address math equations. They can evaluate it by referring to an explanation page if a pupil does not yet understand the idea of order of operations. Additionally, an order of operations worksheet can be divided right into numerous categories, based on its trouble. Another crucial objective of an order of operations worksheet is to instruct students just how to do PEMDAS operations. These worksheets start with easy troubles connected to the fundamental policies as well as develop to a lot more complex issues involving every one of the policies. These worksheets are a terrific method to present young students to the excitement of fixing algebraic formulas. Why is Order of Operations Important? One of the most vital points you can learn in math is the order of operations. The order of operations makes sure that the mathematics troubles you address correspond. This is important for tests and also real-life estimations. When addressing a math issue, the order ought to start with exponents or parentheses, followed by multiplication, addition, and subtraction. An order of operations worksheet is an excellent means to teach students the correct means to solve math formulas. Prior to students begin utilizing this worksheet, they may require to evaluate ideas related to the order of operations. An order of operations worksheet can aid pupils create their skills in addition and subtraction. Teachers can use Prodigy as a simple way to differentiate practice and also provide engaging web content. Natural born player’s worksheets are an excellent means to assist trainees discover the order of operations. Teachers can start with the fundamental concepts of multiplication, division, and also addition to aid trainees build their understanding of parentheses. Worksheet Order Of Operations With Exponents Order Of Operations 12 Sample Order Of Operations Worksheets Sample Templates Order Of Operations With Exponents Worksheet Order Of Operations With Exponents Worksheet Order Of Operations With Exponents offer a terrific source for young learners. These worksheets can be conveniently customized for certain requirements. They can be discovered in 3 degrees of problem. The very first level is easy, requiring students to exercise utilizing the DMAS technique on expressions including four or even more integers or 3 drivers. The 2nd level calls for pupils to utilize the PEMDAS approach to streamline expressions using internal as well as external parentheses, braces, and curly dental braces. The Worksheet Order Of Operations With Exponents can be downloaded completely free and can be published out. They can after that be evaluated using addition, division, subtraction, and multiplication. Trainees can likewise utilize these worksheets to review order of operations and using backers. Related For Worksheet Order Of Operations With Exponents
{"url":"https://orderofoperationsworksheet.com/worksheet-order-of-operations-with-exponents/","timestamp":"2024-11-11T13:54:29Z","content_type":"text/html","content_length":"44061","record_id":"<urn:uuid:cba0d4e2-05bb-4d81-abf5-59050956ab1d>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00102.warc.gz"}
Times Table Chart Free Printable Times Table Chart Free Printable - Web welcome to our printable multiplication times table charts to 10x10 page. We have two multiplication charts available for your class — one for reference. Here you will find a wide range of free printable. Use these colorful multiplication tables to help your. Now that you know how to read a multiplication chart, why the order doesn’t matter. Web free multiplication chart printables. Web free printable multiplication charts (times tables) available in pdf format. Choose from over 20 styles of multiplication chart printables, from 12×12. You will see in the colorful tables below, the first. Web printable multiplication charts to learn or teach times tables. printable multiplication chart 1 12 pdf printable free multiplication table SKYTONEVOIP We have two multiplication charts available for your class — one for reference. Now that you know how to read a multiplication chart, why the order doesn’t matter. Web welcome to our printable multiplication times table charts to 10x10 page. Web free multiplication chart printables. Web free printable multiplication charts (times tables) available in pdf format. Multiplication tables chart printable promogeser Web printable multiplication charts to learn or teach times tables. You will see in the colorful tables below, the first. Here you will find a wide range of free printable. Web free multiplication chart printables. Use these colorful multiplication tables to help your. Free Multiplication Chart Printable Paper Trail Design Web multiplication charts, also called times tables charts, are essential tools to learn the multiplication tables! We have two multiplication charts available for your class — one for reference. Choose from over 20 styles of multiplication chart printables, from 12×12. Web welcome to our printable multiplication times table charts to 10x10 page. Web printable multiplication charts to learn or teach. Multiplication chart free printable pdf fourjes Web free multiplication chart printables. Choose from over 20 styles of multiplication chart printables, from 12×12. Here you will find a wide range of free printable. Web welcome to our printable multiplication times table charts to 10x10 page. Web multiplication charts, also called times tables charts, are essential tools to learn the multiplication tables! Printable Multiplication Chart 25X25 Web welcome to our printable multiplication times table charts to 10x10 page. Web free multiplication chart printables. Now that you know how to read a multiplication chart, why the order doesn’t matter. Choose from over 20 styles of multiplication chart printables, from 12×12. Web printable multiplication charts to learn or teach times tables. Multiplication Tables and Times Tables Printable Charts Blank and Completed Web multiplication charts, also called times tables charts, are essential tools to learn the multiplication tables! Now that you know how to read a multiplication chart, why the order doesn’t matter. We have two multiplication charts available for your class — one for reference. Web free multiplication chart printables. You will see in the colorful tables below, the first. Printable Multiplication Table 1 10 12 PDF Multiplication chart printable, Multiplication We have two multiplication charts available for your class — one for reference. Use these colorful multiplication tables to help your. Now that you know how to read a multiplication chart, why the order doesn’t matter. Here you will find a wide range of free printable. Web free multiplication chart printables. Printable multiplication Charts 112 (PDF) Free Memozor Use these colorful multiplication tables to help your. Web welcome to our printable multiplication times table charts to 10x10 page. Now that you know how to read a multiplication chart, why the order doesn’t matter. Web printable multiplication charts to learn or teach times tables. Web free printable multiplication charts (times tables) available in pdf format. X 20 Multiplication Chart Printable Multiplication Flash Cards Web free printable multiplication charts (times tables) available in pdf format. Use these colorful multiplication tables to help your. You will see in the colorful tables below, the first. Choose from over 20 styles of multiplication chart printables, from 12×12. Web welcome to our printable multiplication times table charts to 10x10 page. Web printable multiplication charts to learn or teach times tables. Web welcome to our printable multiplication times table charts to 10x10 page. Web multiplication charts, also called times tables charts, are essential tools to learn the multiplication tables! Web free printable multiplication charts (times tables) available in pdf format. Web free multiplication chart printables. Here you will find a wide range of free printable. Now that you know how to read a multiplication chart, why the order doesn’t matter. Web free multiplication chart printables. Web multiplication charts, also called times tables charts, are essential tools to learn the multiplication tables! We have two multiplication charts available for your class — one for reference. Web free printable multiplication charts (times tables) available in pdf format. Use these colorful multiplication tables to help your. Choose from over 20 styles of multiplication chart printables, from 12×12. Web printable multiplication charts to learn or teach times tables. Web welcome to our printable multiplication times table charts to 10x10 page. You will see in the colorful tables below, the first. You Will See In The Colorful Tables Below, The First. Choose from over 20 styles of multiplication chart printables, from 12×12. Web printable multiplication charts to learn or teach times tables. We have two multiplication charts available for your class — one for reference. Web free printable multiplication charts (times tables) available in pdf format. Web Welcome To Our Printable Multiplication Times Table Charts To 10X10 Page. Use these colorful multiplication tables to help your. Here you will find a wide range of free printable. Web multiplication charts, also called times tables charts, are essential tools to learn the multiplication tables! Now that you know how to read a multiplication chart, why the order doesn’t matter. Web Free Multiplication Chart Printables. Related Post:
{"url":"https://neu-news.de/printable/times-table-chart-free-printable.html","timestamp":"2024-11-07T13:55:00Z","content_type":"text/html","content_length":"25486","record_id":"<urn:uuid:6797f7d8-2232-424c-ba3b-6df24c156eef>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00784.warc.gz"}
Computer Images Computers use numbers, not letters or images. So how are images stored as numbers? Start with a pixel. A pixel is a single dot on the screen. It contains numbers that represent the red, green and blue light levels. An image is a large number of pixels. These pixels are saved in an array. An array is a block of reserved memory. It contains width * height pixels, and each pixel contains three numbers representing red green and blue. Red, green and blue is a simplification. Black and white images can contain a single number that represents black to white. Some images can use yellow, cyan and magenta, which are used for printing. In addition, they may also have alpha which represents transparency. Bit depth is the number of bits that make each pixel. 24-bit images have 24 bits per pixel. This is typically 8 bits per color. A 32 bit image can have 10-11-11 bit color depth or 8-8-8-8 red, green, blue and alpha. Old images sometimes use a palette. Old computers lacked memory, so programmers had to work around that. One way to shrink an image is to take all common colors and move them into a palette. A palette contains a set number of colors. Each color contains red, green and blue values. The image then needs to store the palette index instead of the red green and blue values. It provides a significant size reduction at a cost of color smoothness. Images are saved and loaded. To save disk space, nearly every image format uses tricks to reduce the file size. This includes compression. Some compression formats, such as JPEG, lose data. They are called a lossy compression. While they lose details, the gist of the image can be saved in a much smaller space. There are often multiple versions of each. Open standards exist for most image file Programs can create images. You can create an image directly, but most programmers use image libraries. An image library provides a set of functions that create images, draw things on it then output it in various formats. Writing an image library is a fun task that most programmers do at least once in their career. The algorithms are precise and clear. Most image libraries organize memory in the same way. The block of memory (width * height * bit-depth) is set so the top left pixel is first. The pixel to the right is next in memory. At the end of the row, it drops one point and keeps going. The first image libraries used this because it was easy to convert from (x,y) to a memory address. It’s just pixel[y*width+x]. So to set a pixel you just void set(x,y,color) pixel[y*width+x] = color; Now half the programmers are annoyed over lack of input validation. In truth, input validation isn’t always needed. If you are writing this for your own use, it’s often easier to build validation into the algorithms. However, if you are writing a library then you can add safe functions for people who want to use them. int set_safe(x,y,color) if (x < 0 || x >= width || y < 0 || y >= height) return -1; return 0; The next basic function found in all images is line. It draws a line between two points. There are plenty of line drawing algorithms, but some are far simpler than others. There’s even one that doesn’t use division. While this may seem trivial, you sometimes need line drawing algorithms in embedded processors. For example, 3D printers. This line drawing algorithm requires you to check if a line is mostly horizontal or vertical. (x1, y1) – First point (x2, y2) – Second point A simple line looks like this: If you zoom in, you’ll find a new perspective. Sometimes these will be blurred, so some are blacker than others. This is called anti-aliased. An aliased image is easier to visualize. Notice that for every two pixels over, it moves one pixel up. Normal line algorithms rely on division to calculate the y value. Typically it is (y2-y1)/(x2-x1)=(y-y1)/(x-x1). Or rearrange to get: y = y1 + (x-x1)*(y2-y1)/(x2-x1). This is the typical geometric approach. Division is typically computationally expensive, or at least far more than multiplication or addition. It also takes additional die space if you are implementing this in hardware, such as through an FPGA. To eliminate the division we first notice a pattern. When Δx is twice that of Δy, we move two pixels over and one pixel up. When Δx is thrice Δy, we move three pixels over and one pixel up. This isn’t always clear cut. For example if Δx is two and a half times Δy, we move over two pixels, up one, over three, up one then repeat. We can use a simple math trick to calculate this. Every time we draw a pixel, we add Δy to a sum variable. When this variable is greater than Δx, we shift the y value and subtract Δx from the sum. There’s one more wrinkle which you’ll find after the line draws. It will be shifted over slightly to the right. To correct this, we start the sum as one half of Δx. Ah, but there wasn’t supposed to be division. In this case it’s alright, because it’s division by two. This is accomplished with a bit shift, and it’s extremely simple to do in hardware. This trick only works if the line is mostly horizonal, in other words if Δx > Δy. That’s alright, because we can just have two algorithms: one for mostly horizontal and the other for mostly vertical. Let’s assume the line is mostly horizontal. First, to simplify the algorithm, we ensure the first point is always on the left. This is simple. if (x1 > x2) int tmp = x2; x2 = x1; x1 = tmp; tmp = y2; y2 = y1; y1 = tmp; Then we draw the line using the described algorithm. int dx = x2 - x1; int dy = y2 - y1; int step = sign(dy); dy = abs(dy); int y = y1; int sum = dy >> 1; for (int x = x1;x <= x2;x += 1) sum += dy; if (sum >= dx) sum -= dx; y += step; There are various ways to optimize this further. You could have two algorithms for horizontal, one for each left-right orientation. The same applies for the vertical line algorithm. This would eliminate the need for the initial swap. It requires more code but runs a slight bit faster, which is the perpetual trade-off between code size and run time. Additionally if you know the points always lie in the image then you can optimize it to write directly to the pixel array. int dx = x2 - x1; int dy = y2 - y1; int step = sign(dy) * width; dy = abs(dy); int y = y1; int sum = dy >> 1; int *mem, *stop = pixel + y2 * width + x2; for (mem = pixel + y1 * width + x1; mem != stop; mem++) *mem = color; sum += dy; if (sum >= dx) sum -= dx; mem += step; *mem = color; If coordinates can be given outside the image, then you can calculate the intercepts and draw the line between the intercept points. That’s another potential optimization. Optimization should fit how the code is used. In a case like this, a good approach is to have the functions line() and line_safe(). This gives programmers a safe or fast option. Finally, there is a way of optimizing this further, but it requires self-mutating code. In an algorithm like this, where only parts change depending on initial parameters, then you can avoid additional statements inside loops by dynamically rewriting parts of the code. For example, instead of adding a step value, you can rewrite the code to add an immediate value. This reduces the registers used, which can save a few more clock cycles. Unfortunately most code segments are write protected these days so optimization through self-mutation is no longer a viable method. Obviously there are tradeoffs between code size and optimization. You can generalize the algorithm so it uses one simple loop. If you’re writing it in assembler, you design it around the available registers. If you have a custom core on an FPGA, then you can make opcodes for some of these sequences and you can make as many custom registers as you want. Computer graphics is a complex field, but everything condenses into a two-dimensional image. Even 3D graphics are converted into a 2D image before they are displayed. So a fundamental understanding of images are essential for any graphics.
{"url":"https://blog.waterloointuition.com/computer-images/","timestamp":"2024-11-08T21:00:07Z","content_type":"text/html","content_length":"81500","record_id":"<urn:uuid:0ad4fa69-f5f2-4501-9a3a-0f850162414c>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00191.warc.gz"}
Entropy always increases For the problem texts, see . I found this contest easier than usual for a COCI. Given two wands A < B and two boxes X > Y, there is never any point in putting A in X and B in Y; if they fit, then they also fit the other way around. So sort the boxes and wands, match them up in this order and check if it all fits. This is really just an implementation challenge. For each spiral, walk along it, filling in the grid where the spiral overlaps it. With more than one spiral, set each grid point to the minimum of the values induced by each spiral. Of course, it is not necessary to go to \(10^{100}\); about 10000 is enough to ensure that the grid is completely covered by the partial spiral. The rule for determining which employee does the next task is a red herring. Each employee does exactly one task, and the order doesn't matter. The money earned by an employee is the sum of the distance from that employee to all underlings. The can be computed by a tree DP (computing both this sum and the size of each subtree). Suppose there is a solution. It can be modified by a series of transformations into a canonical form.Firstly, moving a false claim below an adjacent true claim will not alter either of them; thus, we can push all the false claims to the bottom and true claims to the top. At this point, the true claims can be freely reordered amongst themselves. Similarly, if we have a false claim with small a above one with a larger a, we can swap them without making either true. So we can assume that the false claims are increasing from bottom to top in the deck. Finally, given a true claim with large a and a false claim with small a, we can swap them and they will both flip (so the positions of false claims stay the same). After these transformations, the deck will, from bottom to top, contain the largest K cards in increasing order, followed by the rest in arbitrary order (let's say increasnig). To solve the problem, we simply construct this canonical form, then check if it indeed satisfies the conditions. When building roads on the day with factor F, we don't actually need to build roads between every pair of multiples of F: it is equivalent to connect every multiple of F to F. This gives O(N log M) roads. We can represent the connected components after each day with a standard union-find structure. For reasons we'll see later, we won't use path compression, but always putting the smaller component under the larger one in the tree is sufficient to ensure a shallow tree (in theory O(log N), but I found the maximum depth was 6). A slow solution would be to check every remaining query after each day to see whether the two mathematicians are in the same component yet. To speed this up, we can record extra information in the tree: for each edge, we record the day on which it was added. If the largest label on a path from A to B is D, then D was the first day on which they were connected (this property would be broken by path compression, which is why we cannot use it). Thus, to answer a query, we need only walk up the tree from each side to the least common ancestor; and given the shallowness of the tree, is this cheap. I really liked this problem. Take a single starting peak P. Let A be the size of the maximum matching the graph, and let B be the size of the maximum matching excluding P. Suppose A = B. Then Mirko can win as follows: take the latter matching, and whenever Slavko moves to a valley, Mirko moves to the matched peak. Slavko can never move to a valley without a match, because otherwise the journey would form an augmenting path that would give a matching for the graph of size B + 1. Conversely, suppose A > B. Then take a whole-graph matching, which by the assumption must include a match for P. Slavko can win by always moving to the matched valley. By a similar argument, Mirko can never reach an unmatched peak, because otherwise toggling all the edges on their journey would give a maximum matching that excludes P. To implement it, it may not be efficient enough to construct a new subgraph matching from every peak. Instead, one can start with a full-graph matching, remove P and its match from the graph (if any), then re-augment starting from that match. This should give an O(NM) algorithm (O(NM) for the initial matching, then O(M) per query).
{"url":"https://blog.brucemerry.org.za/2018/","timestamp":"2024-11-05T15:37:30Z","content_type":"application/xhtml+xml","content_length":"57562","record_id":"<urn:uuid:4feb7b40-6c3d-455d-ac62-4a63c67b6402>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00748.warc.gz"}
How can I verify spec satisfies the properties for infinite states? [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] How can I verify spec satisfies the properties for infinite states? I wonder how can I verify spec satisfies the properties for infinite states. In my spec, there are infinitely states so TLC is executed infinitely. In this case, I wonder how can I verify spec satisfies the properties. For example, I want to prove "Theorem Spec => []TypeInv". When i check by the bound length constraint because of infinite states, TypeInV is satisfied. But, I think it can be not proved that spec satisfies the properties for infinitely states by this bounded checking.
{"url":"https://discuss.tlapl.us/msg02441.html","timestamp":"2024-11-01T23:21:46Z","content_type":"text/html","content_length":"3711","record_id":"<urn:uuid:7fcceec4-4db7-443a-b249-977d17bfc6dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00057.warc.gz"}
• Most problems are found in one of the following: 1. ADM: The Algorithm Design Manual 2. IDMA: An Introduction to Discrete Mathematics and Algorithms • For full credit, provide context for each problem, show all calculations, and justify all answers by providing enough comments to explain your reasoning. • You will lose a significant amount of credit if you do not provide context, calculations, and justifications for a problem. • Numbers and/or algebra by themselves are not enough. A correct answer with no justification will be worth no more than half credit, and sometimes much less than that. • Precision is very important. You cannot skip steps, make guesses, or use flawed logic. Any of these things can lead to incorrect answers. • Homework assignments must be very neatly written or typeset (e.g. using Word or OpenOffice). • You must indicate any assistance/collaboration you had on an assignment as specified on the Policies page. • NEW! If a problem asks for an algorithm, you should give the most efficient algorithm you can find to ensure full credit. You should also specify the complexity of the algorithm with justification, whether or not the problem asks for it. The Assignments All Homework
{"url":"https://cusack.hope.edu/Teaching/?class=csi255F13&page=homework","timestamp":"2024-11-04T05:31:52Z","content_type":"text/html","content_length":"6766","record_id":"<urn:uuid:fd6db14d-b475-431c-aa10-d80718aa2d11>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00004.warc.gz"}
How to solve every problem in the world: Explaining the power of genetic solvers What if - hear me out - there was a way of solving every problem in the world? Ok, maybe not all of the world's problems, but all problems related to optimization. Optimization refers to the practice of finding the optimal set of values for certain parameters so that we can get to (or close to) an optimal solution. This field of study is called Operations Research. Here are some examples of optimization problems: • You're producing different kinds of dog food. Every week you obtain certain quantities of different ingredients. Each kind of dog food you produce has a unique composition. The cheaper ones (where you will typically also get a lower margin) have a recipe comprising of a lot of grains and some cheaper meats while the more expensive ones are mainly made from lamb and beef and smaller amounts of grains. Your goal is too choose how many bags you'll produce of each kind that week so that you'll maximize your profit while keeping in mind that you'll have to stick to the available quantities of each ingredient. • You're working in a large production facility that exists of multiple adjacent units. Not all units are constantly manned; this depends on what is being produced at any given moment. A manned units has an extra cost due to lightning, heating, and supervision. Your task is to figure out a production schedule so that the number of units in use simultaneously is minimized while still meeting the production quota. Note that in the first example we want to maximize profit while the second one is focused on minimizing costs. This is what we call the objective. The objective can be anything, and in most cases it is some kind of value we want to either maximize of minimize. Both the examples are what we call discrete problems which means that the answer will be in integer numbers. In the production facility example each unit is either in use or not at a certain point: e.g. even if it's running at half capacity you'll still need to provide full lightning, heating, and supervision. The dog food mixing case is similar in that we want to end with fixed numbers of bags for each composition (as our customer won't be buying half a bag). If we did the bagging ourselves but provided the dog food in bulk however, it would be a continuous problem. Both examples also contain constraints which set limits to what solutions are feasible. For the dog food this is the amount of every available ingredient. In the other example it is that you have to meet the production quota, otherwise the solution would be so naive as to just shut down the whole plant as that would minimize the costs. In more technical texts you'll see constraints mentioned as subject to. Now that you've got a better understanding of what optimization is we'll explain how they are typically "solved". Thereafter we'll dive into genetic solvers that form a framework for all kinds of problems that are out of range of the classic methods. We'll not get too technical in this article; that might be something for a next one. The goal in this text is to get you interested in the amazing world of solvers and how this often overlooked discipline in data-science can help your company. How to solve these problems? The madman's way A possible way to find an optimum (i.e. minimum or maximum depending on your objective) would be to try out every possible solution. For continuous problems it's not even worth considering this as the number of possible solutions is infinite. You could try this for a discrete problem but unless we're talking about really small numbers, the number of possible solutions grows astronomically and therefore becomes practically impossible to compute. Just to give you a feeling about this: you've got a problem that requires you to decide which of the six production lines in your factory need to run at a certain point. Each of the lines has a different capacity, fixed running cost, variable running cost, and other properties making it a rather complex problem (i.e. it's not just a matter of how many lines we run, but which ones specifically). One possibility would be to run zero (which will certainly not lead to a feasible solution as you actually need to produce something). There are six possible combinations to run a single line, 15 to run two, 20 to run half of them, again 15 to run four, six to run five of them, and finally there is of course a single way to run them all. That's a total of 64 possible solutions. Doubling the number of lines will not double the number of possible solutions here. It will square this number: with 12 lines you have 4096 possible solutions! What about 100 lines? Well, imagine every possible solution would take the size of a grain of sand; you would need over a billion earth-size spheres to contain all those possible solutions! It would be pretty clear that "trying every possible solution" would not be a very useful strategy. The somewhat smarter way Instead of trying all possible solutions you might be more tempted to use your own experience and intelligence to come up with a set of rules to solve the problem. Such an approach is called a heuristic. A possible heuristic for the dog food problem could look like this: 1. Select the recipe wherefore the bags of dog food have the highest profit margin. 2. Manufacture that type of dog food until one of the ingredients runs out. 3. Select the next most profitable type of dog food for which you still have ingredients left ... and so on. Although this might often give a decent solution it will probably not be the best one. As the more profitable type of dog food contains more meat (and less grain) you'll probably get stuck with an excessive amount of grain that is of no use (as even the cheapest kind will need some meat). Therefore the optimal solution will probably be one where a bit less of the meat-rich variety is made. A heuristic you probably use quite often is finding the shortest route between two locations (cities, ...). Humans using traditional maps as well as navigational apps on your phone or in your car make use of heuristics to figure out what is (probably) the best route. A special heuristic is the treatment of a discrete problem as a continuous one as these are typically more easily solved. If we go back to the dog food example that would mean ignoring the fact that we can't end up with non-whole bags and rounding the results to down to integer numbers for every type of dog food we offer. However, this is often not the optimal solution, though it must be said that with large numbers (i.e. hundreds or thousands of bags) the difference between this approach and the best discrete solution might be negligible. The mathematician's way Many problems and accompanying constraints can be expressed as mathematical equations. The optimization then really becomes a matter of finding the minimum or maximum of a function within the boundaries set by said constraints. I promised not to get technical so let's limit ourselves to an extremely simple example: you're hosting a stand that sells variety of home bottled fruit juice. These are the two compositions that you Mix 1 Mix 2 Apple 4/5 2/3 Blueberry 1/5 1/3 If you have 10 liters of apple juice available and 4 liters of blueberry juice, what is the maximum total amount of drinks we could make with above recipes? The completely naive method of trying every possible solution is obviously out of the question now as we have a continuous problem (i.e. the number of liters we make from each mix doesn't have to be a whole number). Now let's try a very simplistic heuristic: start by making one drink until you run out of ingredients for it and see if you could still make some more of the other drink. From Mix 1 we can make 12.5 liter using all 10 liters of apple juice and 2.5 liters of blueberry juice. Note that the ratio is correct for Mix 1, namely four times more apple than blueberry. You've used all your apple juice but still have 1.5 liter of blueberry juice left. Sadly no recipe we're offering can be made with only blueberry juice. Conversely, if we started with Mix 2, which has a much higher concentration of the more scarce blueberry juice we would only be able to create 12 liter using 8l apple and the full 4l blueberry juice. 2l of apple juice stays unused this way. We now know there is a lower limit of 12.5 liter mixed drink we can create, and we have at least some suspicion that making some of both the mixes might result in the biggest total amount of mixed drinks. But how much will we create? Will we reach 13 liters, 13.5 maybe? And what will be the proper ratios? Let's first introduce some proper notation by putting the objective and constraints into mathematical expressions. The objective is to maximize the total amount of the two mixed drinks which we'll call X and Y, so we write "Maximize X + Y". The first constraint is that the total amount of apple juice must not be greater than 10 liters. As a liter of Mix 1 (X) is made of 80% apple juice while this is two third of a liter for Mix 2 (Y) we can say that 4/5 X + 2/3 Y can not be more than 10 (liters). Similarly a fifth of X plus a third of Y must not exceed 4 (available liters of blueberry juice). Additionally we add two extra constraints: both X and Y must not be negative as it is quite obviously impossible to generate a negative amount of mixed juice. Together we write this down as Now there exists a nice mathematical way to find for which values of X and Y their sum is maximized given said constraints, but - as I promised not to get too technical - let's stick with the graphical way to answer this question. Let's draw a graph with the horizontal axis expressing the amount of Mix 1 to make, and the amount of Mix 2 to make on the vertical axis. Every point (x, y) you would put on this graph corresponds with a certain amount of both mixed drinks. The further to the top right of the graph the higher the total amount of mixed drinks. Now let's add the constraints to the graph: the first two constraints take the form of a triangle (red and blue respectively). A point within this triangle complies to the constraint. The purple area thus complies with both constraints. The two so called non-negativity constraints are also present here though less explicitly: we take these into account by not letting our red and blue area extend under the horizontal, or left, of the vertical axis. The purple area where all constraints are met is also called the feasible region. Now it's just a matter of finding which point (x, y) is within the areas defined by the constraints where the sum is the highest possible. As we already know the optimal solution will not be to make either only Mix 1 or Mix 2 this point can be found by looking for the "kink" in the feasible region. This seems to be located at point (5, 9) which tells us we must make 5 liters of Mix 1 and 9 liters of Mix 2 for a total of 14 liters which is way above the 12.5 liters our overly simplistic heuristic gave us! Let's check if this solution is indeed feasible: Mix 1 (X) Mix 2 (Y) Total used Apple 4/5 * 5 = 4 2/3 * 9 = 6 10 Blueberry 1/5 * 5 = 1 1/3 * 9 = 3 4 Total 5 9 14 Mix 1 requires 80% apple juice, so for the assigned five liters of Mix 1 we'll need four liters of apple juice. For Mix 2, two thirds of our nine liters needs to be apple juice; combined we get a required amount of ten liters which is exactly what we have at our disposal. Similarly for blueberry juice the sum corresponds to the available amount. Obviously, with more ingredients and more complicated recipes this problem would become a bit more difficult and might also result in optimal solutions where not all ingredients are used completely. This example should however suffice to give you a taste of the capabilities of mathematical solving (admittedly we used a "graphical way" instead of the purely mathematical way as it's easier to understand but they are inherently equivalent). Is mathematical solving then the answer to all problems? No, if they can be used we should use them but there are some limitations: 1. To represent your problem and constraints as mathematical equations you might need a mathematician. 2. Some parts of your problem or constraints will be terribly difficult to put into an equation. 3. Having an equation does not mean that we can actually solve it mathematically! (e.g. certain fifth or higher order polynomials) The versatile alternative: the genetic solver Inspiration from biology Genetic solvers follow a totally different approach inspired by the way evolution works. When two individuals of a species breed they share their DNA creating new individuals with traits found in Mom or Dad, but also some new ones that result from how the exact way the DNA got mixed. If certain traits are beneficial for the survival and breeding of the offspring there is a higher probability this trait gets passed through to the next generation. This is a rather slow process because a lot of randomness is involved, but after many generations it can lead to substantial changes resulting in the situation where you are reading this text instead of sitting in a tree nibbling juicy leaves. DNA exists of chains of four molecules, basically making it a quarternary numeral system, therefor containing double the information per bit compared to the binary system (only two different values, normally expressed as 0 and 1) that a computer works with. Though the way information is encoded is slightly different, we can use the principles from biology to allow potential solutions of a problem to "evolve" towards an optimum and thus using it as a solver. Instead of strands of DNA we'll talk about arrays of bits when we move away from the biological world. There are multiple mechanisms for how two strands of DNA (i.e. one from each parent) combine to a new one (the offspring). The most commonly known mechanism is called crossover where part of the strand from one parent is replaced by the corresponding part of the other partner. The use case Let's take a real-world example of a problem we would like to solve with a genetic algorithm. Our example is a case of a vehicle routing problem: You have to deliver pallets 47 pallets of goods to your clients. You have to make sure that each pallet gets delivered and of course you want to do that for the minimal total cost. To reach that goal you have the following trucks available: Size Amount Capacity Cost/km Cost/hour Relative speed Small 12 2 €0.35 €35 0.95 Medium 7 3 €0.70 €35 0.85 Large 2 5 €1.20 €40 0.70 There are three sizes of trucks, e.g. we have 12 small ones with a capacity of two pallets and so on. Every size of truck has both a cost per km (mainly fuel, maintenance, and depreciation), and a cost per hour (mainly cost related to the driver). To calculate route costs we'll use OpenStreetMaps (OSM) to provide us the distances and driving times between the depot and all delivery locations as well as between all delivery locations mutually (of course this is done via an API and not manually; but that's a story for another time). As the OSM driving time predictions are rather optimistic, certainly for the medium and large trucks we added a correction factor "Relative speed" by which we divide the driving time of OSM to obtain a more realistic estimation. We assume that every truck will only do a single tour per day and the cost only results from distance and time driving. Actually we'll use a lot of simplifications, not because genetic algorithms can't handle complex scenario's but because we want to keep this article fun to read. A quick multiplication of truck amounts and capacity shows that we have a daily capacity of 55 pallets which is enough for the 47 that are waiting for delivery. Our - fictional - depot is located in a small town called Strombeek-Bever just north of Brussels. This place also happens to be the home of Keyrus Belgium, but that is purely a coincidence. The 47 pallets need to go to 47 different delivery locations spread across Belgium. Which delivery locations should we assign to which truck in order to minimize the total cost while making sure that all pallets get delivered? A naive attempt If you're familiar with similar problems in the field of Operations Research you might think this is an example of a simple assignment problem where you have to assign a set of workers to a set of tasks. In those problems however the tasks are basically independent of each other (apart from the fact that if one worker gets a certain task assigned you can't assign it to another one anymore). In our example however there is an extreme interdependence between the tasks. For example, if you need to deliver a pallet in Ghent that can be done relatively cheaply if you also have to deliver one in Bruges (as Ghent is more or less on the way between our depot and Bruges), while the same pallet would result in a much higher additional cost if it was put in a truck on its way to Liège (the complete opposite direction). Let's start very naively by putting all 47 pallets in a random order in one of 55 slots. The pallet in the first slots will be handled by the first truck, and so on. Each pallet has its specific destination. Eight slots will remain empty. We load the trucks and they start their route. At this point both the assignment of pallets (and thus destination) to trucks as well as the order in which a truck delivers its pallets is completely random and thus absolutely un-optimized. We calculated the total cost of this operation using distance- and duration tables obtained with OSM in combination with the above table with cost/km and cost /hour; the result is €6776. Additionally your drivers will also be pretty annoyed with your complete lack of efficiency and consider you a poor manager. The following image shows the (badly) assigned routes with each color representing one of our trucks: First improvement: optimization per truck The first optimization we're going to do is to change the order of delivery for each truck separately. This problem where you have to find the least costly way to visit a certain set of locations and returning back to the first one (in our case the depot) is called the travelling salesman problem (TSP) which is probably one of the more famous problems in Operations Research. Though traditionally explained in terms of minimizing the total distance, you can just as easily use any kind of cost metric, as in our example. Though simple to explain it's a hard problem compute-wise as the only way to make 100% sure you've got the best solution is to check all possibilities and this number of possibilities grows exponentially with the number of locations to visit. Luckily for us however this problem is so well studied that very good algorithms and accompanying software packages exist so that we don't have to figure out this part ourselves. After letting a TSP-solver loose on each of the trucks we reduce the cost to €5581 and get the following solution: This is an improvement, but only optimizing per truck will still result in many suboptimal solutions like two trucks going to the same two cities both to deliver one pallet each instead of having one serving the two clients in one city and the other taking care of the other city. Time to go genetic In order to use a genetic algorithm to assign the pallets in an optimal way we need to figure out a way to represent this as a binary array (i.e. a series of zero's and ones). This is often the most difficult part when it comes to programming these kinds of solvers. One way to approach the problem here is to start from the line of 55 slots we mentioned earlier where each slot contains either one of the pallets or is empty (as we only have 47 to deliver) and find a way to encode each possible order in a binary way. As the order of the pallets within each truck does not matter we can reduce the number of necessary bits further. Once we've figured out how to represent this as an array of zero's and ones (for completeness, genetic solvers exist that deal with decimal numbers too, but that also is a story for another time) we can offload the heavy work to a library specialized in genetic optimization. There are a few parameters to choose, but they are easy to understand given the similarity with the biological counterpart. An important one is the population size, where smaller populations will obviously get processed faster but will more easily get stuck in a local optimum and thus not improve further. The most tricky parameter you'll have to decide on is how long - in terms of "generations" - you'll keep the algorithm running. Remember that a genetic algorithm is not guaranteed to provide the optimal solution, nor will it be able to tell if an optimal solution has been reached. There are however a few indicators that show you can stop "breeding", for example if no improvements are observed for multiple generations. In practice we often start with a very simple example where we can find a (near-)optimal solution in another way (for example by trying out all possibilities). If our genetic algorithm converges to that solution we have a good indication it will also perform well with real problems. From these small examples we can also calculate an average expected cost per pallet delivery. As we scale up this cost is expected to lower as the probability increases of having multiple delivery locations close together that can be served by the same truck. Another common strategy while tuning and evaluating the genetic algorithm would be to have it run a few times with a different (random) first generation of "parents". If totally different starting populations reach a similar (or similarly good) result that is a strong indication your genetic algorithm is performing well. A library for solving genetic algorithms typically also allows you to "choose" part of the "genes" of your first generation. If you can think of some easy to implement heuristics that already give a significant improvement over randomly chosen solutions it's really advised to add those as it will help the genetic algorithm a great deal to find a good solution faster. Additionally you should keep the results of your solver because even though you might have a different task each day (e.g. other delivery locations) there will be recurring patterns. The knowledge obtained from previously solved cases can help to find better start heuristics. We applied this on our example (using the "genalg" package in R) and let it run for a few generations until no further improvements seem to happen. It turned out that the best solution we found would cost us €3836, which is nearly half of our original (though admittedly rather stupid) initial estimate: Potential extensions We focused on the way a genetic algorithm is capable of providing a good solution for a not so simple problem. In practice however our case study was actually still very much an oversimplification of reality. This choice was made to keep the article easy to read, but I hope by now you realize that the power of genetic algorithms is that you could basically pass any kind of function that calculates your costs or profit (or whatever you want to optimize). Here are some potential additions that we could easily implement: • Different package sizes and packing optimization. • Multiple packages at same destination. • Order of packages in the truck. • Addition of loading and unloading times. • Addition of waiting times if too many trucks want to load at the same moment. • Limit number of different trucks to use in a day (i.e. less drivers needed). • Multiple trips per day per truck. • And much more... Every company has to deal with optimization problems from time to time but often sub-optimal solutions are chosen as they are good enough and improvements are considered too hard. I hope that while reading this article you have came to the enlightening realization that there might be a whole lot of "low-hanging fruit" at your business waiting to be optimized, maybe with the use of a genetic I admit that this article did not provide you with enough technical knowledge to start implementing one of these amazing optimization techniques yourself as the required skills and knowledge are a bit bigger than what I can cover with a blog post. Luckily though there are people at Keyrus who can help you with that! If you've got any questions, don't hesitate to contact the author via joris.pieters@keyrus.com
{"url":"https://keyrus.com/be/en/insights/how-to-solve-every-problem-in-the-world-explaining-the-power-of-genetic","timestamp":"2024-11-05T17:07:40Z","content_type":"text/html","content_length":"171517","record_id":"<urn:uuid:d5dd2e66-94fb-4f57-9834-12ee6168acfb>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00350.warc.gz"}
In this vignette we examine and model the Fujita2023 data in more detail. Processing the data cube The data cube in Fujita2023$data contains unprocessed counts. The function processDataCube() performs the processing of these counts with the following steps: • It performs feature selection based on the sparsityThreshold setting. Sparsity is here defined as the fraction of samples where a microbial abundance (ASV/OTU or otherwise) is zero. • It performs a centered log-ratio transformation of each sample using with a pseudo-count of one (on all features, prior to selection based on sparsity). • It centers and scales the three-way array. This is a complex topic that is elaborated upon in our accompanying paper. By centering across the subject mode, we make the subjects comparable to each other within each time point. Scaling within the feature mode avoids the PARAFAC model focusing on features with abnormally high variation. The outcome of processing is a new version of the dataset. Please refer to the documentation of processDataCube() for more information. Determining the correct number of components A critical aspect of PARAFAC modelling is to determine the correct number of components. We have developed the functions assessModelQuality() and assessModelStability() for this purpose. First, we will assess the model quality and specify the minimum and maximum number of components to investigate and the number of randomly initialized models to try for each number of components. Note: this vignette reflects a minimum working example for analyzing this dataset due to computational limitations in automatic vignette rendering. Hence, we only look at 1-3 components with 5 random initializations each. These settings are not ideal for real datasets. Please refer to the documentation of assessModelQuality() for more information. # Setup minNumComponents = 1 maxNumComponents = 3 numRepetitions = 5 # number of randomly initialized models numFolds = 8 # number of jack-knifed models ctol = 1e-6 maxit = 200 numCores= 1 # Plot settings colourCols = c("", "Genus", "") legendTitles = c("", "Genus", "") xLabels = c("Replicate", "Feature index", "Time point") legendColNums = c(0,5,0) arrangeModes = c(FALSE, TRUE, FALSE) continuousModes = c(FALSE,FALSE,TRUE) # Assess the metrics to determine the correct number of components qualityAssessment = assessModelQuality(processedFujita$data, minNumComponents, maxNumComponents, numRepetitions, ctol=ctol, maxit=maxit, numCores=numCores) We will now inspect the output plots of interest for Fujita2023. Jack-knifed models Next, we investigate the stability of the models when jack-knifing out samples using assessModelStability(). This will give us more information to choose between 2 or 3 components. stabilityAssessment = assessModelStability(processedFujita, minNumComponents=1, maxNumComponents=3, numFolds=numFolds, considerGroups=FALSE, groupVariable="", colourCols, legendTitles, xLabels, legendColNums, arrangeModes, ctol=ctol, maxit=maxit, numCores=numCores) Model selection We have decided that a three-component model is the most appropriate for the Fujita2023 dataset. We can now select one of the random initializations from the assessModelQuality() output as our final model. We’re going to select the random initialization that corresponded the maximum amount of variation explained for three components. numComponents = 3 modelChoice = which(qualityAssessment$metrics$varExp[,numComponents] == max(qualityAssessment$metrics$varExp[,numComponents])) finalModel = qualityAssessment$models[[numComponents]][[modelChoice]] Finally, we visualize the model using plotPARAFACmodel(). plotPARAFACmodel(finalModel$Fac, processedFujita, 3, colourCols, legendTitles, xLabels, legendColNums, arrangeModes, continuousModes = c(FALSE,FALSE,TRUE), overallTitle = "Fujita PARAFAC model") You will observe that the loadings for some modes in some components are negative. This is due to sign flipping: two modes having negative loadings cancel out but describe the same subspace as two positive loadings. We can manually sign flip these loadings to obtain a more interpretable plot. finalModel$Fac[[1]][,2] = -1 * finalModel$Fac[[1]][,2] # mode 1 component 2 finalModel$Fac[[1]][,3] = -1 * finalModel$Fac[[1]][,3] # mode 1 component 3 finalModel$Fac[[2]][,3] = -1 * finalModel$Fac[[2]][,3] # mode 2 component 3 finalModel$Fac[[3]][,2] = -1 * finalModel$Fac[[3]][,2] # mode 3 component 2 plotPARAFACmodel(finalModel$Fac, processedFujita, 3, colourCols, legendTitles, xLabels, legendColNums, arrangeModes, continuousModes = c(FALSE,FALSE,TRUE), overallTitle = "Fujita PARAFAC model")
{"url":"https://cloud.r-project.org/web/packages/parafac4microbiome/vignettes/Fujita2023_analysis.html","timestamp":"2024-11-04T05:29:53Z","content_type":"text/html","content_length":"175074","record_id":"<urn:uuid:cf751b36-a22f-4f2e-bdea-e539b613e6bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00406.warc.gz"}
Comments on the articles “Hyperbolic thermoelasticity: A review of recent literature” (Chandrasekharaiah DS, 1998, Appl Mech Rev 51(12), 705–729) and “Thermoelasticity with second sound: A review” (Chandrasekharaiah DS, 1986, Appl Mech Rev 39(3), 355–376) Issue Section: In the articles Thermoelasticity with second sound: A review1 and Hyperbolic thermoelasticity: A review of recent literature2, Chandrasekharaiah has presented an in-depth look at nonconventional (a.k.a. generalized or non-Fourier) theories of thermoelasticity. The motivation driving the formulation of these theories is the desire to overcome the infinite propagation speed of thermal signals predicted by conventional thermoelasticity (CTE), the so-called “paradox of heat conduction.” In 1, two of these nonconventional theories are examined in the context of the Danilovskaya problem (DP). (In the DP, the homogeneous and isotropic thermoelastic half-space $x>0,$ under a stress free boundary condition (BC) at $x=0,$ is subjected to a Heaviside, or step, temperature BC at time $t=0+.)$ The first he refers to as extended thermoelasticity (ETE) and the second as temperature-rate dependent thermoelasticity (TRDTE). In both ETE and TRDTE, the parabolic diffusion equation of CTE is replaced with a hyperbolic heat transport equation. As a result, both theories predict thermal waves (ie, second sound) propagating with finite speeds. In ETE, a single relaxation time $τ>0$ appears and second sound propagates with speed $vT=κ/τ,$ where κ is used here to denote the thermal diffusivity. It is noted that ETE reduces to CTE in the limit $τ→0.$ TRDTE was presented in 1972 by Green and Lindsay 3. This theory involves the two relaxation times $α0$ and α, where $α⩾α0>0$ and, in the case of homogeneous and isotropic materials, reduces to CTE in the limit $α→0.$ (While it has been postulated that $α0$ is actually non-negative, it must be noted that TRDTE admits second sound only when $α0>0$1.) An important aspect of TRDTE is that Fourier’s heat law is not violated in materials that have a center of symmetry at each point 2 3. Although it was not pointed out in 1, Chandrasekharaiah did note in 2 several physically unrealistic results associated with TRDTE, in particular the fact that the displacement suffers jump discontinuities in the presence of a step temperature BC. A natural question that arises is why was this problem with TRDTE not reported in 1, especially since the author of that paper derived parts of the small-time solution to the DP for a TRDTE medium? (See 2 and the references therein for a discussion of the problems with TRDTE.) The intent of the present Letter is the following: $i)$ Show that the small-time expression given in 1 for the normal stress corresponding to the DP for a TRDTE medium is incorrect; $ii)$ Show how this erroneous expression could have lead to the aforementioned shortcoming of TRDTE being missed in 1; and $iii)$ Give for the record the correct small-time expressions for the normal stress, displacement, and strain corresponding to the DP for a TRDTE medium. Lastly, all quantities below are dimensionless, unless stated otherwise the same notation employed in 1 is used here, and the reader is referred to 1 for the definition of all undefined symbols. In the Laplace transform domain, the normal stress is given by (see Eq. (5.53) of is the transform parameter. For large it can be shown that are positive constants and denotes the speed of the second sound ( , thermal) wave. Using Eqs. ( ), the large- expression for is found to be Expanding and rearranging Eq. ( ) into increasing powers of and then truncating all terms after so as to match , gives (The quantity does not appear in , it is introduced here for convenience.) Inverting Eq. ( ), the small-time expression for the normal stress is found to be $σx,t≈T0M0 ∑j=12−1j+1{αδt−x/Vj*+Ht−x/Vj*[1+αR0+t−x/Vj*S0]}e−rjx,$ is the Heaviside unit step function and denotes the Dirac delta function. (The notation used here for is slightly different than that of .) Equation ( ) is the correct form of Eq. (5.58) in . Comparing the former with the latter it is clear why the latter is incorrect; the contribution of the term which is part of the quantity in the numerator of Eq. (5.53) of , is missing in the inverse. Indeed, no term with coefficient α that appears in Eq. ( ) is present in Eq. (5.58) of . In particular, Eq. (5.58) of does not contain the two delta function terms that it should. A second consequence of these missing terms is that the expressions given in Eq. (5.61) of denote the amplitudes of the jumps in σ across the wavefronts are also incorrect. Specifically, since the (correct) expression for exhibits two propagating delta functions, in the sense of From the Laplace transforms of Eqs. (5.49) and (5.51) in , it can be shown that is the image of the -component of the displacement vector in the Laplace transform domain. Consequently, using Eq. ( ), it follows that Again using the approximations given in Eqs. ( ), the large- expansion of turns out to be $u¯x,s≈T0M0 ∑j=12−1j1sαVj*+1s2αrj+αR0+1Vj*exp[−rj+s/Vj*x].$ On inverting Eq. ( ), the small-time solution for is found to be $ux,t≈T0M0 ∑j=12−1jαVj*+t−x/Vj*αrj+αR0+1Vj*e−rjxHt−x/Vj*.$ From Eq. ( ), it is clear that always admits two propagating jump discontinuities, the amplitudes of which are given by In Eq. ( denote the amplitudes of the jumps in (The quantities are introduced here in a manner consistent with the notation convention of While not given in , the small-time relation for the strain will be given here for completeness. To this end, Eq. ( ) is differentiated with respect to re-expressed using the identities and the approximations given in Eqs. ( ), and then inverted to yield the small-time strain relation $∂u∂xx,t≈1+ε2 ∫0tσx,t′dt′+L02σx,t+T02 ∑j=12e−rjx{Ht−x/Vj*+αδt−x/Vj*}.$ In Eq. ( denotes the right-hand side of Eq. ( Figure 1 shows a comparison of the inverse of Eq. (^6) with the small-time solution given in Eq. (^8). The inverse of Eq. (^6) was computed numerically using Tzou’s Riemann sum inversion algorithm (TRSIA) 5 and the values of the material parameters were obtained from Table II of 1. As shown in Fig. 1, Eq. (^8) is a very good/excellent approximation to $u$ for $t≲0.05.$ In addition, the two propagating jumps are clearly visible, with $|u1*|>|u2*|,$ and it is noted that $x1,2*$ are the elastic (trailing) and thermal (leading) wavefronts, respectively. It must be pointed out that the presence of propagating jumps in $u$ violates the continuity of displacements requirement [6, p. 142], and thus indicates that TRDTE is inconsistent with the continuum theory of matter under a step (actually any discontinuous) temperature BC (see 2 and the references therein). These jumps, which occur in both the coupled $ε>0$ and uncoupled $ε=0$ cases, vanish only in the limit $α→0.$ (For a treatment of the uncoupled, spherically symmetric case for a shell, see 7.) Finally, it should be mentioned that an error similar to the one corrected here, in which all $δs˙$ and $δ′s˙$ terms are missing from the Laplace inverse, occurs in the expression for the strain (ie, Eq. (48)) in 8. (It is of interest to note that had the correct expression for the strain been obtained in 8, the drawbacks with TRDTE could have been uncovered in 1980.) However, while Eq. (5.58) of 1 is incorrect, and this error appears to have directly resulted in the primary physically objectionable feature of TRDTE being overlooked in 1 as well as to the mistaken claim (1, p 371) that the TRDTE expression for $σx,t$ reduces to its ETE counterpart (ie, Eq. (4.39) of 1) when $α=α0=τ,$ Chandrasekharaiah’s two articles 1 2 nevertheless provide an excellent review of the literature on nonconventional thermoelasticity and contain a wealth of information on the subject. PM Jordan was supported by CORE/ONR/NRL funding (PE 602435N). Thermoelasticity with second sound: A review Appl. Mech. Rev. Hyperbolic thermoelasticity: A review of recent literature Appl. Mech. Rev. J. Elast. Propagation of discontinuities in coupled thermo-elastic problems ASME J. Appl. Mech. Tzou DY (1997), Macro- to Microscale Heat Transfer: The Lagging Behavior, Taylor and Francis, Washington DC. Achenbach JD (1973), Wave Propagation in Elastic Solids, North-Holland, Amsterdam. Thermal stresses in a spherical shell under three thermo-elastic models J. Therm. Stresses On generalised thermoelastic wave propagation Proc of Indian Acad Sci (Math Sci)
{"url":"https://ebooks.asmedigitalcollection.asme.org/appliedmechanicsreviews/article/56/4/451/463878/Comments-on-the-articles-Hyperbolic","timestamp":"2024-11-15T01:39:36Z","content_type":"text/html","content_length":"200383","record_id":"<urn:uuid:308d20d7-4e5b-4982-b5c9-14f2d8a97612>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00178.warc.gz"}
Sample LaTeX Document with Mathematic Scratch Work A LaTeX template that can be used for mathematics homework/assignments problem or solution sheets, showing scratch work and/or working steps. % This is a document written to introduce students in MATH 2300-04 at FSU to LaTeX and Overleaf. Any other students are free to use this as well. % All of this stuff with '%' in front is a comment and ignored by the compiler. % % The lines before the "\begin{document}" line is called the preamble. % This is where you load particular packages you need. % Until you are more experienced, or the program says you are missing packages, it is safe to ignore it. % %---------------------------------- \documentclass[12pt]{article} \usepackage[margin=1in]{geometry}% Change the margins here if you wish. \setlength{\parindent}{0pt} % This is the set the indent length for new paragraphs, change if you want. \setlength{\parskip}{5pt} % This sets the distance between paragraphs, which will be used anytime you have a blank line in your LaTeX code. \pagenumbering{gobble}% This means the page will not be numbered. You can comment it out if you like page numbers. %These packages allow the most of the common "mathly things" \usepackage{amsmath,amsthm,amssymb} %This package allows you to add graphs or any other images. \usepackage{graphicx} %These are the packages I usually use and needed for this document. There are bajillions of others to do nearly antyhing you want. \usepackage{color} \usepackage{enumerate} \usepackage{multicol} %------------------------------------------ % The stuff you want to edit starts here. %------------------------------------------ \begin{document} \title{Sample \LaTeX \, Document} % You should make this the title of your project. \author{Sarah Wright} % Replace this with your name. % \date{} I have this commented out so that the program will use whatever today's date is. You can specify a particular date (or use this for other info if you need) in the {} or leave the {} empty for no date (or other extra info) to appear in the title \maketitle % If you don't want a formal title, you can erase this and the bits above. You will need to find a different way to include the same information. % Some projects will just be a list of problems, and in that case you may wish to number them: \begin{enumerate} % This environment numbers "\items" in your list starting at 1. If you want to change the numbering scheme, you can do something like \begin{enumerate}[(a)] and whatever is in the [] will be the form of the "numbers". If you wish to change the number on just a specific item, use \item[1.2.7] \item Use the formal definition of the limit of a function at a point to prove that the following holds: $$\lim_{x \rightarrow 4} x^2 + x - 5$$ % Any mathematics will go inside $s. One $ on each side keeps the mathematics inline with the text. These double $s on each side set the equation on its own line in the center of the page. \begin {proof} % Anything with \begin{} needs a corresponding \end{}. What goes inside the brackets {} is the environment you're working in. This is a proof, and the environment tells the compiler to write "Proof" in italics at the start of this work and a box to mark the end. We won't do many proofs in this calculus class, but still good to know. Fix an arbitrary $\epsilon > 0$.%Notice the mathematics inside a single pair of $. Most Greek letters are just spelled out with a \ in front; Capitalize if you want the capital Greek letter. We wish to determine a $\delta >0$ such that when $0 < |x - 4| < \delta$, it must be true that $|f(x) - 15| < \epsilon$. \begin{multicols}{2} % This environment allows you to write in columns, which is sometimes handy. It can be a little finicky sometimes, so be careful. The number in the {} is the number of columns you want. Choose $\displaystyle{\delta = \min\left\{1, \frac{\epsilon}{10}\right\}}$. % Many things are happening in this single line: % % \ displaystyle{} gives the mathematics inside the brackets the same treatment as the double $$, but keeps it inline with the text. This is nice for "taller" symbols like fractions, limits, integrals, etc. % % \left and \right followed by bracketing symbols (, [, <, etc. make the symbol the appropriate size for the mathematics inside. Every \left must be matched with a \right, but the symbols need not match, so $\left(3, 4\right]$ will compile. % % Since {} are used so often in the code, if you want your compiled document to have {}, you need a \ in front. % % Fractions are made using \frac{} {}. The numerator expression is in the first set of braces {} and the denominator in the second. Now, suppose that $0 < |x - 4| < \delta$. Then, \begin{align*} % Another environment! align* will keep the & in your work aligned in a vertical column. This is good for lining up equals signs in long calculations. If you leave out the *, each line will be numbered. This can be handy for referring back to later. |f(x) - 15| & = |(x^2 + x - 5) - 15| \text{, by the definition of $f$,}\\ % The \\ tells the align* environment to move to the next line. Since align forces us into a mathematics environment, we need to specify when we want "regular" text inside align. & = |x^2 + x - 15|\\ & = |(x - 4)(x + 5)|\\ & = |x - 4||x + 5| \text{, by properties of absolute value,}\\ & < \delta \cdot | x + 5| \text{, by the assumption $|x - 4| < \delta$,}\\ & \leq \frac{\epsilon}{10}|x + 5| \text{, since $\delta \leq \frac{\epsilon}{10}$,}\\ & = \frac{\epsilon}{10} |(x - 4) + 9|\\ & \leq \frac{\ epsilon}{10}\left(|x - 4| + |9|\right) \text{, by properties of absolute value,}\\ & < \frac{\epsilon}{10}\left(\delta + 9\right) \text{, since $|x - 4| < \delta$,}\\ & \leq \frac{\epsilon}{10}(1 + 9) \text{, since $\delta \leq 1$,}\\ & = \left(\frac{\epsilon}{10}\right) (10) = \epsilon \end{align*} \columnbreak % This ends the column. Sometimes this will happen automatically where you want it to, sometimes not. {\color{blue} %This makes the color of everything that is inside the {} blue (in the compiled pdf on the right). \LaTeX knows a list of colors, you can try lots of things or google the list. \center{\bf Scratch Work} % \center centers all the stuff inside the {} and \bf makes the text bold face \begin{align*} % the \hspace{} here is a hack to move the stuff in this second column over a bit. Delete it and see what happens. You can use \vspace{} or \hspace{} to add vertical or horizontal space to your work. This doesn't always behave like you might expect because of LaTeX's automatic formatting. \hspace{2in}|f(x) - 15| & < \epsilon \\ |(x^2 + x - 5) - (15)| & < \epsilon \\ |x^2 + x - 20| & < \epsilon \\ |(x + 5)(x - 4)| & < \epsilon \\ |(x - 4)|\cdot|(x + 5)| & < \epsilon \\ |x - 4| & < \frac{\epsilon}{|x + 5|} \\ \end{align*} \begin{align*} \hspace{1.75in}\delta = 1 \Longrightarrow |x - 4| & < 1\\ -1 < x - 4 & < 1\\ 8 < x + 5 & < 10\\ | x + 5| & < 10 \end {align*} } \end{multicols} All together, this shows that for any $\epsilon >0$, if we choose $\displaystyle{\delta = \min\left\{1, \frac{\epsilon}{10}\right\}}$, then $0 \leq |x - 4| < \delta$ implies that $|f(x) - 15| < \epsilon$. Thus, $\displaystyle{\lim_{x \rightarrow 4} x^2 + x - 5 = 15}$. \end{proof} \newpage % This starts a new page. % I happen to not need this here. LaTeX can be smart enough on its own sometimes. When I only had one line form the next problem typed in, it appeared at the bottom of the first page, and I didn't like that. But once I added more, LaTeX moved it itself. % Fun fact! When I needed to find this place once I was done typing up my work, and add in the comments, I clicked on the place I wanted to go to in the pdf on the right, and Overleaf brought me to this place in the LaTeX code. Pretty spiffy. \item[1.5.15] Evaluate the given limits of the piecewise defined function $f$. $$f(x) = \left\{\begin{array}{lcl} x^2 - 1 & \text{ if }& x < -1 \\ x ^3 + 1 & \text{ if } & -1 \leq x \leq 1 \\ x^2 + 1 & \text{ if } & x > 1 \end{array}\right.$$ % array is another environment designed for matrices, but it works well for piecewise defined functions as well. The {lcl} I have here tells LaTeX that I want three columns with the first and last aligned to the left, and the middle in the center. Use & to indicate a new column and \\ moves to a new row/line. Like align, it is a math environment, so you don't need $s but must indicate text. \begin{enumerate} % You can nest enumerate environments for problems or solutions with multiple parts. \ item $\displaystyle{\lim_{x \rightarrow -1^-} f(x)}$ Since we are evaluating the limit as $x$ approaches -1 from the left, we need to consider the form of the function for values of $x$ that are less than -1, $x^2 - 1$. \begin{align*} \lim_{x \rightarrow -1^-} f(x) & = \lim_{x \rightarrow -1^-} x^2 - 1\\ & = (-1)^2 - 1 \text{, by Theorem 2,}\\ & = 0. \end{align*} \bigskip %This is another way to add vertical space. There are big, med, and small varieties. \item $\displaystyle{\lim_{x \rightarrow -1^+} f(x)}$ Since we are evaluating the limit as $x$ approaches -1 from the right, we need to consider the form of the function for values of $x$ that are greater than -1, $x^3 + 1$. \begin{align*} \lim_{x \rightarrow -1^+} f(x) & = \lim_{x \rightarrow -1^+} x^3 + 1\\ & = (-1)^3 + 1 \text{, by Theorem 2,}\\ & = 0. \end{align*} \bigskip \item $\displaystyle{\lim_{x \rightarrow -1} f(x)}$ Since $\displaystyle{\lim_{x \rightarrow -1^-} f(x) = \lim_{x \rightarrow -1^+} f(x) = 0}$, $\ displaystyle{\lim_{x \rightarrow -1} f(x) = 0}$ by Theorem 7. \bigskip \item $f(-1)$ When $x = -1$, $f(x) = x^3 + 1$. So, $f(-1) = (-1)^3 + 1 = 0$. \bigskip \item $\displaystyle{\lim_{x \rightarrow 1 ^-} f(x)}$ Since we are evaluating the limit as $x$ approaches 1 from the left, we need to consider the form of the function for values of $x$ that are less than (but near) 1, $x^3 + 1$. \begin {align*} \lim_{x \rightarrow 1^-} f(x) & = \lim_{x \rightarrow 1^-} x^3 + 1\\ & = (1)^3 + 1 \text{, by Theorem 2,}\\ & = 2. \end{align*} \bigskip \item $\displaystyle{\lim_{x \rightarrow 1^+} f(x)}$ Since we are evaluating the limit as $x$ approaches 1 from the right, we need to consider the form of the function for values of $x$ that are greater than (but near) 1, $x^2 + 1$. \begin{align*} \ lim_{x \rightarrow 1^-} f(x) & = \lim_{x \rightarrow 1^+} x^2 + 1\\ & = (1)^2 + 1 \text{, by Theorem 2,}\\ & = 2. \end{align*} \bigskip \item $\displaystyle{\lim_{x \rightarrow 1} f(x)}$ Since $\ displaystyle{\lim_{x \rightarrow 1^-} f(x) = \lim_{x \rightarrow 1^+} f(x) = 2}$, $\displaystyle{\lim_{x \rightarrow 1} f(x) = 2}$ by Theorem 7. \bigskip \item $f(1)$ When $x = 1$, $f(x) = x^3 + 1$. So, $f(1) = (1)^3 + 1 = 2$. \end{enumerate} \vfill % This is yet another way to add vertical space, and there is a similar horizontal option. It will fill the space, and you can use many of them to spread things out evenly on a page. To help us visualize all of these limits, a graph of $y = f(x)$ is provided below. \begin{center}\includegraphics[width = .85\textwidth]{SampleGraph}\end{center} % This is how we include pictures in a LaTeX document. In Overleaf, you need to add this file to the project. Click on "Project" with the squares in the menu at the top. Choose "Add Files..." and probably upload from your computer. (If you have installed a LaTeX editor on your own machine, then make sure the file you want to include is in the same folder as your .tex document... or look up how to call other folders.) % Once the file is part of the project, type the name of the file inside the {}. You typically do not need the file type .pdf, .jpg, etc. but nothing bad happens when you include it. % The [] can be left empty or used to enter a variety of different instructions, most commonly adjustments to the size of the image are here. You can use inches or centimeters or points, LaTeX knows many measurement systems. You can also specify relative lengths, like I have above. \textwidth is the width of the text in this document, and putting .85 in front shrinks it to 85% of the text width. \end{enumerate} \end{document}
{"url":"https://cs.overleaf.com/latex/templates/sample-latex-document-with-mathematic-scratch-work/twhrqbrjvqgj","timestamp":"2024-11-13T02:36:01Z","content_type":"text/html","content_length":"49296","record_id":"<urn:uuid:390e623f-461e-4bc6-92a2-bd4d4bbe018d>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00879.warc.gz"}
A network made of math Alexander Grothendieck (1928—2014) is viewed by many as one of the greatest mathematicians of all time. He made contributions to many different fields, but the work he is mainly celebrated for is his shaping of some of the most abstract, fundamental branches of mathematics, such as category theory and algebraic geometry. (Besides being a great mathematician he also led a fascinating life, well worth reading about.) One important contribution that Grothendieck made was the idea of “stacks”, which is the name of a certain idea that plays an important role in algebraic geometry. Stacks are abstract, complicated objects. So complicated, in fact, that even most mathematicians don’t know how they work. To know this to some usable degree requires several years of study. And if that weren’t bad enough, there is basically no good book to explain it. Grothendieck himself was famous for not being very good at explaining stuff (much of his later work took the form of long handwritten diaries that also contained autobiographical essays, and weren’t intended for publication). But even in the sixty years since his discovery of stacks, other, more businesslike mathematicians have struggled to bring the concept into the light. In 2005, a number of algebraic geometrists finally had enough of the rather ridiculous situation that one of their central ideas, stacks, was so far out of reach as to be (almost) unusable. To remedy the situation they started an ambitious new project, simply called “the stacks project”. The goal of the stacks project was straightforward: create a website that explains stacks. Now, twelve years later, the stacks project is an immense, incredibly useful resource for anyone who wants to learn about stacks, or modern algebraic geometry in general. By nature, it is still highly specialized, but for someone who knows enough about mathematics, it makes the subject much more accessible. Now it should be said that algebraic geometry has almost nothing to do with the mathematics of networks. For networks, we study very specific objects, often with very clear applications in mind. Algebraic geometry, on the other hand, tries to find some of the most fundamental truths about the shapes of mathematical functions. (For example, about the locations of the roots of a polynomial, for all possible polynomials at once). But even though we can't really use it, the stacks project provides us with an amazing example of a network: namely, a network of mathematical ideas. The website allows you to visualize beautifully how a certain mathematical idea is related to other ideas, by showing you where the idea is used, and which other ideas it uses. Mapping this for every separate idea, a network naturally arises. The website offers different settings to view the network. And it is possible to visualize the depth of the network (how many steps of logic two different ideas are removed). The chains of reasoning are remarkably long (no small-world phenomenon here, it seems.) It would be very interesting to analyze this network mathematically, for instance, by looking at typical distances and the degree sequences. As far as I know, this hasn’t been done yet. Related articles
{"url":"https://www.networkpages.nl/a-network-made-of-math/","timestamp":"2024-11-09T04:10:24Z","content_type":"text/html","content_length":"80790","record_id":"<urn:uuid:15b68e27-1b7d-4432-89da-35bfe649e4a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00602.warc.gz"}
Lesson 10 Piecewise Linear Functions Let’s explore functions built out of linear pieces. Problem 1 The graph shows the distance of a car from home as a function of time. Describe what a person watching the car may be seeing. Problem 2 The equation and the graph represent two functions. Use the equation \(y=4\) and the graph to answer the questions. 1. When \(x\) is 4, is the output of the equation or the graph greater? 2. What value for \(x\) produces the same output in both the graph and the equation? (From Unit 6, Lesson 7.) Problem 3 This graph shows a trip on a bike trail. The trail has markers every 0.5 km showing the distance from the beginning of the trail. 1. When was the bike rider going the fastest? 2. When was the bike rider going the slowest? 3. During what times was the rider going away from the beginning of the trail? 4. During what times was the rider going back towards the beginning of the trail? 5. During what times did the rider stop? Problem 4 The expression \(\text-25t+1250\) represents the volume of liquid of a container after \(t\) seconds. The expression \(50t+250\) represents the volume of liquid of another container after \(t\) seconds. What does the equation \(\text-25t+1250=50t+250\) mean in this situation? (From Unit 4, Lesson 17.)
{"url":"https://im-beta.kendallhunt.com/MS_ACC/students/2/6/10/practice.html","timestamp":"2024-11-04T17:05:57Z","content_type":"text/html","content_length":"71916","record_id":"<urn:uuid:b764ce93-f8c6-45ec-b317-2a836a9c82c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00249.warc.gz"}
Nestedmodels Limitations This vignette outlines the limitations of nested modelling, and introduces some alternatives. It also gives an idea as to the properties of data that are most suited to nested modelling. A nested model is already an unlikely candidate for most modelling problems. Yet even within their limited use cases, nested models have further limitations. The first of these is a more theoretical one. Nested models fit a model within each nested data frame of the data that they are given. This raises the issue that these models do not communicate with each other; each model exists only in the isolation of its corresponding nested data frame. These models therefore find it harder to recognise patterns that exist irrespective of, or outside of the nests. This is not necessarily an issue, provided that you remember that the model is identifying patterns within each nest. However, this negatively affects the performance of the model. In a more extreme example, if you were to fit a nested model to some data containing 200 nested data frames, each with 10 rows; each model would only be fit on 10 observations, likely resulting in wildly inaccurate predictions, despite the size of the overall data being fairly adequate. It is often useful to ponder whether a nested model is likely to be as useful as another approach. The second problem is more related to physical performance. Even when fitting a very simple model to a fairly small dataset, the fitting process takes more time than we might expect to complete. model <- linear_reg() %>% set_engine("lm") %>% fit(model, z ~ ., tidyr::nest(example_nested_data, data = -id)) #> user system elapsed #> 0.049 0.000 0.049 This is because a model is fit to 20 nests. More computationally expensive models take more time to complete, and the time taken increases in direct proportion to the number of nested data frames. Furthermore, storing a nested model means storing a model for every nest. For more complex model objects, this can result in a monstrously sized fit object. This makes the nested model approach non-scalable, since it would take an unreasonable amount of computational power and time to fit a complex model to large datasets with thousands of nested data These two limitations are important to consider, but note that they matter most for data with a large number of nests and/or not very much data in each nest. What is the alternative? For some datasets, these issues will be too problematic to ignore. In most cases, the alternative approach is obvious: just use a non-nested model. The recipes package has many methods for dealing with categorical data, and these models are likely to give you more promising results. However, for some models, most notably forecasting algorithms, nestedmodels can seem like the only solution for forecasting panel data. In this specific case, a global forecasting method would be recommended (e.g. Prophet or a gradient boosting model), since these models can deal with categorical data. In general, it is better to find a model that will suit all of your needs, rather than sticking with the one you are the most comfortable with. In this vignette, we discussed the conditions and reasons why nested modelling is not the best approach for every situation, and how to respond if this is the case.
{"url":"http://rsync.jp.gentoo.org/pub/CRAN/web/packages/nestedmodels/vignettes/nestedmodels-limitations.html","timestamp":"2024-11-14T01:12:08Z","content_type":"text/html","content_length":"14000","record_id":"<urn:uuid:fe6d910f-9b94-4f84-a8b1-0e9cc5038df3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00893.warc.gz"}
Financial Systems: A New Kind of Science | Online by Stephen Wolfram [Page 432] simple models do not necessarily have simple behavior. And indeed the picture below shows an example of the behavior that can occur. In real markets, it is usually impossible to see in detail what each entity is doing. Indeed, often all that one knows is the sequence of prices at which trades are executed. And in a simple cellular automaton the rough analog of this is the running difference of the total numbers of black and white cells obtained on successive steps. And as soon as the underlying rule for the cellular automaton is such that information will eventually propagate from one entity to all others—in effect a minimal version of an efficient market hypothesis—it is essentially inevitable that running totals of numbers of cells will exhibit significant randomness. One can always make the underlying system more complicated—say by having a network of cells, or by allowing different cells to have different and perhaps changing rules. But although this will make it more difficult to recognize definite rules even if one looks at the complete behavior of every element in the system, it does not affect the basic point that there is randomness that can intrinsically be generated by the evolution of the system. An example of a very simple idealized model of a market. Each cell corresponds to an entity that either buys or sells on each step. The behavior of a given cell is determined by looking at the behavior of its two neighbors on the step before according to the rule shown. The bottom-right plot below gives as a rough analog of a market price the running difference of the total numbers of black and white cells at successive steps. And although there are patches of predictability that can be seen in the complete behavior of the system the bottom-right plot looks in many respects
{"url":"https://www.wolframscience.com/nks/p432--financial-systems/","timestamp":"2024-11-09T14:12:18Z","content_type":"text/html","content_length":"95149","record_id":"<urn:uuid:b4e61710-7b7a-4b6f-b8b0-1479fa64d9e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00784.warc.gz"}
The interior of charged black holes and the problem of uniqueness in general relativity We consider a spherically symmetric, double characteristic initial value problem for the (real) Einstein-Maxwell-scalar field equations. On the initial outgoing characteristic, the data is assumed to satisfy the Price law decay widely believed to hold on an event horizon arising from the collapse of an asymptotically flat Cauchy surface. We establish that the heuristic mass inflation scenario put forth by Israel and Poisson is mathematically correct in the context of this initial value problem. In particular, the maximal future development has a future boundary over which the space-time is extendible as a C ^0 metric but along which the Hawking mass blows up identically; thus, the space-time is inextendible as a C ^1 metric. In view of recent results of the author in collaboration with I. Rodnianski, which rigorously establish the validity of Price's law as an upper bound for the decay of scalar field hair, the C ^0 extendibility result applies to the collapse of complete, asymptotically flat, spacelike initial data where the scalar field is compactly supported. This shows that under Christodoulou's C ^0 formulation, the strong cosmic censorship conjecture is false for this system. All Science Journal Classification (ASJC) codes • General Mathematics • Applied Mathematics Dive into the research topics of 'The interior of charged black holes and the problem of uniqueness in general relativity'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/the-interior-of-charged-black-holes-and-the-problem-of-uniqueness","timestamp":"2024-11-13T22:03:59Z","content_type":"text/html","content_length":"50207","record_id":"<urn:uuid:1f73248c-416a-4d0e-8e2e-675cda5fd3f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00413.warc.gz"}
Category:Pages with math errors - Wikibooks, open books for an open world Administrators: Please do not delete this category even if it is empty! Categories with this message may be empty occasionally or even most of the time. The following related category may be of interest. Pages in category "Pages with math errors" More recent additions More recent modifications The following 37 pages are in this category, out of 37 total.
{"url":"https://en.wikibooks.org/wiki/Category:Pages_with_math_errors","timestamp":"2024-11-04T23:44:36Z","content_type":"text/html","content_length":"66092","record_id":"<urn:uuid:82dc8732-ea4f-4ae9-86d8-64ef91b1aaf3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00373.warc.gz"}
BFE.dev #8. can you shuffle() an array? BFE.dev is like a LeetCode for Front End developers. I’m using it to practice my skills. This article is about the coding problem BFE.dev#8. can you shuffle() an array? The goal is to shuffle an array, seems pretty easy. Since shuffle is choose one of the all possible permutations, so we can choose the item one by one randomly from rest possible positions. function shuffle(arr) { for (let i = 0; i < arr.length; i++) { const j = i + Math.floor(Math.random() * (arr.length - i)) ;[arr[i], arr[j]] = [arr[j], arr[i]] Code Not working function shuffle(arr) { for (let i = 0; i < arr.length; i++) { const j = Math.floor(Math.random() * arr.length) ;[arr[i], arr[j]] = [arr[j], arr[i]] Above code looks working but actually not. It loops all the positions and randomly swaps with another. Suppose we have an array of [1,2,3,4]. Let's look at number 1. The first step is to swap 1 with all 4 positions, chance is high that it is moved to the positions other than itself and later it is traversed and swapped again. Now let's look at number 4. It is last number to be traversed, and it might be also swapped before its turn and never be traversed again. So 1 and 4 have different chances of be swapped, 4 is obviously lower, so the result array is not randomly shuffled. Here is my video explaining: https://www.youtube.com/watch?v=FpKnR7RQaHM Hope it helps, you can have a try at here Top comments (0) For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://dev.to/jser_zanp/bfe-dev-8-can-you-shuffle-an-array-5hlo","timestamp":"2024-11-10T21:05:49Z","content_type":"text/html","content_length":"66420","record_id":"<urn:uuid:b586ff5f-0ad0-4b2f-a843-b49a84b86412>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00366.warc.gz"}
Curve Fitting problem So I am trying create a script that create a fit line to a set of data. The trend in the data is most certainly the sum of a sin wave plus a series of exponential functions (two exponentials for a quite good approximation). I have been working on a script to get matlab to calculate the exact function. The script does a fine job at mathcing sin wave however it will not include the exponential part of the function, no matter what guess I put in for the exponential constants matlab returns the same values. Any suggestions on what is wrong or perhaps a better way to make this better? Below I have included two scripts I have written to try and accomplish this, neither worked, also I have tried simply using the cftool and have run into the same problem. Script 1 plot(x, y, 'ro') title('BtmPorePressure vs. Time'); xlabel('Time (s)'); ylabel('Pressure (MPa)'); C0=[a_guess b_guess c_guess d_guess e_guess f_guess g_guess h_guess]; func = @(B,x)(B(1)*sin(B(2)*x+B(3))+B(4)*exp(-(B(5)^2)*x)-B(6)*exp(-((B(7)^2)*x))+B(8)); C = nlinfit(x,y,func,C0); a_calc = C(1); b_calc = C(2); c_calc = C(3); d_calc = C(4); e_calc = C(5); f_calc = C(6); g_calc = C(7); h_calc = C(8); disp('Compare the solutions to the actual values') fprintf('''a'' actual =%g, a_guess = %.4f\n', a_calc,a_guess); fprintf('''b'' actual =%g, b_guess = %.4f\n', b_calc,b_guess); fprintf('''c'' actual =%g, c_guess = %.4f\n', c_calc,c_guess); fprintf('''d'' actual =%g, d_guess = %.4f\n', d_calc,d_guess); fprintf('''e'' actual =%g, e_guess = %.4f\n', e_calc,e_guess); fprintf('''f'' actual =%g, f_guess = %.4f\n', f_calc,f_guess); fprintf('''g'' actual =%g, g_guess = %.4f\n', g_calc,g_guess); fprintf('''h'' actual =%g, h_guess = %.4f\n', h_calc,h_guess); y_new = func(C,x); hold on plot (x,y_new) legend('Raw Data', 'Fitted Curve') Attempt 2 plot(x, y, 'ro') title('BtmPorePressure vs. Time'); xlabel('Time (s)'); ylabel('Pressure (MPa)'); %function F = myfun(a,data) F= @(a,x) (a(1)*sin(a(2)*x+a(3))+a(4)*exp(-(a(5)^2)*x)-a(6)*exp(-((a(7)^2)*x))+a(8)); data = [x;y]; a0 = [.2, .025, .785, 5, 10, 5, 10, 12]; C = lsqcurvefit(F,a0,x,y); [a,resnorm] = lsqcurvefit(F,a0,x,y) y_new = F(C,x); hold on plot (x,y_new) legend('Raw Data', 'Fitted Curve') 3 Comments Dan, I assume you know most of us don't have IGORCurveFit. I don't even have lsqcurvefit because it's in the optimization toolbox. See Also Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting!
{"url":"https://in.mathworks.com/matlabcentral/answers/33931-curve-fitting-problem?s_tid=prof_contriblnk","timestamp":"2024-11-11T01:48:50Z","content_type":"text/html","content_length":"124789","record_id":"<urn:uuid:940a9527-4287-4b83-8134-6a70d116ce4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00698.warc.gz"}
How to write equation in Google Docs ?- A Complete Guide Table of Contents GOOGLE DOCS is a great word processor offered by Google. We know that GOOGLE DOCS is a word processor and we use a word processor to create documents. In scientific documents, we need to write the equations. The equations consists of many Greek Letters , symbols etc. which are not present on the keyboard. These symbols can’t be created with the drawing too, and if created will be a time consuming task. Thanks to GOOGLE DOCS to provide a dedicated Equation symbols toolbar. In this article, we’d learn to type equations in the Google Docs. WHY DO WE NEED TO WRITE AN EQUATION IN GOOGLE DOCS ? Google Docs is a word processor and a word processor is used to create the documents. If we are creating any MATHEMATICAL or SCIENTIFIC page or document, we’ll be inserting the equation at any point of time. For creating these equations, we need the option for inserting them easily. The need is to record the equation to show the numbers or relationship in the mathematics of science. We are lucky to have a dedicated option for creating equations in Google Docs. The location of the option is under the INSERT MENU > EQUATION As we click on the EQUATION MENU item, a small toolbar appears which contains the ready to use EQUATION SYMBOLS. A small box is also created in the document where the selected symbol will be inserted. 2. A SMALL BOX IN WHICH THE SELECTED EQUATION SYMBOLS WILL BE SHOWN. So, we learnt that we can select the option of inserting the equation from the menu and the toolbar will appear. Let us have a look at the symbols available. Equation editor toolbar provides the symbols against the following categories. GROUP 1 – GREEK LETTERS Let us now try to create a simple equation first. We know that velocity of light = frequency of wave x wavelength of wave if we use the standard scientific notation, we’d be needing the symbols like c, nu and lambda. so, let us use equation symbols and try to type this. • Go to INSERT MENU and choose EQUATION. [ If we have opened the TOOLBAR using VIEW MENU, we need to click START EQUATION ] • As we click the option, A small drawing box will be created and become active. • The first letter is “c=” which we can type with the use of keyboard. TYPE c= USING THE KEYBOARD After entering the text, we want nu [GREEK SYMBOL] to be entered. Simply click the first group of symbols and choose nu from the table. as we click the nu symbol, it’ll be inserted immediately. The next letter is a dot [ which means and in binary mathematics]. Simply type dot [.] and it’ll be inserted. The next symbol to be inserted is a lambda. Again click the first group of symbols and choose lambda. After choosing the lambda symbol, our equation is complete. The equation will look something like shown below. We can see that an INTEGRATION EXPRESSION is hard to type in the page. The expression used in the EXAMPLE STATEMENT is a picture. Let us insert this INTEGRATION EXPRESSION using the EQUATION OPTION in GOOGLE DOCS. Follow the steps to insert an integration expression in Google Docs. • Open the document where you want to insert the expression. • Activate the EQUATION TOOLBAR by going to VIEW MENU >SHOW EQUATION TOOLBAR or simply going to the INSERT MENU and choosing EQUATION. • Search the INTEGRATION SYMBOL in different groups of symbols in the equation toolbar. • You’ll find it in the fourth group. • As you click the symbol, it’ll be inserted and the lower limit i.e. a place will start blinking and will be ready for the editing. • Enter the value 12 there. [ Or any other value as you want ] • Click Enter and the upper limit b will start blinking to let you enter the value. • Enter value 13 as the upper limit and press Enter. • The cursor will now move on to the variable place. • Enter xdx using the keyboard and we are done. The expression has been created as we intended to. EQUATION symbols provide many options which are easy to access. Some of the symbols are GREEK SYMBOLS LIKE ALPHA , BETA , GAMMA , DELTA , EPSILON, NU, MU, ZETA , THETA, LAMBDA ETC. RELATIONS LIKE SUBSET, GREATER THAN , LESS THAN AND MORE. ARROWS LIKE LEFT ARROW, RIGHT ARROW, IMPLIES SIGN AND MORE. The list discusses the examples and is not an exhaustive list. You can repeat the procedure to insert any symbol.
{"url":"https://gyankosh.net/googledocs/how-to-write-an-equation-in-google-docs/","timestamp":"2024-11-03T19:49:15Z","content_type":"text/html","content_length":"168043","record_id":"<urn:uuid:b18dc7a4-5e45-40b2-be67-40333be1e27e>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00731.warc.gz"}
The Periodic Standing-Wave Approximation: Overview and Three Dimensional Scalar Models The Periodic Standing-Wave Approximation: Overview and Three Dimensional Scalar Models The periodic standing-wave method for binary inspiral computes the exact numerical solution for periodic binary motion with standing gravitational waves, and uses it as an approximation to slow binary inspiral with outgoing waves. Important features of this method presented here are: (i) the mathematical nature of the ``mixed'' partial differential... Show more
{"url":"https://synthical.com/article/The-Periodic-Standing-Wave-Approximation%3A-Overview-and-Three-Dimensional-Scalar-Models-548a3d58-ffce-11ed-90ce-72eb57fa10b3?","timestamp":"2024-11-07T00:03:58Z","content_type":"text/html","content_length":"67225","record_id":"<urn:uuid:43343c3a-0482-4c84-8d68-8abe27e2ee68>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00079.warc.gz"}
- (Math) Subtracts the second of the given values from the first one. is a natural or integer or real number. must be of the same type as Return values as a natural or integer or real number, according to argument types. In case of overflow for natural numbers, the result modulo 2^N is returned, where N is the bit count of the type. Negative integers are stored as two's complement, and behave correspondingly in case of overflow. 2 3 - print LF print 2.0 3.0 - print -1 -1.000000
{"url":"http://gravilink.org/builtins/math/sub/index.html","timestamp":"2024-11-02T09:31:37Z","content_type":"text/html","content_length":"3579","record_id":"<urn:uuid:6fa64a27-42a8-417d-a3c3-3f60f7cbbfb2>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00513.warc.gz"}
Exam-Style Question on Circular functions Question id: 461. This question is similar to one that appeared on an A-Level paper (specimen) for 2017. The use of a calculator is allowed. The height above the ground, H metres, of a passenger on a Ferris wheel t minutes after the wheel starts turning, is modelled by the following equation: $$H = k – 8\cos (60t)° + 5\sin (60t)°$$ where k is a constant. (a) Express \(H\) in the form \(H = k - R \cos(60t + a)° \) where \(R\) and \(a\) are constants to be found (\( 0° \lt a \lt 90° \)). (b) Given that the initial height of the passenger above the ground is 2 metres, find a complete equation for the model. (c) Hence find the maximum height of the passenger above the ground. (d) Find the time taken for the passenger to reach the maximum height on the fifth cycle. (Solutions based entirely on graphical or numerical methods are not acceptable.) (e) It is decided that, to increase profits, the speed of the wheel is to be increased. How would you adapt the equation of the model to reflect this increase in speed? The worked solutions to these exam-style questions are only available to those who have a Transum Subscription. Subscribers can drag down the panel to reveal the solution line by line. This is a very helpful strategy for the student who does not know how to do the question but given a clue, a peep at the beginnings of a method, they may be able to make progress themselves. This could be a great resource for a teacher using a projector or for a parent helping their child work through the solution to this question. The worked solutions also contain screen shots (where needed) of the step by step calculator procedures. A subscription also opens up the answers to all of the other online exercises, puzzles and lesson starters on Transum Mathematics and provides an ad-free browsing experience. Teacher Subscription Parent Subscription Drag this panel down to reveal the solution The exam-style questions appearing on this site are based on those set in previous examinations (or sample assessment papers for future examinations) by the major examination boards. The wording, diagrams and figures used in these questions have been changed from the originals so that students can have fresh, relevant problem solving practice even if they have previously worked through the related exam paper. The solutions to the questions on this website are only available to those who have a Transum Subscription. Exam-Style Questions Main Page To search the entire Transum website use the search box in the grey area below. Do you have any comments about these exam-style questions? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments. ©1997 - 2024 Transum Mathematics :: For more exam-style questions and worked solutions go to Transum.org/Maths/Exam/
{"url":"https://www.transum.org/Maths/Exam/Question.asp?Q=461","timestamp":"2024-11-13T06:37:57Z","content_type":"text/html","content_length":"20296","record_id":"<urn:uuid:46067df2-c413-4c73-ae09-d4ff7e4e1815>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00354.warc.gz"}
commutative algebra nLab commutative algebra Group theory Ring theory Module theory A commutative $k$-algebra (with $k$ a field or at least a commutative ring) is an associative unital algebra over $k$ such that the multiplicative operation is commutative. Equivalently, it is a commutative ring $R$ equipped with a ring homomorphism $k \to R$. There is a generalization of commutativity when applied to finitary monads in $Set$, that is generalized rings, as studied in Durov's thesis. Commutative algebra is the subject studying commutative algebras. It is closely related and it is the main algebraic foundation of algebraic geometry. Some of the well-known classical theorems of commutative algebra are the Hilbert basis theorem and Nullstellensatz and Krull's theorem?, as well as many results pertaining to syzygies, resultants and discriminants. Discussion of commutative algebra with constructive methods: • Henri Lombardi, Claude Quitté, Commutative algebra: Constructive methods. Finite projective modules (arXiv:1605.04832) Last revised on September 16, 2024 at 17:30:04. See the history of this page for a list of all contributions to it.
{"url":"https://ncatlab.org/nlab/show/commutative%20algebra","timestamp":"2024-11-07T02:43:06Z","content_type":"application/xhtml+xml","content_length":"26790","record_id":"<urn:uuid:2492c26d-8a07-4043-aa31-fe8b1348bc14>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00113.warc.gz"}
Response to A domino problem in disguise Subject: Re: Sequencing integers Date: Sun, 18 Jan 1998 13:36:59 -0500 Alex Bogomolny Dear Tom: Come to think of it. Drop the nine multiples of ten: 10, 20, 30, ... The integers that remain may be looked at as domino pieces. It's a good but not difficult problem to show that however you place them back-to-back according to the domino rules, you'll be always able to place all of them. So, you may think of how to place them to get a smallest possible number. Note that the length of the string does not depend on the order of pieces. The moves are forced. For example, the first got to be 11, next should come 12, then, 21, 13, 31, 14, 41, ... You always have to selected the smallest suitable number. Two steps remain: 1. We have to place the tens: 10, 20 , 30, ... 2. We have to coalesce repeated digits. Give a little thought to how to proceed. I'd be grateful to hear of your solution. Best regards, Alexander Bogomolny |Reply| |Up| |Exchange index| |Contents| |Store| Copyright © 1996-2018 Alexander Bogomolny
{"url":"https://www.cut-the-knot.org/exchange/intPairs2.shtml","timestamp":"2024-11-03T20:11:51Z","content_type":"text/html","content_length":"12405","record_id":"<urn:uuid:5cfa91b5-c07e-442d-8299-c548da9cc30c>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00671.warc.gz"}
Circle Tool (see also: Shape Tools Overview) The Circle tool has three different ways to create a circle dependent on your needs: • Center Radius • Two Point • Three Point Center Radius Center Radius will determine the width of the circle from the center of the circle. You can change these parameters in Design Central or the handles on the shape. The X and Y positions are based on the center of the circle. Two Point This allows you to draw a circle between two points. The diameter of the circle is determined between these two points. Design Central will let you enter the exact XY Coordinate for each of these two points. Three Point The Three Point Circle operates much the same way as the Two Point, drawing the perimeter of the circle so that it meets all three points wherever they are.
{"url":"https://support.thinksai.com/hc/en-us/articles/4403611241108-Circle-Tool","timestamp":"2024-11-15T03:12:40Z","content_type":"text/html","content_length":"34955","record_id":"<urn:uuid:26111d43-b331-49ee-bd85-2811052c062f>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00268.warc.gz"}
What is Fannings equation? According to the Fanning equation, the frictional pressure loss in CT can be expressed as follows:(6.16)ΔPf=2fρv2Ldwhere ρ is the fluid density in kg/m3, v is the average fluid velocity in m/s, d is the inner diameter of CT in m, L is the length of pipe in m, and ΔP is the frictional pressure loss in MPa. How do you calculate pressure loss in water pipes? The formula used is: ΔP = 0.0668 μv ÷ D², in which: ΔP is pressure loss per 100 feet of pipe; 1. μ is viscosity in centipoises (not SSU); 2. v is flow velocity in feet per second; 3. D is inside diameter of pipe, in inches. How do you calculate velocity pressure in a pipe? Pressure To Velocity Calculator 1. Formula. V = Sqrt [ (2*q/p) ] 2. Dynamic Pressure (pascals) 3. Fluid Density (kg/m^3) What is pressure in a pipe? Water pressure is described as the force or strength that is used to push water through pipes or other pathways and is created by altitude or height. What is equivalent length of pipe? The equivalent length method (The Le/D method) allows the user to describe the pressure loss through an elbow or a fitting as a length of straight pipe. This method is based on the observation that the major losses are also proportional to the velocity head (v2/2g). What is the Darcy Weisbach formula for head loss due to friction? In fluid dynamics, the Darcy–Weisbach equation is an empirical equation that relates the head loss, or pressure loss, due to friction along a given length of pipe to the average velocity of the fluid flow for an incompressible fluid. The equation is named after Henry Darcy and Julius Weisbach. What is the pressure loss in pipe? Pressure loss is the result of frictional forces exerted on a fluid within a piping system, resisting its flow. As pressure loss increases, the energy required by system pumps to compensate also increases, leading to greater operating costs. How much pressure is lost in a pipe? Read down the column to the row for the flow rate (GPM) in the pipe section. You will find a PSI loss value (given as PSI/100). Multiply the PSI loss value shown by the total length of the pipe section, then divide the product by 100. (PSI loss on these tables is given in PSI per 100 feet of pipe.) What is velocity pressure in pipe? Velocity pressure is that pressure required to accelerate air from zero velocity to some velocity (V) and is proportional to the kinetic energy of the air stream. For example, when a fan is moving air through a duct system, two types of pressure are encountered: velocity pressure and static pressure. How do you calculate pressure drop in a pipe bend? The pressure drop of bends in series is lower or equal than the pressure drop calculated by adding the pressure loss of every single bend. TECCINESS assumes that the inner diameter of the pipe equals the inner diameter of the bend….Pressure loss in pipe bends. Δ p = K · ρ/2 · v2 d : Inner diameter of the bend How is Darcy Weisbach calculated? To find the pressure drop in a pipe using the Darcy Weisbach formula: Multiply the friction factor by pipe length and divide by pipe diameter. Multiply this product with the square of velocity. Divide the answer by 2. How does pressure change with pipe diameter? Because if the diameter of a pipe decreased, then the pressure in the pipeline will increase. As per Bernoulli’s theorem, pressure can be reduced when the area of conveyance is reduced. In the narrower pipe, the velocity can be high, and pressure can be higher. How velocity and pressure are related in pipe? Pressure and velocity are inversely proportional to each other. If pressure increases, the velocity decreases to keep the algebraic sum of potential energy, kinetic energy, and pressure constant. How do you calculate velocity and pressure? Velocity pressure is calculated by taking the difference between the total pressure and static pressure. To measure the velocity pressure, connect a Pitot or averaging tube to a velocity sensor and place the tube into the air flow of the duct. How do you find the pressure of A cylinder? Divide the force by the total surface area to get the system pressure. 1. Pressure = Force ÷ Surface Area. 2. Pressure = 100 ÷ 7.948. 3. Pressure = 12.582. (Rounds to 13 psi) How do you calculate pressure in A container? You can calculate the hydrostatic pressure of the liquid in a tank as the force per area for the area of the bottom of the tank as given by pressure = force/area units. In this case, the force would be the weight the liquid exerts on the bottom of the tank due to gravity. What is the correct equation for pressure? Pressure and force are related, and so you can calculate one if you know the other by using the physics equation, P = F/A. Because pressure is force divided by area, its meter-kilogram-second (MKS) units are newtons per square meter, or N/m2.
{"url":"https://www.sheppard-arts.com/writing-prompts/what-is-fannings-equation/","timestamp":"2024-11-02T20:47:41Z","content_type":"text/html","content_length":"75509","record_id":"<urn:uuid:24d96df9-d425-4f7c-97a6-2498618840bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00582.warc.gz"}
openQCM the Temperature Sensor Using a Thermistor with Arduino - Quartz Crystal Microbalance with Dissipation Monitoring: Open Source QCM-D openQCM has a temperature sensor with high accuracy based on thermistor and Arduino. The Quartz Crystal Microbalance accuracy depends on temperature. The openQCM temperature sensor is physically placed into the Arduino Micro shield, so it actually measures the openQCM device temperature. The ambient temperature is a key parameter in the development of a QCM because the quartz resonator frequency is partially affected by the variations in temperature. At first we choose an RTD Resistance to Temperature Detector for measuring the temperature. But the test results were not good at all ! We were able to measure the temperature with a poor resolution of about 2°C. Although I had processed the signal I could not do the magic, definitely ! Finally we found a very easy solution, by replacing the RTD with a Thermistor temperature sensor without changing the openQCM shield circuit. The thermistor temperature sensor has a resolution of about 0.2 °C Thermistor Temperature Sensor The thermistor is a special kind of resistor whose resistance varies strongly with the temperature. It is common used as a temperature sensor. Because the temperature measurement is related to the thermistor resistance, we have to measure the resistance. Arduino board does not have a built-in resistance sensor. We have to convert the thermistor resistance in a voltage and measure the voltage via the Arduino analog pin. Finally we calculate the temperature using the Steinhart–Hart equation which described the thermistor resistance – temperature curve. The Voltage Divider Circuit The voltage divider circuit for measuring temperature using a thermistor and Arduino To the purpose of measuring the voltage we connect in series the thermistor to another fixed resistor R in a voltage divider circuit. The variable thermistor resistance is labeled as R0. We select a thermistor with a resistance of 10 KΩ at 25°C and the fixed resistance of 10 kΩ. The input voltage Vcc of the voltage divider circuit is connected to the Arduino Micro 3V, which provides a 3.3 volt supply generated by the on board regulator with a maximum current draw of 50 mA. The 3V pin is connected to the AREF pin because we need to change the upper reference of the analog input range. Say the output voltage V0, the power supply Vcc, the variable thermistor resistance R0 and the fixed resistance R, the output voltage is given by: V_0= (V_{cc} \cdot R_0) / (R_0 + R) The output voltage is connected to the Arduino analog input pin A1. Arduino Micro provides a 10-bit ADC Analog to Digital Converter, which means that the output voltage is converted in a number between 0 and 1023. Say A1 the ADC value measured by Arduino Micro then the output voltage is given by: V_0 = A1 \cdot V_{cc} / 1023 By combining the previous equations we have: V_{cc} \cdot R_0 / (R_0 + R) = A1 \cdot V_{cc}/1023 \Rightarrow R_0 / (R_0 + R) = A1 / 1023 That’ s really interesting ! The thermistor resistance R0 is independent by the supply voltage Vcc What we need for temperature measurement is the thermistor variable resistance R0. Using the previous equation and some math we have: R_0 = A1 \cdot R / (1023 - A1) The resistance measurement depends on the ADC Arduino A1, the fixed resistor R in the voltage divider, and the ADC resolution 1023. Tip & Tricks How To Improve the Temperature Measure We used some tricks to improve the temperature measurement with a thermistor. Supply Voltage I have shown before that the thermistor resistance measurement does not depends on the supply voltage Vcc. So, why do we connect Vcc to the Arduino 3V pin rather than 5V pin ? The 5V pin supply comes from your computer USB and it is used to power Arduino and a lot of other stuff on the board. So it is noisy definitely ! Instead the 3V pin is much more stable because it goes through a secondary regulator stage. In addition, as I will explain at the end of the post, the temperature accuracy depends on the supply voltage. The lower is the supply voltage the better is the temperature accuracy. ADC The Arduino board has a 10-bit ADC resolution. As far as I know, the easiest way to improve the ADC resolution is to acquire multiple samples and take the average. I suggest to average over 10 samples to smooth the ADC data. Thermistor Tolerance Every passive electronic component has a nominal value and a tolerance, which is the relative error of the nominal value. I suggest to choose a 10 KΩ thermistor with a tolerance of 1%, which means that the resistance has an error of 100 ohm @25° C . At 25° C a difference of 450 ohm corresponds to about 1°C, so a tolerance of 1% corresponds to temperature error of about 0.2 °C, which is good enough for this application! Converting the Resistance to Temperature We are developing a temperature sensor, so the last step is to convert the resistance in a temperature measurement. The thermistor has a quite complicated relation between resistance and temperature, typically you can use the resistance to temperature conversion tables. Instead I suggest to use the Steinhart-Hart equation (aka B or β parameter equation) which is a good approximation of the resistance to temperature relation: 1/T = 1/T_0 + 1/B \cdot ln (R/R_0) where R is the thermistor resistance at the generic tempearature T, R0 is the resistance at T0 = 25° C and B is a parameter depending on the thermistor. The B value is typically between 3000 – 4000. The equation depends on three parameters (R0, TO and B) which you could find in any thermistor datasheet. Although this is an approximation, it is good enough in the temperature range of application and it is easier to implement respect to a lookup table. The Arduino Code The temperature measurement is implemented in the Arduino code via the getTemperature function. The code is based on that wrote by Lady Ada on the Adafruit website. Here my piece of sketch for the temperature using a thermistor with Arduino: [code language=”cpp”] // Thermistor pin #define THERMISTORPIN A1 // resistance at 25 degrees C #define THERMISTORNOMINAL 10000 // temp. for nominal resistance (almost always 25 C) #define TEMPERATURENOMINAL 25 // how many samples to take and average #define NUMSAMPLES 10 // The beta coefficient of the thermistor (usually 3000-4000) #define BCOEFFICIENT 3950 // the value of the ‘other’ resistor #define SERIESRESISTOR 10000 // measure temperature int getTemperature(void){ int i; float average; int samples[NUMSAMPLES]; float thermistorResistance; int Temperature; // acquire N samples for (i=0; i&amp;lt; NUMSAMPLES; i++) { samples[i] = analogRead(THERMISTORPIN); // average all the samples out average = 0; for (i=0; i&amp;lt; NUMSAMPLES; i++) { average += samples[i]; average /= NUMSAMPLES; // convert the value to resistance thermistorResistance = average * SERIESRESISTOR / (1023 – average); float steinhart; steinhart = thermistorResistance / THERMISTORNOMINAL; // (R/Ro) steinhart = log(steinhart); // ln(R/Ro) steinhart /= BCOEFFICIENT; // 1/B * ln(R/Ro) steinhart += 1.0 / (TEMPERATURENOMINAL + 273.15); // + (1/To) steinhart = 1.0 / steinhart; // Invert steinhart -= 273.15; // convert to C // decimal value Temperature = steinhart * 10; Thermistor vs RTD Temperature sensor I have shown before that using a thermistor of 10 KΩ with a tolerance of 1% you can measure the temperature with a resolution of about 0.25 °C. Why the thermistor is better than an RTD (Resistance to Temperature Sensor) as temperature sensor for this specific application ? Both the sensors measures the temperature by measuring their variation in resistance. The voltage divider circuit is used for both sensors. In the first electronic design of openQCM we choose the RTD PT100 sensor manufactor Jumo part number PCS_1.1503.1 with nominal value of 100 Ω and a tolerance of 0.12 %. The RTD PT100 sensor is strongly affected by overheating. The recommended measuring current is i_min = 1.0 mA and the maximum current is i_max = 7.0 mA. We need to choose the series fixed resistor in order to fulfill this requirement. But the lower is the current the lower is the resolution of RTD resistor measurement. In order to strike a balance between these requirements, I choose the series resistor R = 400 ohm which determines a current of about 6.5 mA in the voltage divider circuit. Say Vcc the supply voltage, R the fixed series resistance and R0 the variable RTD resistance, one has at 25° C a current: i = V_{cc}/(R + R_0) = 3.3 V / (400 + 100) \Omega = 6.5 mA The standard platinum RTD resistance to temperature conversion table is available for example at this link . Consider the RTD resistance values at 0°C and 50°C : R_0(50^{\circ} C ) = 119.4 \Omega \qquad R_0(0 ^{\circ} C ) = 100</p> <p>\Omega The 50° C temperature variation corresponds to a voltage variation dV given by; dV = V_0(50^{\circ} C) - V_0(0^{\circ} C) = = V_{cc} \cdot R_0(50^{\circ} C) / (R + R_0(50^{\circ} C) ) - V_{cc} \cdot R_0(0^{\circ} C) / (R + R_0(0^{\circ} C)) Say Vcc = 3.3 V and R = 400 Ω we would measure a voltage variation: dV = 0.758 V - 0.660V = 0.1 V By using the Arduino 10-bit ADC the voltage variation dV corresponds to 1023 * 0.1 / 3.3 = 31 divisions. Finally, the accuracy dT of the RTD temperature sensor is given by: dT = T / \# divisions = 50^{\circ} C/31 div = 1.6 ^{\circ} C The RTD temperature sensor has an accuracy of 1.6 °C which is by far too low for this kind of application ! Do the same for the thermistor ! By using the standard 10 kΩ thermistor resistance to temperature table, one has: R_0(50^{\circ} C ) = 10.97 k\Omega \qquad R_0(0 ^{\circ} C ) = 29.49k\Omega If Vcc = 3.3 V and the fixed series resistor is R = 10 kΩ, the 50° C temperature variation corresponds to a voltage variation of: dV = 0.74 V The number of ADC divisions is given by: \# divisions = 1023 \cdot 0.74 V/ 3.3 V = 229 The temperature accuracy is: dT = 50 ^{\circ}C / 229 div = 0.2 ^{\circ}C The thermistor sensor has a temperature accuracy of about 0.2 °C in the temperature range of interest for openQCM. The thermistor accuracy is much better than the RTD one and it is good enough for this application definitely ! Reference and Further Reading
{"url":"https://openqcm.com/openqcm-temperature-sensor-using-a-thermistor-with-arduino.html","timestamp":"2024-11-10T21:49:31Z","content_type":"text/html","content_length":"165884","record_id":"<urn:uuid:14761c67-bb5c-4c7d-877e-eb0dc8a67796>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00320.warc.gz"}
589. N-ary Tree Preorder Traversal Photo by Mailchimp / Unsplash Given the root of an n-ary tree, return the preorder traversal of its nodes' values. Nary-Tree input serialization is represented in their level order traversal. Each group of children is separated by the null value (See examples) Example 1: Input: root = [1,null,3,2,4,null,5,6] Output: [1,3,5,6,2,4] Example 2: Input: root = [1,null,2,3,4,5,null,null,6,7,null,8,null,9,10,null,null,11,null,12,null,13,null,null,14] Output: [1,2,3,6,7,11,14,4,8,12,5,9,13,10] • The number of nodes in the tree is in the range [0, 10^4]. • 0 <= Node.val <= 10^4 • The height of the n-ary tree is less than or equal to 1000. Follow up: Recursive solution is trivial, could you do it iteratively? # Definition for a Node. class Node: def __init__(self, val=None, children=None): self.val = val self.children = children class Solution: def preorder(self, root: 'Node') -> List[int]: # pre order means we travel all the way down the left before we go to the next node over # so it's a DFS self.output = [] def dfs(root): # basecase if root is None: for child in root.children: return self.output
{"url":"https://skerritt.blog/589-n-ary-tree-preorder-traversal/","timestamp":"2024-11-03T15:30:34Z","content_type":"text/html","content_length":"32699","record_id":"<urn:uuid:0dab5dc8-ff80-406a-b866-e5dfd38c6b73>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00850.warc.gz"}
Electric field of an infinitely long wire with radius R • Thread starter Lambda96 • Start date Homework Statement Calculate the potential and check if it satisfies the poisson equation. Relevant Equations I don't know if I have calculated the electric field correctly in task a, because I get different values for the Poisson equation from task b The flow of the electric field only passes through the lateral surface, so ##A=2\pi \varrho L## I calculated the enclosed charge as follows ##q=\rho_0 \pi R^2 L## then I obtained the following electric field ##E=\frac{\rho_0 R^2}{2 \varrho \epsilon_0 }## For the potential I got the following ##\phi=\frac{\rho_0 R^2}{2 \epsilon_0} \ln{\frac{\varrho }{R}}## For the Poisson equation I then get ##-\Delta \phi=\frac{\rho_0 R^2}{2 \epsilon_0 \varrho^2 }## Unfortunately, I don't know what I've done wrong, which also makes me wonder why the task only says ##4 \pi \rho## without the ##\epsilon_0## Staff Emeritus Science Advisor Homework Helper Education Advisor Did you use the right form for the Laplacian? Remember you're working in cylindrical coordinates. For the problem as a whole, you also need to consider the region inside the wire. Science Advisor Homework Helper Gold Member 2023 Award Lambda96 said: . . . which also makes me wonder why the task only says ##4 \pi \rho## without the ##\epsilon_0## That's probably because the textbook where you found this is using cgs units. What does the expression for Coulomb's law look like? Science Advisor Homework Helper Gold Member 2023 Award vela said: For the problem as a whole, you also need to consider the region inside the wire. I thought about that but the problem in part (b) asks about the potential in the region outside the wire as deduced from the limits of integration. Of course, the Poisson equation is trivially satisfied in that region. Thank you vela and kuruman for your help Since the potential depends only on the radius, the Laplacian in cylindrical coordinates has the following form ##\frac{1}{\varrho} \frac{\partial}{\partial \varrho} \Bigl( \varrho \frac{\partial f} {\partial \varrho} \Bigr)## Because of the limits for the integral, I also assumed that only the electric field and potential outside the wire is required I then applied the Laplacian to the potential and got the following: $$\frac{1}{\varrho} \frac{\partial}{\partial \varrho} \Bigl( \varrho \frac{\partial f}{\partial \varrho} \Bigr) \frac{\rho_0 R^2}{2 \epsilon_0} \ln{\frac{\varrho}{R}}=\frac{1}{\varrho} \frac{\ partial}{\partial \varrho} \frac{\rho_0 R^2}{2 \epsilon_0}=0$$ Can it be that ##-\Delta \phi=4 \pi \rho## is valid for the inside of the wire? I have now completed the calculation for the electric field and the potential within the wire. Then calculated the Laplacian and converted the result from SI to cgs and have now got ##4 \pi \rho_0##
{"url":"https://www.physicsforums.com/threads/electric-field-of-an-infinitely-long-wire-with-radius-r.1066597/#post-7129097","timestamp":"2024-11-13T04:30:37Z","content_type":"text/html","content_length":"104115","record_id":"<urn:uuid:6eefda02-0a11-426e-814a-3ac18f3843d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00558.warc.gz"}
Repeated Substrings Problem F Repeated Substrings Picture from Wikimedia Commons String analysis often arises in applications from biology and chemistry, such as the study of DNA and protein molecules. One interesting problem is to find how many substrings are repeated (at least twice) in a long string. In this problem, you will write a program to find the total number of repeated substrings in a string of at most $100\, 000$ alphabetic characters. Any unique substring that occurs more than once is counted. As an example, if the string is “aabaab”, there are 5 repeated substrings: “a”, “aa”, “aab”, “ab”, “b”. If the string is “aaaaa”, the repeated substrings are “a”, “aa”, “aaa”, “aaaa”. Note that repeated occurrences of a substring may overlap (e.g. “aaaa” in the second case). The input consists of at most 10 cases. The first line contains a positive integer, specifying the number of cases to follow. Each of the following line contains a nonempty string of up to $100\, 000$ alphabetic characters. For each line of input, output one line containing the number of unique substrings that are repeated. You may assume that the correct answer fits in a signed 32-bit integer. Sample Input 1 Sample Output 1 aabaab 4 aaaaa 5
{"url":"https://open.kattis.com/contests/kmpsba/problems/substrings","timestamp":"2024-11-14T12:18:59Z","content_type":"text/html","content_length":"30168","record_id":"<urn:uuid:45fb6dc2-d888-48e2-8041-e8674067e401>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00210.warc.gz"}
1. MONDAY (5/18): Which Of The Following Functions Matches The Graph To The Righta) F(x) = -(x+3)+6b) a.) f(x) = -⅙(x+3)²+6 Step-by-step explanation: The maximum value, our vertex, is at point (-3,6). We can insert this value into the vertex form of a quadratic function and then solve for a as follows... [tex]4.5 = a(0 +3)^2 +6\\4.5=a(9)+6\\-1.5 = 9a \\-.17=a\\a = -1/6[/tex] a equals -1/6... We can input this into the original equation we used... f(x) = -1/6(x+3)^2+6 Good luck on the bellwork ;) Options A, B, D, and E are correct answers such that the probability of the usage of social media by 8th and 9th-grade students is given. A. There is no association between using social media and grade level. B. The probability a student uses social media, given that he or she is in 9th grade, is 81%. D. An 8th grader is more likely to use social media than not. E. Knowing a student’s grade level does not help determine if he or she uses social media. What is Probability? Probability is the ratio of the favorable outcomes to the total outcomes. Given that, a group of 8th and 9th graders were surveyed about whether they use social media to communicate with their friends. We can see that, there is no relationship between the usage of social media and grade level so, we can say that there is no association between using social media and grade level. Probability of using social media by a 9th grader is- P(9) = 0.81 / 1 x 100 = 81% Probability of using social media by a 8th grader is- P(8) = 0.84 / 1 x 100 = 84% We cannot define the usage of social media depending on the grade level. Thus, Options A, B, D, and E are correct answers. Learn more about probability here- brainly.com/question/11234923 The table is attached
{"url":"https://academy.hartland.edu/answers/1-monday-518-which-of-the-following-functions-matches-the-gr-2swa","timestamp":"2024-11-07T09:59:25Z","content_type":"text/html","content_length":"81236","record_id":"<urn:uuid:c3209a8b-bd80-449c-b1aa-fdb8231e1549>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00148.warc.gz"}
The coordinate plane below represents a city. Points A through F are schools in the city. graph of coordinate plane. Point A is at 2, negative 3. Point B is at negative 3, negative 4. Point C is at negative 4, 2. Point D is at 2, 4. Point E is at 3, 1. Point F is at negative 2, 3. Part A: Using the graph above, create a system of inequalities that only contains points A and E in the overlapping shaded regions. Explain how the lines will be graphed and shaded on the coordinate grid above. (5 points) Part B: Explain how to verify that the points A and E are solutions to the system of inequalities created in Part A. (3 points) Part C: William can only attend a school in his designated zone. William's zone is defined by y &lt; βx β 1. Explain how you can identify the schools that William is allowed to attend. (2 points) 1. Home 2. General 3. The coordinate plane below represents a city. Points A through F are schools in the city. graph of...
{"url":"https://thibaultlanxade.com/general/the-coordinate-plane-below-represents-a-city-points-a-through-f-are-schools-in-the-city-graph-of-coordinate-plane-point-a-is-at-2-negative-3-point-b-is-at-negative-3-negative-4-point-c-is-at-negative-4-2-point-d-is-at-2-4-point-e-is-at-3","timestamp":"2024-11-08T23:18:29Z","content_type":"text/html","content_length":"34093","record_id":"<urn:uuid:c5156b1b-a738-40cc-991e-83e40fb4e368>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00817.warc.gz"}
Clever reference package Hello, I am writing my ph.D. thesis and i want to use the \cref package. I am using different files for different chapters and pointing to them from a single source file 'main.tex'. In that file i have defined your package \usepackage{cleverer}. However, when i am compiling it, it cannot refer anything. I am using mac and texnicle editor. If i define the package in a single source file, such as when i am writing a report or paper, it works. but for my thesis that involves multiple files, it does not work. help please. shihan, 2015-04-30 19:00 CEST Clever reference package shihan, 2015-04-30 19:00 CEST Leave an Entry The guest book of CTAN may not be abused. The CTAN team reserves the right to delete improper entries without notice. Input Syntax The fields for head line and text support a rather small subset of the TeX language since you are probably used to it. The following constructions are supported: • An empty line starts a new paragraph. • The following control sequences are supported: TeX, eTeX, LaTeX, LaTeXe, BibTeX, XeTeX, XeLaTeX, TeXLaTeX, MetaFont, MetaPost, PicTeX, -, \, par, • Other control sequences remain untouched. Note that HTML is escaped. Thus embedding links does not work! Legal Disclaimer The entries in the guest book represent the opinion of their respective author. This may not be the opinion of the CTAN team. The CTAN team reserves the right to remove entries from the guest book without notice.
{"url":"https://ctan.org/guestbook/item/206538","timestamp":"2024-11-11T01:28:25Z","content_type":"text/html","content_length":"16437","record_id":"<urn:uuid:c645b555-6507-432f-b2e4-b7c82035d7f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00541.warc.gz"}
Signal Processing, Communications & Networks Courses Under-Graduate Courses • EE 200 SIGNALS, SYSTEMS AND NETWORKS Continuous and discrete time signals; Fourier series, Fourier, Laplace and Z tr\ansform techniques; DFT. Sampling Theorem. LTI systems: I/O description, impulse response and system functions, pole/ zero plots, FIR and IIR systems. Analog and digital filters. Networks: topological description, network theorems,Two port analysis. • EE 301 DIGITAL SIGNAL PROCESSING Review of discrete time signals and systems. Sampling of CT signals: aliasing, prefiltering, decimation and interpolation, A/D and D/A conversion, quantization noise. Filter design techniques. DFT Computation. Fourier analysis of signals using DFT. Finite register length effects. DSP hardware. Applications. . • EE 320 PRINCIPLES OF COMMUNICATION Communication problem and system models. Representation of deterministic and stochastic signals. Analog and digital modulation systems, Receiver structures, SNR and error probability calculations, Frequency and time division multiplexing. Digital encoding of analog signals. Elements of information theory, Multiple access techniques and ISDN. • EE 321 COMMUNICATION SYSTEMS Information measures. Source coding. ISI & channel equalization, partial response signalling. M-ary modulation systems, error probability calculations. PLLs and FM threshold extension. Error control coding, block and convolution codes. Combined modulation and coding, trellis coded modulation. Spread spectrum systems. • EE390 COMMUNICATION SKILLS • EE 403 ADVANCED DIGITAL SIGNAL PROCESSING Review of linear algebra; functional analysis, time-frequency representation; frequency scale and resolution; uncertainity principle, short-time Fourier transform, Multi-resolution concept and analysis, Wavelet transforms. Wigner-ville distributions. Multi-rate signal processing; discrete-time bases and filter banks; 2D signals and systems, 2D sampling in arbitrary lattices, 2D-linear transforms, 1D/2D signal compression; introduction to DSP architecture. • EE 422 COMMUNICATION SYSTEM ENGINEERING Baseband signal characterisation-telegraphy, telephony, television and data; message channel objective; voice frequency transmission, radio wave propagation methods: random noise characterization in communication systems, intermodulation distortion : line of sight systems description and design; troposcattrer systems. Post-Graduate Courses • EE 600 MATHEMATICAL STRUCTURES OF SIGNALS & SYSTEMS Nature of definitions; Theory of measurement and scales; Symmetry, invariance and groups; Groups in signals and systems; Algebraic and relational structures of signal spaces and convolutional systems; Representation theory of groups, harmonic analysis and spectral theory for convolutional systems. • EE 601 MATHEMATICAL METHODS IN SIGNAL PROCESSING Generalized inverses, regularization of ill-posed problems. Eigen and singular value decompositions, generalized problems. Interpolation and approximation by least squares and minimax error criteria. Optimization techniques for linear and nonlinear problems. Applications in various areas of signal processing. • EE 602 STATISTICAL SIGNAL PROCESSING I Power Spectrum Estimation-Parametric and Maximum Entropy Methods, Wiener, Kalman Filtering, Levinson-Durban Algorithms Least Square Method, Adaptive Filtering, Nonstationary Signal Analysis, Wigner-Ville Distribution, Wavelet Analysis. • EE 603 ADVANCED TOPICS IN DIGITAL FILTERING Multirate Processing of discrete Time Signals; Orthogonal Digital Filter Systems. Two-Dimensional Discrete Time Filters. VLSI Computing structures for Signal Processing. • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE 606 ARCHITECTURE AND APPLICATIONS OF DIGITAL SIGNAL PROCESSORS Review of DSP fundamentals. Issues involved in DSP processor design - speed, cost, accuracy, pipelining, parallelism, quantization error, etc. Key DSP hardware elements - Multiplier, ALU, Shifter, Address Generator, etc. TMS 320C55 X and TM 320C6X and 21000 family architecture and instruction set. Software development tools - assembler, linker and simulator. Applications using DSP Processor - spectral analysis, FIR/IIR filter, linear-predictive coding, etc. • EE 607 WAVELET TRANSFORMS FOR SIGNAL AND IMAGE PROCESSING Basics of functional Analysis; Basics of Fourier Analysis; Spectral Theory; Time- Frequency representations; Nonstationary Processes; Continuous Wavelet Transforms; Discrete Time-Frequency Transforms; Multi resolution Analysis; Time-Frequency Localization; Signal Processing Applications; Image Processing Applications • EE 608 VIDEO PROCESSING • EE 609 BASICS OF BIOMEDICAL SIGNAL AND IMAGE PROCESSING Speech and pathology of vocal tract/ cords, Perpetual coding of audio signal and data compression, Spatio-temporal nature of bioelectric signals, cardiac generator and its models, Specific digital technique for bioelectric signals, Modes of medical imaging. • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Bandlimited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communciation; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE 624 INFORMATION AND CODING THEORY Entropy and mutual information, rate distortion function, source coding, variable length coding, discrete memoryless channels, capacity cost functions, channel coding, linear block codes, cyclic codes. Convolutional codes, sequential and probabilistic decoding, majority logic decoding, burst error-correcting codes. • EE 625 SATELLITE COMMUNICATION Introduction. Historical background and overall perspective; Satellite network modeling ; Link calculations; FM analysis; TV Transmission; Digital modulation; Error control; Multiple access; FDMA, TDMA, CDMA. Orbital considerations; Launching; Atmospheric effects; Transponders; Earth Stations; VSATs. • EE 627 SPEECH SIGNAL PROCESSING Spectral and non-spectral analysis techniques; Model-based coding techniques; Noise reduction and echo cancellation; Synthetic and coded speech quality assessment; Selection of recognition unit; Model-based recognition; Language modelling; Speaker Identification; Text analysis and text-to-speech synthesis. • EE 628 TOPICS IN CRYPTOGRAPY AND CODING Cryptography and error control coding in communication and computing systems. Stream and block ciphers; DES; public-key cryptosystems; key management, authentication and digital signatures. Codes as ideals in finite commutative rings and group algebras. Joint coding and cryptography. • EE 629 DIGITAL SWITCHING Network Architecture; time division multiplexing; digital switching; space & time division switching, cross point and memory requirements; blocking probabilities. traffic Analysis, models for circuit and packet switched systems, performance comparison; ISDN. • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainity, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 671 NEURAL NETWORKS Theory of representation; Two computational paradigms; Multi-layer networks; Auto-associative and hetero-associative nets; Learning in neural nets: Supervised and unsupervised learning; Application of neural nets; Neural network simulators. • EE 672 COMPUTER VISION AND DOCUMENT PROCESSING Human and computer vision, Image representation and modelling, Line and edge detection, Labeling, Image segmentation, Pattern recognition, Statistical, structural neural and hybrid techniques, Training & classification, Document analysis and optical character recognition, object recognition, Scene matching & analysis, Robotic version, Role of knowledge. • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. • EE 676 DIGITAL MOBILE RADIO SYSTEM Introduction to Mobile Radio networks, channel description and analysis, Propagation Effects, Technologies, TDMA/CDMA Techniques, Architectures, Cellular Systems, GSM Systems, Mobile Satellite Communication, Wireless ATM, Third Generation Cellular, Universal Mobile Telecommunication Systems (UMTS). • EE 678 NEURAL SYSTEMS AND NETWORKS Memory: Eric Kandel's memory and its physiological basis, Explicit and Implicit memories, Short Term and Long Term potentiation (STP and LTP), Hopfield's Model of Associative Memories, its comparison with Kandel's model, Stability of Hopefield net, its use as CAM, Hamming's Model and comparision of number of weights, Learning: Supervised and Unsupervised nets, Learning Methods, Neural systems: Different types of neurons, dendrites, axons, role of Na+ K+ AT Pase and resting potentials, synaptic junctions and transmission of action potentials, Consciousness and its correlation with respiratory sinus arrythmia, a bioinstrumentation scheme for its measurement; Neural nets for technical applications: Bidirectional Associative Memories, (SAMs), Radial Basic, Function nets. Boltzmann machine, Wavelet nets, Cellular Neural Nets and Fuzzy nets. Courses Offered to PG Students In 2009-2010 1st semester • EE 601 MATHEMATICAL METHODS IN SIGNAL PROCESSING Generalized inverses, regularization of ill-posed problems. Eigen and singular value decompositions, generalized problems. Interpolation and approximation by least squares and minimax error criteria. Optimization techniques for linear and nonlinear problems. Applications in various areas of signal processing. • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE624 INFORMATION AND CODING THEORY • EE 627 SPEECH SIGNAL PROCESSING Spectral and non-spectral analysis techniques; Model-based coding techniques; Noise reduction and echo cancellation; Synthetic and coded speech quality assessment; Selection of recognition unit; Model-based recognition; Language modelling; Speaker Identification; Text analysis and text-to-speech synthesis. • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. • EE680 INTELLIGENT INSTRUMENTATION In 2009-2010 2nd semester • EE 600 MATHEMATICAL STRUCTURES OF SIGNALS & SYSTEMS Nature of definitions; Theory of measurement and scales; Symmetry, invariance and groups; Groups in signals and systems; Algebraic and relational structures of signal spaces and convolutional systems; Representation theory of groups, harmonic analysis and spectral theory for convolutional systems. • EE 602 STATISTICAL SIGNAL PROCESSING I Power Spectrum Estimation-Parametric and Maximum Entropy Methods, Wiener, Kalman Filtering, Levinson-Durban Algorithms Least Square Method, Adaptive Filtering, Nonstationary Signal Analysis, Wigner-Ville Distribution, Wavelet Analysis. • EE 608 VIDEO PROCESSING • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Bandlimited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communciation; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE 628 TOPICS IN CRYPTOGRAPY AND CODING Cryptography and error control coding in communication and computing systems. Stream and block ciphers; DES; public-key cryptosystems; key management, authentication and digital signatures. Codes as ideals in finite commutative rings and group algebras. Joint coding and cryptography. • EE 629 DIGITAL SWITCHING Network Architecture; time division multiplexing; digital switching; space & time division switching, cross point and memory requirements; blocking probabilities. traffic Analysis, models for circuit and packet switched systems, performance comparison; ISDN. • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainity, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 670 WIRELESS COMMUNICATION • EE679 QUEUEING SYSTEMS • EE 671 NEURAL NETWORKS Theory of representation; Two computational paradigms; Multi-layer networks; Auto-associative and hetero-associative nets; Learning in neural nets: Supervised and unsupervised learning; Application of neural nets; Neural network simulators. • EE698D UNIVERSAL COMPRESSION ALGORITHMS & ENTROPY RATE In 2010-2011 1st semester • EE 601 MATHEMATICAL METHODS IN SIGNAL PROCESSING Generalized inverses, regularization of ill-posed problems. Eigen and singular value decompositions, generalized problems. Interpolation and approximation by least squares and minimax error criteria. Optimization techniques for linear and nonlinear problems. Applications in various areas of signal processing • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE607 WAVELET TRANSFORMS FOR SIGNAL AND IMAGE PROCESSING • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE 624 INFORMATION AND CODING THEORY Entropy and mutual information, rate distortion function, source coding, variable length coding, discrete memoryless channels, capacity cost functions, channel coding, linear block codes, cyclic codes. Convolutional codes, sequential and probabilistic decoding, majority logic decoding, burst error-correcting codes. • EE 627 SPEECH SIGNAL PROCESSING Spectral and non-spectral analysis techniques; Model-based coding techniques; Noise reduction and echo cancellation; Synthetic and coded speech quality assessment; Selection of recognition unit; Model-based recognition; Language modelling; Speaker Identification; Text analysis and text-to-speech synthesis. • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainity, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. In 2010-2011 2nd semester • EE600 MATHEMATICAL STRUCTURES OF SIGNALS & SYSTEMS • EE602 STATISTICAL SIGNAL PROCESSING • EE608 VIDEO SIGNAL PROCESSING • EE698V SIMULATION OF COMMUNICATION SYSTEMS • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Bandlimited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communciation; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE 646 PHOTONICS AND SWITCHING NETWORKS • EE 670 WIRELESS COMMUNICATION • EE 671 NEURAL NETWORKS Theory of representation; Two computational paradigms; Multi-layer networks; Auto-associative and hetero-associative nets; Learning in neural nets: Supervised and unsupervised learning; Application of neural nets; Neural network simulators. In 2011-2012 1st semester • EE 601 MATHEMATICAL METHODS IN SIGNAL PROCESSING Generalized inverses, regularization of ill-posed problems. Eigen and singular value decompositions, generalized problems. Interpolation and approximation by least squares and minimax error criteria. Optimization techniques for linear and nonlinear problems. Applications in various areas of signal processing • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE 607 WAVELET TRANSFORMS FOR SIGNAL AND IMAGE PROCESSING Basics of functional Analysis; Basics of Fourier Analysis; Spectral Theory; Time- Frequency representations; Nonstationary Processes; Continuous Wavelet Transforms; Discrete Time-Frequency Transforms; Multi resolution Analysis; Time-Frequency Localization; Signal Processing Applications; Image Processing Applications • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE 624 INFORMATION AND CODING THEORY Entropy and mutual information, rate distortion function, source coding, variable length coding, discrete memoryless channels, capacity cost functions, channel coding, linear block codes, cyclic codes. Convolutional codes, sequential and probabilistic decoding, majority logic decoding, burst error-correcting codes. • EE 627 SPEECH SIGNAL PROCESSING Spectral and non-spectral analysis techniques; Model-based coding techniques; Noise reduction and echo cancellation; Synthetic and coded speech quality assessment; Selection of recognition unit; Model-based recognition; Language modelling; Speaker Identification; Text analysis and text-to-speech synthesis. • EE 629 DIGITAL SWITCHING Network Architecture; time division multiplexing; digital switching; space & time division switching, cross point and memory requirements; blocking probabilities. traffic Analysis, models for circuit and packet switched systems, performance comparison; ISDN. • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainity, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. In 2011-2012 2nd semester • EE 600 MATHEMATICAL STRUCTURES OF SIGNALS & SYSTEMS Nature of definitions; Theory of measurement and scales; Symmetry, invariance and groups; Groups in signals and systems; Algebraic and relational structures of signal spaces and convolutional systems; Representation theory of groups, harmonic analysis and spectral theory for convolutional systems. • EE 608 VIDEO PROCESSING • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Bandlimited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communciation; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE 628 TOPICS IN CRYPTOGRAPY AND CODING Cryptography and error control coding in communication and computing systems. Stream and block ciphers; DES; public-key cryptosystems; key management, authentication and digital signatures. Codes as ideals in finite commutative rings and group algebras. Joint coding and cryptography. • EE 629 PHOTONICS AND SWITCHING NETWORKS • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainity, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 670 WIRELESS COMMUNICATION • EE 676 DIGITAL, MOBILE RADIO SYSTEMS • EE 671 NEURAL NETWORKS Theory of representation; Two computational paradigms; Multi-layer networks; Auto-associative and hetero-associative nets; Learning in neural nets: Supervised and unsupervised learning; Application of neural nets; Neural network simulators. In 2012-2013 1st semester • EE 602 STATISTICAL SIGNAL PROCESSING I Power Spectrum Estimation-Parametric and Maximum Entropy Methods, Wiener, Kalman Filtering, Levinson-Durban Algorithms Least Square Method, Adaptive Filtering, Nonstationary Signal Analysis, Wigner-Ville Distribution, Wavelet Analysis. • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE 624 INFORMATION AND CODING THEORY Entropy and mutual information, rate distortion function, source coding, variable length coding, discrete memoryless channels, capacity cost functions, channel coding, linear block codes, cyclic codes. Convolutional codes, sequential and probabilistic decoding, majority logic decoding, burst error-correcting codes. • EE 627 SPEECH SIGNAL PROCESSING Spectral and non-spectral analysis techniques; Model-based coding techniques; Noise reduction and echo cancellation; Synthetic and coded speech quality assessment; Selection of recognition unit; Model-based recognition; Language modelling; Speaker Identification; Text analysis and text-to-speech synthesis. • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainity, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. In 2012-2013 2nd semester • EE 600 MATHEMATICAL STRUCTURES OF SIGNALS SYSTEMS Nature of definitions; Theory of measurement and scales; Symmetry, invariance and groups; Groups in signals and systems; Algebraic and relational structures of signal spaces and convolutional systems; Representation theory of groups, harmonic analysis and spectral theory for convolutional systems. • EE 608 VIDEO PROCESSING • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Band limited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communication; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE 628 TOPICS IN CRYPTOGRAPY AND CODING Cryptography and error control coding in communication and computing systems. Stream and block ciphers; DES; public-key cryptosystems; key management, authentication and digital signatures. Codes as ideals in finite commutative rings and group algebras. Joint coding and cryptography. • EE 658 FUZZYSET,LOGIC & SYSTEMS AND APPLICATIONS Introduction, Uncertainty, Imprecision and Vagueness, Fuzzy systems, Brief history of Fuzzy logic, Foundation of Fuzzy Theory, Fuzzy Sets and Systems, Fuzzy Systems in Commercial Products, Research Fields in Fuzzy Theory, Classical sets and Fuzzy sets, Classical Relations, Fuzzy relations, Membership Functions, Fuzzy to crisp conversions, Fuzzy arithmetic, Numbers, Vectors and the extension principle, Classical logic and Fuzzy logic, Mathematical background of Fuzzy Systems, Classical (Crisp) vs, Fuzzy sets, Representation of Fuzzy sets, Types of Membership Functions, Basic Concepts (support, singleton, height, a-cut projections), Fuzzy set operations, S-and T- Norms, Properties of Fuzzy sets, Sets as Points in Hypercube, Cartesian Product, Crisp and Fuzzy Relations, Examples, Liguistic variables and hedges, Membership function design. Basic Principles of Inference in Fuzzy Logic, Fuzzy IF-THEN Rules, Canonical Form, Fuzzy Systems and Algorithms, Approximate Reasoning, Forms of Fuzzy Implication, Fuzzy Inference Engines, Graphical Techniques of Inference, Fuzzyifications/ DeFuzzification, Fuzzy System Design and its Elements, Design options. Fuzzy Events, Fuzzy Measures, Possibility Distributions as Fuzzy Sets, Possibility vs, Probability, Fuzzy Systems as Universal Approximators, Additive Fuzzy Systems (standard additive • EE 629 DIGITAL SWITCHING • EE 643 SMART ANTENNA FOR MOBILE COMMUNICATION • EE 670 WIRELESS COMMUNICATION • EE679 QUEUEING THEORY • EE698V SIMULATION OF COMMUNICATION SYSTEM • EE698W CONVEX OPTIMIZATION IN SP/COM In 2013-2014 1st semester • EE 601 MATHEMATICAL METHODS IN SIGNAL PROCESSING Generalized inverses, regularization of ill-posed problems. Eigen and singular value decompositions, generalized problems. Interpolation and approximation by least squares and minimax error criteria. Optimization techniques for linear and nonlinear problems. Applications in various areas of signal processing. • EE602 STATISTICAL SIGNAL PROCESSING • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE 607 WAVELET TRANSFORMS FOR SIGNAL AND IMAGE PROCESSING Basics of functional Analysis; Basics of Fourier Analysis; Spectral Theory; Time- Frequency representations; Nonstationary Processes; Continuous Wavelet Transforms; Discrete Time-Frequency Transforms; Multi resolution Analysis; Time-Frequency Localization; Signal Processing Applications; Image Processing Applications. • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE 624 INFORMATION AND CODING THEORY Entropy and mutual information, rate distortion function, source coding, variable length coding, discrete memoryless channels, capacity cost functions, channel coding, linear block codes, cyclic codes. Convolutional codes, sequential and probabilistic decoding, majority logic decoding, burst error-correcting codes. • EE627 SPEECH SIGNAL PROCESSING • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. • EE698C PEER TO PEER NETWORKS In 2013-2014 2nd semester • EE 600 MATHEMATICAL STRUCTURES OF SIGNALS SYSTEMS Nature of definitions; Theory of measurement and scales; Symmetry, invariance and groups; Groups in signals and systems; Algebraic and relational structures of signal spaces and convolutional systems; Representation theory of groups, harmonic analysis and spectral theory for convolutional systems. • EE 608 DIGITAL VIDEO PROCESSING • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Band limited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communication; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE626 TOPICS IN STOCHASTIC PROCESSES • EE 629 DIGITAL SWITCHING • EE 643 SMART ANTENNA FOR MOBILE COMMUNICATION • EE 670 WIRELESS COMMUNICATION • EE698V SIMULATION OF COMMUNICATION SYSTEM • EE698W CONVEX OPTIMIZATION IN SP/COM In 2014-2015 1st semester • EE 601 MATHEMATICAL METHODS IN SIGNAL PROCESSING Generalized inverses, regularization of ill-posed problems. Eigen and singular value decompositions, generalized problems. Interpolation and approximation by least squares and minimax error criteria. Optimization techniques for linear and nonlinear problems. Applications in various areas of signal processing. • EE 604 IMAGE PROCESSING Human visual system and image perception, monochrome & colour vision models, colour representation ; image sampling & quantization; 2-D systems; image transforms; image coding; stochastic models for image representation; image enhancement, restoration & reconstruction. Image analysis using multiresolution techniques. • EE 605 INTRODUCTION TO SIGNAL ANALYSIS Discrete and Continuous time signals and systems, LTI systems, Convolution, Difference equations. Frequency domain representation: Fourier transform and its properties. Random discrete signals. Sampling and reconstruction: Change of sampling rate. Normed vector spaces, basis, linear independence, orthogonality. Linear systems of equations. Over- and Underdetermined systems. Row- and Column spaces, Null spaces. Least square and minimum norm solutions. Inverse and pseudo inverse, Symmetry transformations. Eigenvectors and eigenvalues. Hilbert transforms, band pass representations and complex envelope. Base band pulse transmission, matched filtering, ISI, equalization. Coherent and noncoherent detection. • EE 607 WAVELET TRANSFORMS FOR SIGNAL AND IMAGE PROCESSING Basics of functional Analysis; Basics of Fourier Analysis; Spectral Theory; Time- Frequency representations; Nonstationary Processes; Continuous Wavelet Transforms; Discrete Time-Frequency Transforms; Multi resolution Analysis; Time-Frequency Localization; Signal Processing Applications; Image Processing Applications. • EE 621 REPRESENTATION AND ANALYSIS OF RANDOM SIGNALS Review of probability, random variables, random processes; representation of narrow band signals. Transmission of signals through LTI systems; Estimation and detection with random sequences; BAYES, MMSE, MAP, ML schemes. KL and sampling theorem representations, matched filter, ambiguity functions, Markov sequences, linear stochastic dynamical systems. • EE 624 INFORMATION AND CODING THEORY Entropy and mutual information, rate distortion function, source coding, variable length coding, discrete memoryless channels, capacity cost functions, channel coding, linear block codes, cyclic codes. Convolutional codes, sequential and probabilistic decoding, majority logic decoding, burst error-correcting codes. • EE 673 DIGITAL COMMUNICATION NETWORKS OSI model, queueing theory, physical layer, error detection and correction, data link layer, ARQ strategies, framing, media access layer, modelling and analysis of important media access control protocols, FDDI and DQDB MAC protocols for LANs and MANs, network layer, flow control & routing, TCP/IP protocols, ATM. In 2014-2015 2nd semester • EE 600 MATHEMATICAL STRUCTURES OF SIGNALS SYSTEMS Nature of definitions; Theory of measurement and scales; Symmetry, invariance and groups; Groups in signals and systems; Algebraic and relational structures of signal spaces and convolutional systems; Representation theory of groups, harmonic analysis and spectral theory for convolutional systems. • EE 608 DIGITAL VIDEO PROCESSING • EE 622 COMMUNICATION THEORY Rate Distortion Theory, Channel Coding Theorems, Digital Modulation Schemes, Trellis Coded Modulation, Digital Transmission over Band limited Channels, Fading Multipath Channels, Synchronization. Analog Modulation Schemes, Optimum/ Suboptimum Receivers; Diversity Combining; Cellular Mobile Communication; Equalization. • EE 623 DETECTION AND ESTIMATION THEORY Classical Detection and Estimation Theory, Signal Representation, Detection of signals in Gaussian noise, Waveform estimation, Linear estimation problems, Wiener filtering, Kalman filtering. • EE 627 SPEECH SIGNAL PROCESSING Spectral and non-spectral analysis techniques; Model-based coding techniques; Noise reduction and echo cancellation; Synthetic and coded speech quality assessment; Selection of recognition unit; Model-based recognition; Language modelling; Speaker Identification; Text analysis and text-to-speech synthesis. • EE 643 SMART ANTENNA FOR MOBILE COMMUNICATION • EE 609 CONVEX OPTIMIZATION IN SIGNAL PROCESSING AND COMMUNICATION • EE 629 DIGITAL SWITCHING Last Updated: Friday, 14 July 2023 10:38
{"url":"https://iitk.ac.in/ee/signal-processing-communications-networks-courses","timestamp":"2024-11-14T06:54:01Z","content_type":"application/xhtml+xml","content_length":"118673","record_id":"<urn:uuid:2a4849a6-6a15-4873-9718-02c931e27408>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00553.warc.gz"}
3. The Development of Quantum Mechanics (1925 – 1927) The early 1920s witnessed fundamental difficulties in atomic physics. The quantum theory of atomic structure, founded by Bohr and largely developed by Bohr and Sommerfeld, did not describe the properties of complicated atoms and molecules. Moreover, the discovery of the Compton effect at the end of 1922 focussed attention on the problem of the nature of radiation. Its interpretation in the light-quantum hypothesis contradicted classical radiation theory, and the radical attempt by Bohr, Kramers and Slater in early 1924 to resolve the difficulty by assuming only statistical conservation of energy and momentum was refuted by the experiment of Walther Bothe and Hans Geiger in April 1925. Heisenberg was growing ever more concerned with these and other difficulties in atomic theory. His works on the anomalous Zeeman effect, only successful in part, and his unsuccessful calculation of the helium states with Born had sensitized him by early 1925 to the “crisis” of current theory. Nevertheless, his latest calculations in Copenhagen on dispersion theory and on complex spectra, especially the principle of “sharpened” correspondence applied in these works, seemed to point toward a future satisfactory theory. With characteristic optimism the Göttingen Privatdozent took on a new and difficult problem at the beginning of May 1925, the calculation of the line intensities in the hydrogen spectrum. Heisenberg began with a Fourier analysis of the classical hydrogen orbits, intending to translate them into a quantum theoretical scheme – just as he had done with Kramers for the dispersion of light by atoms. But the hydrogen problem proved much too difficult, and he replaced it with the simpler one of an anharmonic oscillator. With the help of a new multiplication rule for a quantum-theoretical Fourier series he succeeded in writing down a solution for the equations of motion for this system. On 7 June 1925 he went to the island of Helgoland to recover from a severe attack of hay fever. There he completed the calculation of the anharmonic oscillator, determining all the constants of the motion. He made use, in particular, of a modified quantum condition that was later called by Born, Pascual Jordan and himself a “commutation relation”, and he proved that the new theory yielded stationary states (conservation of energy). Returning to Göttingen on 19 June 1925 Heisenberg composed his fundamental paper “Über die quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen” (“On a Quantum Theoretical Reinterpretation of Kinematic and Mechanical Relations”), which was completed on 9 July 1925. In this paper, the starting point for a new quantum mechanics, Heisenberg announced as the leading philosophical principle of quantum mechanics that only observable quantities are allowed in the theoretical description of atoms. Heisenberg reported his new results during visits shortly thereafter with Paul Ehrenfest in Leiden and with Ralph Fowler in Cambridge. After Born and Jordan managed in August and September 1925 to develop the mathematical content of Heisenberg's work into a consistent theory with the help of infinite Hermitian matrices (Z. Phys. 34, 858, 1925), Heisenberg participated, starting in September 1925, in the completion and application of the new “matrix mechanics”, culminating in the long “three-man-paper”, by Born, Heisenberg and Jordan, submitted on 16 November 1925. Further developments followed rapidly: Pauli calculated the stationary states of the hydrogen atom in October 1925; Cornelius Lanczos in Frankfurt and Born and Norbert Wiener in the USA extended the method of operator mechanics to describe continuous motions (December 1925); and Paul Adrien Maurice Dirac in Cambridge developed independently of the Göttingen school a different scheme based upon Heisenberg's July paper, the method of q-numbers (November 1925), in which many-electron atoms and the relativistic Compton effect could be handled successfully (spring 1926). In addition Heisenberg and Jordan utilized electron spin and matrix mechanics to solve the old problems of hydrogen fine structure and the anomalous Zeeman effect (April 1926); and finally Heisenberg discovered the phenomenon of quantummechanical resonance (June 1926), which played a decisive role in his subsequent calculation of the term system of the helium atom (July 1926). In May 1926 Niels Bohr offered Heisenberg a position at his institute in Copenhagen as Lector and successor to his assistant Kramers. There Heisenberg delivered lectures at the university (in Danish) on contemporary physical theories, directed beginning students, helped guest researchers with their problems, and discussed with Bohr the most important results of quantum mechanics. In the summer and fall of 1926 the main topic of discussion was wave mechanics, the quantum atomic theory that Erwin Schrödinger began introducing in January 1926. The complete mathematical equivalence between Göttingen's matrix mechanics or Dirac's q-number scheme and Schrödinger's wave mechanics was proved by Jordan and Dirac in December 1926, after preparatory work by Schrödinger (March 1926), Pauli (April 1926), and Carl Eckart (June 1926). However, Schrödinger's physical interpretation of the square of the wave amplitude as the continuously distributed charge density of the electron was rejected by Born, Bohr and Heisenberg and replaced on Born's proposal by the interpretation that it is the probability for finding the electron at each location (June 1926). In close contact with Pauli, and in intense discussion with Bohr, Heisenberg analyzed what he termed the “perceptual content of the quantum-theoretical kinematics and mechanics”. As a result of the analysis he presented in March 1927 the so-called “indeterminacy” or “uncertainty relations”, which limit the simultaneous measurement of canonically conjugate variables, such as the position and momentum of a particle. Bohr, on the other hand, pondered the simultaneous use of the physical pictures of particles and waves, which resulted in his general principle of “complementarity” announced in fall of 1927. Born's statistical interpretation of Schrödinger's wave function, Heisenberg's uncertainty relations, and Bohr's complementary principle formed the basis of the physical interpretation of the new quantum mechanics, as explicated by Bohr in his lectures at the Volta Conference in Como (September 1927) and at the Solvay Congress in Brussels (24-29 October 1927). This “Copenhagen Interpretation” of quantum mechanics, as it was later called, found acceptance by most physicists, but not by all: Albert Einstein in particular raised serious objections to it at the 1927 and 1930 Solvay conferences and later, for example, in his paper with Boris Podolsky and Nathan Rosen (Phys. Rev. 47, 777, 1935).
{"url":"https://www.heisenberg-gesellschaft.de/3-the-development-of-quantum-mechanics-1925-ndash-1927.html","timestamp":"2024-11-03T10:37:06Z","content_type":"text/html","content_length":"45818","record_id":"<urn:uuid:b71eabce-d5c9-4267-b278-cf8395eb59b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00358.warc.gz"}
Bài 8: Numbers three and four (Số 3 và số 4) Các em làm quen với một số thuật ngữ cơ bản của số học trong bài học "Numbers three and four" của môn Toán bằng Tiếng anh nhé Bạn phải là "Học viên Trung Tâm Anh Ngữ Thần Đồng Việt" mới xem được Video! Bài 8: Numbers three and four Introducing 3 and 4. Introducing 3 and 4. Hey, look Starry! The children are reading. Let us count the number of boys. Hey, look Starry! The children are reading. Let us count the number of boys. 1-2, hey Starry do you know how to count the next one? 1-2, hey Starry do you know how to count the next one? Right, the next is three. This is what three look like. Right, the next is three. This is what three look like. Let us now count the number of girls. 1-2-3 and 4. Let us now count the number of girls. 1-2-3 and 4. There are 4 girls. Look at number 4 dancing about. There are 4 girls. Look at number 4 dancing about. So Starry, now you know that this is 1, this is 2, this is 3 and this is 4. So Starry, now you know that this is 1, this is 2, this is 3 and this is 4. Such pretty flowers in this vase! How many flowers are these? Such pretty flowers in this vase! How many flowers are these? Let us count them! 1-2-3, there are 3 flowers. Let us count them! 1-2-3, there are 3 flowers. Let us count the number of flowers in this vase! 1-2-3-4, there are 4 flowers. Let us count the number of flowers in this vase! 1-2-3-4, there are 4 flowers. Starry now we are friends with 1-2-3 and 4. Starry now we are friends with 1-2-3 and 4. Let us see if you are able to count using these friends. Let us see if you are able to count using these friends. Hey, look at this garden! How many children are there?1-2, there are 2 children. Hey, look at this garden! How many children are there?1-2, there are 2 children. Let us count the trees! 1, there is one tree. Let us count the trees! 1, there is one tree. How many birds do we see? 1-2-3-4, there are 4 birds. How many birds do we see? 1-2-3-4, there are 4 birds. Hey, look who is on the ground? There are some snails. Hey, look who is on the ground? There are some snails. Let us count to them! 1-2-3. There are 3 snails. Let us count to them! 1-2-3. There are 3 snails. Is it that fun Starry? I know you like coloring. Is it that fun Starry? I know you like coloring. Now look at these pictures. Color the pictures are shown by the numbers. Now look at these pictures. Color the pictures are shown by the numbers. Color 3 stars.1-2-3, 3 stars Color 3 stars.1-2-3, 3 stars Color 4 cherries.1-2-3-4, 4 cherries. Color 4 cherries.1-2-3-4, 4 cherries. Color 2 cars. 1-2, 2 cars. Color 2 cars. 1-2, 2 cars. Color 1 cup. 1, 1 cup. Color 1 cup. 1, 1 cup. Now, look at the number and draw lines or dots. Now, look at the number and draw lines or dots. Starry, are you ready? Good. Starry, are you ready? Good. Draw 3 lines.1-2-3, 3 lines. Draw 3 lines.1-2-3, 3 lines. Now you have to draw 2 dots. 1-2, 2 dots. Now you have to draw 2 dots. 1-2, 2 dots. That was great Starry! That was great Starry! There is another activity for you! Quickly look at these numbers again.1-2-3-4. There is another activity for you! Quickly look at these numbers again.1-2-3-4. These numbers are having some fun. These numbers are having some fun. Look at how they are standing. Let us read the numbers at their stand. Look at how they are standing. Let us read the numbers at their stand. 3-2-4-1-3-4-1 and 2. 3-2-4-1-3-4-1 and 2. Let us try this in another activity. After a number blinks, let us call out its name. Let us try this in another activity. After a number blinks, let us call out its name. Starry, are you ready? Let’s begin! Starry, are you ready? Let’s begin! 4-1-3-2-1-4-2-3-2-4-3-1-2. Oh that was fun. See you soon for some more activity Starry. 4-1-3-2-1-4-2-3-2-4-3-1-2. Oh that was fun. See you soon for some more activity Starry.
{"url":"https://anhvanthieunhi.vn/toan-tieng-anh-lop-1/bai-8-numbers-three-and-four-2-45.html","timestamp":"2024-11-10T12:24:51Z","content_type":"application/xhtml+xml","content_length":"36240","record_id":"<urn:uuid:bfedc79e-da2d-4545-a6b3-47271a19cf54>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00556.warc.gz"}
Black Holes and Quantum Gravity | Aurélien Barrau | Inference Although black holes were first imagined in the late eighteenth century, it was not until Karl Schwarzchild devised a solution to Einstein’s field equations in 1915 that they were accurately described. Despite Schwarzchild’s pioneering work, black holes were still widely thought to be purely theoretical, and so devoid of physical meaning. This view persisted until recent decades, an accumulation of observational evidence removing any lingering doubts about their existence. Beyond their obvious interest as astrophysical phenomena, black holes may, in time, come to be considered a laboratory for new physics. It is conceivable that black holes could be used to study quantum gravity; and a complete and consistent theory of quantum gravity remains the most elusive goal in theoretical physics. The Schwarzschild Metric In his solution to the Einstein field equations, Schwarzschild described the gravitational field for a static and spherically symmetric mass. Using natural units, the metric can be written as 1. $$ds^2=e^{2\phi}dt^2-e^{2\Lambda}dr^2-r^2d\Omega^2,$$ where the parameters of the exponential functions can be determined using Einstein’s equations. In (1), the component (00) leads to 2. $$e^{2\Lambda}=\left(1-\frac{2M}{r}\right)^{-1},$$ where M is the mass of the black hole, while the component (11) leads to 3. $$\phi=\frac{1}{2}ln\left(1-\frac{2M}{r}\right).$$ Schwarzschild’s metric can then be written as 4. $$ds^2=\left(1-\frac{2M}{r}\right)dt^2-\left(1-\frac{2M}{r}\right)^{-1}dr^2-r^2d\Omega^2.$$ In fundamental terms, a black hole is an object with an effective radius r < 2M.^1 In comparison to the Schwarzschild metric, the Kerr–Newman metric is more complex, describing a solution for a spinning, charged mass. But for the purposes of investigating quantum gravity, the Schwarzschild metric is of greater interest. The Schwarzschild metric tends toward the Minkowski metric for r → $\infty$ and M → 0 and so recovers the metric characteristic of special relativity. The coefficient $\left(1-\frac{2M}{r}\right)$ shows that the circumference of a massive spherical body differs from its radius multiplied by 2$\pi$. The coefficient $\left(1-\frac{2M}{r}\right)^{-1}$ may be attributed to gravitational redshift. The energy conserved by a free-falling light-emitting particle can be written as 5. $$E=m\left(1-\frac{2M}{r}\right)\frac{dt}{d\tau}, \label{ener}$$ where $\tau$ is the particle’s proper time. Consider an object initially at rest and that remains at rest. Its energy is E = m. From (5), 6. $$\left(1-\frac{2M}{r}\right)\frac{dt}{d\tau}=1.$$ 7. $$\left(1-\frac{2M}{r}\right)^2=\left(1-\frac{2M}{r}\right)-\left(1-\frac{2M}{r}\right)^{-1}\left(\frac{dr}{dt}\right)^2.$$ And whence again 8. $$\frac{dr}{dt}=-\left(1-\frac{2M}{r}\right)\left(\frac{2M}{r}\right)^{1/2}.$$ Shell coordinates are next: 9. $$dt_{shell}=\left(1-\frac{2M}{r}\right)^{1/2}dt,$$ 10. $$dr_{shell}=\left(1-\frac{2M}{r}\right)^{-1/2}dt.$$ Equation (8) can be written as: 11. $$\frac{dr_{shell}}{dt_{shell}}=-\left(\frac{2M}{r}\right)^{-1/2}.$$ The event horizon is the threshold at which the escape velocity from a black hole exceeds the speed of light. At the event horizon, the velocity in (8) tends toward 0, but in (12), toward –1. The particle is falling. A local observer sees an object in free fall entering a black hole at the speed of light. A distant observer sees the object frozen on the event horizon. Both points of view are correct and consistent within their respective frames of reference. The precise nature of the event horizon represents another historically important question. The Schwarzschild metric appears to diverge at R = 2M, the factor dr^2 tending toward $\infty$ at the event horizon. Consider coordinates defined in free fall by using a Lorentz transformation over the shell coordinates: 12. $$dt_{fall}=-\gamma V_{rel}dr_{shell}-\gamma dt_{shell},$$ where V[rel] is the relative velocity between the two frames of reference. It follows that 13. $$dt=\frac{dt_{fall}}{\gamma \left(1-\frac{2M}{r}\right)^{1/2}}+\frac{V_{rel}dr}{\left(1-\frac{2M}{r}\right)}.$$ Replacing $\gamma$ with $(1-\frac{2M}{r})^{-1/2}$ yields 14. $$ds^2= \left(1-\frac{2M}{r}\right)dt^2_{fall}-2 \left(\frac{2M}{r}\right)^{1/2} dt_{fall}dr-dr^2.$$ Although this expression uses a mixed coordinate system, it does show that the singularity at the event horizon is an artifact and so devoid of physical meaning. On the other hand, the central singularity is real, so much so that the Kretschmann invariant diverges from it—something it could not do if the central singularity were purely a mathematical construction. The behavior of light in radial motion inside a black hole can be probed by starting from (14) and writing ds^2 = 0: 15. $$\frac{dr^2}{dt^2_{fall}}+2\left(\frac{2M}{r}\right)^{1/2}\frac{dr}{dt_{fall}}-\left(1-\frac{2M}{r}\right)=0.$$ This equation has two solutions, 16. $$\frac{dr}{dt_{fall}}=-\left(\frac{2M}{r}\right)^{1/2}\pm1,$$ corresponding to movements toward the center and the exterior. Inside the black hole, where r < 2M, both solutions are negative. An emitted photon moves inward. Since nothing can move locally faster than light, it is impossible to escape from a black hole. Consider a particle initially at rest within a black hole. Introducing the proper time, $\tau$, in (8), where dr/d$\tau$ = –(2M/r)^1/2, its time inside the black hole is 17. $$\tau=-\int_{2M}^0\left(\frac{r}{2M}\right)^{1/2}dr=\frac{4}{3}M.$$ The particle reaches a singularity in a finite time. Since the event horizon is mathematically but not physically real,^2 particles within a super massive black hole may enjoy a certain grim existence if tidal effects remain weak at the surface. That singularity is represented as a horizontal line in the Penrose diagram of a Schwarzschild black hole, but it could equally be considered the end of time. Black Hole Thermodynamics In recent decades, much energy has been devoted to investigating the degree to which the properties of black holes conform to the laws of thermodynamics. In general relativity, a black hole is described according to three fundamental parameters: mass, angular momentum, and electrical charge.^3 This suggests that they may be amenable to thermodynamic analysis, if only because thermodynamics studies systems whose properties can also be described by a small number of parameters. A black hole can expand but not contract. It is a one-way job. In 1973, Jacob Bekenstein suggested that the entropy of a black hole might be proportional to the area of its event horizon,^4 18. $$S=\frac{A}{4},$$ where the proportionality constant is determined by consistency considerations.^5 From this conjecture, four laws followed, each with its analogue in old-fashioned thermodynamics: Zeroth Law: The surface gravity of a black hole is constant at its event horizon. The analogue: The temperature of a body is homogeneous at equilibrium. First Law: Black hole perturbations are given by 19. $$SdE=\frac{\kappa}{8\pi}dA+\Omega dJ+\Phi dQ,$$ where E is the energy, $\kappa$, the surface gravity, $\Omega$, angular velocity, J, angular momentum, $\Phi$, electrical potential, and Q, charge. The analogue: Energy conservation in Second Law: Given the weak energy condition $T_{\mu\nu}X^{\mu}X^{\nu}\le0$ for an inhomogeneous vector field, the surface area of a black hole increases in the obvious way: 20. $$\frac{dA}{dt}\ge 0.$$ The analogue: The entropy of a closed system can only increase. Third Law: It is impossible for the surface gravity of a black hole to be precisely zero. The analogue: It is impossible to reach absolute zero in a finite number of operations. The Hawking Effect In 1975, Stephen Hawking established that, contrary to conventional vision, black holes can evaporate and emit radiation.^7 This effect can be understood in a number of different ways. The most basic view considers the tidal effect on vacuum fluctuations at the boundary of a black hole, the razor’s edge. Particle creation is pair-wise. One of the particles drops back into the black hole, the other is ejected beyond the event horizon. The Unruh effect demonstrates that this might well be so.^8 An observer subjected to constant acceleration perceives a thermal bath of particles at the temperature T = a/(2$\pi$). This phenomenon is established using Bogoliubov transformations. This approach involves starting from the Schwarzschild metric and considering a stationary observer in the expression 21. $$r=2M+\frac{\rho^2}{8M}.$$ The associated Rindler metric, in which the Rindler coordinates represent a hyperbolic acceleration reference frame, can then be written in lowest order as $\tau$ = t/(4M). Einstein’s equivalence principle establishes that, given the Unruh effect, the observer would perceive an excited field at the temperature 22. $$T_{loc}=\frac{a}{2\pi}=\frac{1}{2\pi\rho}=\frac{1}{4\pi\sqrt{2M(r-2M)}}.$$ The temperature at distance R is obtained by simply applying the gravitational shift factor g[00](r) / g[00](R) to (22): 23. $$T(R)=\frac{1}{4\pi\sqrt{2Mr(1-\frac{2M}{R})}}.$$ The value at infinity is therefore 24. $$T(\infty)=\frac{1}{4\pi\sqrt{2Mr}},$$ which leads, with r = 2M, to 25. $$T_H=\frac{1}{8\pi M}.$$ In complete units, the temperature is written as $T_H=\hbar c^3/(8\pi G k_B M)$. This is one of the few simple formulas in physics to include all of the fundamental constants. The Hawking effect invokes gravitation, quantum physics, statistical physics, and relativity, all at the same time. In reality, the Hawking effect is obviously much more complex and the spectrum is not entirely thermal. The number of spin particles s emitted per unit of time t and energy Q is 26. $$\frac{d^2N}{dQdt}=\frac{\Gamma_s}{2\pi\left(e^{\frac{Q}{\kappa/(4\pi^2)}}-(-1)^{2s}\right)},$$ with the gray-body factor 27. $$\Gamma_s=4\pi \sigma_s(Q,M,\mu)(Q^2-\mu^2),$$ where $\mu$ is the mass of the emitted particle and $\sigma$, the effective absorption cross section. The absorption cross section is not trivial: it contains information about the structure of space-time.^9 It also serves to express the probability of backscattering from the emitted particle in the gravitational potential. The Hawking effect is explosive. Unlike a piece of metal, for example, the more a black hole radiates, the hotter it becomes. Although this process is negligible for massive astrophysical black holes with an extremely low Hawking temperature, it becomes important for low mass black holes. The existence of such black holes remains unproven, but they may have arisen from particular conditions in the primordial universe.^10 The Hawking effect has nevertheless been observed in similar acoustic black hole systems.^11 Although the evaporation of black holes is well understood, it is nonetheless linked to a central paradox in theoretical physics. The notion that information is lost to the outside universe when it enters a black hole is not inherently problematic. The paradox arises from the fact that evaporation transforms the black hole into a thermal, or quasi-thermal, spectrum. As a result, the information loss that occurs conflicts with quantum field theory.^12 Although many solutions have been proposed, ranging from stable relics to subtle correlations between emitted particles, a consensus is yet to Quantum Gravity The search for a quantum theory of gravitation is considered one of the most important and difficult problems in theoretical physics. The conceptual and technical obstacles are formidable, encompassing, as they do, problems of non-renormalizability and the question of fundamental invariance. It is generally thought that the effects of quantum geometry are only felt close to the singularity at the center of a black hole. By contrast, the outer and measurable zone of a black hole, are faithful to the predictions of general relativity. A number of solutions have been proposed to overcome this somewhat disappointing and limited conclusion. Loop quantum gravity (LQG) is an attempt to formulate a non-perturbative and invariant quantification of general relativity.^13 LQG has been developed in both a canonical form, using the Ashtekar connection and the flow of triad densities,^14 and a covariant form, using spin networks. The description of black holes under LQG is based on the concept of an isolated event horizon,^15 a quasi-local notion liberated from a global description of space-time.^16 In essence, the event horizon acts as a surface intercepting the spin network. Each intersection has corresponding quantum numbers (j, m),^17 where j is a half-integer associated with the area and m is the associated projection corresponding to the curvature. These values verify 28. $$A-\Delta\leq 8\pi \gamma \sum_{p}{\sqrt{j_p(j_p+1)}}\leq A+\Delta,$$ where $\gamma$ is the Immirzi parameter,^18 while $\Delta$ is a smoothing scale that refers to the different intersections. Physicists using a Monte Carlo simulation have demonstrated that Hawking evaporation retains a footprint of the discrete structure of this area^19—an imprint emanating from the effects of quantum geometry; yet if the density of black hole states grows exponentially, it is still not possible to discern emission lines for high mass black holes. For masses relatively close to the Planck mass, the discrete character of the area, given by 29. $$A_j=8 \pi \gamma \sum_{p=1}^N\sqrt{j_p(j_p+1)},$$ where N is the total number of intersections, is revealed in the evaporation spectrum. A Kolmogorov–Smirnov test shows that the observation of 4 × 10^5 events would be required to discriminate LQG-type behavior from what might be considered purely Hawking behavior at 3$\sigma$ for 20% experimental resolution. In this sense, the non-perturbative effects of quantum gravity can alter the semi-classical effects associated with evaporation. It is conceivable that the consequences of discretization would be visible for much more massive black holes. Suppose that transitions between the quantum states of a black hole during evaporation are not associated with a complete state reconfiguration, but only a change of state for a single wafer of the elementary area.^20 In this case, only the quantum states associated with one of the terms in (29) need be considered, rather than the quantum state associated with its sum. A simple calculation shows that the relative spacing between the rays no longer depends on the mass of the black hole, suggesting that the measurement of quantum gravitational effects might be achievable when at some arbitrary distance from the Planck mass. The reasoning is straightforward, if intuitive. When a high mass black hole evaporates, it should emit a very low quantum of energy since the temperature is given by T = 1/(8$\pi$M) for large values of M. Its area should, therefore, decrease by a tiny and possibly sub-Planckian value. Not so. Classically, A = 16$\pi$M^2 and thus dA = 32$\pi$MdM. Given that dM ~ T, dA ~ 4, it follows that a variation in area is not dependent on mass. At the peak of the spectrum, the independence of the variation in area from mass is the underlying reason why a local perspective can induce measurable quantum gravity effects for massive black holes. The diffuse background of gamma rays created by the decay of the neutral pions emanating from the quarks and gluons emitted by the black hole is on the same order of magnitude as this signal and does not mask it.^21 Even if the area spectrum were continuous, the mere existence of a minimum value for the area, probably close to the Planck area, would induce a truncation of the emitted flux by prohibiting excessively low energy values. At a given fixed temperature T, the spectrum would be truncated below 30. $$E_{min}=\frac{T}{4}A_{min}\sim\frac{T}{4}.$$ This is another potential avenue for observation. It seems likely that the gray-body coefficients, which encode the possible backscattering of particles in the gravitational potential of a black hole, are affected by quantum gravitational effects.^ 22 The result would be a distortion of the potentially measurable Hawking spectrum. Bouncing Black Holes Might black holes rebound from the effects of quantum gravity? Some physicists have suggested as much.^23 Something similar happens in loop quantum cosmology when the Big Bang is replaced by a Big Bounce. The current expansion phase would then have been preceded by a phase in which the universe contracted. A number of general arguments lend support to the supposition that non-perturbative quantum geometry effects could cause a transition between a black hole and white hole type of state. From the nodal point, the expected rebound time is proportional to M^2, with a proportionality constant on the order of 5 × 10^–2 set for internal consistency. The Hawking evaporation time is on the order of M^3. In this model, the black holes bounce before evaporating; the Hawking effect acts as a dissipative correction. Primordial black holes formed just after the Big Bang with a mass of 10^26g or less should have already rebounded. The phenomenology associated with these ideas is rich and complex. A more delicate point concerns the energy of the signal emitted by bouncing black holes. Two hypotheses are forthcoming, the first for a low energy, and the second for a high energy signal. On the first hypothesis, only scale counts, and so the wavelength of the emitted radiation must be on the order of the black hole’s size. On the second, the energy of the radiation emitted by the white hole is equal to the energy of the radiation that collapsed to form the black hole in the first place. Since the mass of a primordial black hole is almost in bijective correspondence with its formation time, it is possible to determine the wavelength of the emitted radiation for each mass. Fast radio bursts—the result of a still unidentified astrophysical process—might be phenomena of this type.^24 These mysterious bursts of radio waves could be due to the low-energy component of bouncing black holes when the stochastic nature of their lifespans is taken into account. Although alternative explanations have also been proposed, this model has the advantage of being testable in principle. Indeed, redshift dependence is extremely specific. If fast radio bursts are explained by an astrophysical or particle physics phenomenon—such as the annihilation of dark matter particles—their characteristic frequency must be redshifted in a way that matches the host galaxy. In the case of bouncing black holes, the black holes located further away rebound earlier and are lighter as a result. They emit a higher energy signal, which compensates for the rebound. In the future, it would be interesting to study the effects of a gravitational, rather than a cosmological redshift. In this type of redshift, radiation would be emitted in the vicinity of the black hole as an intense gravitational field, its energy red-shifted by a factor of $\left(1-\frac{2M}{r_e}\right)$, where r[e] is the source of emission. More precise modeling is required for a quantitative estimate, but the characteristic frequencies could be substantially reddened. In this context, it is pertinent to consider varying the coefficients of proportionality between the rebound time and the square of the mass, while also taking care to ensure that the coefficient remains lower than the Hawking time. In doing so, it is little short of remarkable that the excess of gamma rays observed by the Fermi satellite can also be explained by bouncing black holes.^25 A potential link via the high energy component to the events detected by the Auger collaboration, an ongoing effort to study ultra-high energy cosmic rays, is also conceivable. Quasi-Normal Modes Gravitational waves were first measured by the LIGO interferometry project in September 2015.^26 This discovery not only represented the first step in a new area of astrophysics, but also provided the basis for new high-precision tests of general relativity. The black hole coalescence events observed by LIGO exhibit three distinct phases: orbit, fusion, and relaxation. During the final phase, the resulting coalesced black hole emits so-called quasi-normal modes (QNMs), which correspond to its de-excitation by gravitational wave emissions. The radial part of the perturbed metric is written 31. $$\Psi = A e^{-i\omega t} = A e^{-i(\omega_R + i\omega_I)t},$$ where $\omega$[R] characterizes the oscillations and $\omega$[I] is the relaxation time $\tau = \frac{1}{\omega_I}$. QNMs form a discrete set.^27 They are composed of axial and polar perturbations that are described by the Regge–Wheeler and Zerilli equations. Modified gravitation models—which can break the Lorentz invariance, the equivalence principle, and even diffeomorphism invariance—generically lead to QNM modifications.^28 It is conceivable that approaches to quantum gravity might lead to this type of effect. In 2016, a model along these lines was developed by Hal Haggard and Carlo Rovelli.^29 The idea they proposed was straightforward. The scale of curvature is on the order $l_R \sim {\cal R}^{-1/2}$, where the Kretschmann scalar is ${\cal R}^2:=\cal R_{\mu\ nu\rho\lambda}\cal R^{\mu\nu\rho\lambda}$. From this formulation it seems that quantum effects are negligible for the massive black holes encountered in astrophysics. Yet this assertion does not take into account cumulative effects, which may turn out to be significant. On the basis of dimensional arguments, the so-called quanticity of space-time can be estimated and integrated over a proper time $\tau$ at $q=l_P \ {\cal R} \ \tau$. Relating proper time to Schwarzschild time by 32. $$\tau=\sqrt{1-\frac{2M}{r}}\ t,$$ 33. $$q(r) = \frac{M}{r^3} \ \left(1-\frac{2M}{r}\right)^{1/2} t.$$ The maximum of this function is reached for $r=2M\left(1+\frac{1}{6}\right)$, which is where the integrated quantum effects should be most important. It is remarkable that this is outside the event For the moment, these arguments remain heuristic and the exact location of maximum quantum correction has not been rigorously established. For this reason, it is relevant to model the effect of the Schwarzschild metric distortion^30 34. $$ds^2=-f(r)dt^2+f^{-1}(r)dr^2-r^2d\Omega^2,$$ 35. $$f(r)=\left(1-\frac{2M}{r}\right)\left(1+Ae^{-\frac{(r-\mu)^2}{2\sigma^2}}\right)^2.$$ It is then possible to calculate the QNMs in this approach and to examine the parameters for which deviations from general relativity become important. The effects are greatest for a modification at r = 3M. This is not entirely surprising because such a result corresponds to the maximum potential. A numerical analysis shows that the relative variation for the real part of the QNMs is given by 0.8 × A, for $\mu$ = (7/6)r[S], and that the relative variation of the imaginary part is given by 2.7 × A. Given that these coefficients of proportionality are known, it then becomes easy to estimate the magnitude of the quantum corrections that give rise to an observation. This is an enticing prospect. In a little more than a decade, the Einstein Telescope should allow for a measurement of the fundamental and first harmonics of QNMs with a precision of a few percent.^31 Event Horizon Telescope The information paradox demonstrates that there is some tension between the theories of relativity and quantum mechanics and the principle of locality. To resolve this tension, a new and radical development is needed. Several arguments have been made emphasizing that significant changes are expected on the event horizon, or beyond, even for supermassive black holes.^32 In 2019, the Event Horizon Telescope captured an image of the black hole at the heart of the M87 galaxy using interferometric radio astronomy techniques.^33 It turns out that such images could also be used to probe quantum gravity effects.^34 The presence of metric fluctuations with time and length scales determined by the size of the black hole should result in a temporal variation in images. This is potentially measurable for the most massive black holes. The period is given by 36. $$P\simeq 0.93 \left(\frac{M}{4.3\times 10^6 M_\odot}\right)~\mbox{hours},$$ where M$_\odot$ is the mass of the Sun. At this stage, the measurements being made are few in number and the complex averaging technique in use makes interpretation in terms of stability a challenge. Conventional astrophysical effects, it should be noted, can also generate temporal effects. Although there are still limitations, this approach holds much promise for future research and should also be considered in relation to gravitational waves. New Physics It has long been assumed that the quantum gravity effects associated with black holes are confined to their centers and are unobservable as a result. Despite such constraints, black holes still have much to offer researchers investigating quantum gravitation. The emergence of a true black hole astronomy based on the measurement of gravitational waves and radio interferometry has the potential to bring quantum gravity into the field of experimental or observational science. On this view, black holes should rightly be considered incomparable laboratories for the development of new physics. Translated and adapted from the French by the editors.
{"url":"https://inference-review.com/article/black-holes-and-quantum-gravity","timestamp":"2024-11-04T20:16:44Z","content_type":"text/html","content_length":"110384","record_id":"<urn:uuid:de11f24a-4a48-4ac9-8268-c95abab21bd8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00521.warc.gz"}
Adding Dark Matter to the Standard Model 1. Introduction The Standard Model of quarks and leptons is enormously successful, it has passed many precision tests, and is here to stay. However, if the Standard Model were complete, the universe would have no matter: no dark matter, little baryonic matter, and no neutrino masses. “The New Minimal Standard Model” [1] is an extension that aims to “include the minimal number of new degrees of freedom to accommodate convincing (e.g., >5σ) evidence for physics beyond the Minimal Standard Model”. But this aim has a moving target: as new data becomes available, the model may need to be amended accordingly. The inclusion of a “minimal number of new degrees of freedom” is in accordance with the absence of new particles at the LHC. The purpose of the present study is to see if the New Minimal Standard Model is consistent with the new data on dark matter that has recently become available, and, if necessary, update the model accordingly. Let us briefly describe the New Minimal Standard Model [1]. First, the Standard Model Lagrangian is extended to include classical gravity. Next, a gauge singlet real scalar Klein-Gordon field with $ {Z}_{2}$ parity is added for dark matter. Dark energy is described by the cosmological constant $\Lambda$. Two gauge singlet Majorana neutrinos are added to account for neutrino masses and mixing (leaving one neutrino massless until data requires otherwise), and also to obtain baryogenesis via leptogenesis. Finally, a real gauge singlet scalar field is included to implement inflation. The outline of this article is as follows. Measurements of dark matter properties are presented in Section 2. Scalar, vector and sterile neutrino dark matter models are studied in Sections 3 to 5. We close with conclusions. 2. Measured Properties of Dark Matter Fits to spiral galaxy rotation curves [2] [3] [4], and studies of galaxy stellar mass distributions [5] [6] [7], independently obtain the following dark matter scenario. Dark matter is in thermal and diffusive equilibrium with the Standard Model sector in the early universe, i.e. no freeze-in, and decouples (from the Standard Model sector and from self-annihilation) while still ultra-relativistic, i.e. no freeze-out. The decoupling occurs at a temperature $T>{T}_{C}\approx 0.2\text{\hspace{0.17em}}\text{GeV}$ to not upset Big Bang Nucleosynthesis. Dark matter has zero chemical potential. The root-mean-square velocity of non-relativistic dark matter particles, at expansion parameter $a$, is ${v}_{h\text{rms}}\left(a\right)={v}_{h\text{rms}}\left(1\right)/a$, where ${v}_{h\text{rms}}\left(1\right)=0.48±0.19\text{\hspace{0.17em}}\text{km}/\text{s}$. Dark matter becomes non-relativistic at an expansion parameter ${{a}^{\prime }}_{h\text{NR}}\equiv {v}_{h\text {rms}}\left(1\right)/c=\left(1.61±0.64\right)×{10}^{-6}$. Dark matter is warm with a free-streaming cut-off wavenumber ${k}_{\text{fs}}={0.92}_{-0.24}^{+0.54}{\text{Mpc}}^{-1}$. The corresponding free-streaming transition mass is ${M}_{\text{fs}}\equiv 4\pi {\left(1.555/{k}_{\text{fs}}\right)}^{3}{\Omega }_{m}{\rho }_{\text{crit}}/3={10}^{11.8±0.5}{M}_{\odot }$, comparable with the mass of the Milky Way. The dark matter particle mass is ${m}_{h}={73}_{-17}^{+33}\text{\hspace{0.17em}}\text{eV}$ ( ${m}_{h}={61}_{-14}^{+28}\text{\hspace{0.17em}}\text{eV}$ ) for scalar (vector) dark matter. There is evidence in favor of boson dark matter with a significance of 3.5σ [7]. The number of boson degrees of freedom is limited to ${N}_{b}=1$ or 2, i.e. to scalar or vector dark matter. The ultra-relativistic dark matter temperature, relative to the photon temperature, after e^+e^− annihilation, is measured to be ${T}_{h}/T={0.456}_{-0.054}^{+0.039}$ ( ${T}_{h}/T={0.383}_{-0.050}^ {+0.033}$ ) for scalar (vector) dark matter. All uncertainties have 68% confidence. These numbers are obtained from Table 4 of [7] for the boson scenario that assumes that non-relativistic dark matter particles reach non-relativistic thermal equilibrium (NRTE) (i.e. the non-relativistic Bose-Einstein momentum distribution) due to their dark matter-dark matter elastic scatterings. The relations between ${v}_{h\text{rms}}\left(1\right)$ and ${m}_{h}$ and ${T}_{h}/T$ for zero chemical potential are [7]: In the case of negligible dark matter elastic scattering, the non-relativistic dark matter retains its ultra-relativistic thermal equilibrium (URTE), i.e. the ultra-relativistic Bose-Einstein momentum distribution, and the measurements are ${v}_{h\text{rms}}\left(1\right)=0.67±0.24\text{\hspace{0.17em}}\text{km}/\text{s}$, ${{a}^{\prime }}_{h\text{NR}}=\left(2.23±0.80\right)×{10}^{-6}$, $ {k}_{\text{fs}}={0.37}_{-0.08}^{+0.17}\text{\hspace{0.17em}}{\text{Mpc}}^{-1}$, ${M}_{\text{fs}}={10}^{13.0±0.4}{M}_{\odot }$, and ${T}_{h}/T={0.367}_{-0.038}^{+0.029}$ ( ${T}_{h}/T={0.309}_{-0.033}^ {+0.024}$ ) and ${m}_{h}={124}_{-25}^{+50}\text{\hspace{0.17em}}\text{eV}$ ( ${m}_{h}={104}_{-21}^{+42}\text{\hspace{0.17em}}\text{eV}$ ) for scalar (vector) dark matter. The relations between ${v}_ {h\text{rms}}\left(1\right)$ and ${m}_{h}$ and ${T}_{h}/T$ for zero chemical potential are [7]: For an overview of these measurements see [8]. To make this article self-contained, Figure 1 presents forty-six independent measurements of ${{a}^{\prime }}_{h\text{NR}}$ from fits to spiral galaxy rotation curves [4]. From ${{a}^{\prime }}_{h\text{NR}}$ we calculate the warm dark matter free-streaming cut-off wavenumber ${k}_{\text{fs}}$ [7]. This cut-off wavenumber is also obtained from galaxy stellar mass distributions as shown in Figure 2 [7]. These independent measurements are consistent! The current limit on dark matter self interaction cross-section is ${\sigma }_{\text{DM-DM}}/{m}_{\text{DM}}<0.47\text{\hspace{0.17em}}{\text{cm}}^{\text{2}}/\text{g}$ with 95% confidence [14] [15]. A tentative measurement obtains ${\sigma }_{\text{DM-DM}}/{m}_{\text{DM}}\approx \left(1.7±0.7\right)×{10}^{-4}{\text{cm}}^{\text{2}}/\text{g}$ [16]. If this measurement is confirmed, dark matter retains URTE. The current limits on dark matter particle mass are ${m}_{h}>70\text{\hspace{0.17em}}\text{eV}$ for fermions, and ${m}_{h}>{10}^{-22}\text{ }\text{ }\text{eV}$ for bosons [14]. In the present study we will assume this specific dark matter scenario, and ask the following questions. What dark matter interactions lead to this scenario? How is dark matter created? How does dark matter and the Standard Model sector come into thermal and diffusive equilibrium? How do they decouple? Figure 1. Forty-six independent measurements of the expansion parameter ${{a}^{\prime }}_{h\text{NR}}$ at which dark matter particles become non-relativistic (uncorrected for dark matter halo rotation). Each measurement was obtained by fitting the rotation curves of a spiral galaxy in the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample [9] with the indicated total luminosity at 3.6 μm. Full details of each fit are presented in [4]. Figure 2. Distribution of stellar masses of galaxies at redshift $z=4.5$ compared with predictions. From this data, and similar distributions corresponding to $z=6,7$, and 8, we obtain the power spectrum cut-off wavenumber ${k}_{\text{fs}}={0.90}_{-0.40}^{+0.44}\text{\hspace{0.17em}}{\text{Mpc}}^{-1}$. Figure from [7]. The data are from [10] [11] [12] [13]. How does dark matter acquire mass? Why is dark matter stable (relative to the age of the universe)? And, why is the measured dark matter particle mass ${m}_{h}$ so tiny compared to the Higgs boson mass ${M}_{H}$ ? Notes: For a discussion of tensions between measurements of, and limits on, thermal relic dark matter mass see [7] [8]. We should mention that the observed galaxy mass distribution presented in Figure 2 is in tension with Lyman-α forest studies [17]. The 3.5σ confidence in favor of boson dark matter mentioned above, based on spiral galaxy rotation curves and galaxy stellar mass distributions, does not include the Tremaine-Gunn limit on fermion dark matter mass [18] [19]. Including this limit would strengthen the confidence. However, the Tremaine-Gunn limit needs to be revised in view of resent observations on dwarf spheroidal “satellites” of the Milky Way [20] [21] [22] [23]. 3. Scalar Dark Matter The measured dark matter properties allow scalar or vector dark matter, with fermion dark matter disfavored but not ruled out. We begin with the real scalar field $S$ of [1]. To attain thermal and diffusive equilibrium between dark matter and the Standard Model sector we need to add a coupling between the two. The simplest renormalizable coupling is proportional to $\left(SS\right)\left({\ varphi }^{†}\varphi \right)$ since $\left({\varphi }^{†}\varphi \right)$ is the only Standard Model gauge singlet scalar with mass dimension ≤ 2. $\varphi$ is the Higgs boson field. The interaction rates $\Gamma \left(SS↔hh\right)$, relative to the universe expansion rate, scale as 1/T, so equilibrium is approached towards the future, and statistical equilibrium needs to be achieved by $T\ gtrsim {M}_{H}$ to avoid freeze-in. Decoupling occurs when the Higgs boson $\varphi$ becomes non-relativistic at $T\approx {M}_{H}$. Thereafter the reaction rates become exponentially suppressed because the Higgs bosons annihilate, and only the tail of the $S$ particle momentum distribution is above threshold. With ${M}_{S}<{M}_{H}$ there is no freeze-out if $S$ is stable. A super-renormalizable interaction proportional to $S\left({\varphi }^{†}\varphi \right)$ needs to be avoided because it leads to a ratio of number densities ${n}_{S}/{n}_{\varphi }$ that depends on T. For this reason, and to obtain a stable $S$, and to avoid extra parameters in the potential $V\left(S\right)$, we impose a ${Z}_{2}$ symmetry $S↔-S$. Therefore, we consider a gauge singlet real Klein-Gordon scalar dark matter field $S$, with ${Z}_{2}$ symmetry $S↔-S$, and portal coupling to the Higgs boson [1]. Here we present a brief review of the model to see if it can describe the observed properties of dark matter. To the Standard Model Lagrangian we add ${\mathcal{L}}_{S}=\frac{1}{2}{\partial }_{\mu }S\cdot {\partial }^{\mu }S-\frac{1}{2}{\stackrel{¯}{m}}_{S}^{2}{S}^{2}-\frac{{\lambda }_{S}}{4!}{S}^{4}+\cdot \cdot \cdot ,$(5) and a contact coupling to the Higgs field $\varphi$ : ${\mathcal{L}}_{S\varphi }=-\frac{1}{2}{\lambda }_{hS}\left({\varphi }^{†}\varphi \right){S}^{2}.$(6) (We are omitting the metric factor $\sqrt{-g}$.) After electroweak symmetry breaking (EWSB) the Higgs doublet, in the unitary gauge, has the form $\varphi =\left(\begin{array}{c}{\varphi }^{+}\\ {\varphi }^{0}\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\ {v}_{h}+h\left(x\right)\end{array}\right)$(7) with real $h\left(x\right)$, the interaction Lagrangian becomes ${\mathcal{L}}_{S\varphi }=-\frac{1}{4}{\lambda }_{hS}\left({v}_{h}^{2}+2{v}_{h}h+{h}^{2}\right){S}^{2},$(8) and dark matter particles acquire a mass squared ${M}_{S}^{2}=\frac{1}{2}{\lambda }_{hS}{v}_{h}^{2}+{\stackrel{¯}{m}}_{S}^{2}$(9) assumed to be >0. We note that $S$ is absolutely stable since there is no interaction term with a single $S$. The running of coupling parameters to 1-loop or 2-loop order can be found in [24] [25] [26] [27]. Some center of mass cross-sections are $\sigma \left(hh↔SS\right)=\frac{{\lambda }_{hS}^{2}}{16\pi s}\frac{|{p}_{f}|}{|{p}_{i}|},$(10) $\sigma \left({W}^{-}{W}^{+}↔{h}^{*}↔SS\right)=\frac{{\lambda }_{hS}^{2}{M}_{W}^{4}}{4\pi s}\frac{|{p}_{f}|}{|{p}_{i}|}\frac{1}{{\left(s-{M}_{H}^{2}\right)}^{2}+{M}_{H}^{2}{\Gamma }_{H}^{2}},$(11) where $s\equiv {\left({p}_{1}+{p}_{2}\right)}^{2}$ is the Mandelstam variable. The reaction rates are exponentially suppressed at $T\lesssim {M}_{H}$ or $T\lesssim {M}_{W}$. These interactions bring dark matter into thermal and diffusive equilibrium with the Standard Model sector at $T\gtrsim {M}_{H}$ if $|{\lambda }_{hS}|\gtrsim {10}^{-6}$ and 10^−^6, respectively. The Higgs boson invisible decay rate for ${M}_{H}>2{M}_{S}$ is $\Gamma \left(h\to SS\right)=\frac{{\lambda }_{hS}^{2}{v}_{h}^{2}}{8\pi {M}_{H}}.$(12) Requiring this decay rate to be less than the limit on the invisible width of the Higgs boson (≈0.013 GeV [14] ) implies $|{\lambda }_{hS}|\lesssim 0.03$. In summary, we require ${10}^{-6}\lesssim | {\lambda }_{hS}|\lesssim 0.03$. As an example, take ${M}_{S}=73\text{\hspace{0.17em}}\text{eV}$ and ${\lambda }_{hS}=±{10}^{-5}$, so $\frac{1}{2}{\lambda }_{hS}{v}_{h}^{2}=±0.3\text{\hspace{0.17em}}{\text{GeV}}^{2}$. Then there is fine tuning in (9): ${\stackrel{¯}{m}}_{S}^{2}=5×{10}^{-15}\mp 0.3\text{\hspace{0.17em}}{\text{GeV}}^{2}$. Note that to achieve ${M}_{S}$ as low as 73 eV starting from ${v}_{h}=246\text{\hspace {0.17em}}\text{GeV}$ requires fine tuning between two unrelated input parameters with dimensions of mass. Let us now check whether non-relativistic dark matter acquires the non-relativistic Bose-Einstein momentum distribution due to elastic scattering. The cross-section at $T\ll {M}_{H}$ (neglecting interference with (14)), $\sigma \left(SS\to {h}^{*}\to SS\right)=\frac{9{\lambda }_{hS}^{4}{v}_{h}^{4}}{16\pi s{M}_{H}^{4}},$(13) implies that the mean time between collisions of dark matter particles at $T\lesssim {M}_{S}$ is less than the age of the universe even for ${\lambda }_{hS}={10}^{-6}$, so, in this model, non-relativistic dark matter has non-relativistic thermal equilibrium. The cross-section (neglecting interference with (13)), $\sigma \left(SS\to SS\right)=\frac{{\lambda }_{S}^{2}}{16\pi s},$(14) also corresponds to collisional dark matter if ${\lambda }_{S}>{10}^{-11}$. Dark matter decouples from the Standard Model sector at $T\approx {M}_{H}$ when the Higgs bosons become non-relativistic. As the universe expands and cools, particles and antiparticles that become non-relativistic annihilate heating the Standard Model sector without heating dark matter, or neutrinos if they have already decoupled. For decoupling at ${M}_{H}$ we expect the temperature of ultra-relativistic dark matter, relative to the photon temperature, after e^+e^− annihilation, to be ${T}_{h}/T={\left[8×43/\left(385×22\right)\right]}^{1/3}=0.344$ [14], which can be compared with the measured ratio ${T}_{h}/T={0.456}_{-0.054}^{+0.039}$ [7]. The cross-section limit ${\sigma }_{\text{DM-DM}}/{m}_{\text{DM}}<0.47\text{\hspace{0.17em}}{\text{cm}}^{\text{2}}/\text{g}$ [14] at $a\approx 1$, and (13), implies ${\lambda }_{hS}<5×{10}^{-8}$, so the present model is ruled out. If we lower ${\lambda }_{hS}$ to this value, $S$ and the Standard Model sector do not achieve statistical equilibrium at $T\approx {M}_{H}$. 4. Vector Dark Matter To reduce the dark matter-dark matter elastic scattering cross-section, and to relieve the fine tuning in the model of Section 3, we attempt reaching the small ${m}_{h}$ in two steps. To the Standard Model Lagrangian we add a complex scalar field $S$ that is invariant with respect to the local $U{\left(1\right)}_{S}$ transformation $S\to \mathrm{exp}\left[i{Q}_{S}\alpha \left(x\ right)\right]S$. The corresponding vector gauge boson ${V}^{\mu }$ acquires mass due to the breaking of the $U{\left(1\right)}_{S}$ symmetry of the ground state. In the present model, $V$ is the dark matter candidate, and $S$ decays to $VV$. The dark matter sector is known in the literature as the “Abelian Higgs model”. The relevant part of the Lagrangian is ${\mathcal{L}}_{\varphi SV}={\left({D}^{\mu }\varphi \right)}^{†}\left({D}_{\mu }\varphi \right)+{\left({{D}^{\prime }}^{\mu }S\right)}^{†}\left({{D}^{\prime }}_{\mu }S\right)-V\left(\varphi ,S\ $V\left(\varphi ,S\right)=-{\mu }_{h}^{2}\left({\varphi }^{†}\varphi \right)+{\lambda }_{h}{\left({\varphi }^{†}\varphi \right)}^{2}-{\mu }_{s}^{2}\left({S}^{†}S\right)+{\lambda }_{s}{\left({S}^{†}S\ right)}^{2}+{\lambda }_{hs}\left({\varphi }^{†}\varphi \right)\left({S}^{†}S\right),$(16) $i{D}_{\mu }=i{\partial }_{\mu }-g\frac{\tau }{2}\cdot {W}_{\mu }-{g}^{\prime }\frac{1}{2}{B}_{\mu },$(17) $i{{D}^{\prime }}_{\mu }=i{\partial }_{\mu }+{g}_{V}{Q}_{S}{V}_{\mu },$(18) ${V}_{\mu }\to {V}_{\mu }+\frac{1}{{g}_{V}}{\partial }_{\mu }\alpha .$(19) $S$ and the Standard Model sector have no charges in common. For ${\mu }_{h}^{2}>0$, ${\lambda }_{h}>0$, ${\mu }_{s}^{2}>0$, and ${\lambda }_{s}>0$, there is symmetry breaking, and the fields $\varphi ={\left({h}^{+},\left(h+iA+{v}_{h}\right)/\sqrt{2}\right)}^ {\text{T}}$ and $S=\left(s+i\rho +{v}_{s}\right)/\sqrt{2}$ acquire vacuum expectation values [28] ${v}_{h}^{2}=\frac{2{\mu }_{s}^{2}{\lambda }_{hs}-4{\mu }_{h}^{2}{\lambda }_{s}}{{\lambda }_{hs}^{2}-4{\lambda }_{s}{\lambda }_{h}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace {0.17em}}\text{\hspace{0.17em}}{v}_{s}^{2}=\frac{2{\mu }_{h}^{2}{\lambda }_{hs}-4{\mu }_{s}^{2}{\lambda }_{h}}{{\lambda }_{hs}^{2}-4{\lambda }_{s}{\lambda }_{h}}$(20) if ${v}_{h}^{2}>0$ and ${v}_{s}^{2}>0$. In unitary gauge, the real amplitudes A and $\rho$ become the longitudinal components of ${Z}^{\mu }$ and ${V}^{\mu }$, respectively, and the complex amplitude ${h}^{±}$ becomes the longitudinal components of ${W}^{+}$ and ${W}^{-}$. The mass eigenstates are ${M}_{\varphi ,S}^{2}=\left({\lambda }_{h}{v}_{h}^{2}+{\lambda }_{s}{v}_{s}^{2}\right)±\sqrt{{\left({\lambda }_{h}{v}_{h}^{2}-{\lambda }_{s}{v}_{s}^{2}\right)}^{2}+{\left({\lambda }_{hs}{v}_{h}{v}_ and the mixing angle is $\mathrm{tan}\left(2\theta \right)=\frac{{\lambda }_{hs}{v}_{h}{v}_{s}}{{\lambda }_{s}{v}_{s}^{2}-{\lambda }_{h}{v}_{h}^{2}}.$(23) To bring $S$ into thermal and diffusive equilibrium with the Standard Model sector without exceeding the limit on the invisible width of the Higgs boson $h$ requires ${10}^{-6}\lesssim {\lambda }_ {hs}\lesssim 0.03$ as in Section 3. Some reactions of interest are $\Gamma \left(s\to VV\right)=\frac{{g}_{V}^{4}{Q}_{S}^{4}{v}_{s}^{2}}{2\pi {M}_{S}},$(24) $\sigma \left(ss\to {h}^{*}\to {W}^{+}{W}^{-}\right)=\frac{{\lambda }_{hs}^{2}{M}_{W}^{4}}{4\pi s}\frac{|{p}_{f}|}{|{p}_{i}|}\frac{1}{{\left(s-{M}_{H}^{2}\right)}^{2}+{M}_{H}^{2}{\Gamma }_{H}^{2}},$ $\sigma \left(ss\to VV\right)=\frac{{g}_{V}^{4}{Q}_{S}^{4}}{4\pi s}\frac{|{p}_{f}|}{|{p}_{i}|}.$(26) The couplings of $V$, $s$, and $h$ (up to order 4) are proportional to ${h}^{3}$, ${h}^{2}s$, $h{s}^{2}$, ${s}^{3}$, ${h}^{4}$, ${h}^{2}{s}^{2}$, ${s}^{4}$, ${V}^{2}s$, and ${V}^{2}{s}^{2}$. The kinematics allow $V$ to decay only to γ’s or ν’s. However, we note that there is no coupling with a single $V$, so $V$ is absolutely stable. Our challenge is to choose parameters so that $S$ attains statistical equilibrium with the Standard Model sector at $T\gtrsim {M}_{H}$, but $V$ does not; and we need the decay $S\to VV$ to occur after $S$ has decoupled from the Standard Model sector, and while $S$ is still ultra-relativistic, i.e. within the temperature range ${M}_{S}<T<{M}_{H}$ ; and that ${m}_{h}={M}_{V}=112\text{\hspace Case ${M}_{S}<{M}_{H}$ : Let us assign the high mass eigenstate to $\varphi$, and the low mass eigenstate to $S$ (the opposite case will be considered below). A particular solution of interest has $| \theta |\ll 1$, so ${M}_{H}^{2}\approx 2{\lambda }_{h}{v}_{h}^{2}\approx 2{\mu }_{h}^{2}$ as in the Standard Model. A bench-mark scenario with ${M}_{V}=112\text{\hspace{0.17em}}\text{eV}$ is ${M}_{S} =9×{10}^{-4}\text{ }\text{ }\text{GeV}$, ${\lambda }_{hs}={10}^{-5}$, ${\lambda }_{s}=0.1$, ${g}_{V}{Q}_{S}=6×{10}^{-5}$, ${v}_{s}=2×{10}^{-3}\text{ }\text{GeV}$, and ${\mu }_{s}=0.551\text{\hspace {0.17em}}\text{GeV}$. To meet all requirements, there is fine tuning of ${\mu }_{s}^{2}$ to lower ${v}_{s}$ : the relative difference of the two terms in the numerator of (20) is ≈10^−^6. The reaction rate of $ss↔{h}^{*}↔{W}^{+}{W}^{-}$, relative to the expansion rate of the universe $H$, is $1/\left(\Delta t\cdot H\right)=500$ at $T\approx {M}_{H}$, so this coupling is strong. For $ss↔hh$, $1/\left(\Delta t\cdot H\right)=700$ at $T\approx {M}_{H}$, so this coupling is also strong. For $ss↔VV$, $1/\left(\Delta t\cdot H\right)=3×{10}^{-4}$ (200) at $T\approx {M}_{H}$ ( ${M}_{S}$ ), so $V$ does not attain statistical equilibrium with $S$, or with the Standard Model sector, at $T\gtrsim {M}_{H}$. The decay rate of $s\to VV$, relative to the expansion rate of the universe $H$, is $\Gamma \left(s\to VV\right)/H=3×{10}^{-7}$ (3 × 10^4) at $T\approx {M}_{H}$ ( ${M}_{S}$ ), so indeed we have arranged that the decay occurs after $S$ has decoupled, and while $S$ is still ultra-relativistic, i.e. in the temperature range ${M}_{S}<T<{M}_{H}$. The cross-section for $VV\to {s}^{*}\to VV$ at $T\ll {M}_{S}$ is $\sigma \left(VV\to {s}^{*}\to VV\right)=\frac{9{g}_{V}^{8}{Q}_{S}^{8}{v}_{s}^{4}}{\pi s{M}_{S}^{4}}.$(27) This cross-section implies that the mean dark matter particle interaction rate is much less than the expansion rate of the universe $H$ at all temperatures, so, in this model, non-relativistic dark matter retains the ultra-relativistic Bose-Einstein momentum distribution. The two $V$ ’s in the decay $S\to VV$ have correlated polarizations, so the average number of boson degrees of freedom, needed to calculate the dark matter density (see (21) of [7] ) is ${N}_{bV}=\ left(2+1\right)/2$. Then, from (3) and (4), the measured values for this scenario are ${m}_{h}\equiv {M}_{V}={112}_{-23}^{+45}\text{\hspace{0.17em}}\text{eV}$, and ${T}_{h}/T={0.332}_{-0.029}^ For zero chemical potential, the number of $s$ per unit volume, given by the ultra-relativistic Bose-Einstein distribution, is ${n}_{s}=\frac{{N}_{bs}}{{\left(2\pi \hslash \right)}^{3}}{\int }_{0}^{\infty }\frac{4\pi {p}^{2}\text{d}p}{\mathrm{exp}\left[\frac{pc}{k{T}_{s}}\right]-1},$(28) where the number of boson degrees of freedom of $s$ is ${N}_{bs}=1$. After the decay $2{n}_{s}={n}_{V}=\frac{{N}_{bV}}{{\left(2\pi \hslash \right)}^{3}}{\int }_{0}^{\infty }\frac{4\pi {p}^{2}\text{d}p}{\mathrm{exp}\left[\frac{pc}{k{T}_{V}}\right]-1}.$(29) Each $s$ in 8 orbitals of momentum 2p decays to two $V$ ’s corresponding to one orbital with momentum p, so $2{n}_{s}={n}_{V}=2\cdot 8\frac{{N}_{bs}}{{\left(2\pi \hslash \right)}^{3}}{\int }_{0}^{\infty }\frac{4\pi {p}^{2}\text{d}p}{\mathrm{exp}\left[\frac{2pc}{k{T}_{s}}\right]-1}.$(30) Integrating, we obtain ${T}_{V}={\left(4/3\right)}^{1/3}{T}_{s}$. So, the predicted ratio is ${T}_{h}/T={\left(4/3\right)}^{1/3}\cdot 0.344=0.379$, to be compared with the measured value ${T}_{h}/T= The cross-section limit ${\sigma }_{\text{DM-DM}}/{m}_{\text{DM}}<0.47\text{\hspace{0.17em}}{\text{cm}}^{\text{2}}/\text{g}$ [14] at $a\approx 1$, and (27), implies ${g}_{V}{Q}_{S}<4.3×{10}^{-4}$, in agreement with the benchmark solution. The tentative measurement ${\sigma }_{\text{DM-DM}}/{m}_{\text{DM}}\approx \left(1.7±0.7\right)×{10}^{-4}\text{ }{\text{cm}}^{\text{2}}/\text{g}$ [16], if confirmed, would imply ${g}_{V}{Q}_{S}\approx 1.6×{10}^{-4}$, which is in agreement with the benchmark solution within uncertainties! In summary, the vector model with ${M}_{S}<{M}_{H}$ is consistent with all currently measured properties of dark matter. There is fine tuning to obtain the small required symmetry breaking of the ground state of $S$. Case ${M}_{S}>{M}_{H}$ : Let us now assign the high mass eigenstate to $S$, and the low mass eigenstate to $\varphi$. Again, as an example, we consider the case $|\theta |\ll 1$, so ${M}_{H}^{2}\ approx 2{\lambda }_{h}{v}_{h}^{2}\approx 2{\mu }_{h}^{2}$ as in the Standard Model, and ${M}_{S}^{2}\approx 2{\lambda }_{s}{v}_{s}^{2}\approx 2{\mu }_{s}^{2}$. A benchmark solution with ${M}_{V}=112\ text{\hspace{0.17em}}\text{eV}$ is ${M}_{S}=135\text{\hspace{0.17em}}\text{GeV}$, ${\lambda }_{hs}=3×{10}^{-5}$, ${\lambda }_{s}=0.1$, ${g}_{V}{Q}_{S}=4×{10}^{-10}$, ${v}_{s}=300\text{\hspace {0.17em}}\text{GeV}$, and ${\mu }_{s}=96\text{\hspace{0.17em}}\text{GeV}$. When particles $S$ become non-relativistic at $T\approx {M}_{S}>{M}_{H}$, they decay mostly to the Standard Model sector: reactions $ss\to {h}^{*}\to {W}^{+}{W}^{-}$ are much faster than the universe expansion rate, while $ss\to VV$ and $s\to VV$ are much slower, so the universe is left with no dark matter. Assigning charges Q[S] to Standard Model particles, to enhance or replace the contact interaction between $S$ and $\varphi$, does not lead to compelling alternative models. 5. Sterile Neutrino Dark Matter Observations of spiral galaxy rotation curves and of galaxy stellar mass distributions favor boson over fermion dark matter with a significance of 3.5σ [7], so we should not yet rule out fermion dark matter. Sterile neutrinos have been studied extensively as dark matter candidates [29] [30] [31]. In this section we briefly review sterile neutrinos and see if they are consistent with the measured properties of dark matter presented in Section 2. We extend the Standard Model with a gauge singlet neutrino ${u }_{R}$ with a Majorana mass $M={107}_{-20}^{+36}\text{\hspace{0.17em}}\text{eV}$. This is the measured mass for the case of fermion dark matter retaining ultra-relativistic thermal equilibrium (URTE), see Table 4 of [7]. We will refer to the two irreducible representations of the proper Lorentz group of dimension 2 as “Weyl_L” and “Weyl_R”. For simplicity we focus on one generation. ${u }_{L}$ and ${u }_{R}$ are two-component Weyl_L and Weyl_R fields, respectively. $i{\sigma }_{2}{u }_{L}^{*}$ and $i{\sigma }_{2}{u }_{R}^{*}$ transform as Weyl_R and Weyl_L fields, respectively, where ${\sigma }_{2}$ is a Pauli matrix. ${u }_{L}^{†}{u }_{R}$, ${u }_{R}^{†}{u }_{L}$, ${u }_{R}^{\text{T}}{\sigma }_{2}{u }_{R}$, and ${u }_{R} ^{†}{\sigma }_{2}{u }_{R}^{*}$ are scalars with respect to the proper Lorentz group. To include Weyl spinors into the Standard Model, it is convenient to use 4-component Dirac spinor notation. Our metric is $diag\left({\eta }^{\mu u }\right)=\left(1,-1,-1,-1\right)$. The matrices A and C are defined, in any basis, as $A{\gamma }_{\mu }={\gamma }_{\mu }^{†}A$, and ${\gamma }_{\mu }C=-C{\gamma }_{\mu }^{\text{T}}$ [31], with ${A}^{†}=A$, ${C}^{\text{T}}=-C$, and $C{A}^{*}{C}^{*}A =1$. We define $\stackrel{˜}{\psi }\equiv {\psi }^{†}A$, and the charge conjugate field ${\psi }^{c}\equiv C{\stackrel{˜}{\psi }}^{\text{T}}$. Then ${\left({\psi }^{c}\right)}^{c}=\psi$, and ${\ stackrel{˜}{\psi }}^{c}=-{\psi }^{\text{T}}{C}^{-1}$. A Dirac spinor that satisfies ${\psi }^{c}={\text{e}}^{i\xi }\psi$ is a Majorana spinor ( $\xi$ is an arbitrary phase). In a Weyl basis [30], ${\psi }^{c}=-i{\gamma }^{2}{\psi }^{*}$, $A={\gamma }^{0}$, $\psi =\left(\begin{array}{c}{u }_{L}\\ {u }_{R}\end{array}\right),\text{ }\stackrel{˜}{\psi }=\left({u }_{R}^{†},{u }_{L}^{†}\right),$(31) ${\gamma }^{0}=\left(\begin{array}{cc}0& {\sigma }_{0}\\ {\sigma }_{0}& 0\end{array}\right),\text{ }{\gamma }^{k}=\left(\begin{array}{cc}0& {\sigma }_{k}\\ -{\sigma }_{k}& 0\end{array}\right),\text{ }C=\left(\begin{array}{cc}-i{\sigma }_{2}& 0\\ 0& i{\sigma }_{2}\end{array}\right),$(32) ${\gamma }^{5}=\left(\begin{array}{cc}-{\sigma }_{0}& 0\\ 0& {\sigma }_{0}\end{array}\right),\text{ }{\psi }_{L}=\left(\begin{array}{c}{u }_{L}\\ i{\sigma }_{2}{u }_{L}^{*}\end{array}\right),\text{ } {\psi }_{R}=\left(\begin{array}{c}-i{\sigma }_{2}{u }_{R}^{*}\\ {u }_{R}\end{array}\right).$(33) Note that ${\psi }_{L}^{c}={\psi }_{L}$, and ${\psi }_{R}^{c}={\psi }_{R}$, so these are Majorana fields. With this notation the Majorana fields ${\psi }_{L}$ and ${\psi }_{R}^{c}$ can mix. Note however that ${\psi }_{L}$ and ${\psi }_{R}^{c}$ are distinct: ${\psi }_{L}$ has weak interactions while ${\psi }_{R}^{c}$ does not. ${\stackrel{˜}{\psi }}_{R}{\psi }_{R}$, ${\stackrel{˜}{\psi }}_{L} ^{c}{\psi }_{R}^{c}$, ${\stackrel{˜}{\psi }}_{R}{\psi }_{L}$, ${\stackrel{˜}{\psi }}_{R}^{c}{\psi }_{R}$, and ${\stackrel{˜}{\psi }}_{R}^{c}{\psi }_{R}^{c}$ are scalars with respect to the proper Lorentz group. The neutrino mass term after electroweak symmetry breaking has the form [31] ${\mathcal{L}}_{u mass}=-\frac{m}{2}{\stackrel{˜}{\psi }}_{L}^{c}{\psi }_{R}^{c}-\frac{m}{2}{\stackrel{˜}{\psi }}_{R}{\psi }_{L}-\frac{M}{2}{\stackrel{˜}{\psi }}_{R}{\psi }_{R}^{c}+H.c.,$(34) where $m=Y{v}_{h}/\sqrt{2}$ is a Dirac mass ( $Y$ is a Yukawa coupling), and $M$ is a Majorana mass. We consider the case $|m/M|\ll 1$. The mass eigenstates are [31] $\begin{array}{l}{\psi }_{\text{a}}=i\mathrm{cos}\theta {\psi }_{L}-i\mathrm{sin}\theta {\psi }_{R}^{c},\text{ }\text{with}\text{\hspace{0.17em}}\text{mass}\text{\hspace{0.17em}}{m}_{\text{a}}=\frac {{m}^{2}}{M},\\ {\psi }_{\text{s}}=\mathrm{sin}\theta {\psi }_{L}+\mathrm{cos}\theta {\psi }_{R}^{c},\text{ }\text{with}\text{\hspace{0.17em}}\text{mass}\text{\hspace{0.17em}}{m}_{\text{s}}=M,\end where $\mathrm{tan}\left(\theta \right)=m/M$, ${u }_{La}\left(t\right)={u }_{La}\left(0\right)\mathrm{exp}\left[\mp iEt±i\sqrt{{E}^{2}-{m}_{a}^{2}}x\right]$, and ${u }_{Rs}\left(t\right)={u }_{Rs}\ left(0\right)\mathrm{exp}\left[±iEt\mp i\sqrt{{E}^{2}-{M}^{2}}x\right]$. Let us now consider dark matter production. We are interested in the reactions ${u }_{e}{e}^{+}\to {W}^{+*}\to {u }_{s}{e}^{+}$, or $u\stackrel{¯}{u}\to {Z}^{*}\to {u }_{s}{u }_{e}$. First, we verify that the produced ${\psi }_{L}$ is a coherent superposition of ${\psi }_{\text{a}}$ and ${\psi }_{\text{s}}$. The coherence factor is [32] ${\epsilon }_{\text{coh}}=\mathrm{exp}\left[-\Delta {M}^{2}/\left(8{\sigma }_{E}^{2}\right)\right]\cdot \mathrm{exp}\left[-\Delta {t}^{2}/{t}_{\text{coh}}^{2}\right],$(36) with $\Delta {M}^{2}\equiv {M}^{2}-{m}_{a}^{{}^{2}}$. Since we are interested in energy E of ${u }_{L}$ of order ${M}_{W}/2$, we take its uncertainty to be ${\sigma }_{E}\approx {\Gamma }_{W}\gg M$, so the first factor is 1. $\Delta t$ is the mean time between ${u }_{L}$ interactions. The propagation time of ${u }_{a}$ and ${u }_{s}$ over which their wave packets cease to overlap is the decoherence time [32] ${t}_{\text{coh}}=2\sqrt{2}\frac{2{E}^{2}}{|\Delta {M}^{2}|}{\sigma }_{t}.$(37) Taking the wave packet duration ${\sigma }_{t}\approx 1/{\Gamma }_{W}$, we estimate $\Delta t\ll {t}_{\text{coh}}$ for the small value of $M$ being considered. In conclusion, ${\epsilon }_{\text {coh}}\approx 1$, and ${u }_{a}$ and ${u }_{s}$ do not become decoherent between ${u }_{L}$ interactions, so we must take into account their oscillations. Consider initial conditions for ${\psi }_{L}$ production to be ${\psi }_{L}\left(0\right)\propto 1$ and ${\psi }_{R}^{c}\left(0\right)\propto 0$. Then, from (35), we obtain the probabilities ${P}_{L} \left(\Delta t\right)={|{\psi }_{L}\left(\Delta t\right)|}^{2}$ to observe a Weyl_L neutrino, and ${P}_{s}\left(\Delta t\right)={|{\psi }_{R}^{c}\left(\Delta t\right)|}^{2}$ to create a sterile ${P}_{s}\left(\Delta t\right)=1-{P}_{L}\left(\Delta t\right)=4{\mathrm{sin}}^{2}\theta {\mathrm{cos}}^{2}\theta {\mathrm{sin}}^{2}\left(\frac{\Delta {M}^{2}}{4E}x\right),$(38) with $x\approx \Delta t$, and $E=\sqrt{s}/2$. Note that ${P}_{s}\left(\Delta t\right)=2{m}_{a}/M$ for $\Delta {M}^{2}\Delta t/\left(4E\right)\gg \pi$, but this probability is suppressed by a factor $2{\mathrm{sin}}^{2}\left[\Delta {M}^{2}\Delta t/\left(4E\right)\right]$ for small $\Delta t$. Equation (38) describes the oscillation between the active and sterile neutrinos. Similar phenomenology has been confirmed in neutrino flavor oscillation experiments. The cross-section $\sigma \left({u }_{e}{e}^{+}\to {W}^{+*}\to {u }_{e}{e}^{+}\right)$ is given by Eq. (50.25) of [14]. Multiplying by ${P}_{s}\left(\Delta t\right)$ we obtain the cross-section for sterile neutrino production $\sigma \left({\psi }_{L}{l}^{+}\to {W}^{+*}\to {\psi }_{s}{l}^{+}\right)$. We find that the production mechanism ${\psi }_{L}{l}^{+}\to {W}^{+*}\to {\psi }_{s}{l}^{+}$ to bring ${u }_{s}$ into statistical equilibrium with the Standard Model sector at $T\approx {M}_{W}$, and decouple at $T\gtrsim 0.2\text{\hspace{0.17em}}\text{GeV}$, fails because of the interference factor $2{\mathrm{sin}}^{2}\left[\Delta {M}^{2}\Delta t/\left(4E\right)\right]\ll 1$. (Note: In Figure 11 of [2] I did not include this factor so that figure is wrong.) The production channel ${W}^{+}{W}^{-}\to {h}^{*}\to {\psi }_{L}{\psi }_{R}$ is negligible. 6. Conclusions Accurate, detailed and redundant measurements of dark matter properties have recently become available [7]. We have studied scalar, vector and sterile neutrino dark matter models in the light of these measurements. The vector dark matter model presented in Section 4 is (arguably) the renormalizable model with the least number of new degrees of freedom that is consistent with all current observations, and replaces the scalar dark matter model of Section 3 [1] that is ruled out. The sterile neutrino dark matter production mechanism studied in Section 5 did not meet experimental New insights pose new questions. If nature has chosen the vector dark matter of Section 4, why do the two terms in the numerator of (20) cancel to 1 part in 10^6? Similar questions can be made regarding the cosmological constant $\Lambda$, or the strong CP phase $\theta$. Do the scalars $\varphi$ and/or $S$ participate/cause inflation? Baryogenesis via leptogenesis (arguably) requires sterile Majorana neutrinos. How are they produced? What is the origin, if any, of their masses? How can we move forward? A signal in direct dark matter searches would rule out the vector model. Indirect searches may find an excess of photons (or neutrinos!) with energy ≈36 eV, ≈53 eV, or ≈62 eV, if dark matter is unstable and decays. Such a signal would also rule out the vector dark matter model. Collider experiments may discover an invisible Higgs decay width. Further progress will come from the cosmos: more studies of disk galaxy rotation curves, and galaxy stellar mass distributions (these studies can enhance the boson/fermion discrimination, and perhaps can observe the predicted tail of the boson warm dark matter power spectrum cut-off factor ${\tau }^{2}\left(k/{k}_{\text{fs}}\right)$ [7] ), galaxy formation simulations, the “small scale crisis” (missing satellites, too big to fail, galaxy core vs. cusp, large voids), super massive black holes at galaxy centers (Einstein condensation may occur at the galaxy center), revised constraints on fermion dark matter mass from the Tremaine-Gunn limit, and tighter constraints on dark matter self-interactions. It is necessary to understand the tensions between the Lyman-α forest studies and the observed galaxy stellar mass distributions, see Figure 2. Studies of dark matter halo rotation in disk galaxies are also needed.
{"url":"https://www.scirp.org/journal/paperinformation?paperid=107587","timestamp":"2024-11-14T18:11:23Z","content_type":"application/xhtml+xml","content_length":"304545","record_id":"<urn:uuid:24097024-76da-4a7f-a663-92fec1f997dc>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00484.warc.gz"}
Thermochem WS #2 Thermochemistry Problems - Worksheet Number Two The older energy unit of calories has not been discussed in class. You may see it from time to time. The conversion is 1.000 cal = 4.184 J. All the calculation techniques are the same regardless of using calories or Joules. Solutions to most of 1 to 10 and 16-20 1. Convert from one unit to the other: a. 1.69 Joules to calories b. 0.3587 J to cal c. 820.1 J to kilocalories d. 68 calories to kilocalories e. 423 calories to kilocalories f. 20.0 calories to Joules g. 252 cal to J h. 2.45 kilocalories to calories i. 556 kilocalories to cal j. 6.78 kilocalories to kilojoules k. 59.6 calories to kcal l. 449.6 joules to kilojoules m. 9.806 kJ to J n. 5.567 cal to J o. 5467.9 kcal to J 2. Determine the temperature change when: a. 20.0 g of water is heated from 16.8 °C to 39.2 °C. b. 35.0 g of water is cooled from 56.5 °C to 5.9 °C. c. 50.0 g liquid water is heated from 0.0 °C to 100.0 °C. d. 25.0 g of ice is warmed from -25.0 °C to 0.0 °C, but does not melt. e. 30.0 g of steam heats from 373.2 K to 405.0 K. 3. Determine the energy required (in Joules) when the temperature of 3.21 grams of liquid water increases by 4.0 °C. 4. Determine the energy needed (in Joules) when 55.6 grams of water at 43.2 °C is heated to 78.1 °C. 5. Determine the energy required (in kilojoules) when cooling 456.2 grams of water at 89.2 °C to a final temperature of 5.9 °C. 6. Determine the energy required to: a. melt 5.62 moles of ice at 0 °C. b. melt 74.5 grams of ice at 0 °C. c. boil 0.345 moles of water at 100.0 °C. d. boil 43.89 grams of water at 100.0 °C. 7. Determine the energy change involved to: a. Convert 16.2 grams of ice to liquid water. b. Convert 5.8 grams of water to steam c. Convert 98.2 grams of water to ice. d. Convert 52.6 grams of steam to water e. Convert 34.0 grams of water at 20.0 °C to steam at 100.0 °C. f. Convert 125.0 grams of ice at 0.0 °C to steam at 100.0 °C. g. Convert 25.9 grams of steam at 100.0 °C.to ice at 0.0 °C. 8. Determine the final temperature in each of the following problems: a. 32.2 g of water at 14.9 °C mixes with 32.2 grams of water at 46.8 °C. b. 139 g of water at 4.9 °C mixes with 241 grams of water at 96.0 °C. c. 2.29 g of water at 48.9 °C mixes with 3.65 grams of water at 36.1 °C. d. 56.3 grams of water at 12.3 °C mixes with 46.2 grams of water at 78.1 °C. e. 14.2 grams of ice at -16.2 °C is placed in 250.0 grams of water at 70.0 °C. 9. A student places 42.3 grams of ice at 0.0 °C in an insulated bottle. The student adds 255.8 grams of water at 90.0 °C. Determine the final temperature of the mixture. 10. A student places 21.4 grams of ice at 0.0 °C and 13.1 grams of steam at 100.0 °C in a sealed and insulated container. Determine the final temperature of the mixture. 11. Determine the specific heat of a 150.0 gram object that requires 62.0 cal of energy to raise its temperature 12.0 °C. 12. Determine the heat required to convert 62.0 grams of ice at -10.3 °C to water at 0.0 °C. The specific heat capacity of ice is 2.02 J/g °C. 13. Determine the energy released when converting 500.0 g of steam at 100.0 °C to ice at -25.0 °C. 14. Determine the energy required to convert 32.1 grams of ice at -5.0 °C to steam at 100.0 °C. 15. Determine the energy required to raise the temperature of 46.2 grams of aluminum from 35.8 °C to 78.1 °C. Specific heat capacity of aluminum is 0.089 J/g °C. 16. Determine the final temperature when 450.2 grams of aluminum at 95.2 °C is placed in an insulated calorimeter with 60.0 grams of water at 10.0 °C. 17. Determine the mass of iron heated to 85.0 °C to add to 54.0 grams of ice to produce water at 12.5 °C. The specific heat of iron is 0.045 J/g °C. 18. Determine the final temperature when 45.8 grams of aluminum at -5.2 °C is added to a mixture of 45.0 grams of ice at 0.0 °C and 2000.0 grams of water at 95.0 °C. 19. A sample of cobalt, A, with a mass of 5.00 g, is initially at 25.0 °C. When this sample gains 6.70 J of heat, the temperature rises to 27.9 °C. Another sample of cobalt, B, with a mass of 7.00 g, is initially at 25.0 °C. If sample B gains 5.00 J of heat, what is the final temperature of sample B. (Hint: think about the specific heat of both samples.) 20. 50.0 g of copper at 200.0 °C is placed in ice at 0 °C. How many grams of ice will melt?
{"url":"https://web.chemteam.info/Thermochem/Thermochem-WS2.html","timestamp":"2024-11-04T07:09:25Z","content_type":"text/html","content_length":"5407","record_id":"<urn:uuid:b3cce8f9-628c-4cff-99cf-2b7b44ccd3c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00360.warc.gz"}
Prior sensitivity in BAMM By Brian R. Moore, Sebastian Höhna, Michael R. May, Bruce Rannala, and John P. Huelsenbeck BAMM (Rabosky, 2014) is a popular Bayesian method intended to estimate: (1) the number and location of diversification-rate shifts across branches of a phylogeny, and; (2) the diversification-rate parameters for every branch of the study tree. As a Bayesian method, BAMM seeks to estimate the joint posterior probability density (the ‘posterior’) of the model parameters, which reflects our belief in the parameter values after evaluating the data at hand. The posterior is an updated version of the joint prior probability density (the ‘prior’) of the model parameters, which reflects our belief in the parameter values before evaluating the data at hand. In this post, we elaborate on our recent discovery (Moore et al., 2016) that BAMM exhibits prior sensitivity; i.e., where the inferred posterior is strongly influenced by the assumed prior. To this end, we first discuss why prior sensitivity is a theoretical concern for BAMM. We then describe our efforts to explore the prior sensitivity of BAMM by means of simulation. We present results demonstrating that posterior estimates of the number of diversification-rate shifts are sensitive to the assumed prior on the number of diversification-rate shifts. Next, we describe how the original paper (Rabosky, 2014) concealed the strong prior sensitivity of BAMM. Finally, we describe how errors in a recent presponse to our paper (Rabosky, 2016) masked the prior sensitivity of BAMM. Importantly, both the results of our study (Moore et al., 2016) and those described in the presponse to our paper (Rabosky, 2016) used the current version of BAMM (v.2.5). However, the results of Rabosky (2016) masked the prior sensitivity of BAMM because those analyses used an option in BAMM v.2.5 that incorrectly computes extinction probabilities. 1. How did we reach the conclusion that BAMM exhibits prior sensitivity? BAMM provides estimates of the number and location of diversification-rate shifts across the branches of a study tree, and also estimates the diversification-rate parameters—speciation, extinction, and time-dependence parameters—for every branch of the phylogeny. BAMM is implemented in a Bayesian statistical framework, which seeks to estimate the posterior probability distribution of the model From Bayes theorem: The posterior probability reflects our belief regarding the parameter values after evaluating the data at hand; it is an updated version of the prior probability, which reflects our belief about the parameter values before evaluating the data at hand. The likelihood function is the vehicle that extracts the information in the data to transform our prior beliefs into our posterior beliefs about the parameter values. BAMM uses the following priors: • an exponential prior for the speciation rate • an exponential prior for the extinction rate • a normal prior for the time-dependence parameter • a Compound Poisson Process (CPP) prior model on diversification-rate shifts 1.1. Why is prior sensitivity a concern for BAMM? The CPP prior model describes the prior distribution of the number and location of diversification-rate shifts across branches of the tree. This is a concern because the CPP model is often non-identifiable: that is, the likelihood function returns an identical value for the observed data under an infinite number of parameter combinations (Rannala, 2002). Specifically, under the CPP, the observed data may be equally likely to be the outcome of relatively infrequent shifts of large magnitude, or relatively frequent shifts of small magnitude. If the likelihood function cannot identify (distinguish between) parameter values based on the data, the posterior probability distribution will closely resemble the assumed prior distribution. This is the issue of prior sensitivity: our posterior estimates of parameters (e.g., about the number and location of diversification-rate shifts) after looking at the data will essentially be identical to whatever we assumed about these parameters before looking at the data. 1.2. How we assessed the prior sensitivity of BAMM It is straightforward to assess whether BAMM exhibits prior sensitivity: we simply need to compare the posterior distribution for the estimated number of diversification-rate shifts inferred under a variety of assumed prior distributions for the expected number of diversification-rate shifts. If BAMM is sensitive to the assumed prior, then the estimated posterior distribution will depend strongly on the assumed prior distribution (i.e., the estimated posterior distribution for the number of diversification-rate shifts will change to reflect corresponding changes in the assumed prior distribution for the number of diversification-rate shifts). We explored the prior sensitivity of BAMM by simulating 100 trees under a constant-rate birth-death process (where the true number of diversification-rate shifts is zero). To ensure that these simulated trees were empirically realistic, we first estimated the speciation and extinction rates for the cetacean phylogeny (Steeman 2009; which was analyzed by Rabosky, 2014) under a constant-rate birth-death model using RevBayes (Höhna et al. 2016). We then simulated each tree under a constant-rate birth-death process by sampling values for the speciation and extinction rates from the corresponding posterior distributions estimated for the cetacean tree, where the age of each simulated tree is the same as that of the cetacean tree. We analyzed each simulated tree using BAMM (v.2.5), centering the speciation- and extinction-rate prior distributions on the true value for the simulated phylogeny, and under a range of prior values on the expected number of diversification-rate shifts (, where the expected number of rate shifts is ). For each prior value, we plotted the prior distribution and the posterior distribution for the number of diversification processes (a tree with a single process has zero diversification-rate shifts) averaged over the 100 trees (Figure 1). We note that our simulation study exploring the prior sensitivity of BAMM represents a best case scenario: (1) a constant-rate birth-death process is the simplest model described by BAMM, which minimizes the number of parameters that must be estimated from a given amount of data, and; (2) we have centered the priors for the diversification-rate parameters (speciation and extinction rate) on their true values. 2. Why do our conclusions contradict Rabosky (2014)? The results of our simulation study described above indicate that estimates of the number of diversification-rate shifts using BAMM are sensitive to the assumed prior. This conclusion contradicts the finding presented in the original study: Rabosky (2014) performed a simulation study that apparently demonstrated that BAMM is not overly sensitive to the prior. These contradictory conclusions are not due to differences in the two simulation studies; in fact, the design of the two simulation studies is very similar. Similar to our study, Rabosky simulated 500 constant-rate trees (where the true number of diversification-rate shifts is zero). He then analyzed each tree with BAMM under a range of priors values for the expected number of diversification-rate shifts; . For each tree, he recorded the mode of the posterior distribution for the number of diversification-rate shifts: the mode of the posterior is also known as the maximum a posteriori (or MAP) estimate. He repeated this process for each of the 500 simulated trees, recording a set of 500 MAP estimates for a given prior value, e.g., . He then summarized the corresponding set of 500 MAP estimates as a histogram (a frequency distribution) for each prior value. Rabosky then repeated this procedure to create histograms for each of the three prior values for the expected number of diversification-rate shifts. Finally, he noted that the histograms of the posterior modes for the estimated number of diversification-rate shifts looked similar for all three values of the prior. Accordingly, Rabosky concluded that posterior estimates of the number of diversification-rate shifts inferred using BAMM are insensitive to the prior assumptions about the expected number of diversification-rate shifts. The results from his study (and our replicated results summarized in the same way) are depicted in Figure 2. As you can see in Figure 2, using the mode (MAP) to summarize the estimated posterior distribution on the number of events is an unfortunate choice if we are interested in assessing the prior sensitivity of BAMM. It is an error because it makes it impossible to assess whether the posterior is influenced by the prior. This is simply because the prior distribution for the number of diversification processes has a mode of one for all values of the prior (see Moore et al., 2016 SI1.3 Joint prior distribution). Therefore, even if the estimated posterior was virtually identical to the assumed prior (which is the case here), the posterior will always have mode of one for constant-rate trees. Therefore, if we made a histogram of the posterior modes inferred for each tree, the most frequent value would always be one, regardless of the prior. It cannot be otherwise. In fact, this is exactly what we observe when we summarize the results of our simulation using the same protocol as that used by Rabosky (see the lower row of panels in Figure 2, above). The most frequent value for the posterior mode (MAP) is always one, because the prior mode is always one, and the estimated posterior always resembles the assumed prior. Accordingly, the MAP is incapable of detecting prior sensitivity on constant-rate trees; the histogram of the MAPs will always be similar regardless of the prior on constant-rate trees, so the similarity of the MAP histograms for the various priors does not provide evidence that BAMM is insensitive to the choice of prior. Rabosky’s (2014) conclusion that BAMM is insensitive to the prior is therefore misleading. Simple inspection of the posterior and prior distributions makes it immediately obvious that the number of diversification-rate shifts estimated by BAMM is extremely sensitive to the assumed prior. There is a second curious aspect of the results presented by Rabosky (2014). The histograms of the MAPs presented in Rabosky (2014; Figure 2) start to move toward the right as the prior favors more diversification-rate shifts. Specifically, the frequency of the MAPs is concentrated on one process when , is split between 1 and 2 processes when , and is concentrated on 2 processes when (see Figure 2, top row). It is as though the posterior estimates of the number of processes are increasing slightly as the prior increasingly favors more diversification-rate shifts. Rabosky describes this observation as follows: “With increasing values of , the model with maximum a posteriori probability (MAP) was biased in favor of M[1], a model with two processes.” Accordingly, these results create the impression that BAMM is only slightly sensitive to the assumed prior on the number of diversification-rate shifts. However, as we have shown, these results cannot be true. The prior mode for all values of is one (see Figure 1 and Figure 2, bottom row). Therefore, there is no reason for the frequency distribution of the MAPs to shift rightward (from 1 to 2 processes as ranges from 1 to 10) because the true number of processes for these constant-rate trees is always one. We are unable to reproduce these results. 3. How did Rabosky (2016) conclude that BAMM is not sensitive to the prior (again)? Recently, Rabosky reported on the BAMM website that he was unable to reproduce our results showing prior sensitivity for the estimated number of diversification-rate shifts using the latest version of BAMM (v.2.5). In this section, we clarify the source of this discrepancy. Specifically, Rabosky’s results differ from our published results because he used an option (combineExtinctionAtNodes = if_different, rather than combineExtinctionAtNodes = random as we used) implemented in BAMM (v.2.5) that causes the estimated number of diversification-rate shifts to artifactually depart from the prior. This new option is essentially an implementation error (i.e., a “software bug”) because it derives from a miscalculation of extinction probabilities. Clearly, any effort to evaluate the statistical behavior of a given method should avoid the confounding effects of any errors in the implementation of that method. In other words, we want to understand whether the there are problems with the method itself, not whether there are problems with the implementation of the method. Accordingly, the analyses that we present in our PNAS paper did not use this invalid option. 3.1. Background on extinction probabilities Birth-death processes (with non-zero extinction rates) give rise to lineages that may go extinct before the present. Accordingly, if we want to compute the probability of a phylogeny generated by a birth-death process, we need to accommodate the possibility of unobserved speciation events (where one of the daughter lineages has gone extinct before the present). Naturally, this requires the ability to compute the probability that a given lineage goes extinct, a quantity referred to as the extinction probability. The need to compute the extinction probability is by no means unique to BAMM: Kendall (1948) describes computing the “chances of extinction” for general birth-death processes, Nee et al. (1994) describe the probability of a phylogeny given that it survived (1 – the probability that it went extinct), etc. Perhaps of most direct relevance to BAMM, Maddison et al. (2007) explicitly define the extinction probability under the BiSSE model (from which BAMM draws heavily). Specifically, Maddison et al. (2007) define the extinction probability as E(t); the probability that a lineage that exists at some time in the past, t, goes extinct before the present time. We wrote a simple program, available here, that performs forward simulation under the described birth-death process to demonstrate what the extinction probability represents. Under the birth-death process model (as also used in BAMM), the fate of a given lineage is independent of other contemporaneous lineages; Maddison et al. (2007) note, “the extinction probabilities do not depend on tree structure of the surviving lineages, only on time“. Indeed, all lineages in the same diversification-rate category at time t must have the same extinction probability, regardless of the shape of the tree and the presence of diversification-rate shifts elsewhere in the tree (Figure 3)! 3.2. The source of the bug BAMM computes extinction probabilities using a recursive algorithm that begins at the tips of the tree and computes the change in the extinction probability down each branch using a set of differential equations (see Moore et al., 2016; SI1.2 Likelihood function). A question arises when BAMM reaches an internal node: what should the initial extinction probability be for the branch immediately ancestral to the speciation event? Here is where the problem arises: BAMM (v.2.5) enables an option (combineExtinctionAtNodes = if_different) that compares the extinction probabilities for each of the descendant branches at time t; if these probabilities are different, BAMM takes the product of the extinction probabilities as the initial extinction probability for the immediately ancestral branch; otherwise (if the extinction probabilities are the same), BAMM uses either one of the extinction probabilities as the initial extinction probability (note: no product is taken). These extinction probabilities for internal nodes are then carried down their ancestral branches, continuing all the way to the root of the tree. The description of how BAMM handles extinction probabilities directly follows from the BAMM website which also resembles the current version of the code. We emphasize that this option is not mathematically valid: the extinction probability for a lineage arising right before a node certainly does not depend in any sensible way on the product of extinction probabilities of two lineages that arise immediately after that node (c.f., Maddison et al., 2007, and Figure 3, above), especially only when the extinction probabilities for those two lineages are different! (e.g., why take the product in some situations and not others?). Taking the product of these extinction probabilities violates the statistical principles underpinning model-based inference. Thus, all benefits normally associated with model-based approaches (consistency, efficiency) do not apply. We note that the correct procedure, which is not implemented in BAMM, is to take the extinction probability from the lineage in the same diversification process as the ancestral node, since the extinction probability for a given diversification-rate category depends solely on time. 3.3. The consequences of the bug Taking products of extinction probabilities (which are always smaller than 1) results in increasingly smaller extinction probabilities. Since BAMM only takes the product of these extinction probabilities when they are different, and these probabilities will only be different when there are rate shifts, this incorrect procedure imposes an unpredictable change to the probabilities of models with diversification-rate shifts. Thus, the new option if_different in BAMM (v.2.5) implicitly penalizes rate-shifts more heavily. For example, consider a model that has no rate shifts and the extinction probabilities for both lineages descending from a node are 0.2. In this case the extinction probability at the ancestral node should be 0.2 because the extinction probabilities for either lineage are equal. Now imagine that there is a rate shift on one of the daughter lineages so that the extinction probability for that lineage is now of 0.15. Under the new option in BAMM, we would multiply the extinction probabilities to compute the extinction probability at the ancestral node, which would be approximately 0.03 in this case (which is clearly different from 0.2). To demonstrate the impact of this bug, we compare the results of analyses with (orange) and without (blue) the option on our constant-rate birth-death simulated trees below (Figure 4). As we describe in our PNAS paper, we centered the diversification-rate priors on their true values; therefore, these results represent a best-case scenario for inferring the correct number of diversification-rate This phenomenon is not restricted to simulated phylogenies; we analyzed the cetacean phylogeny with and without the if_different option and observe the same general pattern (Figure 5). We note that the results for our if_different (orange lines above) are effectively identical to those reported by Rabosky (BAMM website), while our random analyses (blue lines) correspond to results presented in our PNAS paper. In this post, we have described the analyses and main results presented in Moore et al. (2016) demonstrating that the posterior distribution of the number of diversification-rate shifts inferred by BAMM is extremely sensitive to the assumed prior distribution on the number of diversification-rate shifts. We demonstrate how Rabosky’s (2014) conclusion that BAMM is prior insensitive is an artifact of using the posterior mode (MAP) to summarize the inferred number of diversification-rate shifts. We clarify that our results (Moore et al., 2016) are based on the most recent version of BAMM (v.2.5) but avoided use of the combineExtinctionAtNodes = if_different option because we view this as a software bug, which would not provide a fair evaluation of the behavior of BAMM. This option for computing extinction probabilities at nodes (taking the product if the probabilities are different) is theoretically invalid, leads to pathological behavior, and masks prior sensitivity of BAMM. Furthermore, we show that the use of the if_different option caused the discrepancy between our results (Moore et al., 2016) and a recent blogpost by Rabosky (2016), which concealed the strong prior sensitivity of BAMM. Although the details are somewhat technical, we believe that it is important to our community to understand the limits of existing methods for inferring diversification-rate shifts, and we hope that you will join the discussion via the comments section below. In a subsequent post we plan to elaborate on the issue of diversification-rate parameter estimates under BAMM. Data availability All of the phylogenetic data and BAMM config files used for this blog post are publicly available as a BitBucket repository. Höhna, S, MJ Landis, TA Heath, B Boussau, N Lartillot, BR Moore, JP Huelsenbeck, and F Ronquist. 2016. RevBayes: Bayesian Phylogenetic Inference Using Graphical Models and an Interactive Model Specification Language. Systematic Biology, 65:726–736. Huelsenbeck JP, B Larget, and DL Swofford. 2000. A compound Poisson process for relaxing the molecular clock. Genetics, 154:1879–1892. Kendall, DG. 1948. On the generalized “birth-and-death” process. The Annals of Mathematical Statistics, 19:1–15. Maddison W, P Midford, and S Otto. 2007. Estimating a binary character’s effect on speciation and extinction. Systematic Biology, 56:701. Moore, BR, S Höhna, MR May, B Rannala, and JP Huelsenbeck. 2016. Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures. Proceedings of the National Academy of Sciences, USA, (in press). Nee, S, RM May, and PH Harvey. 1994. The reconstructed evolutionary process. Philosophical Transactions: Biological Sciences, 344:305–311. Rabosky DL. 2014. Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees. PLoS One, 9:e89543. Rabosky DL. 2016. Is the posterior on the number of shifts overly sensitive to the prior? http://bamm-project.org/prior.html Rabosky, DL, M Grundler, C Anderson, JJ Shi, JW Brown, H Huang, JG Larson, et al. 2014. BAMMtools: an R package for the analysis of evolutionary dynamics on phylogenetic trees. Methods in Ecology and Evolution, 5:701–707. Rannala B. 2002. Identifiability of parameters in MCMC Bayesian inference of phylogeny. Systematic Biology, 51:754–760. Steeman, ME, MB Hebsgaard, RE Fordyce, SY Ho, DL Rabosky, R Nielsen, C Rahbek, H Glenner, MV Sørensen, and E Willerslev. 2009. Radiation of extant cetaceans driven by restructuring of the oceans. Systematic Biology, 58:573–585. One thought on “Prior sensitivity in BAMM” 1. Brian Moore Post author There’s a nice conversation on twitter that is raising some great points that I thought may be worth including here (I’m not a big fan of 140 character limit and navigating byzantine twitter conversation threads). One question that has come up is whether the option for computing extinction probabilities that we used in our paper (i.e., the ‘combineExtinctionAtNodes = random’ option) is implemented in BAMM. This is indeed one of the options implemented in BAMM, but it is not the default option in BAMM v.2.5—we note, however, that using this option does not require altering any source code. The default option (as of v.2.5) for computing extinction probabilities is the ‘combineExtinctionAtNodes = if_different’ option, which computes extinction probabilities at nodes by taking the product of extinction probabilities of descendant branches if they are different (as described above in the blog post). A second question is “Why wouldn’t you use the default option (i.e., the combineExtinctionAtNodes = if_different’ option) for your analyses exploring the statistical behavior of BAMM in your paper?” This new option for computing extinction probabilities is an implementation error (a bug); it is an option for computing extinction probabilities that was implemented in BAMM v.2.5 that has not been published or subjected to peer review, and is obviously incorrect. The goal of our paper is to understand the statistical behavior of the BAMM method itself. If we were to explore the behavior of BAMM using the default ‘combineExtinctionAtNodes = if_different’ option/bug, then it would be unclear whether any problems with BAMM were due to problems that are inherent to the method, or simply reflected problems with the implementation of the method. In other words, it would make it difficult to interpret the conclusions of our study. Given that a more correct solution for computing extinction probabilities is implemented in BAMM, we used that option. This is also in line with our goal to evaluate the behavior of BAMM in the most charitable way possible (e.g., centering priors for analyses of simulated datasets to the true values used to generate those data, maximizing the information-to-free parameter ratio, etc.). A third question is “Why didn’t you provide a discussion of the issues with the ‘combineExtinctionAtNodes’ options in your paper?” Up until the penultimate (fourth) revision of our manuscript, we actually included a detailed section of the Supporting Information document (‘Implementation Errors’) that both described the various bugs that we had discovered in BAMM over the course of our study, and we also presented results of reanalyses of the simulated and empirical datasets using various bugs (to demonstrate how these bugs impacted the results of our analyses and conclusions of our study). However, two of the reviewers and the associate editor requested that we remove this section from the SI. I think their reasons for asking us to remove this section are pretty reasonable. (1) First, it was a very long and detailed discussion of implementation errors that was clearly outside the focus of our paper; our paper is not a bug report, it is a study describing inherent problems with the method. If the only problems with BAMM were implementation problems (bugs), then we would not need to write a paper, we would just need to submit a bug report to the BAMM developers. Moreover, the editor and reviewers argued that no one besides the BAMM developers would be interested in our bug collection. (2) The second justification for omitting description of bugs is that it significantly contributed to the critical tone of our paper. Given that our paper is a critical evaluation of BAMM—it documents serious flaws with the method—it is to some extent inherently/unavoidably critical, but the editor and reviewers (and authors) wanted to avoid an unnecessarily critical tone. In other words, going into gory detail about the implementation errors that we discovered in BAMM just seemed like mean-spirited piling on. Although we generally agree with the arguments for excluding the discussion of bugs from our paper, in retrospect, I still feel that there is also a downside to this decision. Retaining the discussion of the combineExtinctionAtNodes bug would have allowed us to explicitly describe our choices and why we made them. So, I think the decision to remove the discussion of the bugs in BAMM is a mistake on our part. Hopefully this blog post will provide a compromise solution that will allow us to rectify this mistake on our part. A final question/comment raised in the twitter conversation is “It seems kind of sketchy that you guys didn’t use the default option for your analyses; what gives?” I can see—from an outside perspective—how it might seem like there is something nefarious going on. In fact, our decision is motivated to avoid being sketchy. It seems that concerns regarding the fairness of our decision to use a non-default option implicitly assumes that “default option” equals “best option.” As described above, the default option for computing extinction probabilities in BAMM is demonstrably invalid (i.e., we know it’s wrong), so it would be sketchy to demonstrate problems with BAMM where we know that the results are likely to be adversely impacted by using an obvious bug for computing extinction probabilities. In fact, most of the problems that we identify are (predictably) much worse when using the the default ‘combineExtinctionAtNodes = if_different’ option/bug in BAMM. Specifically, use of this bug greatly exacerbates errors in the likelihoods computed by BAMM, and this bug also has a strong adverse impact on estimates of the diversification-rate parameters (speciation and extinction rates) inferred by BAMM. The only “upside” of the default ‘combineExtinctionAtNodes = if_different’ option/bug is that it partially masks the prior sensitivity issue in BAMM: that is, the default ‘combineExtinctionAtNodes = if_different’ option/bug happens to mask the sensitivity of the estimated number of diversification-rate shifts to the assumed prior on the expected number of diversification-rate shifts. But—as we have explained in this blog post—this masking of the prior sensitivity is just one of the spurious manifestations of this bug. In order to correctly explore the prior sensitivity issue, it is necessary to avoid the confounding effects of this bug. It may be worth pointing out that prior sensitivity is a relatively minor issue with BAMM. This blog is motivated to address possible confusion stemming from recent contradictory claims about the prior sensitivity of BAMM. The other problems with BAMM that we demonstrate in our paper are far more fundamental: the likelihood function is incorrect (it cannot correctly compute the probability of the data when rates of diversification vary across branches), and the CPP prior model used to describe the prior distribution of diversification-rate shifts across branches is incoherent (it does not provide a valid probability distribution on the number and location of diversification-rate shifts). We may elaborate on these issues in future posts.
{"url":"https://treethinkers.org/prior-sensitivity-in-bamm/","timestamp":"2024-11-01T22:03:40Z","content_type":"text/html","content_length":"107868","record_id":"<urn:uuid:54461169-e51b-4c98-b02a-485ef2ae4946>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00376.warc.gz"}
Okishio's theorem is a theorem formulated by Japanese economist Nobuo Okishio. It has had a major impact on debates about Marx's theory of value. Intuitively, it can be understood as saying that if one capitalist raises his profits by introducing a new technique that cuts his costs, the collective or general rate of profit in society goes up for all capitalists. In 1961, Okishio established this theorem under the assumption that the real wage remains constant. Thus, the theorem isolates the effect of pure innovation from any consequent changes in the wage. For this reason the theorem, first proposed in 1961, excited great interest and controversy because, according to Okishio, it contradicts Marx's law of the tendency of the rate of profit to fall. Marx had claimed that the new general rate of profit, after a new technique has spread throughout the branch where it has been introduced, would be lower than before. In modern words, the capitalists would be caught in a rationality trap or prisoner's dilemma: that which is rational from the point of view of a single capitalist, turns out to be irrational for the system as a whole, for the collective of all capitalists. This result was widely understood, including by Marx himself, as establishing that capitalism contained inherent limits to its own success. Okishio's theorem was therefore received in the West as establishing that Marx's proof of this fundamental result was inconsistent. More precisely, the theorem says that the general rate of profit in the economy as a whole will be higher if a new technique of production is introduced in which, at the prices prevailing at the time that the change is introduced, the unit cost of output in one industry is less than the pre-change unit cost. The theorem, as Okishio (1961:88) points out, does not apply to non-basic branches of The proof of the theorem may be most easily understood as an application of the Perron–Frobenius theorem. This latter theorem comes from a branch of linear algebra known as the theory of nonnegative matrices. A good source text for the basic theory is Seneta (1973). The statement of Okishio's theorem, and the controversies surrounding it, may however be understood intuitively without reference to, or in-depth knowledge of, the Perron–Frobenius theorem or the general theory of nonnegative matrices. Sraffa model The argument of Nobuo Okishio, a Japanese economist, is based on a Sraffa-model. The economy consists of two departments I and II, where I is the investments goods department (means of production) and II is the consumption goods department, where the consumption goods for workers are produced. The coefficients of production tell how much of the several inputs is necessary to produce one unit of output of a given commodity ("production of commodities by means of commodities"). In the model below two outputs exist ${\displaystyle x_{1}}$ , the quantity of investment goods, and ${\ displaystyle x_{2}}$ , the quantity of consumption goods. The coefficients of production are defined as: • ${\displaystyle a_{11}}$ : quantity of investment goods necessary to produce one unit of investment goods. • ${\displaystyle a_{21}}$ : quantity of hours of labour necessary to produce one unit of investment goods. • ${\displaystyle a_{12}}$ : quantity of investment goods necessary to produce one unit of consumption goods. • ${\displaystyle a_{22}}$ : quantity of hours of labour necessary to produce one unit of consumption goods. The worker receives a wage at a certain wage rate w (per unit of labour), which is defined by a certain quantity of consumption goods. • ${\displaystyle w\cdot a_{21}}$ : quantity of consumption goods necessary to produce one unit of investment goods. • ${\displaystyle w\cdot a_{22}}$ : quantity of consumption goods necessary to produce one unit of consumption goods. This table describes the economy: │ │Investment goods used │Consumption goods used │Output │ │Department I │${\displaystyle a_{11}x_{1}}$│${\displaystyle a_{21}wx_{1}}$ │${\displaystyle x_{1}}$│ │Department II│${\displaystyle a_{12}x_{2}}$│${\displaystyle a_{22}wx_{2}}$ │${\displaystyle x_{2}}$│ This is equivalent to the following equations: • ${\displaystyle (a_{11}x_{1}p_{1}+a_{21}wx_{1}p_{2})(1+r)=x_{1}p_{1}}$ • ${\displaystyle (a_{12}x_{2}p_{1}+a_{22}wx_{2}p_{2})(1+r)=x_{2}p_{2}}$ • ${\displaystyle p_{1}}$ : price of investment good ${\displaystyle x_{1}}$ • ${\displaystyle p_{2}}$ : price of consumption good ${\displaystyle x_{2}}$ • ${\displaystyle r}$ : General rate of profit. Due to the tendency, described by Marx, of rates of profits to equalise between branches (here departments) a general rate of profit for the economy as a whole will be created. In department I expenses for investment goods or for constant capital are: • ${\displaystyle a_{11}x_{1}p_{1}}$ and for variable capital: • ${\displaystyle a_{21}wx_{1}p_{2}}$ . In Department II expenses for constant capital are: ${\displaystyle a_{12}x_{2}p_{1}}$ and for variable capital: ${\displaystyle a_{22}wx_{2}p_{2}.}$ (The constant and variable capital of the economy as a whole is a weighted sum of these capitals of the two departments. See below for the relative magnitudes of the two departments which serve as weights for summing up constant and variable capitals.) Now the following assumptions are made: • ${\displaystyle p_{2}=1}$ : The consumption good ${\displaystyle x_{2}}$ is to be the numéraire, the price of the consumption good ${\displaystyle p_{2}}$ is therefore set equal to 1. • The real wage is assumed to be ${\displaystyle w=2p_{2}=2.}$ • Finally, the system of equations is normalised by setting the outputs ${\displaystyle x_{1}}$ und ${\displaystyle x_{2}}$ equal to 1, respectively. Okishio, following some Marxist tradition, assumes a constant real wage rate equal to the value of labour power, that is the wage rate must allow to buy a basket of consumption goods necessary for workers to reproduce their labour power. So, in this example it is assumed that workers get two pieces of consumption goods per hour of labour in order to reproduce their labour power. A technique of production is defined according to Sraffa by its coefficients of production. For a technique, for example, might be numerically specified by the following coefficients of production: • ${\displaystyle a_{11}=0.8}$ : quantity of investment goods necessary to produce one unit of investment goods. • ${\displaystyle a_{21}=0.1}$ : quantity of working hours necessary to produce one unit of investment goods. • ${\displaystyle a_{12}=0.4}$ : quantity of investment goods necessary to produce one unit of consumption goods. • ${\displaystyle a_{22}=0.1}$ : quantity of working hours necessary to produce one unit of consumption goods. From this an equilibrium growth path can be computed. The price for the investment goods is computed as (not shown here): ${\displaystyle p_{1}=1.78}$ , and the profit rate is: ${\displaystyle r= 0.0961=9.61\%}$ . The equilibrium system of equations then is: • ${\displaystyle (0.8\cdot 1\cdot 1.78+0.1\cdot 2\cdot 1\cdot 1)\cdot (1+0.0961)=1\cdot 1.78}$ • ${\displaystyle (0.4\cdot 1\cdot 1.78+0.1\cdot 2\cdot 1\cdot 1)\cdot (1+0.0961)=1\cdot 1}$ Introduction of technical progress A single firm of department I is supposed to use the same technique of production as the department as a whole. So, the technique of production of this firm is described by the following: ${\displaystyle (a_{11}x_{1}p_{1}+a_{21}wx_{1}p_{2})(1+r)=x_{1}p_{1}}$ ${\displaystyle =(0.8\cdot 1\cdot 1.78+0{,}1\cdot 2\cdot 1\cdot 1)\cdot (1+0.0961)=1\cdot 1.78}$ Now this firm introduces technical progress by introducing a technique, in which less working hours are needed to produce one unit of output, the respective production coefficient is reduced, say, by half from ${\displaystyle a_{21}=0.1}$ to ${\displaystyle a_{21}=0.05}$ . This already increases the technical composition of capital, because to produce one unit of output (investment goods) only half as much of working hours are needed, while as much as before of investment goods are needed. In addition to this, it is assumed that the labour saving technique goes hand in hand with a higher productive consumption of investment goods, so that the respective production coefficient is increased from, say, ${\displaystyle a_{11}=0.8}$ to ${\displaystyle a_{11}=0.85}$ . This firm, after having adopted the new technique of production is now described by the following equation, keeping in mind that at first prices and the wage rate remain the same as long as only this one firm has changed its technique of production: ${\displaystyle =(0.85\cdot 1\cdot 1.78+0.05\cdot 2\cdot 1\cdot 1)\cdot (1+0.1036)=1\cdot 1.78}$ So this firm has increased its rate of profit from ${\displaystyle r=9{,}61\%}$ to ${\displaystyle 10{,}36\%}$ . This accords with Marx's argument that firms introduce new techniques only if this raises the rate of profit.^[1] Marx expected, however, that if the new technique will have spread through the whole branch, that if it has been adopted by the other firms of the branch, the new equilibrium rate of profit not only for the pioneering firm will be again somewhat lower, but for the branch and the economy as a whole. The traditional reasoning is that only "living labour" can produce value, whereas constant capital, the expenses for investment goods, do not create value. The value of constant capital is only transferred to the final products. Because the new technique is labour-saving on the one hand, outlays for investment goods have been increased on the other, the rate of profit must finally be lower. Let us assume, the new technique spreads through all of department I. Computing the new equilibrium rate of growth and the new price ${\displaystyle p_{2}}$ gives under the assumption that a new general rate of profit is established: • ${\displaystyle (0.85\cdot 1\cdot 1.77+0.05\cdot 2\cdot 1\cdot 1)\cdot (1+0.1030)=1\cdot 1.77}$ • ${\displaystyle (0.4\cdot 1\cdot 1.77+0.1\cdot 2\cdot 1\cdot 1)\cdot (1+0.1030)=1\cdot 1}$ If the new technique is generally adopted inside department I, the new equilibrium general rate of profit is somewhat lower than the profit rate, the pioneering firm had at the beginning (${\ displaystyle 10.36\%}$ ), but it is still higher than the old prevailing general rate of profit: ${\displaystyle 10.30\%}$ larger than ${\displaystyle 9.61\%}$ . Nobuo Okishio proved this generally, which can be interpreted as a refutation of Marx's law of the tendency of the rate of profit to fall. This proof has also been confirmed if the model is extended to include not only circulating capital but also fixed capital. Mechanisation, defined as increased inputs of machinery per unit of output combined with the same or reduced amount of labour-input, necessarily lowers the maximum rate of profit.^[2] Marxist responses Some Marxists simply dropped the law of the tendency of the rate of profit to fall, claiming that there are enough other reasons to criticise capitalism, that the tendency for crises can be established without the law, so that it is not an essential feature of Marx's economic theory. Others would say that the law helps to explain the recurrent cycle of crises, but cannot be used as a tool to explain the long term developments of the capitalist economy. Others argued that Marx's law holds if one assumes a constant ‘’wage share’’ instead of a constant real wage ‘’rate’’. Then, the prisoner's dilemma works like this: The first firm to introduce technical progress by increasing its outlay for constant capital achieves an extra profit. But as soon as this new technique has spread through the branch and all firms have increased their outlays for constant capital also, workers adjust wages in proportion to the higher productivity of labour. The outlays for constant capital having increased, wages having been increased now also, this means that for all firms the rate of profit is lower. However, Marx did not know the law of a constant wage share. Mathematically the rate of profit could always be stabilised by decreasing the wage share. In our example, for instance, the rise of the rate of profit goes hand in hand with a decrease of the wage share from ${\displaystyle 58.6\%}$ to ${\displaystyle 41.9\%}$ , see computations below. However, a reduction in the wage share is not possible in neoclassical models due to the assumption that wages equal the marginal product of labour. A third response is to reject the whole framework of the Sraffa-models, especially the comparative static method.^[3] In a capitalist economy entrepreneurs do not wait until the economy has reached a new equilibrium path but the introduction of new production techniques is an ongoing process. Marx's law could be valid if an ever-larger portion of production is invested per working place instead of in new additional working places. Such an ongoing process cannot be described by the comparative static method of the Sraffa models. According to Alfred Müller^[4] the Okishio theorem could be true, if there was a coordination amongst capitalists for the whole economy, a centrally planned capitalist economy, which is a contradiction in itself. In a capitalist economy, in which means of production are private property, economy-wide planning is not possible. The individual capitalists follow their individual interests and do not cooperate to achieve a general high rate of growth or rate of profit. Model in physical terms Dual system of equations Up to now it was sufficient to describe only monetary variables. In order to expand the analysis to compute for instance the value of constant capital c, variable capital v und surplus value (or profit) s for the economy as whole or to compute the ratios between these magnitudes like rate of surplus value s/v or value composition of capital, it is necessary to know the relative size of one department with respect to the other. If both departments I (investment goods) and II (consumption goods) are to grow continuously in equilibrium there must be a certain proportion of size between these two departments. This proportion can be found by modelling continuous growth on the physical (or material) level in opposition to the monetary level. In the equations above a general, for all branches, equal rate of profit was computed given • certain technical conditions described by input-output coefficients • a real wage defined by a certain basket of consumption goods to be consumed per hour of labour ${\displaystyle x_{2}}$ whereby a price had to be arbitrarily determined as numéraire. In this case the price ${\displaystyle p_{2}}$ for the consumption good ${\displaystyle x_{2}}$ was set equal to 1 (numéraire) and the price for the investment good ${\displaystyle x_{1}}$ was then computed. Thus, in money terms, the conditions for steady growth were established. General equations To establish this steady growth also in terms of the material level, the following must hold: ${\displaystyle (a_{11}x_{1}+Ka_{12}x_{2})(1+g)=x_{1}}$ ${\displaystyle (a_{21}w\cdot x_{1}+Ka_{22}\cdot wx_{2})(1+g)=Kx_{2}}$ Thus, an additional magnitude K must be determined, which describes the relative size of the two branches I and II whereby I has a weight of 1 and department II has the weight of K. If it is assumed that total profits are used for investment in order to produce more in the next period of production on the given technical level, then the rate of profit r is equal to the rate of growth g. Numerical examples In the first numerical example with rate of profit ${\displaystyle r=9.61\%}$ we have: ${\displaystyle (0.8\cdot 1+0.2808\cdot 0.4\cdot 1)\cdot (1+0.0961)=1}$ ${\displaystyle (0.1\cdot 2\cdot 1+0.2808\cdot 0.1\cdot 2\cdot 1)\cdot (1+0.0961)=0.2808\cdot 1}$ The weight of department II is ${\displaystyle K=0.2808}$ . For the second numerical example with rate of profit ${\displaystyle r=10.30\%}$ we get: ${\displaystyle (0.85\cdot 1+0.14154\cdot 0.4\cdot 1)\cdot (1+0.1030)=1}$ ${\displaystyle (0.1\cdot 2\cdot 1+0.14154\cdot 0.05\cdot 2\cdot 1)(1+0.1030)=0.14154\cdot 1}$ Now, the weight of department II is ${\displaystyle K=0.14154}$ . The rates of growth g are equal to the rates of profit r, respectively. For the two numerical examples, respectively, in the first equation on the left hand side is the input of ${\displaystyle x_{1}}$ and in the second equation on the left hand side is the amount of input of ${\displaystyle x_{2}}$ . On the right hand side of the first equations of the two numerical examples, respectively, is the output of one unit of ${\displaystyle x_{1}}$ and in the second equation of each example is the output of K units of ${\displaystyle x_{2}}$ . The input of ${\displaystyle x_{1}}$ multiplied by the price ${\displaystyle p_{1}}$ gives the monetary value of constant capital c. Multiplication of input ${\displaystyle x_{2}}$ with the set price ${\displaystyle p_{2}=1}$ gives the monetary value of variable capital v. One unit of output ${\displaystyle x_{1}}$ and K units of output ${\displaystyle x_{2}}$ multiplied by their prices ${\ displaystyle p_{1}}$ and ${\displaystyle p_{2}}$ respectively gives total sales of the economy c + v + s. Subtracting from total sales the value of constant capital plus variable capital (c + v) gives profits s. Now the value composition of capital c/v, the rate of surplus value s/v, and the "wage share" v/(s + v) can be computed. With the first example the wage share is ${\displaystyle 58.6\%}$ and with the second example ${\displaystyle 41.9\%}$ . The rates of surplus value are, respectively, 0.706 and 1.389. The value composition of capital c/v is in the first example 6,34 and in the second 12.49. According to the formula ${\displaystyle {\text{Rate of profit }}p={{s \over v} \over {{c \over v}+1}}}$ for the two numerical examples rates of profit can be computed, giving ${\displaystyle 9.61\%}$ and ${\displaystyle 10.30\%}$ , respectively. These are the same rates of profit as were computed directly in monetary terms. Comparative static analysis The problem with these examples is that they are based on comparative statics. The comparison is between different economies each on an equilibrium growth path. Models of dis-equilibrium lead to other results. If capitalists raise the technical composition of capital because thereby the rate of profit is raised, this might lead to an ongoing process in which the economy has not enough time to reach a new equilibrium growth path. There is a continuing process of increasing the technical composition of capital to the detriment of job creation resulting at least on the labour market in stagnation. The law of the tendency of the rate of profit to fall nowadays usually is interpreted in terms of disequilibrium analysis, not the least in reaction to the Okishio critique. David Laibman and Okishio's theorem Between 1999 and 2004, David Laibman, a Marxist economist, published at least nine pieces dealing with the Temporal single-system interpretation (TSSI) of Marx's value theory.^[5] His "The Okishio Theorem and Its Critics" was the first published response to the temporalist critique of Okishio's theorem. The theorem was widely thought to have disproved Karl Marx's law of the tendential fall in the rate of profit, but proponents of the TSSI claim that the Okishio theorem is false and that their work refutes it. Laibman argued that the theorem is true and that TSSI research does not refute In his lead paper in a symposium carried in Research in Political Economy in 1999,^[6] Laibman's key argument was that the falling rate of profit exhibited in Kliman (1996)^[7] depended crucially on the paper's assumption that there is fixed capital which lasts forever. Laibman claimed that if there is any depreciation or premature scrapping of old, less productive, fixed capital: (1) productivity will increase, which will cause the temporally determined value rate of profit to rise; (2) this value rate of profit will therefore "converge toward" Okishio's material rate of profit; and thus (3) this value rate "is governed by" the material rate of profit. These and other arguments were answered in Alan Freeman and Andrew Kliman's (2000) lead paper in a second symposium,^[8] published the following year in the same journal. In his response, Laibman chose not to defend claims (1) through (3). He instead put forward a "Temporal-Value Profit-Rate Tracking Theorem" that he described as "propos[ing] that [the temporally determined value rate of profit] must eventually follow the trend of [Okishio's material rate of profit]" ^[9] The "Tracking Theorem" states, in part: "If the material rate [of profit] rises to an asymptote, the value rate either falls to an asymptote, or first falls and then rises to an asymptote permanently below the material rate"^[10] Kliman argues that this statement "contradicts claims (1) through (3) as well as Laibman's characterization of the 'Tracking Theorem.' If the physical [i.e. material] rate of profit rises forever, while the value rate of profit falls forever, the value rate is certainly not following the trend of the physical [i.e. material] rate, not even eventually."^[11] In the same paper, Laibman claimed that Okishio's theorem was true, even though the path of the temporally determined value rate of profit can diverge forever from the path of Okishio's material rate of profit. He wrote, "If a viable technical change is made, and the real wage rate is constant, the new MATERIAL rate of profit must be higher than the old one. That is all that Okishio, or Roemer, or Foley, or I, or anyone else has ever claimed!" ^[12] In other words, proponents of the Okishio theorem have always been talking about how the rate of profit would behave only in the case in which input and output prices happened to be equal. Kliman and Freeman suggested that this statement of Laibman's was simply "an effort to absolve the physicalist tradition of error."^[13] Okishio's theorem, they argued, has always been understood as a disproof of Marx's law of the tendential fall in the rate of profit, and Marx's law does not pertain to an imaginary special case in which input and output prices happen for some reason to be equal. • Considered abstractly the rate of profit may remain the same, even though the price of the individual commodity may fall as a result of greater productiveness of labour and a simultaneous increase in the number of this cheaper commodity … The rate of profit could even rise if a rise in the rate of surplus-value were accompanied by a substantial reduction in the value of the elements of constant, and particularly of fixed, capital. But in reality, as we have seen, the rate of profit will fall in the long run. Karl Marx, Capital III, chapter 13. The last sentence is, however, not from Karl Marx but from Friedrich Engels. • No capitalist ever voluntarily introduces a new method of production, no matter how much more productive it may be, and how much it may increase the rate of surplus-value, so long as it reduces the rate of profit. Yet every such new method of production cheapens the commodities. Hence, the capitalist sells them originally above their prices of production, or, perhaps, above their value. He pockets the difference between their costs of production and the market-prices of the same commodities produced at higher costs of production. He can do this, because the average labour-time required socially for the production of these latter commodities is higher than the labour-time required for the new methods of production. His method of production stands above the social average. But competition makes it general and subject to the general law. There follows a fall in the rate of profit — perhaps first in this sphere of production, and eventually it achieves a balance with the rest — which is, therefore, wholly independent of the will of the capitalist. Marx, Capital volume III, chapter 15. 1. ^ Volume III of Capital, chapter 15: "No capitalist ever voluntarily introduces a new method of production, no matter how much more productive it may be, and how much it may increase the rate of surplus-value, so long as it reduces the rate of profit." 2. ^ Anwar Shaikh (1978): Political economy and capitalism: notes on Dobb's theory of crisis. Cambridge Journal of Economics, 1978, 2, 233-251. In this article is a reference to Bertram Schefold (1976): Different forms of technical progress. Economic Journal 3. ^ Andrew Kliman: The Okishio Theorem: An Obituary: AN OBITUARY 4. ^ Alfred Müller: Die Marxsche Konjunkturtheorie - Eine überakkumulationstheorietische Interpretation. PapyRossa Köln, 2009 (dissertation 1983), p. 160. 5. ^ David Laibman, □ "The Okishio Theorem and Its Critics: Historical Cost Vs. Replacement Cost," Research in Political Economy, Vol. 17, 1999, pp. 207–227; □ "The Profit Rate and Reproduction of Capital: A Rejoinder," Research in Political Economy, Vol. 17, 1999, pp. 249–254; * □ "Rhetoric and Substance in Value Theory," Science & Society, Fall 2000, pp. 310–332 (also in The New Value Controversy and the Foundations of Economics, ed. Alan Freeman, Andrew Kliman, and Julian Wells, Edward Elgar, 2004); □ "Two of Everything: A Response," Research in Political Economy, Vol. 18, 2000, pp. 269–278; □ "Numerology, Temporalism, and Profit Rate Trends," Research in Political Economy, Vol. 18, 2000, pp. 295–306; □ "Rising Material’ Vs. Falling Value’ Rates of Profit: Trial by Simulation," Capital and Class, No. 73, Spring 2001, pp. 79–96; □ "Temporalism and Textualism in Value Theory: Rejoinder [to comments by Guglielmo Carchedi and Fred Moseley]." Science & Society, 65:4 (Winter 2001–2002), pp. 528–533 □ "The Un-Simple Analytics of Temporal Value Calculation," Political Economy: Review of Political Economy and Social Science (Athens, Greece), No. 10, (Spring 2002), pp. 5–16 6. ^ "Okishio and His Critics: Historical cost versus replacement cost," Research in Political Economy 17, 207–27. 7. ^ "A Value-theoretic Critique of the Okishio Theorem." In Freeman and Carchedi (eds.),Marx and Non-equilibrium Economics, 206–24. 8. ^ "Two Concepts of Value, Two Rates of Profit, Two Laws of Motion," Research in Political Economy 18, 243–67 9. ^ "Two of Everything: A response," Research in Political Economy 18, p. 275, emphasis in original. 10. ^ Laibman, ibid., p. 274, emphases added. 11. ^ Andrew Kliman, Reclaiming Marx's "Capital": A refutation of the Myth of Inconsistency, Lanham, MD: Lexington Books, 2007, p. 133. 12. ^ "Two of Everything: A response," Research in Political Economy 18, p. 275, emphases in original. 13. ^ Andrew Kliman and Alan Freeman, 2000, "Rejoinder to Duncan Foley and David Laibman, Research in Political Economy 18, p. 290. • Foley, D.(1986) Understanding Capital: Marx's Economic Theory. Harvard, US: Harvard University Press. ISBN 0674920880 • Freeman, A. (1996): Price, value and profit – a continuous, general, treatment in: Freeman, A. and Guglielmo Carchedi (eds.) Marx and non-equilibrium economics. Cheltenham, UK and Brookfield, US: Edward Elgar Online • Okishio, N. (1961) "Technical Change and the Rate of Profit", Kobe University Economic Review, 7, 1961, pp. 85–99. • Seneta, E. (1973) Non-negative Matrices – An Introduction to Theory and Applications. London: George Allen and Unwin • Sraffa, P (1960) Production of Commodities by Means of Commodities: Prelude to a critique of economic theory, 1960. Cambridge: CUP. • Steedman, I. (1977) Marx after Sraffa. London:Verso
{"url":"https://www.knowpia.com/knowpedia/Okishio%27s_theorem","timestamp":"2024-11-08T17:28:40Z","content_type":"text/html","content_length":"228246","record_id":"<urn:uuid:209c5773-7de5-4e0f-a945-40cc5d00a431>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00525.warc.gz"}
CPNCoverageAnalysis: An R package for parameter estimation in conceptual properties norming studies Canessa, E., Chaigneau, S.E., Moreno, S. et al. CPNCoverageAnalysis: An R package for parameter estimation in conceptual properties norming studies. Behav Res (2022). https://doi.org/10.3758/ 23 de Marzo 2022 In conceptual properties norming studies (CPNs), participants list properties that describe a set of concepts. From CPNs, many different parameters are calculated, such as semantic richness. A generally overlooked issue is that those values are only point estimates of the true unknown population parameters. In the present work, we present an R package that allows us to treat those values as population parameter estimates. Relatedly, a general practice in CPNs is using an equal number of participants who list properties for each concept (i.e., standardizing sample size). As we illustrate through examples, this procedure has negative effects on data’s statistical analyses. Here, we argue that a better method is to standardize coverage (i.e., the proportion of sampled properties to the total number of properties that describe a concept), such that a similar coverage is achieved across concepts. When standardizing coverage rather than sample size, it is more likely that the set of concepts in a CPN all exhibit a similar representativeness. Moreover, by computing coverage the researcher can decide whether the CPN reached a sufficiently high coverage, so that its results might be generalizable to other studies. The R package we make available in the current work allows one to compute coverage and to estimate the necessary number of participants to reach a target coverage. We show this sampling procedure by using the R package on real and simulated CPN data.
{"url":"https://cscn.uai.cl/publicacion/cpncoverageanalysis-an-r-package-for-parameter-estimation-in-conceptual-properties-norming-studies/","timestamp":"2024-11-06T08:24:22Z","content_type":"application/xhtml+xml","content_length":"46748","record_id":"<urn:uuid:fe1cf36a-5706-4c96-89e1-56fae9c40831>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00420.warc.gz"}