26/10/2024 - Basculasbalanzas.com

In mathematics, a measure is a set function that assigns a non-negative value to each set in a suitable collection of subsets. A measure must be countably additive: the value assigned to a countable disjoint union equals the sum of the values assigned to its parts. In Power BI, use measures when you require dynamic calculations to be applied across visuals and data intersections. Avoid using measures as a substitute for dimension tables.

What is a Measure?

A measure is a quantity that is used to describe the relative size or magnitude of an object or event. It is usually expressed as a number, often together with a unit. Measurements are important in science, engineering, commerce and everyday life. Many philosophers have studied the nature of measurement. There is still no consensus about how to define it or what sorts of things can be measured. One approach to the philosophy of measurement takes a model-based view. This construes measurement as a process that involves interactions between the object of interest, an instrument and its environment. It also involves a theoretical or statistical model of the interaction. Realists, on the other hand, argue that measurable properties and relations are not directly observable and can only be estimated by comparing inaccurate measurements. Realists also stress that knowledge claims about measurable properties and relations are theory-laden. In particular, they require background theories about the properties and relations being measured.

What is a Measure Table?

A measures table is a distinct table in which you store all of your measure calculations. This makes it much easier to find these metrics within your field list and helps to keep your data model organized. A well-organized measures table can also help you with other aspects of your Power BI data analysis, such as enhanced collaboration and security. By having your measures contained in a separate table, you can avoid accidentally sharing the entire fact table when working with reports or analyzing data. This can be an especially crucial feature for businesses that need to share sensitive or classified information. In addition to enabling you to more easily identify your metrics, a well-organized measures table can improve data literacy and facilitate effective communication among stakeholders. This is accomplished by including descriptions and units for each metric, as well as providing clear and consistent naming conventions. The use of a measures table can also make it easier for users to understand how each metric is calculated, ensuring accuracy and enhancing data analysis efficiency.

What is a Measure Visual?

Measures allow you to perform custom calculations on a visual without touching the data model. These calculations can be used in visualizations to answer ad-hoc questions and add business intelligence to a report. Visual calculations combine the simplicity of context from calculated columns with on-demand calculation flexibility, resulting in better performance than aggregations computed over the full underlying data. For example, Janice imports reseller sales from a table in her Power BI model and creates a new chart visual to show projected sales for each year. She can easily use visual calculations to calculate the projections by adding an expression such as Profit = (Sales Amount) – (Total Product Cost) to her visual. Power BI Desktop organizes measures in a special table called Measures, which appears at the top of the Fields list.
You can also move a measure into multiple folders within this table by separating the folder names with a semicolon, for example: ProductsNames;Departments. You can also hide a measure in the field list by selecting it and clicking Hide.

What is a Measure Calculator?

A measure calculator lets you convert a numeric value from one measurement unit to another and see the difference. For example, you can convert a length measurement such as 7 inches into centimeters or meters. To use a measure calculator, select the Measures list and then click an item to make it active. In the Expression definition box, type a new calculation using the CALCULATE function. You can also select other properties for the measure, such as its description and format. The measure calculator is a handy tool when you need to calculate an amount in one unit of measure while knowing the quantity for the product in a different unit of measurement. For example, you can use the Measure Calculator when you enter a physical inventory amount in Inventory Bin Move Demand Create, One Step Inventory Location Transfer, or Product Inventory Reservation Maintenance. Then, you can calculate the amount in a different unit of measure by clicking the U/M Calc button.

What Is Mass?

Students often ask, “What is mass?” It’s important to understand that everything around us has mass, even the air we breathe. A basic understanding of the metric system makes converting between measurements easy. This enables communication between professionals and scientists from different countries. It also makes learning more fun. All metric measurements are based on multiples of ten, making conversions quick and intuitive. While the terms “weight” and “mass” are often used interchangeably, they are actually distinct physical properties. The word “weight” refers to the force of gravity acting on an object, while the term “mass” describes the amount of matter contained within an object. Unlike weight, which depends on the gravitational pull of Earth, mass is constant regardless of the location or shape of an object. Your body’s mass remains the same whether you are curled up on a sofa or stretching out on the beach.

Measuring mass is essential to a number of technological applications, from weighing scales to industrial processes. Accurate mass measurements allow for quality control in manufacturing and ensure consistency in products. In scientific research, mass measurements enable researchers to study the atomic and molecular makeup of objects. For example, mass spectrometry allows scientists to analyze complex mixtures of compounds using high-sensitivity instruments. The resulting data can help improve the efficiency of agricultural production by enabling the optimization of fertilizer application.

In the metric system, mass is most commonly measured in grams (g) and kilograms (kg); the kilogram is the SI base unit of mass. In the United States, pounds (lb) are also used to measure mass. Students should be aware that the term mass is different from weight: the latter depends on the gravitational field, while the former is a fundamental quantity that does not. Students should also be aware that the verb “to weigh” is, strictly speaking, inappropriate for describing how an object’s mass is measured. Each of the seven base units of the metric system has a corresponding name, symbol and meaning. These units can be turned into larger or smaller measurements by attaching a prefix, as shown in Table 2.
For example, the prefix kilo (k) means 1000, so a kilogram is 1000 grams. Similarly, a litre (L) is equal to 1 cubic decimetre (1 dm3), that is, one-thousandth of a cubic metre. These measurements are all very important in chemistry, and students should learn how to convert between these units as needed.

Many industrial applications rely on mass measurements. In manufacturing, for example, accurate mass measurement is critical to quality control and consistency. It is also an important aspect of analytical chemistry, allowing researchers to identify unknown compounds via their molecular weight determinations and to quantify known ones. The concept of mass is fundamental in physics; it is one of the seven SI base quantities, with the kilogram as its base unit. Measurements of mass can be made using a balance or other instruments, such as graduated cylinders and density bottles. A comparison of an unknown sample with a known reference provides an estimate of its mass, and the estimate can be corrected by reference to a standard calibration compound (see Waters Micromass oa-TOF Instruments for more). Exact mass measurement has received increased attention recently with the development of smaller and more affordable instruments, such as orthogonal-acceleration time-of-flight (oa-TOF) systems. These are particularly valuable for measuring the change in mass following deposition, etch and clean processes.

Future developments

The recent CDF measurement of the W boson mass shows that high-precision measurements will play a crucial role in future experiments. Whether the search for new physics is successful or not, it is clear that precision will be important. It is therefore essential to treat accurate mass measurements statistically and to use terminology that describes these procedures consistently. This paper is designed to clarify and recommend appropriate terms for these purposes. A new method of measuring the density of a mass standard was developed at NIST by immersing the standard in a bath of fluorocarbon fluid and then comparing it to volume standards. This new technique achieves a combined standard uncertainty of less than 0.01 %. NIST also led efforts to redefine the kilogram, which was formerly defined by a metal artefact kept in France. The redefinition makes the international standard for mass a property of nature rather than a physical object, which should further improve the stability of mass standards and transfer standards.
{"url":"https://www.basculasbalanzas.com/2024/10/26/","timestamp":"2024-11-10T15:33:40Z","content_type":"text/html","content_length":"52671","record_id":"<urn:uuid:779d0fd5-58bf-4459-ac2b-b08ca03c9a1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00203.warc.gz"}
Length - Wikiwand

Length is a measure of distance. In the International System of Quantities, length is a quantity with dimension distance. In most systems of measurement a base unit for length is chosen, from which all other units are derived. In the International System of Units (SI) system the base unit for length is the metre.^[1]

Length is commonly understood to mean the most extended dimension of a fixed object.^[1] However, this is not always the case and may depend on the position the object is in. Various terms for the length of a fixed object are used, and these include height, which is vertical length or vertical extent, width, breadth, and depth. Height is used when there is a base from which vertical measurements can be taken. Width and breadth usually refer to a shorter dimension than length. Depth is used for the measure of a third dimension.^[2] Length is the measure of one spatial dimension, whereas area is a measure of two dimensions (length squared) and volume is a measure of three dimensions (length cubed).

Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land and trading with neighbours. As trade between different places increased, the need for standard units of length increased. And later, as society has become more technologically oriented, much higher accuracy of measurement is required in an increasingly diverse set of fields, from micro-electronics to interplanetary ranging.^[3]

Under Einstein's special relativity, length can no longer be thought of as being constant in all reference frames. Thus a ruler that is one metre long in one frame of reference will not be one metre long in a reference frame that is moving relative to the first frame. This means the length of an object varies depending on the speed of the observer.

Euclidean geometry

In Euclidean geometry, length is measured along straight lines unless otherwise specified and refers to segments on them. Pythagoras's theorem relating the length of the sides of a right triangle is one of many applications in Euclidean geometry. Length may also be measured along other types of curves and is referred to as arclength. In a triangle, the length of an altitude, a line segment drawn from a vertex perpendicular to the side not passing through the vertex (referred to as a base of the triangle), is called the height of the triangle. The area of a rectangle is defined to be length×width of the rectangle. If a long thin rectangle is stood up on its short side then its area could also be described as its height×width. The volume of a solid rectangular box (such as a plank of wood) is often described as length×height×depth. The perimeter of a polygon is the sum of the lengths of its sides. The circumference of a circular disk is the length of the boundary (a circle) of that disk.

Other geometries

In other geometries, length may be measured along possibly curved paths, called geodesics. The Riemannian geometry used in general relativity is an example of such a geometry. In spherical geometry, length is measured along the great circles on the sphere and the distance between two points on the sphere is the shorter of the two lengths on the great circle, which is determined by the plane through the two points and the center of the sphere.

Measure theory

In measure theory, length is most often generalized to general sets of ${\displaystyle \mathbb {R} ^{n}}$ via the Lebesgue measure.
In the one-dimensional case, the Lebesgue outer measure of a set is defined in terms of the lengths of open intervals. Concretely, the length of an open interval is first defined as ${\displaystyle \ell (\{x\in \mathbb {R} \mid a<x<b\})=b-a,}$ so that the Lebesgue outer measure ${\displaystyle \mu ^{*}(E)}$ of a general set ${\displaystyle E}$ may then be defined as^[6] ${\displaystyle \mu ^{*}(E)=\inf \left\{\sum _{k}\ell (I_{k}):I_{k}{\text{ is a sequence of open intervals such that }}E\subseteq \bigcup _{k}I_{k}\right\}.}$

In the physical sciences and engineering, when one speaks of units of length, the word length is synonymous with distance. There are several units that are used to measure length. Historically, units of length may have been derived from the lengths of human body parts, the distance travelled in a number of paces, the distance between landmarks or places on the Earth, or arbitrarily on the length of some common object.

In the International System of Units (SI), the base unit of length is the metre (symbol, m), now defined in terms of the speed of light (about 300 million metres per second). The millimetre (mm), centimetre (cm) and the kilometre (km), derived from the metre, are also commonly used units. In U.S. customary units, English or imperial system of units, commonly used units of length are the inch (in), the foot (ft), the yard (yd), and the mile (mi), where 1 mile = 1.609344 km. A unit of length used in navigation is the nautical mile (nmi).^[7]

Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth (metre or kilometre) and include the astronomical unit (au), the light-year, and the parsec (pc). Units used to denote sub-atomic distances, as in nuclear physics, are much smaller than the millimetre. Examples include the fermi (fm).
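Returning to the Lebesgue outer measure defined above, here is a small worked illustration (an added example, not part of the original article). For a union of two disjoint open intervals, the two intervals themselves already form a covering sequence whose total length cannot be beaten, so ${\displaystyle \mu ^{*}((0,1)\cup (2,4))=\ell ((0,1))+\ell ((2,4))=1+2=3,}$ since any sequence of open intervals covering the set must have total length at least 3.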
{"url":"https://www.wikiwand.com/en/articles/Length","timestamp":"2024-11-13T14:58:16Z","content_type":"text/html","content_length":"266811","record_id":"<urn:uuid:077e9116-ccbc-487e-8b59-9d660d9f2a44>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00255.warc.gz"}
[Solved] Exercises 78–80 will help you prepare for the material covered in the next section | SolutionInn

Exercises 78–80 will help you prepare for the material covered in the next section. Factor: x^2 - 6x + 9.
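A worked answer, added here since the extract does not include the site's step-by-step solution: the quadratic is a perfect-square trinomial, so

x^2 - 6x + 9 = x^2 - 2(3)x + 3^2 = (x - 3)^2.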
{"url":"https://www.solutioninn.com/study-help/college-algebra-graphs-and-models/exercises-7880-will-help-you-prepare-for-the-material-covered-in","timestamp":"2024-11-14T01:48:09Z","content_type":"text/html","content_length":"78328","record_id":"<urn:uuid:b3150614-455a-4266-b1a3-e69ccd2e34df>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00172.warc.gz"}
entering permutation as product of not necessarily disjoint cycles

I was expecting to get the identity when I did the following:

sage: G = SymmetricGroup(3)
sage: G('(1,2)(1,2)')

but I get (1,2). How to tell Sage to compute a product of not necessarily disjoint cycles?

1 Answer

Here is one solution, suggested by the phrasing of the question itself. Use a product of cycles, individually turning each cycle into a group element:

sage: G = SymmetricGroup(3)
sage: G('(1,2)') * G('(1,2)')
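A short addition for context (not part of the original thread): the product above does evaluate to the identity, which Sage prints as (). The same idea extends to any list of cycle strings, for example using Sage's prod over elements built one cycle at a time:

sage: prod([G(c) for c in ['(1,2,3)', '(1,2)', '(1,2)']])
(1,2,3)

Here the two copies of (1,2) cancel, leaving (1,2,3).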
{"url":"https://ask.sagemath.org/question/59114/entering-permutation-as-product-of-not-necessarily-disjoint-cycles/?answer=59115","timestamp":"2024-11-02T21:15:32Z","content_type":"application/xhtml+xml","content_length":"53051","record_id":"<urn:uuid:4aba11e9-cbca-4e7c-b64e-767c24284021>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00330.warc.gz"}
Monstrous frustrations

Thanks for clicking through… I guess. If nothing else, it shows that just as much as the stock market is fueled by greed, mathematical research is driven by frustration (or the pleasure gained from knowing others to be frustrated). I did spend the better part of the day doing a lengthy, if not laborious, calculation I’ve been postponing for several years now. Partly, because I didn’t know how to start performing it (though the basic strategy was clear), partly, because I knew beforehand the final answer would probably offer me no further insight. Still, it gives the final answer to a problem that may be of interest to anyone vaguely interested in Moonshine : What does the Monster see of the modular group? I know at least two of you, occasionally reading this blog, understand what I was trying to do and may now wonder how to repeat the straightforward calculation. Well the simple answer is : Google for the number 97239461142009186000 and, no doubt, you will be able to do the computation overnight. One word of advice : don’t! Get some sleep instead, or make love to your partner, because all you’ll get is a quiver on nine vertices (which is pretty good for the Monster) but having a horrible amount of loops and arrows… If someone wants the details on all of this, just ask. But, if you really want to get me excited : find a moonshine reason for one of the following two numbers : $791616381395932409265430144165764500492= 2^2 * 11 * 293 * 61403690769153925633371869699485301 $ (the dimension of the monster-singularity upto smooth equivalence), or, $1575918800531316887592467826675348205163= 523 * 1655089391 * 15982020053213 * 113914503502907 $ (the dimension of the moduli space).
{"url":"http://www.neverendingbooks.org/monstrous-frustrations/","timestamp":"2024-11-12T18:51:00Z","content_type":"text/html","content_length":"30807","record_id":"<urn:uuid:062e6c4c-d99a-495e-aaec-c276ceb4084b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00226.warc.gz"}
What are the Applications of Linked List? - Scaler Blog

What are the Applications of Linked List?

What is a Linked List ?

A linked list is a linear data structure. It is a collection of nodes, where each node contains data and the address of the next node. Linked lists do not use contiguous memory allocation for storage, unlike arrays.

What are the Applications of Linked List ?

There are many applications of linked lists, be it in computer science or the real world. Some of these applications are :

Applications of Linked List in Computer Science :
• Linked lists can be used to represent polynomials.
• Using a linked list, we can perform polynomial manipulation.
• Arithmetic operations like addition or subtraction of long integers can also be performed using a linked list.
• The linked list can be used to implement stacks and queues.
• The linked list is also used in implementing graphs, in which the adjacent vertices are stored in the nodes of a linked list, popularly known as the adjacency-list representation.

Applications of Linked Lists in the Real World :
• In music players, we can create our song playlist and can play a song either from the start or the end of the list. These music players are implemented using a linked list.
• When we view photos on our laptops or PCs, we can easily move to the next or previous image. This feature is implemented using a linked list.
• You must be reading this article in your web browser, and in web browsers we open multiple URLs and can easily switch between them using the previous and next buttons, because they are connected using a linked list.

Applications of Circular Linked Lists :
• The circular linked list can be used to implement queues.
• In web browsers, the back button is implemented using a circular linked list.
• In an operating system, a circular linked list can be used in scheduling algorithms like the Round Robin algorithm.
• The undo functionality that is present in applications like photo editors etc., is implemented using circular linked lists.
• Circular linked lists can also be used to implement advanced data structures like MRU (Most Recently Used) lists and the Fibonacci heap.

Applications of Singly Linked List :
• The singly linked list is used to implement stack and queue.
• The undo or redo options, the back buttons, etc., that we discussed above are implemented using a singly linked list.
• During the implementation of a hash function, there arises the problem of collision; to deal with this problem, a singly linked list is used.

Application of Doubly Linked Lists :
• The doubly linked list is used to implement data structures like a stack, queue, binary tree, and hash table.
• It is also used in algorithms of LRU (Least Recently Used) and MRU (Most Recently Used) caches.
• The undo and redo buttons can be implemented using a doubly-linked list.
• The doubly linked list can also be used in the allocation and deallocation of memory.

Polynomial Manipulation

Polynomials are algebraic expressions that contain coefficients and variables. Polynomial manipulation means doing mathematical operations, like addition, subtraction, etc., on polynomials. Polynomials are a very important part of mathematics, and there isn't any direct data structure that can be used to store polynomials in memory. Thus, we take the help of a linked list to represent a polynomial. To represent a polynomial using a linked list, we assume that each node of the linked list corresponds to one term of the polynomial.
Let us see how a polynomial is represented in a linked list. The node of the linked list contains three parts :
• the coefficient value,
• the exponent value, and
• the link to the next term.

For example, the polynomial $4x^3 + 6x^2 + 10x + 6$ can be represented as the chain of (coefficient, exponent) nodes (4, 3) → (6, 2) → (10, 1) → (6, 0).

C++ Program for Addition of Two Polynomials

To add two polynomials, we first represent both of them in the form of a linked list. And then, we add the coefficients having the same exponent. For example, suppose we want to add the two polynomials $5x^2 + 4x^1 + 2x^0$ and $-5x^1 - 5x^0$.

C++ Program :

// program to add two polynomials represented as linked lists
#include <iostream>
#include <cstdlib>
using namespace std;

// one term of a polynomial: coefficient, exponent and link to the next term
struct Node {
    int co_eff;
    int pwr;
    Node* nxt;
};

// append a new term at the end of the list pointed to by *head
void make_newnode(int co_eff, int pwr, Node** head) {
    Node* term = new Node{co_eff, pwr, nullptr};
    if (*head == nullptr) {
        *head = term;
        return;
    }
    Node* cur = *head;
    while (cur->nxt != nullptr)
        cur = cur->nxt;
    cur->nxt = term;
}

// add two polynomials whose terms are stored in decreasing order of exponent
Node* add_poly(Node* first, Node* second) {
    Node* result = nullptr;
    Node* tail = nullptr;
    // helper that appends a term to the result list in O(1)
    auto append = [&](int co_eff, int pwr) {
        Node* term = new Node{co_eff, pwr, nullptr};
        if (result == nullptr)
            result = term;
        else
            tail->nxt = term;
        tail = term;
    };
    while (first != nullptr && second != nullptr) {
        if (first->pwr > second->pwr) {            // keep the higher-degree term
            append(first->co_eff, first->pwr);
            first = first->nxt;
        } else if (first->pwr < second->pwr) {
            append(second->co_eff, second->pwr);
            second = second->nxt;
        } else {                                   // equal exponents: add the coefficients
            append(first->co_eff + second->co_eff, first->pwr);
            first = first->nxt;
            second = second->nxt;
        }
    }
    // copy whatever remains of the longer polynomial
    for (; first != nullptr; first = first->nxt)
        append(first->co_eff, first->pwr);
    for (; second != nullptr; second = second->nxt)
        append(second->co_eff, second->pwr);
    return result;
}

// print a polynomial such as 5x^2 + 4x^1 + 2x^0
void display_poly(Node* node) {
    bool is_first = true;
    while (node != nullptr) {
        if (!is_first)
            cout << (node->co_eff < 0 ? " - " : " + ");
        cout << (is_first ? node->co_eff : abs(node->co_eff)) << "x^" << node->pwr;
        is_first = false;
        node = node->nxt;
    }
    cout << endl;
}

// main function
int main() {
    Node *first_poly = nullptr, *sec_poly = nullptr;

    // first polynomial: 5x^2 + 4x^1 + 2x^0
    make_newnode(5, 2, &first_poly);
    make_newnode(4, 1, &first_poly);
    make_newnode(2, 0, &first_poly);

    // second polynomial: -5x^1 - 5x^0
    make_newnode(-5, 1, &sec_poly);
    make_newnode(-5, 0, &sec_poly);

    cout << "1st Number: ";
    display_poly(first_poly);
    cout << "2nd Number: ";
    display_poly(sec_poly);

    Node* sum = add_poly(first_poly, sec_poly);
    cout << "Added polynomial: ";
    display_poly(sum);
    return 0;
}

Output :

1st Number: 5x^2 + 4x^1 + 2x^0
2nd Number: -5x^1 - 5x^0
Added polynomial: 5x^2 - 1x^1 - 3x^0

Time Complexity : The time complexity of the program is O(a+b), where a is the number of nodes in the first linked list and b is the number of nodes in the second linked list, since we traverse both lists once.
Addition of Long Positive Integers Using Linked List

In most programming languages, there is a limit on the maximum value of an integer that can be stored. But sometimes we need to add two numbers whose sum exceeds that limit. We can do this by representing the numbers in the form of linked lists, performing the addition on the linked lists, and storing the result in a resultant linked list. To perform the addition, we traverse both linked lists in parallel, add the corresponding digits together with the carry obtained from the previous step, and store that value in the resultant linked list. For example, suppose we want to add the two numbers 543467 and 48315.

C++ program for the addition of two long integers using linked lists :

// program to add two long positive integers stored as linked lists
#include <iostream>
using namespace std;

// one decimal digit of the number
struct Node {
    int value;
    Node* nxt;
};

// create a new digit node
Node* create_node(int value) {
    Node* new_node = new Node;
    new_node->value = value;
    new_node->nxt = nullptr;
    return new_node;
}

// push a new digit at the front of a list
void add_node(Node** head_ref, int new_value) {
    Node* new_node = create_node(new_value);
    new_node->nxt = *head_ref;
    *head_ref = new_node;
}

// add two numbers whose digits are stored least-significant digit first
Node* add_list(Node* first, Node* second) {
    Node* res = nullptr;      // head of the result list
    Node* prev = nullptr;     // last node of the result list
    int cry = 0;
    while (first != nullptr || second != nullptr) {
        // sum of the carry and the current digits (0 if a list has ended)
        int sum = cry + (first ? first->value : 0) + (second ? second->value : 0);
        cry = (sum >= 10) ? 1 : 0;   // carry for the next digit
        sum = sum % 10;
        Node* temp = create_node(sum);
        if (res == nullptr)
            res = temp;
        else
            prev->nxt = temp;
        prev = temp;
        if (first) first = first->nxt;
        if (second) second = second->nxt;
    }
    if (cry > 0)
        prev->nxt = create_node(cry);
    return res;
}

// reverse a list so that digits can be processed least-significant digit first
Node* reverse_list(Node* head) {
    if (head == nullptr || head->nxt == nullptr)
        return head;
    Node* rest = reverse_list(head->nxt);
    head->nxt->nxt = head;
    head->nxt = nullptr;
    return rest;
}

// print the digits from head to tail
void printList(Node* node) {
    while (node != nullptr) {
        cout << node->value;
        node = node->nxt;
    }
    cout << endl;
}

// main function
int main() {
    Node* first = nullptr;
    Node* second = nullptr;

    // first number: 543467 (digits pushed so the list reads 5->4->3->4->6->7)
    add_node(&first, 7);
    add_node(&first, 6);
    add_node(&first, 4);
    add_node(&first, 3);
    add_node(&first, 4);
    add_node(&first, 5);
    cout << "First List is ";
    printList(first);

    // second number: 48315
    add_node(&second, 5);
    add_node(&second, 1);
    add_node(&second, 3);
    add_node(&second, 8);
    add_node(&second, 4);
    cout << "Second List is ";
    printList(second);

    // reverse so that addition starts from the least significant digit
    first = reverse_list(first);
    second = reverse_list(second);
    Node* res = add_list(first, second);
    res = reverse_list(res);
    cout << "Resultant list is ";
    printList(res);
    return 0;
}

Output :

First List is 543467
Second List is 48315
Resultant list is 591782

Time Complexity : The time complexity of the program is O(a+b), where a is the number of nodes in the first linked list and b is the number of nodes in the second linked list, since we traverse both lists once.

Polynomial of Multiple Variables

In polynomials, we can have more than one variable. Such polynomials can also be represented using a linked list in the same manner. In the case of multiple variables, each node of the linked list contains a separate part for each exponent. That is, if there are three variables, then there will be three parts for exponents.
Suppose we have the polynomial $10x^2y^2z + 17x^2yz^2 - 5xy^2z + 21y^4z^2 + 7$. It can be represented as a linked list in which every node stores the coefficient together with one exponent for each of the variables x, y and z.

Simple C++ program to multiply two polynomials :

To multiply two polynomials that are given in the form of coefficient arrays, we simply traverse the first polynomial and multiply each of its terms with each term of the second polynomial.

// simple program to multiply two polynomials stored as coefficient arrays
#include <iostream>
using namespace std;

// array P contains coefficients of the first poly (index = exponent)
// array Q contains coefficients of the second poly
int* mul(int P[], int Q[], int a, int n) {
    int* product = new int[a + n - 1];   // product array
    for (int i = 0; i < a + n - 1; i++)
        product[i] = 0;
    // multiplication of two polynomials: every term of P times every term of Q
    for (int i = 0; i < a; i++)
        for (int j = 0; j < n; j++)
            product[i + j] += P[i] * Q[j];
    return product;
}

// Function to print a polynomial
void poly_print(int pol[], int n) {
    for (int i = 0; i < n; i++) {
        cout << pol[i];
        if (i != 0)
            cout << "x^" << i;
        if (i != n - 1)
            cout << " + ";
    }
    cout << endl;
}

// main function
int main() {
    int P[] = {5, 0, 10, 6};
    int Q[] = {1, 2, 4};
    int a = sizeof(P) / sizeof(P[0]);
    int n = sizeof(Q) / sizeof(Q[0]);

    cout << "First polynomial is" << endl;
    poly_print(P, a);
    cout << "Second polynomial is" << endl;
    poly_print(Q, n);

    int* product = mul(P, Q, a, n);
    cout << "Product polynomial is" << endl;
    poly_print(product, a + n - 1);

    delete[] product;
    return 0;
}

Output :

First polynomial is
5 + 0x^1 + 10x^2 + 6x^3
Second polynomial is
1 + 2x^1 + 4x^2
Product polynomial is
5 + 10x^1 + 30x^2 + 26x^3 + 52x^4 + 24x^5

Time Complexity : The time complexity of the given program is O(a*b), where a is the size of the first array and b is the size of the second array.

Some Other Applications of Linked List

Some important applications of a linked list include :
• Allocation of Memory
• Email applications
• Reducing file sizes on disk
• Implementation of advanced data structures

Advantages of Linked List Over Arrays

A few advantages of linked lists over arrays are :
• Dynamic size
• Efficient implementation of data structures
• No memory wastage
• Efficient insertion and deletion operations

Learn More

After reading this article, you might be curious to learn more about linked lists, or there might be some topics that you did not fully understand. To learn more, you can refer to the other articles on linked lists available on Scaler Topics.

So far, we have discussed a lot about linked lists and their applications. A few important points that we covered are :
• Linked lists have many applications both in computer science and in the real world.
• Some computer science applications include polynomial manipulation, implementation of advanced data structures, etc.
• A few real-world applications include web browsers' back buttons, music players, image viewers, etc.
• We can perform operations like addition, multiplication, etc. on polynomials with the help of linked lists.
{"url":"https://www.scaler.in/what-are-the-applications-of-linked-list/","timestamp":"2024-11-07T09:26:15Z","content_type":"text/html","content_length":"99712","record_id":"<urn:uuid:624f4bae-0f94-4a4d-bc2b-e75ddbf9d190>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00448.warc.gz"}
A Primer on Noncommutative Classical Dynamics on Velocity Phase Space and Souriau Formalism

We present a comprehensive survey of the dynamics of particles with a noncommutative Poisson structure. We use Souriau's method of orbits to study this exotic mechanics on the tangent bundle of the configuration space, or velocity phase space. We consider Feynman-Dyson's proof of Maxwell's equations using the Jacobi identity on the velocity phase space. In this review we generalize the Feynman-Dyson scheme by incorporating the noncommutativity between the various spatial coordinates along with the velocity coordinates. This allows us to study a generalized class of Hamiltonian systems. We explore various dynamical flows associated with the Souriau form arising from this generalized Feynman-Dyson scheme. Moreover, using the Souriau form we show that these new classes of generalized systems are volume-preserving mechanical systems.

Publication series
Name STEAM-H: Science, Technology, Engineering, Agriculture, Mathematics and Health
Volume Part F1836
ISSN (Print) 2520-193X
ISSN (Electronic) 2520-1948

• Feynman-Dyson's method
• Generalized Hamiltonian dynamics
• Kostant-Kirillov two form
• Noncommutativity
• Poisson manifolds
• Schouten-Nijenhuis bracket
• Souriau form
{"url":"https://khazna.ku.ac.ae/en/publications/a-primer-on-noncommutative-classical-dynamics-on-velocity-phase-s","timestamp":"2024-11-13T11:16:26Z","content_type":"text/html","content_length":"55481","record_id":"<urn:uuid:3dd345f3-a183-43ed-a4ee-45cfe330f059>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00682.warc.gz"}
What are some ways to measure when you don’t have a ruler? - The Handy Math Answer Book

Everyday Math: Numbers and Math in Everyday Life

What are some ways to measure when you don’t have a ruler?

There are some pretty standard items that you can use if you don’t have a ruler handy to measure something. Most of them come in standard sizes, such as a sheet of paper measuring 8.5″ × 11″ so it will fit into most computer printers with ease. Here are a few of the best ways to do a quick measurement (remember, too, that some of these are approximations):
• A sheet of standard letter paper is 8.5 inches wide by 11 inches long.
• U.S. paper currency measures about 6 1/8 inches long and 2 5/8 inches wide.
• Most standard business cards measure 2 inches tall by 3 ½ inches wide.
• Most postcards measure 3 inches by 5 inches.
• The diameter of a quarter is about one inch; the diameter of a penny is approximately 3/4 of an inch.
• A credit card is about 3 3/8 inches by 2 1/8 inches.
• A standard AA battery is about 2 inches long.
• The average adult toothbrush is about 7 inches long.
{"url":"https://www.papertrell.com/apps/preview/The-Handy-Math-Answer-Book/Handy%20Answer%20book/What-are-some-ways-to-measure-when-you-don-t-have-a-ruler/001137022/content/SC/52cb022f82fad14abfa5c2e0_Default.html","timestamp":"2024-11-07T11:52:17Z","content_type":"text/html","content_length":"11947","record_id":"<urn:uuid:d14b7b12-4782-4380-ba49-c2a97ce6d57a>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00096.warc.gz"}
Researching | Wave mixing and high-harmonic generation enhancement by a two-color field driven dielectric metasurface [Invited]

High-harmonic generation (HHG) was, to the best of our knowledge, first demonstrated in rare-gas atoms^[1], and nowadays it is one of the backbones of attosecond science^[2–5]. Typical gas-phase HHG experiments need expensive and complex experimental setups. In recent years, solid materials have been extensively used as a driving medium for HHG. One of their advantages is their high density, typically three orders of magnitude larger, allowing HHG to take place with lower pumping laser fields^[6]. Thus, solid-state HHG has become a subject of great interest. Investigations have shown that high harmonics generated in bulk crystals could provide new ways to manipulate and tailor light fields and have promising prospects for strong-field photonics applications^[7–14]. These studies will pave the way to the design of compact, ultrafast, short-wavelength tunable coherent light sources. Subsequent research has shown that high-order harmonics can be generated in engineered nanoscale structures, which are able to tailor the local near-field adequately in order to reach the strong-field regime. Enhancing the driving electric field with a nanostructure is the most promising way to boost the interaction between the electromagnetic field and the material, allowing the strong laser–matter interaction regime to be reached with lower-intensity pumping fields.
{"url":"http://m.researching.cn/articles/OJa4f0ca58c85b0b10","timestamp":"2024-11-14T07:32:28Z","content_type":"text/html","content_length":"92323","record_id":"<urn:uuid:4e45591c-4dde-4a85-805f-4fa0eb0f190c>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00675.warc.gz"}
The System Of The World Isaac Newton Pdf

A treatise of the system of the world [microform] / by Sir Isaac Newton. The System of the World eBook, Isaac Newton, Amazon.

7/02/2015 · The System of the World. Isaac Newton. Observing the Heavens. "It was the ancient opinion of not a few, in the earliest ages of philosophy, that the fixed stars stood immoveable in the highest parts of the world; that, under the fixed stars the planets were carried about the sun; that the earth, as one of the planets, …"

A Treatise of the System of the World, Isaac Newton, Full view - 1728. A Treatise of the System of the World, Isaac Newton, I. Bernard Cohen, Limited preview - 2004. A Treatise of the System of the World, Sir Isaac Newton, Snippet view - 1969. Common terms and phrases: action, Æther, angle, aphelions, apparent diameter, appear, appulse, apses, arise, ascend, attract, bodies, breadth, celestial.

Isaac Newton eBooks (author). Description: "This Image or Spectrum PT was coloured, being red at its least refracted end T, and violet at its most refracted end P, and yellow, green and blue in the intermediate spaces."

Biography: Isaac Newton was an English physicist and mathematician, who made seminal contributions to several domains of science, and was considered a leading scientist of his era and one of the …

11/11/2017 · An Account of the System of the World, as described by Isaac Newton in his 3-volume work, Philosophiæ Naturalis Principia Mathematica (The Mathematical Principles of …).

"… [the most fortu]nate of mortals had been Isaac Newton (1642–1727), because it is only possible to discover once the system of the world."^1 Newtonian mechanics is no longer regarded …

8/01/2016 · MATHEMATICAL PRINCIPLES OF NATURAL PHILOSOPHY, BY SIR ISAAC NEWTON; TRANSLATED INTO ENGLISH BY ANDREW MOTTE. TO WHICH IS ADDED NEWTON'S SYSTEM OF THE WORLD; With a Portrait taken from the Bust in the Royal Observatory at Greenwich.

Isaac Newton grew up on a farm in rural England. As a boy, he completely immersed himself in the study and application of a book entitled The Mysteries of Nature and Art, building various mechanical devices … His was considered one of the most important alchemical libraries in the world. His collection also included a thoroughly annotated personal copy of The Fame and …

By Isaac Newton, Florian Cajori, Andrew Motte. ISBN-10: 0520009290. ISBN-13: 9780520009295.

Isaac Newton, by James Gleick. Publisher: HarperAudio, 2003-05-01. ISBN: B000VYVIGU. Language: English. Audio cassette in MP3, 83 MB. A portrait of Isaac Newton, the man who changed our understanding of the universe, of science, and of faith, is painted in this book. Isaac Newton was the chief architect of the modern world. He answered the ancient philosophical riddles of light and …

Newton's three laws of motion are not stated in the language that is used in modern introductory physics texts. A good companion to this book is "Feynman's Lost Lecture", in which Richard Feynman demonstrates planetary motion using the same geometric techniques employed by Isaac Newton, but in a more clear, modern style.

The System of the World can refer to several things: The System of the World (novel), a 2004 book by Neal Stephenson; the third book of Isaac Newton's Philosophiæ Naturalis Principia Mathematica.

Godfrey Kneller's 1689 portrait of Isaac Newton (aged 46). Biographical note: Natural philosopher, born at Woolsthorpe, Lincolnshire, the son of a small landed proprietor, and educated at the Grammar School of Grantham and at Trinity College, Cambridge.

Sir Isaac Newton believed that not only the Bible but the whole Universe was a "cryptogram set by the Almighty," a great puzzle that mankind was meant to solve. Philip N. Moore, in The End of History, Messiah Conspiracy, discusses Sir Isaac …

The Philosophiae Naturalis Principia Mathematica took Isaac Newton 2 years to write. It was the culmination of more than 20 years of thinking.

23/07/2016 · Sir Isaac Newton vs Bill Nye. Epic Rap Battles of History, Season 3.

The physicist Pierre-Simon Laplace reworked Newton's system of the world so that the exercise of God's will to restore the system was not part of it, and it was this image of Newton and Newtonian science that was then spread around Europe with Napoleon in the early nineteenth century. It is this image of a rational, secular Newton that we inherit today, but it is not an image that Newton …

Text and images extracted from Newton's Principia: the mathematical principles of natural philosophy, 1st American ed., carefully rev. and corr., with a life of the author, by N. W. Chittenden; by Sir Isaac Newton; translated into English by Andrew Motte; to which is added Newton's system of the world.

Related titles: Isaac Newton, Heretic: the Strategies of a Nicodemite; Boris Hessen, The Social and Economic Roots of Newton's Principia; Hans Sloane's Atlantic World.

Sir Isaac Newton, FRS, was an English physicist, mathematician, astronomer, natural philosopher, and alchemist. His Philosophiæ Naturalis Principia Mathematica, published in 1687, is considered to be the most influential book in the history of science.

Mathematical Principles of Natural Philosophy and His System of the World (Principia). September 05, 2018, Isaac Newton. This is an OCR edition without illustrations or index. It may have numerous typos or missing text; however, purchasers can download a free scanned copy of the original rare book.

Text Book Notes, Isaac Newton, Historical Anecdotes: Newton was first exposed to the world of mathematics when he came across Euclid's Elements in a bookstore; he was able to quickly follow the work, although he had little mathematical background to begin with. Having found the work easy reading, Newton became …

Sir Isaac Newton's Mathematical Principles of Natural Philosophy and his System of the World, Volume 1: The Motion of Bodies, by Newton, Sir Isaac, revised by Florian Cajori, and a great selection of related books, art and collectibles available now at AbeBooks.com.

Newton's Gift: How Sir Isaac Newton Unlocked the System of the World (2000). 256 pp. ISBN 0-684-84392-7. Buchwald, Jed Z. and Cohen, I. Bernard, eds., Isaac Newton …

Newton's "System of the World" explains Kepler's laws: the law of elliptical orbits; inverse-square, central forces produce conic-section orbits.
{"url":"https://pamperrystudio.com/swan-river/the-system-of-the-world-isaac-newton-pdf.php","timestamp":"2024-11-04T11:59:43Z","content_type":"text/html","content_length":"64254","record_id":"<urn:uuid:2f23c6d8-6cbf-4873-b309-9b9861d91078>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00402.warc.gz"}
On the core of a unicyclic graph

A set S ⊆ V is independent in a graph G = (V, E) if no two vertices from S are adjacent. By core(G) we mean the intersection of all maximum independent sets. The independence number α(G) is the cardinality of a maximum independent set, while μ(G) is the size of a maximum matching in G. A connected graph having only one cycle, say C, is a unicyclic graph. In this paper we prove that if G is a unicyclic graph of order n and n - 1 = α(G) + μ(G), then core(G) coincides with the union of cores of all trees in G - C.

• Core
• König–Egerváry graph
• Matching
• Maximum independent set
• Unicyclic graph
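A quick illustrative check of the statement (an added example, not taken from the paper): for the triangle G = C_3 we have n = 3, α(G) = 1 and μ(G) = 1, so α(G) + μ(G) = 2 = n - 1. Its maximum independent sets are the three single-vertex sets, whose intersection is empty, so core(G) = ∅; and since G - C has no vertices, the union of cores of the trees in G - C is also empty, in agreement with the theorem.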
{"url":"https://cris.ariel.ac.il/en/publications/on-the-core-of-a-unicyclic-graph-3","timestamp":"2024-11-03T13:23:53Z","content_type":"text/html","content_length":"51270","record_id":"<urn:uuid:47415b83-37e6-4b1a-aae1-d537522840de>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00308.warc.gz"}
Functions for Character Table Constructions

5 Functions for Character Table Constructions

The functions described in this chapter deal with the construction of character tables from other character tables. So they fit to the functions in Section Reference: Constructing Character Tables from Others. But since they are used in situations that are typical for the GAP Character Table Library, they are described here.

An important ingredient of the constructions is the description of the action of a group automorphism on the classes by a permutation. In practice, these permutations are usually chosen from the group of table automorphisms of the character table in question, see AutomorphismsOfTable (Reference: AutomorphismsOfTable).

Section 5.1 deals with groups of the structure M.G.A, where the upwards extension G.A acts suitably on the central extension M.G. Section 5.2 deals with groups that have a factor group of type S_3. Section 5.3 deals with upward extensions of a group by a Klein four group. Section 5.4 deals with downward extensions of a group by a Klein four group. Section 5.6 describes the construction of certain Brauer tables. Section 5.7 deals with special cases of the construction of character tables of central extensions from known character tables of suitable factor groups. Section 5.8 documents the functions used to encode certain tables in the GAP Character Table Library. Examples can be found in [Breb] and [Bref].

5.1 Character Tables of Groups of Structure M.G.A

For the functions in this section, let H be a group with normal subgroups N and M such that H/N is cyclic, M ≤ N holds, and such that each irreducible character of N that does not contain M in its kernel induces irreducibly to H. (This is satisfied for example if N has prime index in H and M is a group of prime order that is central in N but not in H.) Let G = N/M and A = H/N, so H has the structure M.G.A. For some examples, see [Bre11].

5.1-1 PossibleCharacterTablesOfTypeMGA

‣ PossibleCharacterTablesOfTypeMGA( tblMG, tblG, tblGA, orbs, identifier ) ( function )

Let H, N, and M be as described at the beginning of the section. Let tblMG, tblG, tblGA be the ordinary character tables of the groups M.G = N, G, and G.A = H/M, respectively, and orbs be the list of orbits on the class positions of tblMG that is induced by the action of H on M.G. Furthermore, let the class fusions from tblMG to tblG and from tblG to tblGA be stored on tblMG and tblG, respectively (see StoreFusion (Reference: StoreFusion)).

PossibleCharacterTablesOfTypeMGA returns a list of records describing all possible ordinary character tables for groups H that are compatible with the arguments. Note that in general there may be several possible groups H, and it may also be that "character tables" are constructed for which no group exists. Each of the records in the result has the following components.

table
a possible ordinary character table for H, and

MGfusMGA
the fusion map from tblMG into the table stored in table.

The possible tables differ w. r. t. some power maps, and perhaps element orders and table automorphisms; in particular, the MGfusMGA component is the same in all records. The returned tables have the Identifier (Reference: Identifier for character tables) value identifier. The classes of these tables are sorted as follows. First come the classes contained in M.G, sorted compatibly with the classes in tblMG, then the classes in H ∖ M.G follow, in the same ordering as the classes of G.A ∖ G.
5.1-2 BrauerTableOfTypeMGA ‣ BrauerTableOfTypeMGA( modtblMG, modtblGA, ordtblMGA ) ( function ) Let H, N, and M be as described at the beginning of the section, let modtblMG and modtblGA be the p-modular character tables of the groups N and H/M, respectively, and let ordtblMGA be the p-modular Brauer table of H, for some prime integer p. Furthermore, let the class fusions from the ordinary character table of modtblMG to ordtblMGA and from ordtblMGA to the ordinary character table of modtblGA be stored. BrauerTableOfTypeMGA returns the p-modular character table of H. 5.1-3 PossibleActionsForTypeMGA ‣ PossibleActionsForTypeMGA( tblMG, tblG, tblGA ) ( function ) Let the arguments be as described for PossibleCharacterTablesOfTypeMGA (5.1-1). PossibleActionsForTypeMGA returns the set of orbit structures Ω on the class positions of tblMG that can be induced by the action of H on the classes of M.G in the sense that Ω is the set of orbits of a table automorphism of tblMG (see AutomorphismsOfTable (Reference: AutomorphismsOfTable)) that is compatible with the stored class fusions from tblMG to tblG and from tblG to tblGA. Note that the number of such orbit structures can be smaller than the number of the underlying table automorphisms. Information about the progress is reported if the info level of InfoCharacterTable (Reference: InfoCharacterTable) is at least 1 (see SetInfoLevel (Reference: InfoLevel)). 5.2 Character Tables of Groups of Structure G.S_3 5.2-1 CharacterTableOfTypeGS3 ‣ CharacterTableOfTypeGS3( tbl, tbl2, tbl3, aut, identifier ) ( function ) ‣ CharacterTableOfTypeGS3( modtbl, modtbl2, modtbl3, ordtbls3, identifier ) ( function ) Let H be a group with a normal subgroup G such that H/G ≅ S_3, the symmetric group on three points, and let G.2 and G.3 be preimages of subgroups of order 2 and 3, respectively, under the natural projection onto this factor group. In the first form, let tbl, tbl2, tbl3 be the ordinary character tables of the groups G, G.2, and G.3, respectively, and aut be the permutation of classes of tbl3 induced by the action of H on G.3. Furthermore assume that the class fusions from tbl to tbl2 and tbl3 are stored on tbl (see StoreFusion (Reference: StoreFusion)). In particular, the two class fusions must be compatible in the sense that the induced action on the classes of tbl describes an action of S_3. In the second form, let modtbl, modtbl2, modtbl3 be the p-modular character tables of the groups G, G.2, and G.3, respectively, and ordtbls3 be the ordinary character table of H. CharacterTableOfTypeGS3 returns a record with the following components. the ordinary or p-modular character table of H, respectively, the fusion map from tbl2 into the table of H, and the fusion map from tbl3 into the table of H. The returned table of H has the Identifier (Reference: Identifier for character tables) value identifier. The classes of the table of H are sorted as follows. First come the classes contained in G.3, sorted compatibly with the classes in tbl3, then the classes in H ∖ G.3 follow, in the same ordering as the classes of G.2 ∖ G. In fact the code is applicable in the more general case that H/G is a Frobenius group F = K C with abelian kernel K and cyclic complement C of prime order, see [Bref]. Besides F = S_3, e. g., the case F = A_4 is interesting. 5.2-2 PossibleActionsForTypeGS3 ‣ PossibleActionsForTypeGS3( tbl, tbl2, tbl3 ) ( function ) Let the arguments be as described for CharacterTableOfTypeGS3 (5.2-1). 
PossibleActionsForTypeGS3 returns the set of those table automorphisms (see AutomorphismsOfTable (Reference: AutomorphismsOfTable)) of tbl3 that can be induced by the action of H on the classes of tbl3. Information about the progress is reported if the info level of InfoCharacterTable (Reference: InfoCharacterTable) is at least 1 (see SetInfoLevel (Reference: InfoLevel)). 5.3 Character Tables of Groups of Structure G.2^2 The following functions are thought for constructing the possible ordinary character tables of a group of structure G.2^2 from the known tables of the three normal subgroups of type G.2. 5.3-1 PossibleCharacterTablesOfTypeGV4 ‣ PossibleCharacterTablesOfTypeGV4( tblG, tblsG2, acts, identifier[, tblGfustblsG2] ) ( function ) ‣ PossibleCharacterTablesOfTypeGV4( modtblG, modtblsG2, ordtblGV4[, ordtblsG2fusordtblG4] ) ( function ) Let H be a group with a normal subgroup G such that H/G is a Klein four group, and let G.2_1, G.2_2, and G.2_3 be the three subgroups of index two in H that contain G. In the first version, let tblG be the ordinary character table of G, let tblsG2 be a list containing the three character tables of the groups G.2_i, and let acts be a list of three permutations describing the action of H on the conjugacy classes of the corresponding tables in tblsG2. If the class fusions from tblG into the tables in tblsG2 are not stored on tblG (for example, because the three tables are equal) then the three maps must be entered in the list tblGfustblsG2. In the second version, let modtblG be the p-modular character table of G, modtblsG be the list of p-modular Brauer tables of the groups G.2_i, and ordtblGV4 be the ordinary character table of H. In this case, the class fusions from the ordinary character tables of the groups G.2_i to ordtblGV4 can be entered in the list ordtblsG2fusordtblG4. PossibleCharacterTablesOfTypeGV4 returns a list of records describing all possible (ordinary or p-modular) character tables for groups H that are compatible with the arguments. Note that in general there may be several possible groups H, and it may also be that "character tables" are constructed for which no group exists. Each of the records in the result has the following components. a possible (ordinary or p-modular) character table for H, and the list of fusion maps from the tables in tblsG2 into the table component. The possible tables differ w.r.t. the irreducible characters and perhaps the table automorphisms; in particular, the G2fusGV4 component is the same in all records. The returned tables have the Identifier (Reference: Identifier for character tables) value identifier. The classes of these tables are sorted as follows. First come the classes contained in G, sorted compatibly with the classes in tblG, then the outer classes in the tables in tblsG2 follow, in the same ordering as in these tables. 5.3-2 PossibleActionsForTypeGV4 ‣ PossibleActionsForTypeGV4( tblG, tblsG2 ) ( function ) Let the arguments be as described for PossibleCharacterTablesOfTypeGV4 (5.3-1). PossibleActionsForTypeGV4 returns the list of those triples [ π_1, π_2, π_3 ] of permutations for which a group H may exist that contains G.2_1, G.2_2, G.2_3 as index 2 subgroups which intersect in the index 4 subgroup G. Information about the progress is reported if the level of InfoCharacterTable (Reference: InfoCharacterTable) is at least 1 (see SetInfoLevel (Reference: InfoLevel)). 
5.4 Character Tables of Groups of Structure 2^2.G The following functions are thought for constructing the possible ordinary or Brauer character tables of a group of structure 2^2.G from the known tables of the three factor groups modulo the normal order two subgroups in the central Klein four group. Note that in the ordinary case, only a list of possibilities can be computed, whereas in the modular case, where the ordinary character table is assumed to be known, the desired table is uniquely determined. 5.4-1 PossibleCharacterTablesOfTypeV4G ‣ PossibleCharacterTablesOfTypeV4G( tblG, tbls2G, id[, fusions] ) ( function ) ‣ PossibleCharacterTablesOfTypeV4G( tblG, tbl2G, aut, id ) ( function ) Let H be a group with a central subgroup N of type 2^2, and let Z_1, Z_2, Z_3 be the order 2 subgroups of N. In the first form, let tblG be the ordinary character table of H/N, and tbls2G be a list of length three, the entries being the ordinary character tables of the groups H/Z_i. In the second form, let tbl2G be the ordinary character table of H/Z_1 and aut be a permutation; here it is assumed that the groups Z_i are permuted under an automorphism σ of order 3 of H, and that σ induces the permutation aut on the classes of tblG. The class fusions onto tblG are assumed to be stored on the tables in tbls2G or tbl2G, respectively, except if they are explicitly entered via the optional argument fusions. PossibleCharacterTablesOfTypeV4G returns the list of all possible character tables for H in this situation. The returned tables have the Identifier (Reference: Identifier for character tables) value id. 5.4-2 BrauerTableOfTypeV4G ‣ BrauerTableOfTypeV4G( ordtblV4G, modtbls2G ) ( function ) ‣ BrauerTableOfTypeV4G( ordtblV4G, modtbl2G, aut ) ( function ) Let H be a group with a central subgroup N of type 2^2, and let ordtblV4G be the ordinary character table of H. Let Z_1, Z_2, Z_3 be the order 2 subgroups of N. In the first form, let modtbls2G be the list of the p-modular Brauer tables of the factor groups H/Z_1, H/Z_2, and H/Z_3, for some prime integer p. In the second form, let modtbl2G be the p-modular Brauer table of H/Z_1 and aut be a permutation; here it is assumed that the groups Z_i are permuted under an automorphism σ of order 3 of H, and that σ induces the permutation aut on the classes of the ordinary character table of H that is stored in ordtblV4G. The class fusions from ordtblV4G to the ordinary character tables of the tables in modtbls2G or modtbl2G are assumed to be stored. BrauerTableOfTypeV4G returns the p-modular character table of H. 5.5 Character Tables of Subdirect Products of Index Two The following function is thought for constructing the (ordinary or Brauer) character tables of certain subdirect products from the known tables of the factor groups and normal subgroups involved. 5.5-1 CharacterTableOfIndexTwoSubdirectProduct ‣ CharacterTableOfIndexTwoSubdirectProduct( tblH1, tblG1, tblH2, tblG2, identifier ) ( function ) Returns: a record containing the character table of the subdirect product G that is described by the first four arguments. Let tblH1, tblG1, tblH2, tblG2 be the character tables of groups H_1, G_1, H_2, G_2, such that H_1 and H_2 have index two in G_1 and G_2, respectively, and such that the class fusions corresponding to these embeddings are stored on tblH1 and tblH2, respectively. In this situation, the direct product of G_1 and G_2 contains a unique subgroup G of index two that contains the direct product of H_1 and H_2 but does not contain any of the groups G_1, G_2.
The function CharacterTableOfIndexTwoSubdirectProduct returns a record with the following components: the character table of G, the class fusion from tblH1 into the table of G, and the class fusion from tblH2 into the table of G. If the first four arguments are ordinary character tables then the fifth argument identifier must be a string; this is used as the Identifier (Reference: Identifier for character tables) value of the result table. If the first four arguments are Brauer character tables for the same characteristic then the fifth argument must be the ordinary character table of the desired subdirect product. 5.5-2 ConstructIndexTwoSubdirectProduct ‣ ConstructIndexTwoSubdirectProduct( tbl, tblH1, tblG1, tblH2, tblG2, permclasses, permchars ) ( function ) ConstructIndexTwoSubdirectProduct constructs the irreducible characters of the ordinary character table tbl of the subdirect product of index two in the direct product of tblG1 and tblG2, which contains the direct product of tblH1 and tblH2 but does not contain any of the direct factors tblG1, tblG2. W. r. t. the default ordering obtained from that given by CharacterTableDirectProduct (Reference: CharacterTableDirectProduct), the columns and the rows of the matrix of irreducibles are permuted with the permutations permclasses and permchars, respectively. 5.5-3 ConstructIndexTwoSubdirectProductInfo ‣ ConstructIndexTwoSubdirectProductInfo( tbl[, tblH1, tblG1, tblH2, tblG2] ) ( function ) Returns: a list of construction descriptions, or a construction description, or fail. Called with one argument tbl, an ordinary character table of the group G, say, ConstructIndexTwoSubdirectProductInfo analyzes the possibilities to construct tbl from character tables of subgroups H_1, H_2 and factor groups G_1, G_2, using CharacterTableOfIndexTwoSubdirectProduct (5.5-1). The return value is a list of records with the following components: the list of class positions of H_1, H_2 in tbl; the list of orders of H_1, H_2; the list of Identifier (Reference: Identifier for character tables) values of the GAP library tables of the factors G_2, G_1 of G by H_1, H_2, where the entry is fail if no such table is available; and the list of Identifier (Reference: Identifier for character tables) values of the GAP library tables of the subgroups H_2, H_1 of G, where the entries are fail if no such tables are available. If the returned list is empty then either tbl does not have the desired structure as a subdirect product, or tbl is in fact a nontrivial direct product. Called with five arguments, the ordinary character tables of G, H_1, G_1, H_2, G_2, ConstructIndexTwoSubdirectProductInfo returns a list that can be used as the ConstructionInfoCharacterTable (3.7-4) value for the character table of G from the other four character tables using CharacterTableOfIndexTwoSubdirectProduct (5.5-1); if this is not possible then fail is returned. 5.6 Brauer Tables of Extensions by p-regular Automorphisms As for the construction of Brauer character tables from known tables, the functions PossibleCharacterTablesOfTypeMGA (5.1-1), CharacterTableOfTypeGS3 (5.2-1), and PossibleCharacterTablesOfTypeGV4 (5.3-1) work for both ordinary and Brauer tables. The following function is designed specially for Brauer tables. 5.6-1 IBrOfExtensionBySingularAutomorphism ‣ IBrOfExtensionBySingularAutomorphism( modtbl, act ) ( function ) Let modtbl be a p-modular Brauer table of the group G, say, and suppose that the group H, say, is an upward extension of G by an automorphism of order p.
The second argument act describes the action of this automorphism. It can be either a permutation of the columns of modtbl, or a list of the H-orbits on the columns of modtbl, or the ordinary character table of H such that the class fusion from the ordinary table of modtbl into this table is stored. In all these cases, IBrOfExtensionBySingularAutomorphism returns the values lists of the irreducible p-modular Brauer characters of H. Note that the table head of the p-modular Brauer table of H, in general without the Irr (Reference: Irr) attribute, can be obtained by applying CharacterTableRegular (Reference: CharacterTableRegular) to the ordinary character table of H, but IBrOfExtensionBySingularAutomorphism can be used also if the ordinary character table of H is not known, and just the p-modular character table of G and the action of H on the classes of G are given. 5.7 Character Tables of Coprime Central Extensions 5.7-1 CharacterTableOfCommonCentralExtension ‣ CharacterTableOfCommonCentralExtension( tblG, tblmG, tblnG, id ) ( function ) Let tblG be the ordinary character table of a group G, say, and let tblmG and tblnG be the ordinary character tables of central extensions m.G and n.G of G by cyclic groups of prime orders m and n, respectively, with m ≠ n. We assume that the factor fusions from tblmG and tblnG to tblG are stored on the tables. CharacterTableOfCommonCentralExtension returns a record with the following components: the character table t, say, of the corresponding central extension of G by a cyclic group of order mn that factors through m.G and n.G, where the Identifier (Reference: Identifier for character tables) value of this table is id; a flag that is true if the Irr (Reference: Irr) value is stored in t, and false otherwise; and the list of irreducibles of t that are known, which contains the inflated characters of the factor groups m.G and n.G, plus those irreducibles that were found in tensor products of characters of these groups. Note that the conjugacy classes and the power maps of t are uniquely determined by the input data. Concerning the irreducible characters, we try to extract them from the tensor products of characters of the given factor groups by reducing with known irreducibles and applying the LLL algorithm (see ReducedClassFunctions (Reference: ReducedClassFunctions) and LLL (Reference: LLL)). 5.8 Construction Functions used in the Character Table Library The following functions are used in the GAP Character Table Library, for encoding table constructions via the mechanism that is based on the attribute ConstructionInfoCharacterTable (3.7-4). All construction functions take as their first argument a record that describes the table to be constructed, and the function adds only those components that are not yet contained in this record. 5.8-1 ConstructMGA ‣ ConstructMGA( tbl, subname, factname, plan, perm ) ( function ) ConstructMGA constructs the irreducible characters of the ordinary character table tbl of a group m.G.a where the automorphism a (a group of prime order) of m.G acts nontrivially on the central subgroup m of m.G. subname is the name of the subgroup m.G which is a (not necessarily cyclic) central extension of the (not necessarily simple) group G, factname is the name of the factor group G.a. Then the faithful characters of tbl are induced from m.G. plan is a list, each entry being a list containing positions of characters of m.G that form an orbit under the action of a (the induction of characters is encoded this way).
perm is the permutation that must be applied to the list of characters that is obtained on appending the faithful characters to the inflated characters of the factor group. A nonidentity permutation occurs for example for groups of structure 12.G.2 that are encoded via the subgroup 12.G and the factor group 6.G.2, where the faithful characters of 4.G.2 shall precede those of 6.G.2, as in the Examples where ConstructMGA is used to encode library tables are the tables of 3.F_{3+}.2 (subgroup 3.F_{3+}, factor group F_{3+}.2) and 12_1.U_4(3).2_2 (subgroup 12_1.U_4(3), factor group 6_1.U_4 5.8-2 ConstructMGAInfo ‣ ConstructMGAInfo( tblmGa, tblmG, tblGa ) ( function ) Let tblmGa be the ordinary character table of a group of structure m.G.a where the factor group of prime order a acts nontrivially on the normal subgroup of order m that is central in m.G, tblmG be the character table of m.G, and tblGa be the character table of the factor group G.a. ConstructMGAInfo returns the list that is to be stored in the library version of tblmGa: the first entry is the string "ConstructMGA", the remaining four entries are the last four arguments for the call to ConstructMGA (5.8-1). 5.8-3 ConstructGS3 ‣ ConstructGS3( tbls3, tbl2, tbl3, ind2, ind3, ext, perm ) ( function ) ‣ ConstructGS3Info( tbl2, tbl3, tbls3 ) ( function ) ConstructGS3 constructs the irreducibles of an ordinary character table tbls3 of type G.S_3 from the tables with names tbl2 and tbl3, which correspond to the groups G.2 and G.3, respectively. ind2 is a list of numbers referring to irreducibles of tbl2. ind3 is a list of pairs, each referring to irreducibles of tbl3. ext is a list of pairs, each referring to one irreducible character of tbl2 and one of tbl3. perm is a permutation that must be applied to the irreducibles after the construction. ConstructGS3Info returns a record with the components ind2, ind3, ext, perm, and list, as are needed for ConstructGS3. 5.8-4 ConstructV4G ‣ ConstructV4G( tbl, facttbl, aut ) ( function ) Let tbl be the character table of a group of type 2^2.G where an outer automorphism of order 3 permutes the three involutions in the central 2^2. Let aut be the permutation of classes of tbl induced by that automorphism, and facttbl be the name of the character table of the factor group 2.G. Then ConstructV4G constructs the irreducible characters of tbl from that information. 5.8-5 ConstructProj ‣ ConstructProj( tbl, irrinfo ) ( function ) ‣ ConstructProjInfo( tbl, kernel ) ( function ) ConstructProj constructs the irreducible characters of the record encoding the ordinary character table tbl from projective characters of tables of factor groups, which are stored in the ProjectivesInfo (3.7-2) value of the smallest factor; the information about the name of this factor and the projectives to take is stored in irrinfo. ConstructProjInfo takes an ordinary character table tbl and a list kernel of class positions of a cyclic kernel of order dividing 12, and returns a record with the components a character table that is permutation isomorphic with tbl, and sorted such that classes that differ only by multiplication with elements in the classes of kernel are consecutive, a record being the entry for the projectives list of the table of the factor of tbl by kernel, describing this part of the irreducibles of tbl, and the value of irrinfo that is needed for constructing the irreducibles of the tbl component of the result (not the irreducibles of the argument tbl!) via ConstructProj. 
5.8-6 ConstructDirectProduct ‣ ConstructDirectProduct( tbl, factors[, permclasses, permchars] ) ( function ) The direct product of the library character tables described by the list factors of table names is constructed using CharacterTableDirectProduct (Reference: CharacterTableDirectProduct), and all its components that are not yet stored on tbl are added to tbl. The ComputedClassFusions (Reference: ComputedClassFusions) value of tbl is enlarged by the factor fusions from the direct product to the factors. If the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly. factors must have length at least two; use ConstructPermuted (5.8-11) in the case of only one factor. 5.8-7 ConstructCentralProduct ‣ ConstructCentralProduct( tbl, factors, Dclasses[, permclasses, permchars] ) ( function ) The library table tbl is completed with help of the table obtained by taking the direct product of the tables with names in the list factors, and then factoring out the normal subgroup that is given by the list Dclasses of class positions. If the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly. 5.8-8 ConstructSubdirect ‣ ConstructSubdirect( tbl, factors, choice ) ( function ) The library table tbl is completed with help of the table obtained by taking the direct product of the tables with names in the list factors, and then taking the table consisting of the classes in the list choice. Note that in general, the restriction to the classes of a normal subgroup is not sufficient for describing the irreducible characters of this normal subgroup. 5.8-9 ConstructWreathSymmetric ‣ ConstructWreathSymmetric( tbl, subname, n[, permclasses, permchars] ) ( function ) The wreath product of the library character table with identifier value subname with the symmetric group on n points is constructed using CharacterTableWreathSymmetric (Reference: CharacterTableWreathSymmetric), and all its components that are not yet stored on tbl are added to tbl. If the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly. 5.8-10 ConstructIsoclinic ‣ ConstructIsoclinic( tbl, factors[, nsg[, centre]][, permclasses, permchars] ) ( function ) constructs first the direct product of library tables as given by the list factors of admissible character table names, and then constructs the isoclinic table of the result. If the argument nsg is present and a record or a list then CharacterTableIsoclinic (Reference: CharacterTableIsoclinic) gets called, and nsg (as well as centre if present) is passed to this function. In both cases, if the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly. 5.8-11 ConstructPermuted ‣ ConstructPermuted( tbl, libnam[, permclasses, permchars] ) ( function ) The library table tbl is computed from the library table with the name libnam, by permuting the classes and the characters by the permutations permclasses and permchars, respectively. So tbl and the library table with the name libnam are permutation equivalent. With the more general function ConstructAdjusted (5.8-12), one can derive character tables that are not necessarily permutation equivalent, by additionally replacing some defining data. The two permutations are optional. 
If they are missing then the lists of irreducible characters and the power maps of the two character tables coincide. However, different class fusions may be stored on the two tables. This is used for example in situations where a group has several classes of isomorphic maximal subgroups whose class fusions are different; different character tables (with different identifiers) are stored for the different classes, each with appropriate class fusions, and all these tables except the one for the first class of subgroups can be derived from this table via ConstructPermuted. 5.8-12 ConstructAdjusted ‣ ConstructAdjusted( tbl, libnam, pairs[, permclasses, permchars] ) ( function ) The defining attribute values of the library table tbl are given by the attribute values described by the list pairs and –for those attributes which do not appear in pairs– by the attribute values of the library table with the name libnam, whose classes and characters have been permuted by the optional permutations permclasses and permchars, respectively. This construction can be used to derive a character table from another library table (the one with the name libnam) that is not permutation equivalent to this table. For example, it may happen that the character tables of a split and a nonsplit extension differ only by some power maps and element orders. In this case, one can encode one of the tables via ConstructAdjusted, by prescribing just the power maps in the list pairs. If no replacement of components is needed then one should better use ConstructPermuted (5.8-11), because the system can then exploit the fact that the two tables are permutation equivalent. 5.8-13 ConstructFactor ‣ ConstructFactor( tbl, libnam, kernel ) ( function ) The library table tbl is completed with help of the library table with name libnam, by factoring out the classes in the list kernel.
{"url":"https://docs.gap-system.org/pkg/ctbllib/doc/chap5.html","timestamp":"2024-11-12T07:29:44Z","content_type":"application/xhtml+xml","content_length":"73383","record_id":"<urn:uuid:0f444f01-8c9b-44d6-b2e1-8782809be1d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00863.warc.gz"}
11.6 The Big Bang - University Physics Volume 3 | OpenStax By the end of this section, you will be able to: • Explain the expansion of the universe in terms of a Hubble graph and cosmological redshift • Describe the analogy between cosmological expansion and an expanding balloon • Use Hubble’s law to make predictions about the measured speed of distant galaxies We have been discussing elementary particles, which are some of the smallest things we can study. Now we are going to examine what we know about the universe, which is the biggest thing we can study. The link between these two topics is high energy: The study of particle interactions requires very high energies, and the highest energies we know about existed during the early evolution of the universe. Some physicists think that the unified force theories we described in the preceding section may actually have governed the behavior of the universe in its earliest moments. Hubble’s Law In 1929, Edwin Hubble published one of the most important discoveries in modern astronomy. Hubble discovered that (1) galaxies appear to move away from Earth and (2) the velocity of recession (v) is proportional to the distance (d) of the galaxy from Earth. Both v and d can be determined using stellar light spectra. A best fit to the sample illustrative data is given in Figure 11.18. (Hubble’s original plot had a considerable scatter, but a general trend was still evident.) The trend in the data suggests the simple proportional relationship $v = H_0 d$, where $H_0 = 70 \text{ km/s/Mpc}$ is known as Hubble’s constant. (Note: 1 Mpc is one megaparsec or one million parsecs, where one parsec is 3.26 light-years.) This relationship, called Hubble’s law, states that distant stars and galaxies recede away from us at a speed of 70 km/s for every one megaparsec of distance from us. Hubble’s constant corresponds to the slope of the line in Figure 11.18. Hubble’s constant is a bit of a misnomer, because it varies with time. The value given here is only its value today. Watch this video to learn more about the history of Hubble’s constant. Hubble’s law describes an average behavior of all but the closest galaxies. For example, a galaxy 100 Mpc away (as determined by its size and brightness) typically moves away from us at a speed of $v = H_0 d = (70 \text{ km/s/Mpc})(100 \text{ Mpc}) = 7000 \text{ km/s}$. This speed may vary due to interactions with neighboring galaxies. Conversely, if a galaxy is found to be moving away from us at a speed of 100,000 km/s based on its redshift, it is at a distance $d = v/H_0 = (100{,}000 \text{ km/s})/(70 \text{ km/s/Mpc}) \approx 1400 \text{ Mpc}$, or roughly 4.6 billion light-years. This last calculation is approximate because it assumes the expansion rate was the same 5 billion years ago as it is now. Big Bang Model Scientists who study the origin, evolution, and ultimate fate of the universe (cosmology) believe that the universe began in an explosion, called the Big Bang, approximately 13.7 billion years ago. This explosion was not an explosion of particles through space, like fireworks, but a rapid expansion of space itself. The distances and velocities of the outward-going stars and galaxies permit us to estimate when all matter in the universe was once together—at the beginning of time. Scientists often explain the Big Bang expansion using an inflated-balloon model (Figure 11.19). Dots marked on the surface of the balloon represent galaxies, and the balloon skin represents four-dimensional space-time (Relativity). As the balloon is inflated, every dot “sees” the other dots moving away. This model yields two insights. First, the expansion is observed by all observers in the universe, no matter where they are located. 
The “center of expansion” does not exist, so Earth does not reside at the “privileged” center of the expansion (see Exercise 11.24). Second, as mentioned already, the Big Bang expansion is due to the expansion of space, not the increased separation of galaxies in ordinary (static) three-dimensional space. This cosmological expansion affects all things: dust, stars, planets, and even light. Thus, the wavelength of light $(\lambda)$ emitted by distant galaxies is “stretched” out. This makes the light appear “redder” (lower energy) to the observer—a phenomenon called cosmological redshift. Cosmological redshift is measurable only for galaxies farther away than 50 million light-years. Calculating Speeds and Galactic Distances A galaxy is observed to have a large redshift; this value indicates a galaxy moving close to the speed of light. Using the relativistic redshift formula (given in Relativity), determine (a) How fast is the galaxy receding with respect to Earth? (b) How far away is the galaxy? We need to use the relativistic Doppler formula to determine speed from redshift and then use Hubble’s law to find the distance from the speed. a. According to the relativistic redshift formula: $1 + z = \sqrt{\dfrac{1+\beta}{1-\beta}}$, where $\beta = v/c$. Substituting the value for z and solving for $\beta$, we get $\beta = 0.93$. This value implies that the speed of the galaxy is $2.8 \times 10^{8} \text{ m/s}$. b. Using Hubble’s law, we can find the distance to the galaxy if we know its recession velocity: $d = \dfrac{v}{H_0} = \dfrac{2.8 \times 10^{8} \text{ m/s}}{73.8 \times 10^{3} \text{ m/s per Mpc}} = 3.8 \times 10^{3} \text{ Mpc}$. Distant galaxies appear to move very rapidly away from Earth. The redshift of starlight from these galaxies can be used to determine the precise speed of recession, over 90% of the speed of light in this case. This motion is not due to the motion of the galaxy through space but to the expansion of space itself. Check Your Understanding 11.8 The light of a galaxy that moves away from us is “redshifted.” What occurs to the light of a galaxy that moves toward us? View this video to learn more about the cosmological expansion. Structure and Dynamics of the Universe At large scales, the universe is believed to be both isotropic and homogeneous. The universe is believed to be isotropic because it appears to be the same in all directions, and homogeneous because it appears to be the same in all places. A universe that is isotropic and homogeneous is said to be smooth. The assumption of a smooth universe is supported by the Automated Plate Measurement Galaxy Survey conducted in the 1980s and 1990s (Figure 11.20). However, even before these data were collected, the assumption of a smooth universe was used by theorists to simplify models of the expansion of the universe. This assumption of a smooth universe is sometimes called the cosmological principle. The fate of this expanding and smooth universe is an open question. According to the general theory of relativity, an important way to characterize the state of the universe is through the space-time metric $ds^{2} = c^{2}\,dt^{2} - a^{2}(t)\,d\Sigma^{2}$, where c is the speed of light, a is a scale factor (a function of time), and $d\Sigma$ is the length element of the space. In spherical coordinates $(r, \theta, \phi)$, this length element can be written $d\Sigma^{2} = \dfrac{dr^{2}}{1-kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta \, d\phi^{2}\right)$, where k is a constant with units of inverse area that describes the curvature of space. 
This constant distinguishes between open, closed, and flat universes: • $k = 0$ (flat universe) • $k > 0$ (closed universe, such as a sphere) • $k < 0$ (open universe, such as a hyperbola) In terms of the scale factor a, this metric also distinguishes between static, expanding, and shrinking universes: • $a = 1$ (static universe) • $da/dt > 0$ (expanding universe) • $da/dt < 0$ (shrinking universe) The scale factor a and the curvature k are determined from Einstein’s general theory of relativity. If we treat the universe as a gas of galaxies of density $\rho$ and pressure p, and assume $k = 0$ (a flat universe), then the scale factor a is given by $\dfrac{d^{2}a/dt^{2}}{a} = -\dfrac{4\pi G}{3}\left(\rho + 3p\right)$, where G is the universal gravitational constant. (For ordinary matter, we expect the quantity $\rho + 3p$ to be greater than zero.) If the scale factor is positive ($a > 0$), the value of the scale factor “decelerates” ($d^{2}a/dt^{2} < 0$), and the expansion of the universe slows down over time. If the numerator is less than zero (somehow, the pressure of the universe is negative), the value of the scale factor “accelerates,” and the expansion of the universe speeds up over time. According to recent cosmological data, the universe appears to be expanding. Many scientists explain the current state of the universe in terms of a very rapid expansion in the early universe. This expansion is called inflation.
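To make the worked example above concrete, here is a minimal Python sketch (not part of the OpenStax text) that reproduces the redshift-to-speed-to-distance chain using the standard relativistic Doppler relation and Hubble’s law; the Hubble-constant value of 73.8 km/s/Mpc and the redshift z = 4.5 are illustrative assumptions chosen to roughly match the numbers quoted above.

import math

H0 = 73.8   # Hubble constant in km/s per Mpc (value used in the worked example above)
C = 3.0e5   # speed of light in km/s

def beta_from_redshift(z):
    # Relativistic Doppler relation: 1 + z = sqrt((1 + beta) / (1 - beta)).
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

def distance_mpc(v_km_s):
    # Hubble's law d = v / H0, with v in km/s and d in Mpc.
    return v_km_s / H0

z = 4.5                        # assumed redshift, not stated explicitly above
beta = beta_from_redshift(z)
v = beta * C                   # recession speed in km/s
d = distance_mpc(v)
print(f"beta = {beta:.2f}, v = {v:.2e} km/s, d = {d:.1e} Mpc")
# With these inputs beta is roughly 0.93-0.94, v is about 2.8e5 km/s (2.8e8 m/s),
# and d is about 3.8e3 Mpc, consistent with the example above.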
{"url":"https://openstax.org/books/university-physics-volume-3/pages/11-6-the-big-bang","timestamp":"2024-11-05T18:56:17Z","content_type":"text/html","content_length":"395674","record_id":"<urn:uuid:0f2c85e2-d36b-48bd-a197-ab1bc18892d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00296.warc.gz"}
The norms of graph spanners A t-spanner of a graph G is a subgraph H in which all distances are preserved up to a multiplicative t factor. A classical result of Althöfer et al. is that for every integer k and every graph G, there is a (2k − 1)-spanner of G with at most O(n^{1+1/k}) edges. But for some settings the more interesting notion is not the number of edges, but the degrees of the nodes. This spurred interest in and study of spanners with small maximum degree. However, this is not necessarily a robust enough objective: we would like spanners that not only have small maximum degree, but also have “few” nodes of “large” degree. To interpolate between these two extremes, in this paper we initiate the study of graph spanners with respect to the ℓ_p-norm of their degree vector, thus simultaneously modeling the number of edges (the ℓ_1-norm) and the maximum degree (the ℓ_∞-norm). We give precise upper bounds for all ranges of p and stretch t: we prove that the greedy (2k − 1)-spanner has ℓ_p-norm of at most max(O(n), O(n^{(k+p)/(kp)})), and that this bound is tight (assuming the Erdős girth conjecture). We also study universal lower bounds, allowing us to give “generic” guarantees on the approximation ratio of the greedy algorithm which generalize and interpolate between the known approximations for the ℓ_1 and ℓ_∞ norms. Finally, we show that at least in some situations, the ℓ_p norm behaves fundamentally differently from ℓ_1 or ℓ_∞: there are regimes (p = 2 and stretch 3 in particular) where the greedy spanner has a provably superior approximation to the generic guarantee. Original language American English Title of host publication 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019 Editors Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, Stefano Leonardi Publisher Schloss Dagstuhl- Leibniz-Zentrum fur Informatik GmbH, Dagstuhl Publishing ISBN (Electronic) 9783959771092 State Published - 1 Jul 2019 Event 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019 - Patras, Greece Duration: 9 Jul 2019 → 12 Jul 2019 Publication series Name Leibniz International Proceedings in Informatics, LIPIcs Volume 132 Conference 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019 Country/Territory Greece City Patras Period 9/07/19 → 12/07/19 All Science Journal Classification (ASJC) codes
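As a small illustration of the objective studied in this abstract, the following Python sketch (not from the paper) computes the ℓ_p-norm of a graph's degree vector, the quantity that interpolates between the number of edges (the ℓ_1-norm, which equals twice the edge count) and the maximum degree (the ℓ_∞-norm); the toy graph used at the end is purely illustrative.

from collections import defaultdict

def degree_vector(edges):
    # Degree of every vertex of an undirected graph given as a list of edges.
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return list(deg.values())

def lp_norm(values, p):
    # l_p norm of a vector; p = float('inf') returns the maximum entry.
    if p == float("inf"):
        return max(values)
    return sum(x ** p for x in values) ** (1.0 / p)

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]  # a 5-cycle plus one chord
deg = degree_vector(edges)
print(lp_norm(deg, 1))             # 12.0, twice the number of edges
print(lp_norm(deg, 2))             # Euclidean norm of the degree vector
print(lp_norm(deg, float("inf")))  # 3, the maximum degree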
{"url":"https://cris.iucc.ac.il/en/publications/the-norms-of-graph-spanners","timestamp":"2024-11-06T13:49:00Z","content_type":"text/html","content_length":"44986","record_id":"<urn:uuid:48de1036-803e-466f-9d32-62684d0d69e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00114.warc.gz"}
How to Fix "IndexError: Invalid Index to Scalar Variable" in Python? Did you index into a NumPy array and get the error IndexError: invalid index to scalar variable in Python 🤔? When a computer programmer does not use the proper index position when accessing a value from a list or array, it results in a runtime error. The position we place inside the square brackets after the variable name to get a specific value from that list is known as the index. Understanding how square brackets function is crucial when attempting to retrieve a specific item from a list or nested list in Python. This IndexError: invalid index to scalar variable may occur whenever we use more square brackets than the data has dimensions. One of the most important aspects when discussing huge data with a linear data structure is indexing. Understanding how we must handle data for practical usage and showcase our data using indexes is equally important. Solving incorrect indexes to the scalar variable is the subject of this article. In this article, we’ll discuss the IndexError: invalid index to scalar variable in Python, why it occurs, and how to fix ⚒️it. So without further ado, let’s dive deep into the topic. Let’s go over a few examples that will show this error’s causes and possible solutions. Why Does the "IndexError: Invalid Index To The Scalar Variable" Error Occur in Python? As we’ve discussed, using incorrect indexing or the wrong number of square brackets causes the IndexError: invalid index to scalar variable. Let’s see an example 👇
import numpy as np
My_array = np.array([[9, 0], [8, 3], [2, 5], [4, 6]])
print("Array : ", My_array[0][1][2][3])
You can see in the above example that the program displays the error for an improper index to a scalar variable. It is due to the two-dimensionality of the NumPy array specified here. This indicates that each specific value from the NumPy array produced from a nested list may be represented using just two indices. But in this instance of print(), we are making inappropriate use of more than two tiers of indexing. How to Fix the "IndexError: Invalid Index To The Scalar Variable" Error in Python? To fix the error, first, we have to make sure that we are using the proper indexing for elements with the proper number of square brackets. We have two different alternate solutions: 1. Using the single-tier value. 2. Using the two-tier value. 1. Using The Single-tier Value In order to retrieve the lists stored inside the array, call them directly using the single-tier value.
import numpy as np
My_array = np.array([[9, 0], [8, 3], [2, 5], [4, 6]])
print("single-tier value : ", My_array[0], My_array[1], My_array[2], My_array[3])
single-tier value :  [9 0] [8 3] [2 5] [4 6]
As we have seen in the above example, because we have used a separate pair of brackets for each index, the Python interpreter recognizes that the values inside the square brackets correspond to the row indices 0, 1, 2, and 3, respectively. 2. Using The Two-tier Value The alternative approach is to use a two-tier value. Since the NumPy array is a two-dimensional array, we use two tiers of indexing in this case.
import numpy as np
My_array = np.array([[9, 0], [8, 3], [2, 5], [4, 6]])
print("Two-tier value: ", My_array[3][0])
Two-tier value:  4
Here we first access the element at index 3, which is [4, 6]; the second sub-index 0 then selects its first element, which is 4, so the output is 4. 
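As an additional safeguard not shown in the original article, one can check how many dimensions an array actually has before indexing it; the short sketch below uses NumPy's ndim attribute, and the particular array and index values are just illustrative.

import numpy as np

My_array = np.array([[9, 0], [8, 3], [2, 5], [4, 6]])
indices = (0, 1, 2, 3)  # more indices than the array has dimensions

if len(indices) > My_array.ndim:
    # Using all four indices would raise
    # "IndexError: invalid index to scalar variable",
    # so keep only as many indices as there are dimensions.
    indices = indices[:My_array.ndim]

print(My_array[indices])  # prints 0, the element in row 0, column 1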
To summarize the article on how to fix the IndexError: invalid index to scalar variable in Python, we’ve discussed why it occurs and how to fix it. Furthermore, we’ve seen the two approaches that help fix the IndexError: invalid index to scalar variable in Python: using the single-tier value and using the two-tier value. Programmers must pay particular attention to the index value and the number of square brackets when developing code to avoid the IndexError. There is a chance of IndexError: invalid index to scalar variable if the number of square brackets is incorrect, for example when a two-dimensional NumPy array is accessed with three-tier indexing. Therefore, understanding the various ways of expressing and retrieving NumPy array data from a specified variable is highly significant. Let’s have a quick recap of the topics discussed in this article. 1. Why does the IndexError: invalid index to a scalar variable occur? 2. How to fix the index error? 3. Use the single-tier value to fix the index error in Python. 4. Use the two-tier value to fix the index error in Python. If you’ve found this article helpful, don’t forget to share and comment below 👇 which solutions have helped you solve the problem.
{"url":"https://guidingcode.com/indexerror-invalid-index-to-scalar-variable-in-python/","timestamp":"2024-11-02T04:42:08Z","content_type":"text/html","content_length":"215643","record_id":"<urn:uuid:019e532d-55f5-4df8-a776-e5e4e3ea47a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00759.warc.gz"}
Effect of cutting angle on the performance of the head of a roadheader The cutting head is a key component of a roadheader’s cutting function, and its design level directly determines the performance of the whole machine to a certain extent. Taking the installation angle of the pick-shaped cutter on the head of the roadheader as a research object, a conversion relationship model between the mounting angle of the cutter and the angle of the cutting function was established. The rock cutting process of the cutting head was simulated by the LS-DYNA finite element software. The simulation model reveals the influence of the angle of teeth on the cutting resistance, load fluctuation and cutting ratio energy consumption. The simulation results show that the cutting resistance decreases gradually with increasing cutting angle. When the cutting angle is greater than 50°, the cutting effect is better. The artificial rock cutting moment determined by the experiment is compared with the simulation results, and the correctness of the established simulation model is verified. This research presents a theoretical basis for the improvement of the cutting performance of the cutting head of a roadheader and provides a theoretical reference for the study of other cutting problems. 1. Introduction The cutting head is an important part of the rock-breaking function of a roadheader and the most vulnerable part in the whole machine [1, 2]. Its structure includes a head body, an internal spline sleeve, a pick-shaped cutter, a tooth seat, a feed guide plate and a water spray device [3]. The composition of the cutting head is shown in Fig. 1. The cutting head is mainly responsible for axial drilling and radial offset cutting. To ensure easier cutting, more cutting teeth should be involved in cutting at all times [4]. The service life of the pick-shaped cutter directly determines the MTBF (mean time between failures) of the roadheader. The service life of the picks is not simply related to the material of the picks and the model of the picks; the service life of the picks is also inseparable from the arrangement and installation angle of the picks on the cutting head [5, 6]. Fig. 1Composition of the cutting head At present, domestic and foreign researchers have studied the force of the pick during the cutting process and provided corresponding prediction and calculation methods. The most representative method is the calculation method of the cutting force proposed by foreign scholar Evans in 1961 [7]. The model is based on the ideal cutting conditions of single-tooth plane cutting and symmetrical groove cutting. Ogruoz et al. [8] carried out a comprehensive experimental test of the cutting process, studied the effects of different rock types on the cutting force and the wear of the cutting head, and evaluated the cutting head’s cutting using the specific energy consumption effectiveness. Restner et al. [9] established a finite element model of the cutting machine's cutting process and adjusted the mechanical structure of the machine based on rock properties, cutting speed, turning speed, and cutting depth. SU et al. [10] simulated the cutting process of rock in PFC3D, and the average cutting forces determined by theoretical models, numerical simulations and experiments are relatively close. Domestic scholars Li Xiaohuo, Liu Chunsheng and others established prediction formulas for cutting resistance based on experiments and theoretical research [11, 12]. 
At present, although most researchers have used theoretical analysis and experimental tests to analyze the mechanism and force of crushing rock, the consistency of results obtained by different methods is not good. To date, there is no accepted and unified calculation method for calculating the force of rocks [13]. Fu Lin et al. used the dynamic simulation software LS-DYNA to perform a finite element simulation of the cutting process for single-tooth drilling. The calculation results showed that the cutting tooth bears a large feed resistance during the cutting process and that the cutting resistance decreases as a quadratic function as the cutting angle increases [14]. This article takes the installation angle of the pick on the head of the roadheader as the research object and establishes a conversion relationship model between the mounting angle of the pick and the angle of the cutting function. The LS-DYNA finite element software is used to simulate how the cutting head cuts the rock. The process simulation model analyzes the effects of different cutting angles on cutting resistance, load fluctuations and cutting ratio energy consumption and verifies these results through experiments to provide a theoretical reference for selecting the optimal cutting angle for the cutting teeth. 2. Definition of pick mounting angle 2.1. Cutting function angle and installation technology angle There are 6 degrees of freedom in the pick space for the cutting head of the roadheader, and its position is determined by the coordinates of the tip point ($Z$, $R$, $\theta$) and the mounting angle of the pick. The tooth point is determined according to the circumferential difference angle $\mathrm{\Delta }\theta$ of adjacent picks of the same helix and the helix rise angle $\alpha$ of the helix. To facilitate the installation of the pick and the analysis of the force, the mounting angle of the pick is defined by the cutting function and the installation process, respectively. The cutting function angle is an angle defined for analyzing the forces and rotation of the cutting teeth during the cutting process, and it indicates whether the cutting teeth cut into the object to be cut with a good attitude. It is defined as the cutting angle $\delta$, rotation angle $\epsilon$, and installation angle $\tau$. The installation process angle is the angle used when the pick is welded on the cutting head and is defined by the elevation angle $\gamma$, the rotation angle $\alpha$, and the chamfer $\beta$, as shown in Fig. 2. The conversion formula for the cutting function angle and the installation process angle can be derived from Fig. 1. 
The specific formula is as follows. The cutting function angle formula is derived from the installation process angle:
$\tau = \beta$, $\delta = \arccos\left(\cos\alpha \cos\gamma\right)$, $\epsilon = \arcsin\left(\sin\beta \sin\gamma + \cos\beta \cos\gamma \sin\alpha\right)$.
The formula for the installation process angle is derived from the cutting function angle:
$\beta = \tau$, $\gamma = \arcsin\left(\sin\tau \sin\epsilon + \cos\tau \sqrt{\sin^{2}\delta - \sin^{2}\epsilon}\right)$, $\alpha = \arccos\left(\dfrac{\cos\delta}{\sqrt{1 - \left(\sin\tau \sin\epsilon + \cos\tau \sqrt{\sin^{2}\delta - \sin^{2}\epsilon}\right)^{2}}}\right)$.
Fig. 2. Schematic diagram of the mounting angle of the pick 2.2. Selection of cutting angle analysis range Fig. 3 shows the force diagram of the pick cutting force, where $R_{1}$ is the support force of the rock mass on the pick, $R_{1}f$ is the friction generated by $R_{1}$, $R_{2}$ is the pressure on one side of the pick, $R_{2}f$ is the friction generated by $R_{2}$, $\delta$ is the cutting angle of the pick and $\theta$ is the half cone angle of the pick tip. Fig. 3. Force diagram of picks According to the geometric relationships, the expression for the cutting resistance can be obtained: $F_{x} = R_{1}f + R_{2}\sin\left(\delta + \theta + \phi\right)$. The cutting angle $\delta$ ranges from 40° to 60°, the half-taper angle $\theta$ of the pick is approximately 40°, and the friction angle $\phi$ is approximately 30°. Since the sine function is decreasing over this range of arguments, the cutting resistance decreases as the cutting angle increases. In the pick arrangement, the selection of the cutting angle should meet two conditions: 1) The tip of the pick tooth should contact the rock formation first; 2) The pick holder should not rub against the rock formation during cutting. The tip of the tooth cuts into the rock formation first. Due to the high hardness of the pick teeth, the tip of the alloy head is relatively sharp. When cutting the rock layer, the tip of the pick teeth should be sure to cut into the rock layer first. If the body of the pick tooth, rather than the tip, cuts first, this situation inevitably leads to serious wear of the teeth, which reduces the life of the pick. The first entry of the alloy head into the rock must meet $\overline{OA} > \overline{OB}$. The tip of the tooth of the pick alloy head contacting the rock is shown in Fig. 4.
The upper limit of the cutting angle $\delta$ can then be obtained:
$\delta < 90^{\circ} - \phi + \arccos\left(\dfrac{h\cos\phi}{H}\right)$,
where OA is the generatrix length of the alloy head; $h$ is the cutting thickness OB; $H$ is the extended pick length OD of the alloy head; ∠DOA is the pick half cone angle $\phi$; and ∠COB $= \delta + \phi - 90^{\circ}$. Fig. 4. Schematic diagram of the cusp point of the pick-tipped alloy head contacting the rock Fig. 5. Schematic diagram of the pick tooth seat and cutting groove The tooth seat does not rub against the rock formation. A schematic diagram of the tooth base and the trough is shown in Fig. 5. To avoid friction between the tooth base and the rock formation, it is necessary to ensure that point B on the tooth base does not penetrate the trough. It is known that the rotation radius of the tooth tip point $A$ is $R$, the distance from the tooth seat to the tooth tip point is $AC = H$, and the radius of the round face of the tooth seat is $BC = r$. To avoid friction between the tooth seat and the rock formation, we require $x_{B}^{2} + y_{B}^{2} < R^{2}$, which yields the following formula:
$\delta < \arcsin\left(\dfrac{H}{2R} + \dfrac{r\sqrt{4R^{2} - H^{2} - r^{2}}}{2R\sqrt{H^{2} + r^{2}}}\right)$.
The range of cutting angles obtained by calculating with the dimensions of the pick and the tooth seat is $34^{\circ} < \delta < 56^{\circ}$, and the actual value should be slightly smaller than this upper value. 3. Simulation analysis 3.1. Specific energy consumption The specific energy consumption is the energy consumed by the cutting head to cut a unit volume of coal and rock. Its size reflects the cutting efficiency of the roadheader and is one of the indicators used to measure the performance of the roadheader. The specific energy consumption is defined as the cutting energy consumed by the cutting head per unit volume of rock. The calculation formula is as follows. Assuming that the rock density $\rho$ involved in cutting is the same, we can obtain [15]:
$H_{w} = \dfrac{F_{jg}\,l}{V} = \rho\,\dfrac{F_{jg}\,l}{M}$,
where $M$ is the mass of cut rock, g; $H_{w}$ is the specific energy consumption, MJ/m^3; $\rho$ is the rock density, kg/m^3; $V$ is the volume of cut rock, mm^3; and $l$ is the unit cutting length of the pick, mm. 3.2. Coefficient of variation of pick cutting resistance The cutting conditions of the roadheader are complicated, resulting in violent vibration and high decibel noise during the work. In severe cases, the whole machine can be immobilized and unable to work normally. Therefore, it is proposed to use the load variation coefficient to reflect the fluctuations in the force of the cutting teeth during work. The cutting resistance variation coefficient is the ratio of the standard deviation of the cutting resistance to the average value of the cutting resistance. The calculation formula for the cutting force fluctuations is [15]:
$\delta\left(Z\right) = \dfrac{1}{\bar{Z}}\sqrt{\dfrac{1}{n-1}\sum_{i=1}^{n}\left(Z_{i} - \bar{Z}\right)^{2}}$.
3.3. Simulation analysis In the simulation analysis, the rock thickness is 50 mm, the width is 120 mm, and the installation parameters of the pick are the axis distance $Z = 0$, the cutting radius $r = 480$ mm, the circumferential angle $\theta = 0^{\circ}$, the rotation angle $\epsilon = 12.3^{\circ}$, and the mounting angle $\tau = 12^{\circ}$. The cutting angle simulation analysis is shown in Fig. 6, and the material properties of the rocks are shown in Table 1. 
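Before turning to the material data in Table 1, here is a short Python sketch (not part of the original paper) that evaluates the angle conversion derived in Section 2.1 in both directions and checks whether the resulting cutting angle lies in the admissible range 34°-56°; the input angles are illustrative assumptions rather than values taken from the paper.

import math

def to_cutting_function_angles(alpha, beta, gamma):
    # Installation process angles (alpha, beta, gamma) -> cutting function
    # angles (tau, delta, epsilon); all angles in degrees.
    a, b, g = map(math.radians, (alpha, beta, gamma))
    tau = beta
    delta = math.degrees(math.acos(math.cos(a) * math.cos(g)))
    eps = math.degrees(math.asin(math.sin(b) * math.sin(g)
                                 + math.cos(b) * math.cos(g) * math.sin(a)))
    return tau, delta, eps

def to_installation_angles(tau, delta, eps):
    # Cutting function angles (tau, delta, epsilon) -> installation process
    # angles (alpha, beta, gamma); all angles in degrees.
    t, d, e = map(math.radians, (tau, delta, eps))
    beta = tau
    root = math.sqrt(math.sin(d) ** 2 - math.sin(e) ** 2)
    s = math.sin(t) * math.sin(e) + math.cos(t) * root
    gamma = math.degrees(math.asin(s))
    alpha = math.degrees(math.acos(math.cos(d) / math.sqrt(1.0 - s ** 2)))
    return alpha, beta, gamma

# Illustrative (assumed) installation angles, not data from the paper:
alpha, beta, gamma = 30.0, 12.0, 40.0
tau, delta, eps = to_cutting_function_angles(alpha, beta, gamma)
print(round(delta, 2), 34.0 < delta < 56.0)     # cutting angle and admissible-range check
print(to_installation_angles(tau, delta, eps))  # round trip recovers roughly (30, 12, 40)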
Table 1. Mechanical parameters of rock materials
Name: rock; Density: 2.06e-3 g/mm^3; Elastic modulus: 8038 MPa; Poisson's ratio: 0.28; Expansion angle: 0 rad; Friction angle: 0.49 rad; Cohesion: 27 MPa; Compressive strength: 97 MPa; Tensile strength: 9.9 MPa
Solving and postprocessing: A pick-cut analysis model is established. The LS-DYNA software is used to solve the problem, and the simulation results for the cut rock formation are obtained. The stress cloud of the cut rock formation is shown in Fig. 6. To obtain the cutting resistance of the pick cutting the rock layer, the contact force and the total combined contact force of the pick in the $X$-axis, $Y$-axis, and $Z$-axis directions of the rock layer are extracted from the simulation cutting result file, as shown in Fig. 7. Fig. 6. Stress cloud diagrams of the cut rock layer The cutting resistance $F_{jg}$, traction resistance $F_{qy}$ and lateral force $F_{cx}$ can be obtained from the extracted contact forces by the following relationships:
$F_{jg} = f_{x}\sin\theta + f_{y}\cos\theta$, $F_{qy} = f_{x}\cos\theta - f_{y}\sin\theta$, $F_{cx} = f_{z}$.
In the formula, $\theta$ is the angle between the tip of the pick and the $X$ axis. Fig. 7. Contact force on the pick 4. Analysis of simulation results 4.1. Single tooth cutting simulation By changing the value of the cutting angle $\delta$ of the cutting head, the changes in the load on the cutting head, the coefficient of load variation, and the cutting specific energy consumption with the cutting angle are analyzed. The range of cutting angle $\delta$ and the calculation results for the cutting resistance, variation coefficient of cutting resistance, and cutting specific energy consumption are shown in Table 2. Fig. 8 shows the change in the mean cutting resistance with the cutting angle. The following conclusions can be obtained. The average cutting resistance of the picks gradually decreases with increasing cutting angle, and the change is large. When $\delta$ is in the range of 51°-53°, the change in the cutting resistance of the pick is small, and it basically stabilizes. When $\delta <$ 53°, the cutting resistance gradually decreases with increasing cutting angle $\delta$. Therefore, under the condition of ensuring that the tooth seat does not rub against the rock, the larger the cutting angle is, the smaller the cutting resistance, and the better the rock cutting effect.
Table 2. Cutting angle and analytical results
Cutting angle: 36° 40° 44° 46° 48° 50° 51° 52° 53° 54° 55° 56°
Cutting resistance / N: 3752 3503 2766 2801 2559 2310 2028 1990 1990 2026 1795 1685
Coefficient of variation: 4.53 4.26 3.63 3.09 3.22 2.84 2.7 2.46 2.62 2.44 2.04 1.61
Specific energy consumption / MJ·m^-3: 2.88 3.21 2.67 2.93 2.80 2.66 2.21 2.35 2.17 2.28 2.21 2.11
Fig. 9 shows the variations in the cutting resistance variation coefficient and cutting specific energy consumption with cutting angle. The following conclusions can be obtained. As the cutting angle $\delta$ increases, the variation coefficient of the cutting resistance gradually decreases. When $\delta >$ 51°, as the cutting angle $\delta$ increases, the cutting resistance variation coefficient gradually decreases; when the cutting angle $\delta$ is in the range of 51°-56°, the difference in the cutting resistance variation coefficient values is small. 
4. Analysis of simulation results

4.1. Single tooth cutting simulation

By changing the value of the cutting angle $\delta$ of the cutting head, the changes in the load on the cutting head, the coefficient of load variation, and the specific energy consumption with the cutting angle are analyzed. The examined range of the cutting angle $\delta$ and the calculated cutting resistance, variation coefficient of the cutting resistance, and specific energy consumption are shown in Table 2. Fig. 8 shows the change in the mean cutting resistance with the cutting angle. The following conclusions can be drawn. The average cutting resistance of the picks decreases considerably with increasing cutting angle. When $\delta$ is in the range of 51°-53°, the change in the cutting resistance of the pick is small, and it basically stabilizes. When $\delta <$ 53°, the cutting resistance gradually decreases with increasing cutting angle $\delta$. Therefore, provided that the tooth seat does not rub against the rock, the larger the cutting angle is, the smaller the cutting resistance and the better the rock cutting effect.

Table 2. Cutting angle and analytical results
Cutting angle: 36°, 40°, 44°, 46°, 48°, 50°, 51°, 52°, 53°, 54°, 55°, 56°
Cutting resistance / N: 3752, 3503, 2766, 2801, 2559, 2310, 2028, 1990, 1990, 2026, 1795, 1685
Coefficient of variation: 4.53, 4.26, 3.63, 3.09, 3.22, 2.84, 2.70, 2.46, 2.62, 2.44, 2.04, 1.61
Specific energy consumption: 2.88, 3.21, 2.67, 2.93, 2.80, 2.66, 2.21, 2.35, 2.17, 2.28, 2.21, 2.11

Fig. 9 shows the variation in the cutting resistance variation coefficient and the specific energy consumption with the cutting angle. The following conclusions can be drawn. As the cutting angle $\delta$ increases, the variation coefficient of the cutting resistance gradually decreases; when the cutting angle $\delta$ is in the range of 51°-56°, the differences between the variation coefficient values are small. The specific energy consumption also gradually decreases with increasing cutting angle; when $\delta >$ 50°, it fluctuates slightly around a roughly constant value.

Fig. 8. Change curve of the mean cutting resistance

Fig. 9. Variation curve of the variation coefficient of the cutting resistance

4.2. Cutting head overall simulation

The cutting head of the roadheader is in the drilling cutting state. The analytical results for the cutting angle are shown in Table 3. It can be seen from the table that the cutting torque gradually decreases with increasing cutting angle and that the fluctuation coefficient of the cutting torque changes little. The mass of the cut rock gradually increases, and the cutting specific energy consumption gradually decreases.

Table 3. Analytical results for the cutting angle
Cutting angle: 46°, 50°, 54°
Cutting moment $M_{z}$ / N·m: 31835, 30310, 28712
Cutting moment $M_{z}$ fluctuation coefficient: 0.31, 0.33, 0.27
Cut rock mass / g: 16661, 19222, 21462
Cutting specific energy / MJ·m⁻³: 3.8, 3.2, 2.7

5. Cutting performance test

Experimental bench: an EBZ200 roadheader. Cutting angle of the picks: 50°. Concrete rock specimen: an 8×2×3 m cuboid with a compressive strength of 63 MPa. Measuring equipment: torque sensor and data collector.

Experimental process: the cutting head of the roadheader performs the cutting test. The cutting process of the cutting head and the measured torque are shown in Fig. 10.

Measurement results and analysis: the cutting moment and the load fluctuation coefficient are extracted, and the effect of the cutting angle on the rock cutting performance of the cutting head is analyzed through the specific energy consumption; the results provide guidance for, and verification of, the simulation analysis. The cutting performance measurement and calculation results of the test are shown in Table 4.

Fig. 10. Cutting head cutting process and measured moment

Table 4. Experimental results (cutting angle 50°)
Average cutting moment: 1.5×10⁴ N·m
Cutting moment $M_{z}$ fluctuation coefficient: 0.33
Specific energy consumption of the cutting head: 2.7 MJ/m³
Single tooth cutting force: 2596 N

It can be seen from Fig. 10 that the time domain curves of the cutting torque determined by the simulation and by the test are relatively close. The cutting torque measured in the test and the simulated cutting torque are of the same order of magnitude, which verifies the correctness of the established finite element model. Comparing Table 4 with the single-tooth simulation results in Table 2, the error between the single tooth cutting force measured in the test (2596 N) and the simulated value at a 50° cutting angle (2310 N) is approximately 11 %. The torque fluctuation coefficient in the test is similar to the simulated torque fluctuation coefficient.

6. Conclusions

1) The installation angle of the pick is established, and the relationship between the cutting function angle and the installation process angle is determined. The admissible range of the cutting angle is found to be $34° < \delta < 56°$, providing theoretical support for the design and installation of the cutting head teeth of a roadheader.

2) A simulation model for the analysis of the cutting process of the cutting head of the roadheader is established, and the influence of the cutting angle on the cutting resistance, the coefficient of variation, and the specific energy consumption during rock cutting is analyzed.
3) The time history curves of the cutting torque from the simulation and from the experimental measurement are close in shape, but the average simulated cutting torque is nearly double the measured value. This is mainly because the simulation assumes that all of the cutting teeth on the cutting head participate in cutting, whereas in the actual test only about one-half of them are engaged at any time. The relative error of the cutting torque fluctuation coefficient is 19 %, the relative error of the cutting specific energy consumption is 16 %, and the average error of the single tooth cutting force is 11 %.

4) In this paper, artificial rocks are used instead of real rocks for the simulation and experimental research. If conditions permit, real rocks can be tested to further verify the correctness of the conclusions of this paper.

• Acaroglu O., Ergin H. The effect of cutting head shapes on roadheader stability. Transactions of the Institution of Mining and Metallurgy, Vol. 114, Issue 3, 2005, p. 140-146.
• Copur H., Ozdemir L., Rostami J. Roadheader Applications in Mining and Tunneling Industries. Preprints – Society of Mining Engineers of AIME, 1998.
• He Yang, Li Xiaohuo. Identification of random cutting load on the cutting head of a longitudinal roadheader. Journal of Wuhan University of Science and Technology, Vol. 40, Issue 2, 2017.
• Liu S. Y., Chang Long D.-U., Cui X. X., et al. Characteristics of different rocks cut by helical cutting mechanism. Journal of Central South University of Technology, Vol. 18, Issue 5, 2011.
• Hurt K. G., Mcandrew K. M. Designing Roadheader Cutting Heads. Mining Engineer, 1981.
• Eyyuboglu E. M., Bolukbasi N. Effects of circumferential pick spacing on boom type roadheader cutting head performance. Tunnelling and Underground Space Technology, Vol. 20, Issue 5, 2005.
• Wang Li Ping. Calculation of peak cutting force of conical picks under conditions of dissymmetrical slotting. Journal of China Coal Society, Vol. 41, Issue 11, 2016, p. 2876-2882.
• Dogruoz C., Bolukbasi N. Effect of cutting tool blunting on the performances of various mechanical excavators used in low- and medium-strength rocks. Bulletin of Engineering Geology and the Environment, Vol. 73, Issue 3, 2013, p. 781-789.
• Restner U., Pichler J., Reumueller B. New technologies extend the range of applications of roadheaders. Symposium on Innovations in Tunnelling, Swiss Federal Institute of Technology Zurich, 2007.
• Su O., Akcin N. A. Numerical simulation of rock cutting using the discrete element method. International Journal of Rock Mechanics and Mining Sciences, Vol. 48, Issue 3, 2011, p. 434-442.
• Li Xiaohuo. Research on Key Technologies of Roadheader Cutting. China Machine Press, Beijing, 2007.
• Liu Chun Sheng, Li De Gen. Mathematical model of cutting force based on experimental conditions of single pick cutting. Journal of China Coal Society, Vol. 36, Issue 9, 2011, p. 1565-1569.
• Zhang Mengqi. Comparison and analysis of predictor methods for rock breaking resistance of bit. Colliery Mechanical & Electrical Technology, Vol. 41, Issue 11, 2014, p. 2876-2882.
• Fu Lin, Du Changlong, Liu Songyong, et al. Research on load characteristics of picks on auger drill miner’s aiguille. China Mechanical Engineering, Vol. 24, Issue 15, 2013, p. 2020-2024.
• Li Xiaohuo. Research on Key Skills in TBM Cutting. China Machine Press, Beijing, 2008.

About this article. Keywords: cutting header, cutting performance, cutting angle. Copyright © 2021 Yu-dong Xu.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This function treats measure-specific variance as reliable. The Mosier composite formula is computed as:

\[\rho_{XX} = \frac{\mathbf{w}'\mathbf{S}\mathbf{w} - \mathbf{w}'\operatorname{diag}\left(\mathbf{s} \circ (1 - \mathbf{r})\right)\mathbf{w}}{\mathbf{w}'\mathbf{S}\mathbf{w}},\]

where \(\rho_{XX}\) is a composite reliability estimate, \(\mathbf{r}\) is a vector of reliability estimates, \(\mathbf{w}\) is a vector of weights, \(\mathbf{S}\) is a covariance matrix, \(\mathbf{s}\) is a vector of variances (i.e., the diagonal elements of \(\mathbf{S}\)), \(\circ\) denotes the element-wise product, and \(\operatorname{diag}(\cdot)\) forms a diagonal matrix from a vector.
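As a quick numerical check of the formula, here is a minimal base-R sketch. It is not the psychmeta implementation; `composite_rel` is a local name for this example, and the covariance matrix, weights, and reliabilities are made-up inputs.

```r
# Sketch: Mosier composite reliability computed directly from the formula above.
composite_rel <- function(r, w, S) {
  s <- diag(S)                               # variances of the components
  error_var <- sum(w^2 * s * (1 - r))        # error variance of the weighted composite
  total_var <- as.numeric(t(w) %*% S %*% w)  # total variance of the weighted composite
  (total_var - error_var) / total_var
}

S <- matrix(c(1.0, 0.4,
              0.4, 1.0), nrow = 2)           # made-up covariance matrix
composite_rel(r = c(0.80, 0.70), w = c(1, 1), S = S)  # about 0.82
```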
Chapter 7 Quantifying Uncertainty | Exploring Data Science with R and the Tidyverse: A Concise Introduction 7 Quantifying Uncertainty So far we have developed ways to show how to use data to decide between competing questions about the world. For instance, does Harvard enroll proportionately less Asian Americans than other private universities in the United States; are the exam grades in one lab section of a course too low when compared to other lab sections; can an experimental drug bring an improvement to patients recovering from brain trauma? We evaluated questions like these by means of an hypothesis test where we put forward two hypotheses: a null hypothesis and an alternative hypothesis. Often we are just interested in what a value looks like. For instance, airlines might be interested in the median flight delay of their flights to preserve customer satisfaction; political candidates may look to the percentage of voters favoring them to gauge how aggressive their campaigning should be. Put in the language of statistics, we are interested in estimating some unknown parameter about a population. If all the data has been made available to us, we could compute the parameter directly with ease. However, we often do not have access to the full population (as is the case with polling voters) or there may be too much data to work with that it becomes computationally prohibitive (as is the case with flights). We have seen before that sampling distributions can provide reliable approximations to the true (and usually unknown) distribution and that, likewise, a statistic computed from it can provide a reliable estimate of the parameter in question. However, the value of a statistic can turn out differently depending on the random samples that are drawn to compose a sampling distribution. How much can the value of a statistic vary? Could we quantify this uncertainty? This chapter develops a way to answer this question using an important technique in data science called resampling. We begin by introducing order statistics and percentiles. This will provide us the tools needed to develop the resampling method to produce distributions from a sample, in which we apply order statistics to the generated distributions to obtain something called the confidence 7.1 Order Statistics The minimum, maximum, and the median are part of what we call order statistics. Order statistics are values at certain positions in numerical data after reordering the data in ascending order. 7.1.1 Prerequisites This section will make use of data for all flights that departed New York City in 2013. The dataset is made available by the Bureau of Transportation Statistics in the United States. Let’s also load in the tidyverse as usual. 7.1.2 The flights data frame In our prior exploration of this data frame, we generated empirical distributions of departure delays. Let’s revisit this study and visualize the departure delays again. As before, we are interested in the bulk of the data here, so we can ignore the 1.83% of flights with delays of more than 150 minutes. flights150 <- flights |> filter(dep_delay <= 150) Let’s extract the departure delay column as a vector. dep_delays <- flights150 |> pull(dep_delay) 7.1.3 median The median is a popular order statistic that gives us a sense of the central tendency of the data. It is the value at the middle position in the data after reordering of the values. ## [1] -2 This tells us that half of the flights had early departures – not bad! 
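Since an order statistic is literally the value at a position in the sorted data, it can be instructive to compute a median by hand once. The small sketch below is not from the chapter; the toy delay values are made up, and the position formula shown assumes an odd number of values.

```r
# Toy illustration: the median is the middle element of the sorted data.
toy_delays <- c(12, -3, 0, 45, -5, 7, 150)   # made-up departure delays, minutes

sorted <- sort(toy_delays)                   # -5 -3 0 7 12 45 150
sorted[(length(sorted) + 1) / 2]             # middle position (4th of 7) -> 7
median(toy_delays)                           # agrees: 7
```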
Recall that we also used the mean to understand the central tendency. Note that the mean (or average) is a statistic, but it is not an order statistic. Let’s compare with the mean of departure delays. ## [1] 8.716037 There is quite a bit of discrepancy between the two. Observe that the histogram above has a very long right tail; the mean is pulled upward by flights with long departure delays. In general: If a distribution has a long tail, the mean will be pulled away from the median in the direction of the tail. Otherwise, if the distribution is symmetrical, the mean and the median will equal. When the distribution is “skewed” like the one here, the median can be a stronger indicator of central tendency. There are cases, by the way, where two median exists. Such an event occurs exactly when the number of values is an even number. If there are 10 values, the 5th and the 6th values in the ascending order are the two medians. The one at the lower position in the order is the odd median and the other one, i.e., the one at the higher position in the order, is the even median. To compute the median in this case, we usually take the average of the odd and even median. 7.1.4 min and max What is the earliest flight that left? We can find out by looking for the minimum departed flight delay. ## [1] -43 The authors admit that this flight might have left a little too early for their liking. What about the latest flight? ## [1] 150 Recall that this maximum is actually artificial because we filtered all rows whose departure delay was more than 150. To recover the true maximum, we need to refer to the original flights data. flights |> pull(dep_delay) |> max(na.rm = TRUE) ## [1] 1301 That is almost a 22 hour delay – better get a sleeping bag! 7.2 Percentiles Now that we have an understanding of order statistics, we can use it to develop the notion of a percentile. We will also explore a closely related concept called the quartile. You are probably already familiar with the concept of percentiles from sports or standardized testing like the SAT. Organizations like the College Board talk so much about percentiles – to the extent of writing full guides on how to interpret them – because they are indicators of how students perform relative to other exam-takers. Indeed, the percentile is another order statistic that tells us something about the rank of a data point after reordering the elements in a dataset. Now that we have an understanding of order statistics, we can use it to develop the notion of a percentile. We will also explore a closely related concept called the quartile. 7.2.1 Prerequisites Before starting, let’s load the tidyverse as usual. 7.2.2 The finals tibble In the spirit of the College Board, we will examine exam scores to develop an understanding of percentiles. Recall that the finals data frame contains hypothetical final exam scores from two offerings of an undergraduate computer science course. Let’s load it in. ## # A tibble: 102 × 2 ## grade class ## <dbl> <chr> ## 1 89 A ## 2 17 A ## 3 94 A ## 4 51 A ## 5 49 A ## 6 93 A ## 7 52 A ## 8 54 A ## 9 57 A ## 10 65 A ## # ℹ 92 more rows The dataset contains final scores from a total of 105 students. ## [1] 102 We will not concern ourselves with the individual offerings of the course this time. Since the scores are of interest for this study, let us extract a vector of scores from the tibble. scores <- finals |> To orient ourselves to the data, we can look at the maximum and minimum scores. 
## [1] 98 ## [1] 0 We may be alarmed to see that the minimum score of 0. Some insight into the course would reveal that there were a few students who did not appear for the final exam (don’t be one of them!). Finally, let us visualize the distribution of scores. 7.2.3 The quantile function The percentile is an order statistic where the position of the data is not the rank but a percentage that specifies relative position in the data. For instance, the 50th percentile is the smallest value that is at least as large as 50% of the elements in scores; it must be a value on the list of scores. We can compute this simply with the quantile function in R. ## 50% ## 65 The value at the 50th percentile is something we already know: the median score! ## [1] 65 The quantile function gives the value that cuts off the first n percent of the data values when it is sorted in ascending order. There are many ways to compute percentiles (see the help page for a sneak peek, with ?quantile). The one that matches the definition used here corresponds to type = 1. The additional argument passed in is a vector of desired percentages. These must be between 0 and 1. This is how quantile gets its name: quantiles are percentiles scaled to have a value between 0 and 1, e.g. 0.5 rather than 50. Let us look at some more percentile values in the vector of scores. quantile(scores, c(0.05, 0.2, 0.9, 0.95, 0.99, 1), type = 1) ## 5% 20% 90% 95% 99% 100% ## 17 49 89 93 97 98 Let’s pick apart some of these values. We see that the value at the 5th percentile is the lowest exam score that is at least as large as 5% of the scores. We can confirm this easily by summing the number of scores less than 17 and dividing by the total number of scores. ## [1] 0.04901961 Moving on up, we see that the 95th and 99th percentiles are quite close together. We also observe that the 100th percentile is 98, which corresponds to the maximum score obtained on the final. That is, a 98 is at least as large as 100% of the scores, which is the entire class. The 0th percentile is simply the smallest value in the dataset, as 0% of the data is at least as large as it. In other words, there is no exam score in the class lower than a 0. ## 0% ## 0 If the College Board says a student is in the “top 10 percentile”, this would be a misnomer. What they really mean to say is that the student is in the \(1 - \text{top X percentile}\), or 90th 7.2.4 Quartiles In addition to the medians, common percentiles are the \(1/4\)th and \(3/4\)th, which we often call the bottom quarter and the top quarter. Basically, we chop the data in quarters and use the boundaries between the neighboring quarters. Since these percentiles partition the data into quarters, these are given a special name: quartiles. quantile(scores, c(0/4, 1/4, 2/4, 3/4, 4/4), type = 1) ## 0% 25% 50% 75% 100% ## 0 52 65 77 98 Observe what happens when we omit the vector of percentages. ## 0% 25% 50% 75% 100% ## 0 52 65 77 98 The corresponding 0th, 25th, 50th, 75th, and 100th percentiles of the vector are returned. 7.2.5 Combining two percentiles By combining two percentiles, we can get a rough sense of the distribution. For example, the combination of 25th and 75th percentiles represents the “middle” 50%. Similarly, the 2.5th and 97.5th percentiles represent the middle 95% of the data. That is, ## 2.5% 97.5% ## 0 95 95% of the scores is between 0 and 95. 
We could find this more directly by realizing that the middle 95% corresponds to going up and down from the 50th percentile by half of that amount, which is middle_area <- 0.95 quantile(scores, 0.5 + (middle_area / 2) * c(-1, 1), type = 1) ## 2.5% 97.5% ## 0 95 As one more example, here is the middle 90% of scores. middle_area <- 0.90 quantile(scores, 0.5 + (middle_area / 2) * c(-1, 1), type = 1) ## 5% 95% ## 17 93 7.2.6 Advantages of percentiles Percentile is a useful concept because it eliminates the use of population size in specifying the position; that is, the position specification does not directly take into account the size of the data. What do we mean by that? Let’s return to the example of final exam scores. Suppose that one offering of the class contained 50 students while another had 200 students. Consider the “top 10 students” in each class. Since top 10 is 20% of 50 students, there is a 20% chance for a student to be among the top 10, while the chances decrease to 5% for the class of 200. That is, if we specify a top group with its size, the significance being in the top group varies depending on the size of the population and so we must to specify the size of the underlying group, e.g., “top 10 in a group of 4000 students”. Percentiles are nice in that they are not sensitive to these changes. 7.3 Resampling It is usually the case that a data scientist will receive a sample from an underlying population to which she has no access. If she had access to the underlying population, she could calculate the parameter value directly. Since that is impossible, is there a way for her to use the sample at hand to generate a range of values for the statistic? Yes! This is a technique we call resampling, which is also known as the bootstrap. In bootstrapping, we treat the dataset at hand as the “population” and generate “new” samples from it. But there is a catch. Each sample data set that we generate should be equal in size to the original. This necessarily means that our sampling plan be done with replacement. Since the samples have the same size as the original with the use of replacement, duplicates and omissions can arise. That is, there are items that will appear multiple times as well as items that are missing. Because randomness is involved, the discrepancy varies. 7.3.1 Prerequisites This section will defer again to the New York City flights in 2013 from the Bureau of Transportation Statistics. Let’s also load in the tidyverse as usual. 7.3.2 Population parameter: the median time spent in the air When studying this dataset, we have spent a lot of time examining flight departure delays. This time we will turn our attention to another variable in the tibble which tracks the amount of time a flight spent in air, in minutes. The variable is called air_time. Let’s visualize the distribution of air time in flights. Recall the distribution of departure delays in flights150. As before, let’s concentrate on the bulk of the data and filter out any flights that flew for more than 400 minutes. We plot this distribution one more time. The parameter we will select for this study is the mean air time. pop_mean <- flights400 |> pull(air_time) |> ## [1] 149.6463 Let us see how well we can estimate this value based on a sample of the flights. We will study two such samples: an artificial sample and a random sample. 
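One small utility is worth keeping at hand for the rest of the chapter: the "middle X%" pattern `0.5 + (middle_area / 2) * c(-1, 1)` reappears many times below. The helper here is a sketch rather than a function defined in the book; `middle_interval` is a hypothetical name, and the scores in the example call are made up.

```r
# Sketch: return the "middle X%" interval of a numeric vector.
# type = 1 picks the sorted value at position ceiling(n * p), as in the text.
middle_interval <- function(values, desired_area = 0.95) {
  probs <- 0.5 + (desired_area / 2) * c(-1, 1)   # e.g. 0.025 and 0.975 for 95%
  quantile(values, probs, type = 1)
}

middle_interval(c(0, 17, 49, 52, 65, 77, 89, 93, 98), desired_area = 0.90)
```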
7.3.3 First try: A mechanical sample For our mechanical sample, we will assume that we have been given only a cross-section of the flights data and try to estimate the population median based on this sample. Let us suppose we have been given the flight data for only the months of September and October. flights_sample <- flights400 |> filter(month == 9 | month == 10) There are 55,522 flights appearing in the subset. Let’s visualize the distribution of air time from our sample. It appears close to the population of flights, though there are notable differences: flights that have longer air times (between 300 and 400 minutes) appear exaggerated in this dataset. Let’s compute the mean from this sample. sample_mean <- flights_sample |> pull(air_time) |> ## [1] 145.378 It is quite different from the population median. Nevertheless, this subset of flights will serve as the dataset from which we will bootstrap our samples. Put another way, we will treat this sample as if it were the population. 7.3.4 Resampling the sample mean To perform a bootstrap, we will draw from the sample, at random with replacement, the same number of times as the size of the sample dataset. To simplify the work, let us extract the column of air times as a vector. air_times <- flights_sample |> We know already how to sample at random with replacement from a vector using sample. Computing the sample mean is also straightforward: just pipe the returned vector into mean. sample_mean <- air_times |> sample(replace = TRUE) |> ## [1] 145.3686 Let us move this work into a function we can call. one_sample_mean <- function() { sample_mean <- flights_sample |> pull(air_time) |> sample(replace = TRUE) |> Give it a run! ## [1] 144.6964 This function is actually quite useful. Let’s generalize the function so that we may call it with other datasets we will work with. The modified function will receive three parameters: (1) a tibble to sample from, (2) the column to work on, and (3) the statistic to compute. one_sample_value <- function(df, label, statistic) { sample_value <- df |> pull({{ label }}) |> sample(replace = TRUE) |> We can now call it as follows. one_sample_value(flights_sample, air_time, mean) ## [1] 145.398 Q: What’s the deal with those (ugly) double curly braces ({{) ? To make R programming more enjoyable, the tidyverse allows us to write out column names, e.g. air_time, just like we would variable names. The catch is that when we try to use such syntax sugar from inside a function, R has no idea what we mean. In other words, when we say pull(label) R thinks that we want to extract a vector from a column called label, despite the fact we passed in air_time as an argument. To lead R in the right direction, we surround label with {{ so that R knows to interpret label as, indeed, 7.3.5 Distribution of the sample mean We now have all the pieces in place to perform the bootstrap. We will replicate this process many times so that we can compose an empirical distribution of all the bootstrapped sample means. Let’s repeat the process 10,000 times. bstrap_means <- replicate(n = 10000, one_sample_value(flights_sample, air_time, mean)) Let us visualize the bootstrapped sample means using a histogram. 7.3.6 Did it capture the parameter? How often does the population mean fall somewhere in the empirical histogram? Does it reside “somewhere at the center” or at the fringes where the tails are? 
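The definitions of `one_sample_mean` and `one_sample_value` above appear truncated in this text (the final step of each pipeline and the closing braces are missing). A complete version, consistent with how the functions are called later in the chapter, might look like the following; treat it as a reconstruction rather than the book's exact code.

```r
library(tidyverse)

# Reconstructed: one bootstrap replicate of the sample mean of air_time.
one_sample_mean <- function() {
  flights_sample |>
    pull(air_time) |>
    sample(replace = TRUE) |>   # same size as the data, drawn with replacement
    mean()
}

# Reconstructed, generalized: any tibble, any column, any statistic.
one_sample_value <- function(df, label, statistic) {
  df |>
    pull({{ label }}) |>        # {{ }} lets the column name be passed unquoted
    sample(replace = TRUE) |>
    statistic()
}

one_sample_value(flights_sample, air_time, mean)
```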
Let us be more specific by what we mean when we say “somewhere at the center”: the middle 95% of bootstrapped means containing the population mean. We can identify the “middle 95%” using the percentiles we learned from the last section. Here they are: desired_area <- 0.95 middle95 <- quantile(bstrap_means, 0.5 + (desired_area / 2) * c(-1, 1), type = 1) ## 2.5% 97.5% ## 144.6075 146.1229 Let us annotate this interval on the histogram. df <- tibble(bstrap_means) ggplot(df, aes(x = bstrap_means, y = after_stat(density))) + geom_histogram(col="grey", fill = "darkcyan", bins = 8) + geom_segment(aes(x = middle95[1], y = 0, xend = middle95[2], yend = 0), size = 2, color = "salmon") ## Warning in geom_segment(aes(x = middle95[1], y = 0, xend = middle95[2], : All aesthetics have length 1, but the data has 10000 rows. ## ℹ Please consider using `annotate()` or provide this layer with data containing ## a single row. ## [1] 149.6463 Our population mean is 149.6 minutes – that is nowhere to be seen in this interval or even in the histogram! It would seem then that in all of the 10,000 replications of the bootstrap, not even one was able to capture the population mean. What happened? Recall the subset selection we used: all flights in September or October. This was a very artificial selection that is prone to bias. We learned before when we discussed sampling plans that bias in the sample can mislead the statistic computed from it, especially when using a convenience sample such as the one here. 7.3.7 Second try: A random sample We will now try to estimate the population mean using a random sample of flights. Let us select at random without replacement 10,000 flights from the data. flights_sample <- flights400 |> slice_sample(n = 10000, replace = FALSE) We will visualize what our random sample looks like. Let us also compute the sample mean again. sample_mean <- flights_sample |> pull(air_time) |> ## [1] 150.7474 We observe that the sample mean is also much closer to the population mean, unlike our mechanical selection attempt. This is confirmation of the Law of Averages (finally) at work: when we sample at random and the sample size is large, the distribution of the sample closely follows that of the flight population. Let us now repeat the bootstrap. Recall that we will treat this sample as if it were the population. 7.3.8 Distribution of the sample mean (revisited) We have done all the hard work already in setting up the bootstrap. To redo the process, we need only to pass in the random sample contained in flights_sample. As before, let us repeat the process 10,000 times. bstrap_means <- replicate(n = 10000, one_sample_value(flights_sample, air_time, mean)) We will identify the “middle 95%”. Here is the interval: desired_area <- 0.95 middle95 <- quantile(bstrap_means, 0.5 + (desired_area / 2) * c(-1, 1), type = 1) ## 2.5% 97.5% ## 148.9982 152.5637 Let us annotate this interval on the histogram. We will also plot the population mean as a red dot. df <- tibble(bstrap_means) ggplot(df, aes(x = bstrap_means, y = after_stat(density))) + geom_histogram(col="grey", fill = "darkcyan", bins = 8) + geom_segment(aes(x = middle95[1], y = 0, xend = middle95[2], yend = 0), size = 2, color = "salmon") + geom_point(aes(x = pop_mean, y = 0), color = "red", size = 3) ## Warning in geom_segment(aes(x = middle95[1], y = 0, xend = middle95[2], : All aesthetics have length 1, but the data has 10000 rows. ## ℹ Please consider using `annotate()` or provide this layer with data containing ## a single row. 
## Warning in geom_point(aes(x = pop_mean, y = 0), color = "red", size = 3): All aesthetics have length 1, but the data has 10000 rows. ## ℹ Please consider using `annotate()` or provide this layer with data containing ## a single row. The population mean of 149.6 minutes falls in this interval. We conclude that the “middle 95%” interval of bootstrapped means successfully captured the parameter. 7.3.9 Lucky try? Our interval of bootstrapped means captured the parameter in the air time data. But were we just lucky? We can test it out. We would like to see how often the “middle 95%” interval captures the parameter. We will need to redo the entire process many times to find an answer. More specifically, we will follow the recipe: • Collect a fresh sample of size 10,000 from the population. For the sampling plan, sample at random without replacement. • Do 10,000 replications of the bootstrap process and find the “middle 95%” interval of bootstrapped means. We will repeat this process 100 times so that we end up with 100 intervals; we will count how many of them contain the population mean. all_the_bootstraps <- function() { desired_area <- 0.95 flights_sample <- flights400 |> slice_sample(n = 10000, replace = FALSE) bstrap_means <- replicate(n = 10000, one_sample_value(flights_sample, air_time, mean)) middle95 <- quantile(bstrap_means, 0.5 + (desired_area / 2) * c(-1, 1), type = 1) intervals <- replicate(n = 100, all_the_bootstraps()) Note that this simulation will take awhile (> 20 minutes). Grab a coffee! Let’s examine some of the intervals of bootstrapped means. ## 2.5% 97.5% ## 147.4277 151.0550 ## 2.5% 97.5% ## 148.7328 152.4258 Let’s transform intervals into a tibble which will make it easier to understand and visualize the results. left_column <- intervals[1,] right_column <- intervals[2,] interval_df <- tibble( replication = 1:100, left = left_column, right = right_column ## # A tibble: 100 × 3 ## replication left right ## <int> <dbl> <dbl> ## 1 1 147. 151. ## 2 2 149. 152. ## 3 3 148. 152. ## 4 4 149. 152. ## 5 5 149. 153. ## 6 6 147. 150. ## 7 7 148. 151. ## 8 8 148. 151. ## 9 9 148. 152. ## 10 10 149. 152. ## # ℹ 90 more rows How many of these contain the population mean? We can count the number of intervals where the population mean is between the left and right endpoints. interval_df |> filter(left <= pop_mean & right >= pop_mean) |> ## [1] 94 We can visualize these intervals by stacking them on top of each other vertically. The vertical red line shows where the population mean lies. Under real-life circumstances, we do not know where it ggplot(interval_df) + geom_segment(aes(x = left, y = replication, xend = right, yend = replication), color = "salmon") + geom_vline(xintercept = pop_mean, color = "red") + labs(x = "Air time (minutes)") We expect about 95 of the 100 intervals to cross the vertical line; meaning, it contains the parameter. We would label such intervals as “good”. If an interval does not, oh well – that’s the nature of chance. Fortunately, these do not occur often. In fact, they should occur about 5 times among 100 trials, or 95%. The strength of statistics is not clairvoyance, but the ability to quantify 7.3.10 Resampling round-up Before we close this section, we end with a quick summary on how to perform a bootstrap. Goal: To estimate some population parameter we do not know about, e.g., the mean air time of New York City flights. • Select a sampling plan. A safe bet is to sample at random without replacement from the population. 
Be sure the sample drawn is large in size and remember that in reality sampling is an expensive process. It is likely you will get only one chance to draw a sample from the population. • Bootstrap the random sample (this time, with replacement) and compute the desired statistic from it. • Replicate this process a great number of times to obtain many bootstrapped samples. • Find the “middle 95%” interval of the bootstrapped samples. 7.4 Confidence Intervals The previous section developed a way to estimate the value of a parameter we do not know. Because chance is an inevitable part of drawing a random sample, we cannot be precise and offer a single value for this estimate, e.g., we can determine that the mean height of all individuals in the United States is exactly 5.3 feet. Instead, we provide an interval of estimates by looking at a bulk of values that are “somewhere in the center”. Typically this entails looking at the “middle 95%” interval, but we may prefer other intervals such as the “middle 90%” or even the “middle 99%”. Recall that knowing the value of the parameter beforehand is a rare luxury out of reach; if we could obtain it somehow, there would be no need for statistical methods like the bootstrap. Instead, data scientists place their confidence on intervals of estimates where the process that generates said interval is successful in capturing the parameter some percentage of the time. These “intervals of estimates” are so important to statistics and data science that they are given a special name: the confidence interval. This section will explore confidence intervals, and their use, in greater depth. 7.4.1 Prerequisites We will make use of the tidyverse in this chapter, so let’s load it in as usual. We will also bring forward the one_sample_value function we wrote in the previous section. one_sample_value <- function(df, label, statistic) { sample_value <- df |> pull({{ label }}) |> sample(replace = TRUE) |> For the running example in this section, we turn to survey data collected by the US National Center for Health Statistics (NCHS) on nutrition and health information. This data is available in the tibble NHANES from the NHANES package. In accordance to the documentation (see ?NHANES), the dataset can be treated as if it were a simple random sample from the American population. We use this dataset as an example where we do not know the population parameter. 
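The code that loads this survey data is not shown in the text above; presumably it is along the following lines, since the NHANES tibble ships with the NHANES package.

```r
# Likely setup for this section (a sketch; the book's exact chunk is not shown).
library(tidyverse)
library(NHANES)   # provides the NHANES survey tibble

NHANES
```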
## # A tibble: 10,000 × 76 ## ID SurveyYr Gender Age AgeDecade AgeMonths Race1 Race3 Education ## <int> <fct> <fct> <int> <fct> <int> <fct> <fct> <fct> ## 1 51624 2009_10 male 34 " 30-39" 409 White <NA> High School ## 2 51624 2009_10 male 34 " 30-39" 409 White <NA> High School ## 3 51624 2009_10 male 34 " 30-39" 409 White <NA> High School ## 4 51625 2009_10 male 4 " 0-9" 49 Other <NA> <NA> ## 5 51630 2009_10 female 49 " 40-49" 596 White <NA> Some College ## 6 51638 2009_10 male 9 " 0-9" 115 White <NA> <NA> ## 7 51646 2009_10 male 8 " 0-9" 101 White <NA> <NA> ## 8 51647 2009_10 female 45 " 40-49" 541 White <NA> College Grad ## 9 51647 2009_10 female 45 " 40-49" 541 White <NA> College Grad ## 10 51647 2009_10 female 45 " 40-49" 541 White <NA> College Grad ## # ℹ 9,990 more rows ## # ℹ 67 more variables: MaritalStatus <fct>, HHIncome <fct>, HHIncomeMid <int>, ## # Poverty <dbl>, HomeRooms <int>, HomeOwn <fct>, Work <fct>, Weight <dbl>, ## # Length <dbl>, HeadCirc <dbl>, Height <dbl>, BMI <dbl>, ## # BMICatUnder20yrs <fct>, BMI_WHO <fct>, Pulse <int>, BPSysAve <int>, ## # BPDiaAve <int>, BPSys1 <int>, BPDia1 <int>, BPSys2 <int>, BPDia2 <int>, ## # BPSys3 <int>, BPDia3 <int>, Testosterone <dbl>, DirectChol <dbl>, … 7.4.2 Estimating a population proportion Let us use this dataset to estimate the proportion of healthy sleepers in the American population. A “healthy amount of sleep” is defined by the American Academy of Sleep Medicine as 7 to 9 hours per night for adults between the ages of 18 and 60. With this information, we perform some basic preprocessing of the data: • Drop any observations that contain a missing value in the column SleepHrsNight. • Filter the data to contain observations for adults between the ages of 18 and 60. • Create a new Boolean variable healthy_sleep that indicates whether a participant gets a healthy amount of sleep. ## # A tibble: 5,748 × 77 ## ID healthy_sleep SurveyYr Gender Age AgeDecade AgeMonths Race1 Race3 ## <int> <lgl> <fct> <fct> <int> <fct> <int> <fct> <fct> ## 1 51624 FALSE 2009_10 male 34 " 30-39" 409 White <NA> ## 2 51624 FALSE 2009_10 male 34 " 30-39" 409 White <NA> ## 3 51624 FALSE 2009_10 male 34 " 30-39" 409 White <NA> ## 4 51630 TRUE 2009_10 female 49 " 40-49" 596 White <NA> ## 5 51647 TRUE 2009_10 female 45 " 40-49" 541 White <NA> ## 6 51647 TRUE 2009_10 female 45 " 40-49" 541 White <NA> ## 7 51647 TRUE 2009_10 female 45 " 40-49" 541 White <NA> ## 8 51656 FALSE 2009_10 male 58 " 50-59" 707 White <NA> ## 9 51657 FALSE 2009_10 male 54 " 50-59" 654 White <NA> ## 10 51666 FALSE 2009_10 female 58 " 50-59" 700 Mexican <NA> ## # ℹ 5,738 more rows ## # ℹ 68 more variables: Education <fct>, MaritalStatus <fct>, HHIncome <fct>, ## # HHIncomeMid <int>, Poverty <dbl>, HomeRooms <int>, HomeOwn <fct>, ## # Work <fct>, Weight <dbl>, Length <dbl>, HeadCirc <dbl>, Height <dbl>, ## # BMI <dbl>, BMICatUnder20yrs <fct>, BMI_WHO <fct>, Pulse <int>, ## # BPSysAve <int>, BPDiaAve <int>, BPSys1 <int>, BPDia1 <int>, BPSys2 <int>, ## # BPDia2 <int>, BPSys3 <int>, BPDia3 <int>, Testosterone <dbl>, … We can inspect the resulting table. Note that there are 5,748 observations in the tibble. 
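The preprocessing chunk itself is not shown in this text. A sketch consistent with the three steps described above, and with the column order of the printed result, might look like the following; the inclusive cut-offs for healthy sleep (7 to 9 hours) and for age (18 to 60) are assumptions based on the definitions quoted earlier.

```r
# Sketch of the described preprocessing (a reconstruction, not the book's code).
NHANES_relevant <- NHANES |>
  drop_na(SleepHrsNight) |>                       # step 1: drop missing sleep values
  filter(Age >= 18, Age <= 60) |>                 # step 2: adults aged 18 to 60
  mutate(healthy_sleep = SleepHrsNight >= 7 &
                         SleepHrsNight <= 9) |>   # step 3: healthy amount of sleep
  relocate(healthy_sleep, .after = ID)            # match the printed column order

NHANES_relevant
```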
## # A tibble: 5,748 × 77 ## ID healthy_sleep SurveyYr Gender Age AgeDecade AgeMonths Race1 Race3 ## <int> <lgl> <fct> <fct> <int> <fct> <int> <fct> <fct> ## 1 51624 FALSE 2009_10 male 34 " 30-39" 409 White <NA> ## 2 51624 FALSE 2009_10 male 34 " 30-39" 409 White <NA> ## 3 51624 FALSE 2009_10 male 34 " 30-39" 409 White <NA> ## 4 51630 TRUE 2009_10 female 49 " 40-49" 596 White <NA> ## 5 51647 TRUE 2009_10 female 45 " 40-49" 541 White <NA> ## 6 51647 TRUE 2009_10 female 45 " 40-49" 541 White <NA> ## 7 51647 TRUE 2009_10 female 45 " 40-49" 541 White <NA> ## 8 51656 FALSE 2009_10 male 58 " 50-59" 707 White <NA> ## 9 51657 FALSE 2009_10 male 54 " 50-59" 654 White <NA> ## 10 51666 FALSE 2009_10 female 58 " 50-59" 700 Mexican <NA> ## # ℹ 5,738 more rows ## # ℹ 68 more variables: Education <fct>, MaritalStatus <fct>, HHIncome <fct>, ## # HHIncomeMid <int>, Poverty <dbl>, HomeRooms <int>, HomeOwn <fct>, ## # Work <fct>, Weight <dbl>, Length <dbl>, HeadCirc <dbl>, Height <dbl>, ## # BMI <dbl>, BMICatUnder20yrs <fct>, BMI_WHO <fct>, Pulse <int>, ## # BPSysAve <int>, BPDiaAve <int>, BPSys1 <int>, BPDia1 <int>, BPSys2 <int>, ## # BPDia2 <int>, BPSys3 <int>, BPDia3 <int>, Testosterone <dbl>, … We will apply bootstrapping to the NHANES_relevant tibble to estimate an unknown parameter: the proportion of healthy sleepers in the American population. Let us visualize the distribution of healthy sleepers using a bar chart. ggplot(NHANES_relevant) + geom_bar(aes(x = healthy_sleep), col="grey", fill = "darkcyan", bins = 20) The proportion of healthy sleepers is the fraction of TRUE’s in the healthy_sleep column. Recall that Boolean variables are just 1’s and 0’s. Thus, we can sum the number of TRUE’s and divide by the total number of subjects. This is equivalent to computing the mean for the healthy_sleep column. ## # A tibble: 1 × 1 ## prop ## <dbl> ## 1 0.601 We are now ready to bootstrap from this random sample. Recall that one_sample_value will perform the bootstrap for us. We will replicate the bootstrap process a large number of times, say 10,000, so that we can plot a sampling histogram of the bootstrapped medians. # Do the bootstrap! bstrap_means <- replicate(n = 10000, one_sample_value(NHANES_relevant, healthy_sleep, mean)) As before, we will identify the 95% confidence interval. Here is the interval: desired_area <- 0.95 middle <- quantile(bstrap_means, 0.5 + (desired_area / 2) * c(-1, 1), type = 1) ## 2.5% 97.5% ## 0.5887265 0.6139527 Let us plot the sampling histogram and annotate the interval on this histogram. df <- tibble(bstrap_means) ggplot(df, aes(x = bstrap_means, y = after_stat(density))) + geom_histogram(col="grey", fill = "darkcyan", bins = 13) + geom_segment(aes(x = middle[1], y = 0, xend = middle[2], yend = 0), size = 2, color = "salmon") + labs(x = "Proportion of healthy sleepers") ## Warning in geom_segment(aes(x = middle[1], y = 0, xend = middle[2], yend = 0), : All aesthetics have length 1, but the data has 10000 rows. ## ℹ Please consider using `annotate()` or provide this layer with data containing ## a single row. This looks a lot like what we saw in the previous section, with one key difference: there is no dot indicating where the parameter is! We do not know where the dot will fall or if it is even on this Statistics does not promise clairvoyance. It is a tool for quantifying uncertainty. What we have obtained is a 95% confidence interval of estimates. Meaning, this bootstrap process will be successful in capturing the parameter about 95% of the time. 
But that also leaves a 5% chance where we are totally off. Can we control the level of uncertainty? 7.4.3 Levels of uncertainty: 80% and 99% confidence intervals So far we have examined the 95% confidence interval. Let us see what happens to the interval of estimates when we increase our level of confidence. We will examine a 99% confidence interval. desired_area <- 0.99 middle <- quantile(bstrap_means, 0.5 + (desired_area / 2) * c(-1, 1), type = 1) ## 0.5% 99.5% ## 0.5847251 0.6179541 df <- tibble(bstrap_means) ggplot(df, aes(x = bstrap_means, y = after_stat(density))) + geom_histogram(col="grey", fill = "darkcyan", bins = 13) + geom_segment(aes(x = middle[1], y = 0, xend = middle[2], yend = 0), size = 2, color = "salmon") + labs(x = "Proportion of healthy sleepers") ## Warning in geom_segment(aes(x = middle[1], y = 0, xend = middle[2], yend = 0), : All aesthetics have length 1, but the data has 10000 rows. ## ℹ Please consider using `annotate()` or provide this layer with data containing ## a single row. The interval is much wider! The proportion of healthy sleepers in the population goes from about 58.4% to 61.7%. This points to a trade-off: as we increase our confidence in the interval of estimates, this is compensated by making the interval wider. That is, a confidence interval generated by this resampling process has a chance of missing the parameter only 1% of the time. That probability does not correspond to the specific interval we found, but to the process that generated said interval. For the \([0.584, 0.617]\) interval we found, the parameter either sits on the interval or not. Let us move in the other direction and try a 80% confidence interval. desired_area <- 0.80 middle <- quantile(bstrap_means, 0.5 + (desired_area / 2) * c(-1, 1), type = 1) ## 10% 90% ## 0.5929019 0.6096033 df <- tibble(bstrap_means) ggplot(df, aes(x = bstrap_means, y = after_stat(density))) + geom_histogram(col="grey", fill = "darkcyan", bins = 13) + geom_segment(aes(x = middle[1], y = 0, xend = middle[2], yend = 0), size = 2, color = "salmon") + labs(x = "Proportion of healthy sleepers") ## Warning in geom_segment(aes(x = middle[1], y = 0, xend = middle[2], yend = 0), : All aesthetics have length 1, but the data has 10000 rows. ## ℹ Please consider using `annotate()` or provide this layer with data containing ## a single row. This interval is much narrower than the 99% interval and estimates 59.3% to 60.9% healthy sleepers in the population. This is a much tighter set of estimates, but we traded a narrower interval for lower confidence. This interval has a chance of missing the parameter 20% of the time. 7.4.4 Confidence intervals as an hypothesis test Confidence intervals can be used for more than trying to estimate a population parameter. One popular use case for the confidence interval is something we saw in the previous chapter: the hypothesis Let us reconsider the 95% confidence interval we obtained. The proportion of healthy sleepers in the population goes from 58.8% to 61.4%. Suppose that a researcher is interested in testing the following hypothesis: Null hypothesis. The proportion of healthy sleepers in the population is 61%. Alternative hypothesis. The proportion of healthy sleepers in the population is not 61%. If we were testing this hypothesis at the 95% significance level, we would fail to reject the null hypothesis. Why? The value supplied by the null (61%) sits on our 95% confidence interval for the population proportion. Therefore, at this level of significance, this value is plausible. 
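In code, this decision is just a comparison against the endpoints of the interval. The sketch below recomputes the 95% interval (since `middle` currently holds the 80% interval) and checks the null value; `ci95` and `null_prop` are local names introduced only for this example.

```r
# Sketch: is the null value inside the 95% confidence interval?
ci95 <- quantile(bstrap_means, 0.5 + (0.95 / 2) * c(-1, 1), type = 1)
null_prop <- 0.61

null_prop >= ci95[1] & null_prop <= ci95[2]   # TRUE here, so we fail to reject
```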
If we were to lower our confidence (to say 90% or 80%), the conclusion could have been different. This raises an important point about cut-offs: some fields demand a high level of significance for a result to be accepted by its scientific community; other fields may require much less convincing. For instance, experimental studies in Physics demand significance levels at 99.9% or even 99.99% for a result to be even considered publishable. It is not hard to imagine why: findings in Physics are usually axiomatic and rejecting a null hypothesis implies the discovery of phenomena in nature. A 99.99% confidence interval would guarantee that such a discovery is a fluke only 0.01% of the time. The basis for using confidence intervals as a hypothesis test is rooted in statistical theory. In practice, we simply check whether the value supplied by the null hypothesis sits on the confidence interval or not. 7.4.5 Final remarks: resampling with care We end this section with some points to keep in mind when applying resampling. • Avoid introducing bias into the sample that is used as input for resampling. The sampling plan of simple random sampling will usually work best. And, even with simple random samples, it is possible to draw a “weird” original sample such that the confidence interval generated using it fails to capture the parameter. • When the size of a random sample is moderately sized enough, the chance of the bootstrapped sample being identical to it is extremely rare. Therefore, you should aim to work with large random • Resampling does not work well when estimating extreme values, for instance, estimating the minimum or maximum value of a population. • The distribution of the statistic should look roughly “bell” shaped. The histogram of the resampled statistics will be a hint. 7.5 Exercises Be sure to install and load the following packages into your R environment before beginning this exercise set. Question 1. The following vector lucky_numbers contains several numbers: lucky_numbers <- c(5, 10, 17, 25, 31, 36, 43) ## [1] 5 10 17 25 31 36 43 Using the function quantile as shown in the textbook, determine the lucky number that results from the order statistics: (1) min, (2) max, and (3) median. Question 2 The University of Lost World has conducted a staff and faculty survey regarding their most favorite rock bands. The university received 200 votes, which are summarized as follows: • Pink Floyd (35%) • Led Zeppelin (22%) • Allman Brothers Band (20%) • Yes (12%) • Uncertain (11%) In the following, we will use "P", "L", "A", "Y", and "U" to refer to the artists. The following tibble rock_bands summarizes the information: rock_bands <- tibble( band_initial = c("P", "L", "A", "Y", "U"), proportion = c(0.35, 0.22, 0.20, 0.12, 0.11), votes = proportion * 200 ## # A tibble: 5 × 3 ## band_initial proportion votes ## <chr> <dbl> <dbl> ## 1 P 0.35 70 ## 2 L 0.22 44 ## 3 A 0.2 40 ## 4 Y 0.12 24 ## 5 U 0.11 22 These proportions represent just a sample of the population of University of Lost World. We will attempt to estimate the corresponding population parameters - the proportion of listening preference for each rock band in the population of University of Lost World staff and faculty. We will use confidence intervals to compute a range of values that reflects the uncertainty of our estimate. • Question 2.1 Using rock_bands, generate a tibble votes containing 200 rows corresponding to the votes. 
You can group by band_initial and repeat each band’s row votes number of times by using rep (1, each = votes) within a slice() call (remember computing within groups?). Then form a tibble with a single column named vote. Here is what the first few rows of this tibble should look like: We will conduct bootstrapping using the tibble votes. • Question 2.2 Write a function one_resampled_statistic(num_resamples) that receives the number of samples to sample with replacement (why not without?) from votes. The function resamples from the tibble votes num_resamples number of times and then computes the proportion of votes for each of the 5 rock bands. It returns the result as a tibble in the same form as rock_bands, but containing the resampled votes and proportions from the bootstrap. Here is one possible tibble after running one_resampled_statistic(100). The answer will be different each time you run this! vote votes proportion A 23 0.23 L 19 0.19 P 40 0.40 U 7 0.07 Y 11 0.11 one_resampled_statistic <- function(num_resamples) { one_resampled_statistic(100) # a sample call • Question 2.3 Let us set two names, num_resamples and trials, to use when conducting the bootstrapping. trials is the desired number of resampled proportions to simulate for each of the bands. This can be set to some large value; let us say 1,000 for this experiment. But what value should num_resamples be set to, which will be the argument passed to one_resampled_statistic (num_resamples) in the next step? The following code chunk conducts the bootstrapping using your one_resampled_statistic() function and the names trials and num_resamples you created above. It stores the results in a vector bstrap_props_tibble <- replicate(n = trials, simplify = FALSE) |> • Question 2.4 Generate an overlaid histogram using bstrap_props_tibble, showing the five distributions for each band. Be sure to use a positional adjustment to avoid stacking in the bars. You may also wish to set an alpha to see each distribution better. Use 20 for the number of bins. We can see significant difference in the popularity between some bands. For instance, we see that the bootstrapped proportions for \(P\) is significantly higher than \(Y\)’s by virtue of no overlap between their two distributions; conversely, \(U\) and \(Y\) overlap each other completely showing no significant preference for \(U\) over \(Y\) and vice versa. Let us formalize this intuition for these three bands using an approximate 95% confidence interval. • Question 2.5 Define a function cf95 that receives a vector vec and returns the approximate “middle 95%” using quantile. Let us examine the 95% confidence intervals of the bands \(P\), \(Y\), and \(U\), respectively. • Question 2.6 By looking at the upper and lower endpoints of each interval, and the overlap between intervals (if any), can you say whether \(P\) is more popular than \(Y\) or \(U\)? How about for \(Y\), is \(Y\) more popular than \(U\)? • Question 2.7 Suppose you computed the following approximate 95% confidence interval for the proportion of band \(P\) votes. \[ [.285, .42] \] Is it true that 95% of the population of faculty lies in the range \([.285, .42]\)? Explain your answer. • Question 2.8 Can we say that there is a 95% probability that the interval \([.285, .42]\) contains the true proportion of the population who listens to band \(P\)? Explain your answer. 
• Question 2.9 Suppose that you created 80%, 90%, and 99% confidence intervals from one sample for the popularity of band \(P\), but forgot to label which confidence interval represented which percentages. Match the following intervals to the percent of confidence the interval represents. □ \([0.265, 0.440]\) □ \([0.305, 0.395]\) □ \([0.285, 0.420]\) Question 3. Recall the tibble penguins from the package palmerpenguins includes measurements for 344 penguins in the Palmer Archipelago. Let us try using the method of resampling to estimate using confidence intervals some useful parameters of the population. ## # A tibble: 344 × 8 ## species island bill_length_mm bill_depth_mm flipper_length_mm body_mass_g ## <fct> <fct> <dbl> <dbl> <int> <int> ## 1 Adelie Torgersen 39.1 18.7 181 3750 ## 2 Adelie Torgersen 39.5 17.4 186 3800 ## 3 Adelie Torgersen 40.3 18 195 3250 ## 4 Adelie Torgersen NA NA NA NA ## 5 Adelie Torgersen 36.7 19.3 193 3450 ## 6 Adelie Torgersen 39.3 20.6 190 3650 ## 7 Adelie Torgersen 38.9 17.8 181 3625 ## 8 Adelie Torgersen 39.2 19.6 195 4675 ## 9 Adelie Torgersen 34.1 18.1 193 3475 ## 10 Adelie Torgersen 42 20.2 190 4250 ## # ℹ 334 more rows ## # ℹ 2 more variables: sex <fct>, year <int> • Question 3.1 First, let us focus on estimating the mean body mass of the penguins, available in the variable body_mass_g. Form a tibble named penguins_pop_df that is identical to penguins but does not contain any missing values in the variable body_mass_g. We will imagine the 342 penguins in penguins_pop_df to be the population of penguins of interest. Of course, direct access to the population is almost never possible in a real-world setting. However, for the purposes of this question, we will claim clairvoyance and see how close the method of resampling approximates some population parameter, i.e., the mean body mass of penguins in the Palmer Archipelago. • Question 3.2 What is the mean body mass of penguins in penguins_pop_df? Store it in pop_mean. • Question 3.3 Draw a sample without replacement from the population in penguins_pop_df. Because samples can be expensive to collect in real settings, set the sample size to 50. The sample in one_sample is what we will use to resample from a large number of times. We saw in the textbook a function that resamples from a tibble, computes a statistic from it, and returns it. Following is the function: one_sample_value <- function(df, label, statistic) { sample_value <- df |> pull({{label}}) |> sample(replace = TRUE) |> • Question 3.4 What is the size of the resampled tibble when one_sample is passed as an argument? Assign your answer to the name resampled_size_answer. 1. 342 2. 684 3. 50 4. 100 5. 1 • Question 3.5 Using replicate, create 1,000 resampled mean statistics from one_sample using the variable body_mass_g. Assign your answer to the name resampled_means. • Question 3.6 Let us combine the steps Question 3.4 and Question 3.5 into a function. Write a function resample_mean_procedure that takes no arguments. The function draws a sample of size 50 from the population (Question 3.4), and then generates 1,000 resampled means from it (Question 3.5) which are then returned. • Question 3.7 Write a function get_mean_quantile that takes a single argument desired_area. The function performs the resampling procedure using resample_mean_procedure and returns the middle desired_area interval (e.g., 90% or 95%) as a vector. Here is an example call that obtains an approximate 90% confidence interval. Also shown is the population mean. 
Does your computed interval capture the parameter? Try running the cell a few times. The interval printed should be different each time you run the code chunk. • Question 3.8 Repeat the get_mean_quantile procedure to obtain 100 different approximate 90% confidence intervals. Assign the intervals to the name mean_intervals. The following code chunk organizes your results into a tibble named interval_df. interval_df <- tibble( replication = 1:100, left = mean_intervals[1,], right = mean_intervals[2,] • Question 3.9 Under an approximate 90% confidence interval, how many of the above 100 intervals do you expect captures the population mean? Use what you know about confidence intervals to answer this; do not write any code to determine the answer. The following code chunk visualizes your intervals with a vertical line showing the parameter: ggplot(interval_df) + geom_segment(aes(x = left, y = replication, xend = right, yend = replication), color = "magenta") + geom_vline(xintercept = pop_mean, color = "red") • Question 3.10 Now feed the tibble interval_df to a filter that keeps only those rows whose approximate 90% confidence interval includes pop_mean. How many of those intervals actually captured the parameter? Store the number in number_captured. Question 4. This problem is a continuation of Question 3. We will now streamline the previous analysis by generalizing the functions we wrote. This way we can try estimating different parameters and compare the results. • Question 4.1 Let us first generalize the resample_mean_procedure from Question 3.6. Call the new function resample_procedure. The function should receive the following arguments: □ pop_df, a tibble □ label, the variable under examination. Recall the use of {{ to refer to it properly. □ initial_sample_size, the sample size to use for the initial draw from the population □ n_resamples, the number of resampled statistics to generate □ stat, the statistic function The function returns a vector containing the resampled statistics. resample_procedure <- function(pop_df, stat) { • Question 4.2 Generalize the function get_mean_quantile from Question 3.7. Call the new function get_quantile. This function receives the same arguments as resample_mean_procedure with the addition of one more argument, desired_area, the interval width. The function then calls resample_procedure to obtain the resampled statistics. The function returns the middle quantile range of these statistics according to desired_area, e.g., the “middle 90%” if desired_area is 0.9. get_quantile <- function(pop_df, desired_area) { • Question 4.3 We can now package all the actions into one function. Call the function conf_interval_test. The function receives the same arguments as get_quantile with one new argument, num_intervals, the number of confidence intervals to generate. The function performs the following actions (in order): □ Compute the population parameter from pop_df (assuming access to the population is possible in pop_df) by running the function stat_func on the variable label. Recall the use of {{ to refer to label properly. Assign this number to the name pop_stat. □ Obtain num_intervals many confidence intervals by repeated calls to the function get_quantile. Assign the resulting intervals to the name intervals. □ Arrange the results in intervals into a tibble named interval_df with three variables: replication, left, and right. □ Print the number of confidence intervals that capture the parameter pop_stat. 
□ Visualize the intervals with a vertical red line showing where the parameter is. NOTE: If writing this function seems daunting, don’t worry! All of the code you need is already written. You should be able to simply copy your work from this question and from the steps in Question 3. conf_interval_test <- function(pop_df, n_resamples, stat_func, desired_area, num_intervals) { Let us now try some experiments. • Question 4.4 Run conf_interval_test on penguins_pop_df to estimate the mean body mass in the population using the variable body_mass_g. Set the initial draw size to 50 and number of resampled statistics to 1000. Generate 100 different approximate 90% confidence intervals. • Question 4.5 Repeat Question 4.4, this time estimating the max body mass in the population instead of the mean. • Question 4.6 Repeat Question 4.4, this time increasing the initial draw size. First try 100, then 200, and 300. • Question 4.7 For the max-based estimates, why is it that so many of the 90% confidence intervals are unsuccessful in capturing the parameter? • Question 4.8 For the mean-based estimates, at some point when increasing the initial draw size from 50 to 300, all of the 100 differently generated confidence intervals capture the parameter. Given what we know about 90% confidence intervals, how can this be possible? Question 5 Let’s return to the College of Groundhog CSC1234 simulation from Question 6 in Chapter 8. We evaluated the claim that the final scores of students from Section B were significantly lower than those from Sections A and C by means of a permutation test. Permutation analysis seeks to quantify what the null distribution looks like. For this reason, it tries to break whatever structure may be present in the dataset and quantify the patterns we would expect to see under a chance model. Recall the tibble csc1234 from the edsdata package: ## # A tibble: 388 × 2 ## Section Score ## <chr> <dbl> ## 1 A 100 ## 2 A 100 ## 3 A 100 ## 4 A 100 ## 5 A 100 ## 6 A 100 ## 7 A 100 ## 8 A 100 ## 9 A 100 ## 10 A 100 ## # ℹ 378 more rows • Question 5.1 How many students are in each section? Form a tibble that gives an answer and assign the resulting tibble to the name section_counts. There is another way we can approach the analysis. We can quantify the uncertainty in the mean score difference between two sections by estimating a confidence interval with the resampling technique. Under this scheme, we assume that each section performs identically and that the student scores available in each section (116 from A, 128 from B, and 144 from C) is a sample from some larger population of student scores for the CSC1234 course, which we do not have access to. Thus, we will sample with replacement from each section. Then, as with the permutation exercise, we can compute the mean difference in scores for each pair of sections (“A-B”, “C-B”, “C-A”) using the bootstrapped sample. The interval we obtain from this process can be used to test the hypothesis that the average score difference is different from chance. • Question 5.2 Recall the work from Question 6 in Chapter 8. Copy over your work for creating the function mean_differences and the observed group mean differences in observed_differences. • Question 5.3 Generate an overlaid histogram for Score from csc1234 showing three distributions in the same plot, the scores for Section A, for Section B, and for Section C. Use 10 bins and a dodge positional adjustment this time to compare the distributions. Resampling calls for sampling with replacement. 
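Before turning to the resampling itself, Question 5.3 above asks for an overlaid, dodged histogram of the three sections; one way to draw it is sketched below (assuming ggplot2 and the edsdata package providing csc1234 are loaded — the exact aesthetics are a matter of choice).

library(ggplot2)
library(edsdata)   # provides the csc1234 tibble referred to above

# Overlaid histogram of Score for Sections A, B and C: 10 bins, dodged bars
# so that the three distributions can be compared side by side.
ggplot(csc1234) +
  geom_histogram(aes(x = Score, fill = Section),
                 bins = 10, position = "dodge")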
Suppose that we are to resample scores with replacement from the “Section A” group, then likewise for the “Section B” group, and finally, the “Section C” group. Then we compute the difference in means between the groups (A-B, C-B, C-A). Would the bulk of this distribution be centered around 0? Let’s find out! • Question 5.4 State a null and alternative hypothesis for this problem. Let us use resampling to build a confidence interval and address the hypothesis. • Question 5.5 Write a function resample_tibble that takes a tibble as its single argument, e.g., csc1234. The function samples Score with replacement WITHIN each group in Section. It overwrites the variable Score with the result of the sampling. The resampled tibble is returned. resample_tibble(csc1234) # an example call • Question 5.6 Write a function csc1234_one_resample that takes no arguments. The function resamples from csc1234 using the function resample_tibble. It then computes the mean difference in scores using the mean_differences function you wrote from the permutation test. The function returns a one-element list containing a vector with the computed differences. csc1234_one_resample() # an example call • Question 5.7 Using replicate, generate 10,000 resampled mean differences. Store the resulting vector in the name resampled_differences. The following code chunk organizes your results into a tibble differences_tibble: differences_tibble <- tibble( `A-B` = map_dbl(resampled_differences, function(x) x[1]), `C-B` = map_dbl(resampled_differences, function(x) x[2]), `C-A` = map_dbl(resampled_differences, function(x) x[3])) |> names_to = "Section Pair", values_to = "Statistic") |> mutate(`Section Pair` = factor(`Section Pair`, levels=c("A-B", "C-B", "C-A"))) • Question 5.8 Form a tibble named section_intervals that gives an approximate 95% confidence interval for each pair of course sections in resampled_differences. The resulting tibble should look Section Pair left right A-B … … C-A … … C-B … … To accomplish this, use quantile to summarize a grouped tibble and then a pivot function. Don’t forget to ungroup! The following plots a histogram of your results for each course section pair. It then annotates each histogram with the approximate 95% confidence interval you found. differences_tibble |> ggplot() + geom_histogram(aes(x = Statistic, y = after_stat(density)), color = "gray", fill = "darkcyan", bins = 20) + geom_segment(data = section_intervals, aes(x = left, y = 0, xend = right, yend = 0), size = 2, color = "salmon") + facet_wrap(~`Section Pair`) Note how the observed mean score differences in observed_differences fall squarely in its respective interval (if you like, plot the points on your visualization!). • Question 5.9 Draw the conclusion of the hypothesis test for each of the three confidence intervals. Do we reject the null hypothesis? If not, what conclusion can we make? • Question 5.10 Suppose that the 95% confidence interval you found for “A-B” is \([-9.35, -1.95]\). Does this mean that 95% of the student scores were between \([-9.35, -1.95]\)? Why or why not?
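For reference, here is one possible shape for the functions asked for in Questions 5.5 and 5.6, plus the organizing step of Question 5.7. This is a sketch, not the official solution: it assumes mean_differences() from the Chapter 8 exercise returns the three pairwise mean differences in the order A-B, C-B, C-A, and it assumes the differences_tibble chunk shown above lost a pivot_longer() call during extraction (the names_to/values_to arguments have nothing attached to them as printed).

library(dplyr)
library(tidyr)
library(purrr)

# Question 5.5: resample Score with replacement WITHIN each Section group,
# overwriting Score with the resampled values.
resample_tibble <- function(df) {
  df |>
    group_by(Section) |>
    mutate(Score = sample(Score, replace = TRUE)) |>
    ungroup()
}

# Question 5.6: one resample of csc1234 reduced to the three pairwise mean
# differences, returned as a one-element list containing the vector.
csc1234_one_resample <- function() {
  resampled <- resample_tibble(csc1234)
  list(mean_differences(resampled))
}

# Question 5.7: generate the resampled differences and organize them,
# restoring the (assumed) missing pivot_longer() in the chunk above.
resampled_differences <- replicate(10000, csc1234_one_resample())

differences_tibble <- tibble(
  `A-B` = map_dbl(resampled_differences, function(x) x[1]),
  `C-B` = map_dbl(resampled_differences, function(x) x[2]),
  `C-A` = map_dbl(resampled_differences, function(x) x[3])) |>
  pivot_longer(everything(),
               names_to = "Section Pair", values_to = "Statistic") |>
  mutate(`Section Pair` = factor(`Section Pair`, levels = c("A-B", "C-B", "C-A")))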
{"url":"https://ds4world.cs.miami.edu/quantifying-uncertainty","timestamp":"2024-11-05T10:24:12Z","content_type":"text/html","content_length":"168934","record_id":"<urn:uuid:9b90640b-4e1a-4c31-81fa-736358611589>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00143.warc.gz"}
Addition for Class 1: Learn Definition, Types and Examples Learning and Doing Addition for Class 1 Addition happens when you put two or more numbers together to find the total amount. The result of adding two or more numbers is called the sum. So, you would add your 4 cookies and your friend's 4 cookies to find that the sum is 8 cookies. Addition is a mathematical operation that is used to add numbers. The sum of the provided numbers is the outcome of adding the given numbers. For instance, if we add 2 and 3, (2 + 3), the result is 5. We used the addition function on two numbers, 2 and 3, to get the sum, 5. In this article, we are going to see problems based on addition like the word problem of addition for class 1. Addition Symbol Different symbols are used in Mathematics. One of the most used arithmetic symbols is the addition symbol. We read about adding two integers, 2 and 3, in the definition of addition above. If we look at the addition pattern (2 + 3 = 5), we can see that the symbol (+) joins the two numbers and completes the supplied phrase. The addition symbol is made up of one horizontal and one vertical line. It is also known as the plus symbol (+) or the addition sign (+). Simple Addition for Class 1 The basic formula of easy addition for Class 1 or the mathematical equation of addition can be explained as follows. Here, 2 and 5 are the addends and 7 is the sum. Simple Addition 2 Digit Addition for Class 1 2-digit addition for Class 1 is a simple form of addiction in which numbers are placed according to their place value of ones and tens and then added. Once both columns are added, we obtain the final sum. For finding a total of two single-digit numbers, just count the first number on your fingers and then count the second number on your finger, and the sum of the total fingers is the result. 2-Digit Addition for Class 1 Carryover Addition for Class 1 When adding numbers, if the sum of the addends in any of the columns is more than 9, we regroup this amount into tens and ones. The tens digit of the sum is then carried over to the preceding column, and the one's digit of the sum is written in that column. In other words, we just write the number in the 'ones place digit' in that column, while moving the number in the 'tens place digit' to the column to the immediate left. With the help of an example, let us learn how to add two or more numbers in carrying over addition for Class 1. Example: Add 3475 and 2865. Solution: Let us follow the given steps: Step 1: Start with the digits in one place. (5 + 5 = 10). Here the sum is 10. The tens digit of the sum, that is, 1, will be carried to the preceding column. Step 2: Add the digits in the tens column along with the carryover 1. This means, 1 (carry-over) + 7 + 6 = 14. Here the sum is 14. The tens digit of the sum, that is, 1, will be carried to the hundreds place. Step 3: Now, add the digits in the hundreds place along with the carryover digit 1. This means, 1 (carry-over) + 4 + 8 = 13. Here the sum is 13. The tens digit of the sum, that is, 1, will be carried to the thousands place. Step 4: Now, add the digits in the thousands place along with the carryover digit 1, that is, 1 (carry-over) + 3 + 2 = 6 Step 5: Therefore, the sum of 3475 + 2865 = 6340 Addition Story Sums for Class 1 One morning, your mother called you and gave you 4 chocolates. After that, your father called you and gave you 5 chocolates. Amazing isn’t it? Now you have so many chocolates. Can you tell me how many chocolates you have? 
If you have to answer this question then you need to use addition. Addition is basically counting the total number of objects you have. While adding two numbers, count forward the numbers equal to the second number after the first number. Addition will be easy if we take the bigger number and then count forward the smaller number. While adding 5 + 4 counting four numbers after 5 is easier than counting five numbers after 4. So the answer is 9 chocolates. Learning becomes easy if you imagine stories in your mind and then think accordingly. Solving the word problems on addition for Class 1 will help you to understand addition in the real world. Example 1: Peter has two boxes of Chocolates. There are 25 chocolates in one box and 15 chocolates in the other box. How many Chocolates does Peter have in total? Solution: 25 Chocolates in one box and 15 chocolates in the other box makes a total of 40 chocolates in both boxes. Example 2: Sam went to a park, 8 boys and 10 girls were already playing there when Sam came to the park. How many children were playing in the Park when Sam came to the park? Solution: On adding 8 boys and 10 girls together, we get 18. So, when Sam came to the park, he saw 18 Children Playing. Solved Examples Q1. Add 100 + 150 Ans: Adding both we get 250. Q2. Add 27 + 30 Ans: Adding both we get 57. Q3. Add 65 + 10 Ans: Adding both we get 75. Q4. Add 73 + 76 using the carry method. Ans: Adding both we get 149 Q5. Add 59 + 94 using the carry method. Ans: Adding both we get 153 Practice Questions Some practice questions for simple addition Class 1. Practice Question The process of combining two or more items is known as an addition. The procedure of determining the sum of two or more numbers in Mathematics is known as addition. It is a basic mathematical operation that we all employ on a daily basis. Addition (usually denoted by a plus sign +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication, and division. Also, we have seen some word problems related to the addition of numbers. We hope this article will help you in understanding the addition for Class 1. FAQs on Addition for Class 1 1. What are the real-life examples of addition? There are plenty of other examples that we face on a daily basis. If you have 5 apples and a buddy gives you 3 more, the sum of 5 + 3 equals 8. So you have a total of eight apples. Similarly, imagine a class that has 16 females and 13 boys; adding the numbers 16 + 13 yields the total number of students in the class, which is 29. 2. Where do we use addition? In everyday settings, we apply addition. For example, if we want to know how much money we spent on the products we purchased, the time it would take to complete a task or the number of materials needed in preparing anything, we must do the additional process. 3. What are some of the tips and tricks for an addition? The following are some of the additional tips and tricks: • Words like 'put together, 'in all, 'altogether', and 'total' give a clue that you need to add the given numbers. • Start with the larger number and add the smaller number to it. For example, adding 12 to 43 is easier than adding 43 to 12. • Break numbers according to their place values to make addition easier. For example, 22 + 64 can be split as 20 + 2 + 60 + 4. While this looks difficult, it makes mental addition easier. • When adding different digit numbers, make sure to place the numbers one below the other in the correct column of their place value.
{"url":"https://www.vedantu.com/maths/addition-for-class-1","timestamp":"2024-11-10T22:53:41Z","content_type":"text/html","content_length":"214561","record_id":"<urn:uuid:f4513a9e-466f-4207-859d-105cc0bf696f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00175.warc.gz"}
Birds on a field - Physics of Risk

How many birds could potentially fit on a wire? Well, it depends on how "jumpy" they are. In [1] it was shown that the density of birds is approximately \( \frac{1}{2 r + 1} \) in the steady-state regime (here \( r \) is the tolerance distance of the birds). In this post we provide an interactive applet for the two-dimensional case (a field).

Interactive app

This interactive app visualizes a finite discrete field (there are 50x40 spots for the birds to land on). Empty spots are shown in grass green (to imitate the field) or red (if the spot was recently vacated), while occupied spots are shown in dark gray (the most "crow-like" color). As in the previous app, we also plot the time evolution of the fraction (density) of birds on the field. In this particular app we have assumed a circular neighborhood around the arriving bird. Namely, bird \( j \) flies away if $$\sqrt{(x_j - x)^2 + (y_j - y)^2} \leq r .$$ In the above, \( \left( x, y \right) \) are the coordinates of the recently landed bird.

• P. L. Krapivsky, S. Redner. Birds on a Wire. arXiv:2205.00995 [cond-mat.stat-mech].
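The applet itself cannot be reproduced here, but the dynamics described above are easy to simulate offline. The following is a minimal, non-interactive sketch in R of one plausible reading of the rules (a 50x40 discrete field; each arriving bird scares away every bird within Euclidean distance r); the tolerance value and the number of steps are arbitrary illustrative choices, not taken from the applet.

set.seed(1)

nx <- 50; ny <- 40        # field dimensions, as in the applet
r  <- 2                   # tolerance distance (illustrative value)
steps <- 5000

occupied <- matrix(FALSE, nrow = nx, ncol = ny)
density_trace <- numeric(steps)

for (t in seq_len(steps)) {
  # a bird arrives at a uniformly random spot
  x <- sample.int(nx, 1)
  y <- sample.int(ny, 1)

  # every bird already present within distance r of the arrival flies away
  for (i in max(1, x - ceiling(r)):min(nx, x + ceiling(r))) {
    for (j in max(1, y - ceiling(r)):min(ny, y + ceiling(r))) {
      if (occupied[i, j] && sqrt((i - x)^2 + (j - y)^2) <= r) {
        occupied[i, j] <- FALSE
      }
    }
  }

  occupied[x, y] <- TRUE                 # the newcomer settles on its spot
  density_trace[t] <- mean(occupied)     # current fraction of occupied spots
}

# rough steady-state density of birds on the field
mean(tail(density_trace, 1000))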
{"url":"https://rf.mokslasplius.lt/birds-on-field/","timestamp":"2024-11-12T15:53:23Z","content_type":"text/html","content_length":"21458","record_id":"<urn:uuid:87f611eb-30ea-483e-945e-b59bae805a83>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00126.warc.gz"}
s Pure Momentum Sector Fund System A subscriber requested evaluation of Jay’s Pure Momentum Sector Fund System, specified by originator Jay Kaeppel as follows: • At the end of the first month, assign 20% weight to the five of the 40 Fidelity Select Sector funds (excluding Select Gold, FSAGX) with the largest positive returns over the previous 240 trading • At the end of each subsequent month, sell any positions that drop out of the top five and reallocate proceeds equally to their replacements. • If for any month fewer than five funds have positive returns, leave unpopulated positions in cash. This system involves both relative momentum (picking past winners) and absolute or intrinsic momentum (requiring positive past returns). The author states that the publication year for the system is 2001, so we start with 2002 for a test free of data snooping. We accept annual returns for 2002 through (partial) 2015 as reported by the author . We consider two simple benchmarks: (1) buy and hold SPDR S&P 500 (SPY); and, (2) hold SPY when it is above its 10-month simple moving average and 3-month U.S. Treasury bills (T-bills, a proxy for cash) otherwise (SPY-SMA10). The second benchmark is a simple, widely used market timing rule that helps decide whether Jay’s Pure Momentum Sector Fund System outperforms the market because of sector rotation (relative momentum) or market timing (absolute momentum). Using annual returns for Jay’s Pure Momentum System, monthly dividend-adjusted prices and annual returns for SPY and monthly T-bill yields during 2002 through mid-September 2015 (nearly 14 years), we find that: Assumptions for the SPY-SMA10 benchmark are: • An investor can accurately anticipate the SMA10 signal just before each monthly close, and thereby act accordingly at the same close. • For each switch between SPY and cash, debit a switching friction of 0.1%. The following chart compares net cumulative performances of Jay’s Pure Momentum Sector Fund System, SPY and SPY-SMA10 using annual data over the sample period. The starting value of $12,500 derives from five times the $2,500 minimum investment for the Fidelity funds. Respective net performance statistics for Jay’s Pure Momentum Sector Fund System, SPY and SPY-SMA10 are: • Compound annual growth rate (CAGR): 10.8%, 6.1% and 8.7%. • Arithmetic average annual return: 12.3%, 8.2% and 9.7%. • Standard deviation of annual returns: 19.7%, 20.1% and 11.6%. • Annual Sharpe ratio (using average monthly T-bill yield during a year as the annual risk-free rate for that year): 0.56, 0.32 and 0.67. Comparing trajectories suggests that some of the outperformance of Jay’s Pure Momentum Sector Fund System relative to SPY comes from sector rotation (relative momentum) and some comes from market timing (shifting to and from cash based on absolute momentum). The sample is short for assessing annual performance. Also, mutual funds may have loads, minimum investment requirements and (as in this case) early redemption fees. Early redemption fees may affect returns for monthly portfolio reformation as specified for Jay’s Pure Momentum Sector Fund System. Loads, and early redemption rules/fees may change over time. The minimum investment requirement may interfere with the system after drawdowns (as in 2002), such that it is not possible to fund five positions in some months. The performance of SPY-SMA10 is not very sensitive to assumed level of switching friction because there are only 14 switches. 
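To make the selection rules above concrete, here is a rough sketch in R of the monthly selection step. This is not the author's code: select_pure_momentum and its inputs are hypothetical, trailing_240d stands for a named vector of trailing 240-trading-day returns for the eligible funds at one month-end, and the example tickers and returns at the bottom are made up for illustration.

# Hypothetical illustration of the monthly selection rule, not the author's implementation.
# trailing_240d: named numeric vector of each eligible fund's trailing
# 240-trading-day return as of a month-end (Select Gold already excluded).
select_pure_momentum <- function(trailing_240d, n_top = 5, weight = 0.20) {
  winners <- sort(trailing_240d, decreasing = TRUE)   # relative momentum: rank by past return
  winners <- winners[winners > 0]                     # absolute momentum: positive returns only
  picks   <- head(names(winners), n_top)              # keep at most five funds

  weights <- setNames(rep(weight, length(picks)), picks)
  cash    <- 1 - sum(weights)                         # unfilled slots stay in cash
  c(weights, cash = cash)
}

# Made-up trailing returns for a handful of Fidelity Select funds:
example_returns <- c(FSPTX = 0.18, FSPHX = 0.12, FSENX = -0.03,
                     FSRPX = 0.07, FSCSX = 0.22, FSAVX = -0.10)
select_pure_momentum(example_returns)   # four positive funds at 20% each, 20% cash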
How does the performance of Jay’s Pure Momentum Sector Fund System compare with the conceptually similar “Simple Sector ETF Momentum Strategy” with market timing rule overlay? The next chart compares net cumulative performance of Jay’s Pure Momentum Sector Fund System to that of the Simple Sector ETF Momentum Strategy with an S&P 500 Index SMA10 overlay. Differences between the two strategies are: • The former considers 40 Fidelity sector mutual funds, while the latter considers only nine SPDR sector exchange-traded funds (ETF). • The former looks for the top five funds based on past 240-day returns, while the latter looks only for the top fund based on past six-month returns. • The former shifts partly (fully) to cash when only one to four (zero) winner funds have positive past returns, while the latter shifts completely to cash when the S&P 500 Index is below its • The former incurs no switching frictions (ignoring any early redemption fees), while the latter incurs a switching friction of 0.1% for each change in holdings ($12.50 based on initial funding). • The former is completely out-of-sample, while the latter is partly backtested and partly out-of-sample. Based on terminal values, net CAGRs for Jay’s Pure Momentum Sector Fund System and the Simple Sector ETF Momentum Strategy with SMA10 overlay are 10.8% and 9.7%, respectively. Net arithmetic average annual returns are 12.3% and 10.4%, respectively, with standard deviations 19.7% and 14.3% and net annual Sharpe ratios 0.56 and 0.64. “Simple Sector ETF Momentum Strategy Robustness/Sensitivity Tests” indicates that the latter strategy is not very robust to the length of the ranking interval, but that a six-month ranking interval produces about the same outcome as a 240-day (11-month or 12-month) ranking interval. In summary, evidence from simple tests on limited out-of-sample data suggests that Jay’s Pure Momentum Sector Fund System outperforms the broad U.S. stock market due partly to sector rotation and partly to market timing, but that it does not outperform simpler strategies based on Sharpe ratio. Cautions regarding findings include: • As noted, the sample period is short for evaluation of annual performance, especially in terms of number of bear markets to be avoided via market timing. • As noted, Fidelity mutual fund minimum investment requirements and early redemption fees may interfere with precise implementation of Jay’s Pure Momentum Sector Fund System. These and other potentially interfering rules may have changed during the sample period. • As noted, the Simple Sector ETF Momentum Strategy is partly in-sample with a near-optimal ranking interval. In other words, it impounds some data snooping bias, tending to overstate out-of-sample • There may be some bias in the results for Jay’s Pure Momentum Sector Fund System because the author may choose to write about it after times of outperformance.
{"url":"https://www.cxoadvisory.com/momentum-investing/assessing-jays-pure-momentum-system/","timestamp":"2024-11-13T12:37:08Z","content_type":"application/xhtml+xml","content_length":"147324","record_id":"<urn:uuid:6321a555-2742-48b5-b639-ee9d4ff3c3fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00378.warc.gz"}
Tutor Recruitment - Bristol Tutor Company

Bristol Maths Tutor Recruitment

Are you one of the following:

Maths tutor in Bristol looking for more work?
Maths teacher looking for a new job?
Experienced Maths graduate looking for a Maths job?
A teacher looking to start tutoring in the Bristol area?

If so, then contact Bristol Tutor Company today, as we could have tutoring work for you in Bristol and the surrounding areas. Please note that tutors must comply with the following:

Have a clean DBS check (if you do not have an up-to-date one, we can help you obtain a DBS check).
Hold a UK driving licence with access to a car.
Be educated to degree level, preferably with a PGCE.

We have a number of live vacancies as we are looking to expand further out from the Bristol area. We require all of our Maths tutors to be able to travel to students. We also want our Maths tutors to be passionate about their subject and to deliver the best possible service to our parents and students. Therefore, if you match the above criteria and are looking for a tutoring or teaching job in Bristol, please contact us today for more information.
{"url":"https://bristoltutorcompany.co.uk/services/","timestamp":"2024-11-13T16:23:42Z","content_type":"text/html","content_length":"39820","record_id":"<urn:uuid:72254163-69a1-4110-bd4e-829a4404e5cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00286.warc.gz"}
How to solve this simple non-linear equation numerically with Abs[]?

I was just trying to get into Mathematica a little more, but I've been stuck when trying to solve an equation. The H function models a low-pass filter and I want to find the cut-off frequency.

Z[\[Omega]_] := 1/(\[ImaginaryJ]*\[Omega]*c)
H[\[Omega]_] := Z[\[Omega]]/(r + Z[\[Omega]])
c = 1*^-6;
r = 1000;
LogLogPlot[Abs[H[2 \[Pi]*f]], {f, 1, 1000000}, ImageSize -> Large, AxesOrigin -> {1, 1*^-3}, GridLines -> {{160}, {}}]
NSolve[Abs[H[2 \[Pi]*f]] == 1/Sqrt[2], f, Reals]
NSolve[Abs[H[2 \[Pi]*f]] <= 1/Sqrt[2], f, Reals]
f = 160;
Abs[H[2 \[Pi]*f]] < 1/Sqrt[2]

Plotting the simple diagram worked. Now, when I try to numerically solve this (in)equation, I get this error:

NSolve::nddc: "The system ... contains a nonreal constant -500000 I. With the domain Reals specified, all constants should be real."

Even though the Abs[] around it should get rid of the i. However, when I "manually" test the inequation, I get back True. What am I doing wrong? Thanks for any help.

2 Replies

In[1]:= Z[\[Omega]_] := 1/(\[ImaginaryJ]*\[Omega]*c);
H[\[Omega]_] := Z[\[Omega]]/(r + Z[\[Omega]]);
c = 1*^-6; r = 1000;
Reduce[Abs[H[2 \[Pi]*f]] == 1/Sqrt[2] && f > 0, f]

Out[4]= f == 500/\[Pi]

This gives the result in exact symbolic form (in terms of \[Pi]) rather than a decimal approximation, as long as you don't use a decimal point anywhere.

From the Documentation on NSolve (see the Details section) you can find out that NSolve deals primarily with linear and polynomial equations. You are obviously dealing with a non-polynomial equation due to the Abs[] function. In such cases use the FindRoot function. From the plot we see that the solution is located around 160:

Z[\[Omega]_] := 1/(\[ImaginaryJ]*\[Omega]*c)
H[\[Omega]_] := Z[\[Omega]]/(r + Z[\[Omega]])
c = 1*^-6; r = 1000;
LogLogPlot[{1/Sqrt[2], Abs[H[2 \[Pi]*f]]}, {f, 1, 1000000}, ImageSize -> Large, AxesOrigin -> {1, 1*^-3}, GridLines -> {{160}, {}}]

And this simple line solves your problem:

FindRoot[Abs[H[2 \[Pi]*f]] == 1/Sqrt[2], {f, 100}]
(* {f -> 159.155} *)
{"url":"https://community.wolfram.com/groups/-/m/t/171094","timestamp":"2024-11-08T10:42:28Z","content_type":"text/html","content_length":"101217","record_id":"<urn:uuid:3e64c324-591e-4252-80fd-a879d6de9805>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00713.warc.gz"}
Bayesian statistic Bayesian statistics From Scholarpedia David Spiegelhalter and Kenneth Rice (2009), Scholarpedia, 4(8):5230. doi:10.4249/scholarpedia.5230 revision #185711 [link to/cite this article] Bayesian statistics is a system for describing epistemological uncertainty using the mathematical language of probability. In the 'Bayesian paradigm,' degrees of belief in states of nature are specified; these are non-negative, and the total belief in all states of nature is fixed to be one. Bayesian statistical methods start with existing 'prior' beliefs, and update these using data to give 'posterior' beliefs, which may be used as the basis for inferential decisions. In 1763, Thomas Bayes published a paper on the problem of induction, that is, arguing from the specific to the general. In modern language and notation, Bayes wanted to use Binomial data comprising \ (r\) successes out of \(n\) attempts to learn about the underlying chance \(\theta\) of each attempt succeeding. Bayes' key contribution was to use a probability distribution to represent uncertainty about \(\theta\ .\) This distribution represents 'epistemological' uncertainty, due to lack of knowledge about the world, rather than 'aleatory' probability arising from the essential unpredictability of future events, as may be familiar from games of chance. Modern 'Bayesian statistics' is still based on formulating probability distributions to express uncertainty about unknown quantities. These can be underlying parameters of a system (induction) or future observations (prediction). Bayes' Theorem In its raw form, Bayes' Theorem is a result in conditional probability, stating that for two random quantities \(y\) and \(\theta\ ,\) \[ p(\theta|y) = p(y|\theta) p(\theta) / p(y),\] where \(p(\cdot)\) denotes a probability distribution, and \(p(\cdot|\cdot)\) a conditional distribution. When \(y\) represents data and \(\theta\) represents parameters in a statistical model, Bayes Theorem provides the basis for Bayesian inference. The 'prior' distribution \(p(\theta)\) (epistemological uncertainty) is combined with 'likelihood' \(p(y|\theta)\) to provide a 'posterior' distribution \(p(\theta|y)\) (updated epistemological uncertainty): the likelihood is derived from an aleatory sampling model \(p(y|\theta)\) but considered as function of \(\theta\) for fixed \(y\ . While an innocuous theory, practical use of the Bayesian approach requires consideration of complex practical issues, including the source of the prior distribution, the choice of a likelihood function, computation and summary of the posterior distribution in high-dimensional problems, and making a convincing presentation of the analysis. Bayes theorem can be thought of as way of coherently updating our uncertainty in the light of new evidence. The use of a probability distribution as a 'language' to express our uncertainty is not an arbitrary choice: it can in fact be determined from deeper principles of logical reasoning or rational behavior; see Jaynes (2003) or Lindley (1953). In particular, De Finetti (1937) showed that making a qualitative assumptions of exchangeability of binary observations (i.e. that their joint distribution is unaffected by label-permutation) is equivalent to assuming they are each independent conditional on some unknown parameter \(\theta\ ,\) where \(\theta\) has a prior distribution and is the limiting frequency with which the events occur. 
Use of Bayes' Theorem: a simple example Suppose a hospital has around 200 beds occupied each day, and we want to know the underlying risk that a patient will be infected by MRSA (methicillin-resistant Staphylococcus aureus). Looking back at the first six months of the year, we count \(y=\) 20 infections in 40,000 bed-days. A simple estimate of the underlying risk \(\theta\) would be 20/40,000 \(=\) 5 infections per 10,000 bed-days. This is also the maximum-likelihood estimate, if we assume that the observation \(y\) is drawn from a Poisson distribution with mean \(\theta N\) where \(N = 4\) is the number of bed-days/\(10,000,\) so that \[p(y|\theta) = (\theta N)^y e^{-\theta N}/y!\ .\] However, other evidence about the underlying risk may exist, such as the previous year's rates or rates in similar hospitals which may be included as part of a hierarchical model (see below). Suppose this other information, on its own, suggests plausible values of \(\theta\) of around 10 per 10,000, with 95% of the support for \(\theta\) lying between 5 and 17. This judgement about \(\theta\) may be expressed as a prior probability distribution. Say, for convenience, the Gamma\((a,b)\) family of distributions is chosen to formally describe our knowledge about \(\theta\ .\) This family has density \[p(\theta) = b^a \theta^{a-1}e^{-b\theta}/\Gamma(a)\ ;\] choosing \(a=10\) and \(b=1\) gives a prior distribution with appropriate properties, as shown in Figure 1. Figure 1 also shows a density proportional to the likelihood function, under an assumed Poisson model. Using Bayes Theorem, the posterior distribution \(p(\theta|y)\) is \[\propto \theta^y e^{-\theta N} \theta^{a-1}e^{-b\theta} \propto \theta^{y+a-1}e^{-\theta (N+b)}\ ,\] i.e. a Gamma\((y+a,N+b)\) distribution - this closed-form posterior, within the same parametric family as the prior, is an example of a conjugate Bayesian analysis. Figure 1 shows that this posterior is primarily influenced by the likelihood function but is 'shrunk' towards the prior distribution to reflect that the expectation based on external evidence was of a higher rate than that actually observed. This can be thought of as an automatic adjustment for 'Regression to the mean', in that the prior distribution will tend to counteract chance highs or lows in the data. Prior distributions The prior distribution is central to Bayesian statistics and yet remains controversial unless there is a physical sampling mechanism to justify a choice of \(p(\theta)\ .\) One option is to seek 'objective' prior distributions that can be used in situations where judgemental input is supposed to be minimized, such as in scientific publications. While progress in Objective Bayes methods has been made for simple situations, a universal theory of priors that represent zero or minimal information has been elusive. A complete alternative is the fully subjectivist position, which compels one to elicit priors on all parameters based on the personal judgement of appropriate individuals. A pragmatic compromise recognizes that Bayesian statistical analyses must usually be justified to external bodies and therefore the prior distribution should, as far as possible, be based on convincing external evidence or at least be guaranteed to be weakly informative: of course, exactly the same holds for the choice of functional form for the sampling distribution which will also be a subject of judgement and will need to be justified. 
Bayesian analysis is perhaps best seen as a process for obtaining posterior distributions or predictions based on a range of assumptions about both prior distributions and likelihoods: arguing in this way, sensitivity analysis and reasoned justification for both prior and likelihood become vital. Sets of prior distributions can themselves share unknown parameters, forming hierarchical models. These feature strongly within applied Bayesian analysis and provide a powerful basis for pooling evidence from multiple sources in order to reach more precise conclusions. Essentially a compromise is reached between the two extremes of assuming the sources are estimating (a) precisely the same, or (b) totally unrelated, parameters. The degree of pooling is itself estimated from the data according to the similarity of the sources, but this does not avoid the need for careful judgement about whether the sources are indeed exchangeable, in the sense that we have no external reasons to believe that certain sources are systematically different from others. One of the strengths of the Bayesian paradigm is its ease in making predictions. If current uncertainty about \(\theta\) is summarized by a posterior distribution \(p(\theta|y)\ ,\) a predictive distribution for any quantity \(z\) that depends on \(\theta\) through a sampling distribution \(p(z|\theta)\) can be obtained as follows; \[p(z|y) = \int p(z|\theta) p(\theta|y)\,\,d\theta\] provided that \(y\) and \(z\) are conditionally independent given \(\theta\ ,\) which will generally hold except in time series or spatial models. In the MRSA example above, suppose we wanted to predict the number of infections \(z\) over the next six months, or 40,000 bed-days. This prediction is given by \[p(z|y) = \int \frac{(\theta N)^z e^ {-\theta N}}{z!} \,\,\, \frac{(N+b)^{y+a} \theta^{y+a-1} e^{-\theta (N+b)}}{\Gamma(y+a)} \,\,d\theta = \frac{\Gamma(z+y+a)}{\Gamma(y+a)z!} p^{y+a}(1-p)^z\ ,\] where \(p = (N+b)/(2N+b)\ .\) This Negative Binomial predictive distribution for \(z\) is shown in Figure 2. Making Bayesian Decisions For inference, a full report of the posterior distribution is the correct and final conclusion of a statistical analysis. However, this may be impractical, particularly when the posterior is high-dimensional. Instead, posterior summaries are commonly reported, for example the posterior mean and variance, or particular tail areas. If the analysis is performed with the goal of making a specific decision, measures of utility, or loss functions can be used to derive the posterior summary that is the 'best' decision, given the data. In Decision Theory, the loss function describes how bad a particular decision would be, given a true state of nature. Given a particular posterior, the Bayes rule is the decision which minimizes the expected loss with respect to that posterior. If a rule is admissible (meaning that there is no rule with strictly greater utility, for at least some state of nature) it can be shown to be a Bayes rule for some proper prior and utility function. Many intuitively-reasonable summaries of posteriors can also be motivated as Bayes rules. The posterior mean for some parameter \(\theta\) is the Bayes rule when the loss function is the square of the distance from \(\theta\) to the decision. 
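Returning to the MRSA example, the conjugate results quoted above are easy to reproduce numerically. The following R sketch computes the Gamma posterior and the Negative Binomial predictive distribution using the formulas in the text; the variable names are ours, not from the original article.

a <- 10; b <- 1      # Gamma(a, b) prior for theta (infections per 10,000 bed-days)
y <- 20; N <- 4      # data: y infections in N units of 10,000 bed-days

post_shape <- y + a  # posterior is Gamma(y + a, N + b) = Gamma(30, 5)
post_rate  <- N + b

post_shape / post_rate                           # posterior mean: 6 per 10,000 bed-days
qgamma(c(0.025, 0.975), post_shape, post_rate)   # central 95% posterior interval

# Predictive distribution for the number of infections z in the next 40,000
# bed-days: Negative Binomial with size = y + a and prob = (N + b)/(2N + b),
# matching the closed form given above.
p    <- (N + b) / (2 * N + b)
z    <- 0:60
pred <- dnbinom(z, size = post_shape, prob = p)
sum(z * pred)                                    # predictive mean, close to 24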
As noted, for example, by Schervish (1995), quantile-based credible intervals can be justified as a Bayes rule for a bivariate decision problem, and Highest Posterior Density intervals can be justified as a Bayes rule for a set-valued decision problem. As a specific example, suppose we had to provide a point prediction for the number of MRSA cases in the next 6 months. For every case that we over-estimate, we will lose 10 units of wasted resources, but for every case that we under-estimate we will lose 50 units through having to make emergency provision. Our selected estimate is that \(t\) which will minimise the expected total cost, given by \ [ \sum_{z=0}^{t-1} 10(t-z)p(z|y) + \sum_{z=t+1}^\infty 50(z-t)p(z|y) \] The optimal choice of \(t\) can be calculated to be 30, considerably more than the expected value 24, reflecting our fear of under-estimation. Computation for Bayesian statistics Bayesian analysis requires evaluating expectations of functions of random quantities as a basis for inference, where these quantities may have posterior distributions which are multivariate or of complex form or often both. This meant that for many years Bayesian statistics was essentially restricted to conjugate analysis, where the mathematical form of the prior and likelihood are jointly chosen to ensure that the posterior may be evaluated with ease. Numerical integration methods based on analytic approximations or quadrature were developed in 70s and 80s with some success, but a revolutionary change occurred in the early 1990s with the adoption of indirect methods, notably Monte Carlo Markov Chain). The Monte Carlo method Any posterior distribution \(p(\theta|y)\) may be approximated by taking a very large random sample of realizations of \(\theta\) from \(p(\theta|y)\ ;\) the approximate properties of \(p(\theta|y)\) by the respective summaries of the realizations. For example, the posterior mean and variance of \(\theta\) may be approximated by the mean and variance of a large number of realizations from \(p(\ theta|y)\ .\) Similarly, quantiles of the realizations estimate quantiles of the posterior, and the mode of a smoothed histogram of the realizations may be used to estimate the posterior mode. Samples from the posterior can be generated in several ways, without exact knowledge of \(p(\theta|y)\ .\) Direct methods include rejection sampling, which generates independent proposals for \(\ theta\ ,\) and accepts them at a rate whereby those retained are proportional to the desired posterior. Importance sampling can also be used to numerically evaluate relevant integrals; by appropriately weighting independent samples from a user-chosen distribution on \(\theta\ ,\) properties of the posterior \(p(\theta|y) \)can be estimated. Realizations from the posterior used in Monte Carlo methods need not be independent, or generated directly. If the conditional distribution of each parameter is known (conditional on all other parameters), one simple way to generate a possibly-dependent sample of data points is via Gibbs Sampling. This algorithm generates one parameter at a time; as it sequentially updates each parameter, the entire parameter space is explored. It is appropriate to start from multiple starting points in order to check convergence, and in the long-run, the 'chains' of realizations produced will reflect the posterior of interest. 
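The point prediction in the MRSA decision example above can be checked by direct enumeration. A small, self-contained R sketch (using the same Negative Binomial predictive with size 30 and prob 5/9) computes the expected cost of reporting t cases under the 10-per-over-estimated and 50-per-under-estimated losses and picks the minimizer, which should agree with the value of 30 quoted in the text.

# Expected cost of reporting t cases, averaged over the predictive distribution
z    <- 0:200
pred <- dnbinom(z, size = 30, prob = 5/9)

expected_cost <- function(t) {
  sum(10 * pmax(t - z, 0) * pred) +   # over-estimation: 10 units per excess case
  sum(50 * pmax(z - t, 0) * pred)     # under-estimation: 50 units per missed case
}

costs <- sapply(0:60, expected_cost)
which.min(costs) - 1                  # optimal t (the text reports 30)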
More general versions of the same argument include the Metropolis-Hastings algorithm; developing practical algorithms to approximate posterior distributions for complex problems remains an active area of research. Applications of Bayesian statistical methods Explicitly Bayesian statistical methods tend to be used in three main situations. The first is where one has no alternative but to include quantitative prior judgments, due to lack of data on some aspect of a model, or because the inadequacies of some evidence has to be acknowledged through making assumptions about the biases involved. These situations can occur when a policy decision must be made on the basis of a combination of imperfect evidence from multiple sources, an example being the encouragement of Bayesian methods by the Food and Drug Administration (FDA) division responsible for medical devices. The second situation is with moderate-size problems with multiple sources of evidence, where hierarchical models can be constructed on the assumption of shared prior distributions whose parameters can be estimated from the data. Common application areas include meta-analysis, disease mapping, multi-centre studies, and so on. With weakly-informative prior distributions the conclusions may often be numerically similar to classic techniques, even if the interpretations may be different. The third area concerns where a huge joint probability model is constructed, relating possibly thousands of observations and parameters, and the only feasible way of making inferences on the unknown quantities is through taking a Bayesian approach: examples include image processing, spam filtering, signal analysis, and gene expression data. Classical model-fitting fails, and MCMC or other approximate methods become essential. There is also extensive use of Bayesian ideas of parameter uncertainty but without explicit use of Bayes theorem. If a deterministic prediction model has been constructed, but some of the parameter inputs are uncertain, then a joint prior distribution can be placed on those parameters and the resulting uncertainty propagated through the model, often using Monte Carlo methods, to produce a predictive probability distribution. This technique is used widely in risk analysis, health economic modelling and climate projections, and is sometimes known as probabilistic sensitivity analysis. Another setting where the 'updating' inherent in the Bayesian approach is suitable is in machine-learning; simple examples can be found in modern software for spam filtering, suggesting which books or movies a user might enjoy given his or her past preferences, or ranking schemes for millions of on-line gamers. Formal inference may only be approximately carried out, but the Bayesian perspective allows a flexible and adaptive response to each additional item of information. Open Areas in Bayesian Statistics The philosophical rationale for using Bayesian methods was largely established and settled by the pioneering work of De Finetti, Savage, Jaynes and Lindley. However, widespread concern remain over how to apply these methods in practice, where various concerns over sensitivity to assumptions can detract from the rhetorical impact of Bayesians' epistemological validity. 
Hypothesis testing and model choice Jeffreys (1939) developed a procedure for using data \(y\) to test between alternative scientific hypotheses \(H_0\) and \(H_1\ ,\) by computing the Bayes factor \(p(y|H_0)/p(y|H_1)\ .\) He suggested thresholds for strength of evidence for or against the hypotheses. The Bayes factor can be combined with the prior odds \(p(H_0)/p(H_1)\) to give posterior probabilities of each hypothesis, that can be used to weight predictions in Bayesian Model Averaging (BMA). Although BMA can be an effective pragmatic device for prediction, the use of posterior model probabilities for scientific hypothesis-testing is controversial even among the Bayesian community, for both philosophical and practical reasons: first, it may not make sense to talk of probabilities of hypotheses that we know are not strictly 'true', and second, the calculation of the Bayes factor can be extremely sensitive to apparently innocuous prior assumptions about parameters within each hypothesis. For example, the ordinate of a widely dispersed uniform prior distribution would be irrelevant for estimation within a single model, but becomes crucial when comparing models. It has also been argued that model choice is not necessarily the same as identifying the 'true' model, particularly as in most circumstances no true model exists and so posterior model probabilities are not interpretable or useful. Instead, other criteria, such as the Akaike Information Criterion or the Deviance Information Criterion, are concerned with selecting models that are expected to make good short-term predictions. Robustness and reporting In the uncommon situation that the data are extensive and of simple structure, the prior assumptions will be unimportant and the assumed sampling model will be uncontroversial. More generally we would like to report that any conclusions are robust to reasonable changes in both prior and assumed model: this has been termed inference robustness to distinguish it from the frequentist idea of robustness of procedures when applied to different data. (Frequentist statistics uses the properties of statistical procedures over repeated applications to make inference based on the data at hand) Bayesian statistical analysis can be complex to carry out, and explicitly includes both qualitative and quantitative judgement. This suggests the need for agreed standards for analysis and reporting, but these have not yet been developed. In particular, audiences should ideally fully understand the contribution of the prior distribution to the conclusions, the reasonableness of the prior assumptions, the robustness to alternative models and priors, and the adequacy of the computational methods. Model criticism In the archetypal Bayesian paradigm there is no need for testing whether a single model adequately fits the data, since we should be always comparing two competing models using hypothesis-testing methods. However there has been recent growth in techniques for testing absolute adequacy, generally involving the simulation of replicate data and checking whether specific characteristics of the observed data match those of the replicates. Procedures for model criticism in complex hierarchical models are still being developed. It is also reasonable to check there is not strong conflict between different data sources or between prior and data, and general measures of conflict in complex models is also a subject of current research. 
Connections and comparisons with other schools of statistical inference At a simple level, 'classical' likelihood-based inference closely resembles Bayesian inference using a flat prior, making the posterior and likelihood proportional. However, this underestimates the deep philosophical differences between Bayesian and frequentist inference; Bayesian make statements about the relative evidence for parameter values given a dataset, while frequentists compare the relative chance of datasets given a parameter value. The incompatibility of these two views has long been a source of contention between different schools of statisticians; there is little agreement over which is 'right', 'most appropriate' or even 'most useful'. Nevertheless, in many cases, estimates, intervals, and other decisions will be extremely similar for Bayesian and frequentist analyses. Bernstein von Mises Theorems give general results proving approximate large-sample agreement between Bayesian and frequentist methods, for large classes of standard parametric and semi-parametric models. A notable exception is in hypothesis testing, where default Bayesian and frequentist methods can give strongly discordant conclusions. Also, establishing Bayesian interpretations of non-model based frequentist analyses (such as Generalized Estimating Equations) remains an open area. Some qualities sought in non-Bayesian inference (such as adherence to the principle and exploitation of sufficiency) are natural consequences of following a Bayesian approach. Also, many Bayesian procedures can also, quite straightforwardly, be calibrated to have desired frequentist properties, such as intervals with 95% coverage. This can be useful when justifying Bayesian methods to external bodies such as regulatory agencies, and we might expect an increased use of 'hybrid' techniques in which a Bayesian interpretation is given to the inferences, but the long-run behaviour of the procedure is also taken into account. • Thomas Bayes (1763), "An Essay towards solving a Problem in the Doctrine of Chances" Phil. Trans. Royal Society London • B. de Finetti, La Prevision: Ses Lois Logiques, Ses Sources Subjectives (1937) Annales de l'Institut Henri Poincare, 7: 1-68. Translated as Foresight: Its Logical Laws, Its Subjective Sources, in Kyburg, H. E. and Smokler, H. E. eds., (1964). Studies in Subjective Probability. Wiley, New York, 91-158 • E.T. Jaynes Probability Theory: The Logic of Science (2003) Cambridge University Press, Cambridge, UK • H. Jeffreys (1939) Theory of Probability Oxford, Clarendon Press • D.V. Lindley: Statistical Inference (1953) Journal of the Royal Statistical Society, Series B, 16: 30-76 • Schervish, M. J. (1995) Theory of Statistics. Springer-Verlag, New York. Further reading • Bernardo and Smith (1994) Bayesian Theory, Wiley • Berger (1993) Statistical Decision Theory and Bayesian Analysis, Springer-Verlag • Carlin and Louis (2008) Bayesian Methods for Data Analysis (Third Edition) Chapman and Hall/CRC • Gelman, Carlin, Stern and Rubin (2003) Bayesian Data Analysis (Second Edition) Chapman and Hall/CRC • Gelman and Hill (2007) Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press • Lindley (1991) Making Decisions (2nd Edition) Wiley • Robert (2007) The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation (Second Edition), Springer-Verlag See also
{"url":"http://scholarpedia.org/article/Bayesian_statistics","timestamp":"2024-11-04T02:30:35Z","content_type":"text/html","content_length":"61317","record_id":"<urn:uuid:c9558935-5940-45d2-9742-4aaefe392d82>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00384.warc.gz"}
Magnesium Sulfate Heptahydrate Lab Report

The purpose of this lab was to determine the percent water in magnesium sulfate heptahydrate, or Epsom salt. The experimental percent water was determined to be 42.06% in both trials, making the average also 42.06%. To determine this percent water, a heating and cooling procedure was used. First, the vials were cleaned of impurities using the lab oven and were not touched after this point. The two vials were then weighed: vial 1 weighed 14.7681 grams and vial 2 weighed 14.7451 grams. Next, 1.1075 grams of the hydrate was added to vial 1 and 1.1015 grams was placed into vial 2. The vials were then placed back into the oven, with a starting mass (vial plus hydrate) of 15.8756 grams for vial 1 and 15.8466 grams for vial 2. Once they were taken out of the oven and cooled, they were weighed again. This heating and cooling was repeated until there was very little fluctuation (less than 0.0010 grams) between weighings. Vial 1 had a starting mass of 15.8756 grams and a mass after the final heating of 15.4098 grams, meaning the mass of water driven off was 0.4658 grams. Vial 2 had a starting mass of 15.8466 grams and a mass after the final heating of 15.3833 grams, meaning the mass of water driven off was 0.4633 grams. The mass of the water was found by subtracting the mass after the final heating (the final weight) from the mass of the vial with the hydrate before heating (the starting weight). To then find the percent water, divide the water mass by the hydrate mass and multiply by 100, since the result is a percent: 0.4658 / 1.1075 gives 42.06% for vial 1 and 0.4633 / 1.1015 gives 42.06% for vial 2. To find a percent error, a theoretical percent water must be used. To find the theoretical percent water, divide the molar mass of the water in the formula (7 H2O) by the molar mass of magnesium sulfate heptahydrate and multiply by 100. The theoretical percent water is approximately 51.2%, so the experimental average of 42.06% corresponds to a percent error of roughly 18%.
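A quick check of the arithmetic, using the masses quoted above and standard molar masses (the R code below is ours, added only for verification):

# Experimental percent water for the two trials
hydrate_mass <- c(1.1075, 1.1015)                        # grams of MgSO4·7H2O added
water_mass   <- c(15.8756 - 15.4098, 15.8466 - 15.3833)  # mass before minus after heating
percent_water <- 100 * water_mass / hydrate_mass
round(percent_water, 2)                                  # about 42.06 for both vials

# Theoretical percent water of MgSO4·7H2O
m_water     <- 7 * 18.015                  # grams per mole contributed by the 7 waters
m_hydrate   <- 120.37 + m_water            # molar mass of MgSO4·7H2O
theoretical <- 100 * m_water / m_hydrate   # about 51.2

# Percent error of the experimental average relative to the theoretical value
100 * abs(mean(percent_water) - theoretical) / theoretical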
{"url":"https://www.ipl.org/essay/Magnesium-Sulfate-Heptahydrate-Lab-Report-FCEPRQTZT","timestamp":"2024-11-07T06:22:28Z","content_type":"text/html","content_length":"72343","record_id":"<urn:uuid:b23fad6b-93e6-481b-a63c-18b017138f21>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00607.warc.gz"}
wilcoxon Archives » Data Science Tutorials

One-Sample Wilcoxon Signed Rank Test in R

When the data cannot be assumed to be normally distributed, the one-sample Wilcoxon signed-rank test is a non-parametric alternative to the one-sample t-test. It is used to see if the sample's median is the same as a known standard value (i.e. a theoretical value). The data should be symmetrically distributed… Read More: "How to perform One-Sample Wilcoxon Signed Rank Test in R?"
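As a small illustration of the test described in this excerpt, the base R call looks like the following; the data vector and the hypothesized median are made up for the example.

# One-sample Wilcoxon signed-rank test: is the median of x equal to 9?
x <- c(8.1, 9.4, 7.9, 10.2, 9.8, 8.7, 9.1, 10.5, 9.9, 8.3)
wilcox.test(x, mu = 9, alternative = "two.sided")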
{"url":"https://datasciencetut.com/tag/wilcoxon/","timestamp":"2024-11-06T22:04:58Z","content_type":"text/html","content_length":"86465","record_id":"<urn:uuid:0a163edd-a8ea-4fe8-b546-ad82ebfc6db4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00710.warc.gz"}
A Novel Color Chaos-based Image Encryption Algorithm using Half-Pixel-Level Cross Swapping Permutation Strategy

International Journal of Computer Trends and Technology (IJCTT)
© 2019 by IJCTT Journal
Volume-67 Issue-3
Year of Publication : 2019
Authors : Ruisong Ye, Li Liu
DOI : 10.14445/22312803/IJCTT-V67I3P111

MLA Style: Ruisong Ye, Li Liu, "A Novel Color Chaos-based Image Encryption Algorithm using Half-Pixel-Level Cross Swapping Permutation Strategy" International Journal of Computer Trends and Technology 67.3 (2019): 53-64.

APA Style: Ruisong Ye, Li Liu, (2019). A Novel Color Chaos-based Image Encryption Algorithm using Half-Pixel-Level Cross Swapping Permutation Strategy. International Journal of Computer Trends and Technology, 67(3), 53-64.

A novel color chaos-based image encryption scheme with permutation-diffusion mechanism is proposed. The permutation operation adopts half-pixel-level interchange permutation strategy between different R, G, B color channels to replace the traditional confusion operations. The pixel swapping between the higher 4-bit plane and the lower 4-bit plane of the R, G, B channels not only improves the conventional permutation efficiency within the entire plain-image, but also changes all the pixel values of R, G, B components. To enhance the security, multimodal skew map is applied to yield pseudo-random gray value sequence in the diffusion operations. Simulations have been carried out and the results confirm the superior security of the proposed image encryption scheme.
Keywords: Cross swapping permutation; Chaotic system; Generalized Cat map; Image encryption; Multimodal skew map.
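The abstract above describes exchanging the higher and lower 4-bit halves ("half-pixels") of corresponding pixels across the R, G, B channels. As a rough, illustrative sketch of that kind of nibble exchange only — not the authors' algorithm, which additionally uses chaotic maps to select positions and a separate diffusion stage — one might write, in NumPy:

import numpy as np

def swap_nibbles_between_channels(img, src=0, dst=1):
    # Toy half-pixel (4-bit) cross swap between two color channels of a uint8 image.
    # The high nibble of each pixel in channel `src` is traded with the low nibble
    # of the corresponding pixel in channel `dst`.
    out = img.copy()
    a, b = out[..., src], out[..., dst]
    a_high, a_low = a >> 4, a & 0x0F
    b_high, b_low = b >> 4, b & 0x0F
    out[..., src] = (b_low << 4) | a_low    # src keeps its low nibble and gains dst's low nibble
    out[..., dst] = (b_high << 4) | a_high  # dst keeps its high nibble and gains src's high nibble
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)   # toy RGB "image"
scrambled = swap_nibbles_between_channels(img, src=0, dst=1)
restored = swap_nibbles_between_channels(scrambled, src=0, dst=1)
assert np.array_equal(restored, img)   # this particular swap happens to be its own inverse

Applying the same swap twice undoes it here, which is convenient for a toy decryption; in the actual scheme the swap positions are driven by chaotic sequences, so the key stream, not the fixed structure, provides the security.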
{"url":"https://ijcttjournal.org/helium/ijctt/ijctt-v67i3p111","timestamp":"2024-11-10T11:15:18Z","content_type":"text/html","content_length":"49191","record_id":"<urn:uuid:0e387e8a-d00f-47dc-b64c-5a6cc5e84be7>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00533.warc.gz"}
Ch. 11 Key Concepts - Contemporary Mathematics | OpenStax

Key Concepts

11.1 Voting Methods
• In plurality voting, the candidate with the most votes wins.
• When a voting method does not result in a winner, runoff voting can be used to determine one.
• Ranked-choice voting, also known as instant runoff voting, is one type of ranked voting system.
• The Borda count method is a type of ranked voting system in which each candidate is given a Borda score based on the number of candidates ranked lower than them on each ballot.
• When pairwise comparison is used, the winner will be the Condorcet candidate if one exists.
• Approval voting allows voters to give equally weighted votes to multiple candidates.
• When a voter finds a characteristic of a particular voting method unappealing, they may consider that characteristic a flaw in the voting method and look for an alternative method that does not have that characteristic.

11.2 Fairness in Voting Methods
• There are several common measures of voting fairness, including the majority criterion, the head-to-head criterion, the monotonicity criterion, and the irrelevant alternatives criterion.
• According to Arrow's Impossibility Theorem, each voting method in which the only information is the order of preference of the voters will violate one of the fairness criteria.

11.3 Standard Divisors, Standard Quotas, and the Apportionment Problem
• The apportionment problem is how to fairly divide and distribute available resources to recipients in whole, not fractional, parts.
• To distribute the seats in the U.S. House of Representatives fairly to each state, calculations are based on state population, total population, and house size, or the total number of seats to be apportioned.
• The standard divisor is the ratio of the total population to the house size, and the standard quota is the number of seats that each state should receive.

11.4 Apportionment Methods
• Hamilton's method of apportionment uses the standard divisor and standard lower quotas, and it distributes any remaining seats based on the size of the fractional parts of the standard lower quota. Hamilton's method satisfies the quota rule and favors neither larger nor smaller states. (A short illustrative sketch of this method appears below.)
• Jefferson's method of apportionment uses a modified divisor that is adjusted so that the modified lower quotas sum to the house size. Jefferson's method violates the quota rule and favors larger states.
• Adams's method of apportionment uses a modified divisor that is adjusted so that the modified upper quotas sum to the house size. Adams's method violates the quota rule and favors smaller states.
• Webster's method of apportionment uses a modified divisor that is adjusted so that the modified state quotas, rounded using traditional rounding, sum to the house size. Webster's method violates the quota rule but favors neither larger nor smaller states.

11.5 Fairness in Apportionment Methods
• Several surprising outcomes can occur when apportioning seats that voters may find unfair: the Alabama paradox, the population paradox, and the new-state paradox.
• Apportionment methods are susceptible to apportionment paradoxes and may violate the quota rule.
• The Balinski-Young Impossibility Theorem indicates that no apportionment method can satisfy all fairness criteria.
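Hamilton's method, as summarized above, is mechanical enough to express in a few lines. The sketch below is only an illustration of the definitions given here (standard divisor, standard lower quotas, leftover seats awarded to the largest fractional parts); the populations in the example are made up, not taken from the text.

from math import floor

def hamilton(populations, house_size):
    # Hamilton (largest-remainder) apportionment.
    # populations: dict mapping state -> population; house_size: total seats to distribute.
    total = sum(populations.values())
    divisor = total / house_size                                  # standard divisor
    quotas = {s: p / divisor for s, p in populations.items()}     # standard quotas
    seats = {s: floor(q) for s, q in quotas.items()}              # standard lower quotas
    leftover = house_size - sum(seats.values())
    # give the remaining seats to the states with the largest fractional parts
    for s in sorted(quotas, key=lambda s: quotas[s] - seats[s], reverse=True)[:leftover]:
        seats[s] += 1
    return seats

print(hamilton({"A": 53_000, "B": 24_000, "C": 23_000}, house_size=10))
# -> {'A': 5, 'B': 3, 'C': 2}: quotas are 5.3, 2.4, 2.3, and the one leftover seat goes to B

Replacing the fixed standard divisor with a search for a modified divisor (so that the lower or upper quotas themselves sum to the house size) would turn this into Jefferson's or Adams's method.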
{"url":"https://openstax.org/books/contemporary-mathematics/pages/11-key-concepts","timestamp":"2024-11-04T18:38:03Z","content_type":"text/html","content_length":"389847","record_id":"<urn:uuid:cb341c8e-ec92-498e-9ad9-6f6cbd2c54e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00480.warc.gz"}
One of the angles of a parallelogram is 90°. What do you call the figure?
Answer: If one angle of a parallelogram is 90°, then the other three angles are also 90°, so the figure is a rectangle.
{"url":"https://wiki-helper.com/one-of-the-angles-of-a-parallelogram-is-90-what-do-you-call-the-figure-please-answerkitu-37397060-81/","timestamp":"2024-11-12T19:37:11Z","content_type":"text/html","content_length":"126037","record_id":"<urn:uuid:312031fd-2fd2-4ae8-8ce6-12414f7886be>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00432.warc.gz"}
Problem 726. Let $\F$ be a finite field of characteristic $p$. Prove that the number of elements of $\F$ is $p^n$ for some positive integer $n$.

Prove that $\F_3[x]/(x^2+1)$ is a Field and Find the Inverse Elements
Problem 529. Let $\F_3=\Zmod{3}$ be the finite field of order $3$. Consider the ring $\F_3[x]$ of polynomials over $\F_3$ and its ideal $I=(x^2+1)$ generated by $x^2+1\in \F_3[x]$. (a) Prove that the quotient ring $\F_3[x]/(x^2+1)$ is a field. How many elements does the field have? (b) Let $ax+b+I$ be a nonzero element of the field $\F_3[x]/(x^2+1)$, where $a, b \in \F_3$. Find the inverse of $ax+b+I$. (c) Recall that the multiplicative group of nonzero elements of a field is a cyclic group. Confirm that the element $x$ is not a generator of $E^{\times}$, where $E=\F_3[x]/(x^2+1)$, but that $x+1$ is a generator. (A quick numerical check of parts (a) and (b) is sketched below, after this list of problems.)

Each Element in a Finite Field is the Sum of Two Squares
Problem 511. Let $F$ be a finite field. Prove that each element in the field $F$ is the sum of two squares in $F$.

Any Automorphism of the Field of Real Numbers Must be the Identity Map
Problem 507. Prove that any field automorphism of the field of real numbers $\R$ must be the identity automorphism.

Example of an Infinite Algebraic Extension
Problem 499. Find an example of an infinite algebraic extension over the field of rational numbers $\Q$ other than the algebraic closure $\bar{\Q}$ of $\Q$ in $\C$.

The Cyclotomic Field of 8-th Roots of Unity is $\Q(\zeta_8)=\Q(i, \sqrt{2})$
Problem 491. Let $\zeta_8$ be a primitive $8$-th root of unity. Prove that the cyclotomic field $\Q(\zeta_8)$ of the $8$-th roots of unity is the field $\Q(i, \sqrt{2})$.

A Rational Root of a Monic Polynomial with Integer Coefficients is an Integer
Problem 489. Suppose that $\alpha$ is a rational root of a monic polynomial $f(x)$ in $\Z[x]$. Prove that $\alpha$ is an integer.

Cubic Polynomial $x^3-2$ is Irreducible Over the Field $\Q(i)$
Problem 399. Prove that the cubic polynomial $x^3-2$ is irreducible over the field $\Q(i)$.

Prove that any Algebraically Closed Field is Infinite
Problem 398. Prove that any algebraically closed field is infinite.

Extension Degree of Maximal Real Subfield of Cyclotomic Field
Problem 362. Let $n$ be an integer greater than $2$ and let $\zeta=e^{2\pi i/n}$ be a primitive $n$-th root of unity. Determine the degree of the extension of $\Q(\zeta)$ over $\Q(\zeta+\zeta^{-1})$. The subfield $\Q(\zeta+\zeta^{-1})$ is called the maximal real subfield.

Equation $x_1^2+\cdots +x_k^2=-1$ Doesn't Have a Solution in the Number Field $\Q(\sqrt[3]{2}e^{2\pi i/3})$
Problem 358. Let $\alpha= \sqrt[3]{2}e^{2\pi i/3}$. Prove that $x_1^2+\cdots +x_k^2=-1$ has no solutions with all $x_i\in \Q(\alpha)$ and $k\geq 1$.

Application of Field Extension to Linear Combination
Problem 335. Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$. Let $\alpha$ be any real root of $f(x)$. Then prove that $\sqrt{2}$ cannot be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$.

Irreducible Polynomial $x^3+9x+6$ and Inverse Element in Field Extension
Problem 334. Prove that the polynomial \[f(x)=x^3+9x+6\] is irreducible over the field of rational numbers $\Q$. Let $\theta$ be a root of $f(x)$. Then find the inverse of $1+\theta$ in the field $\Q(\theta)$.
Explicit Field Isomorphism of Finite Fields
Problem 233. (a) Let $f_1(x)$ and $f_2(x)$ be irreducible polynomials over a finite field $\F_p$, where $p$ is a prime number. Suppose that $f_1(x)$ and $f_2(x)$ have the same degrees. Then show that the fields $\F_p[x]/(f_1(x))$ and $\F_p[x]/(f_2(x))$ are isomorphic. (b) Show that the polynomials $x^3-x+1$ and $x^3-x-1$ are both irreducible polynomials over the finite field $\F_3$. (c) Exhibit an explicit isomorphism between the splitting fields of $x^3-x+1$ and $x^3-x-1$ over $\F_3$.

Galois Extension $\Q(\sqrt{2+\sqrt{2}})$ of Degree 4 with Cyclic Group
Problem 231. Show that $\Q(\sqrt{2+\sqrt{2}})$ is a cyclic quartic field, that is, it is a Galois extension of degree $4$ with cyclic Galois group.

Galois Group of the Polynomial $x^2-2$
Problem 230. Let $\Q$ be the field of rational numbers. (a) Is the polynomial $f(x)=x^2-2$ separable over $\Q$? (b) Find the Galois group of $f(x)$ over $\Q$.

Polynomial $x^p-x+a$ is Irreducible and Separable Over a Finite Field
Problem 229. Let $p\in \Z$ be a prime number and let $\F_p$ be the field of $p$ elements. For any nonzero element $a\in \F_p$, prove that the polynomial \[f(x)=x^p-x+a\] is irreducible and separable over $\F_p$. (Dummit and Foote, "Abstract Algebra," Section 13.5, Exercise 5, p. 551)

Show that Two Fields are Equal: $\Q(\sqrt{2}, \sqrt{3})= \Q(\sqrt{2}+\sqrt{3})$
Problem 215. Show that the fields $\Q(\sqrt{2}+\sqrt{3})$ and $\Q(\sqrt{2}, \sqrt{3})$ are equal.

Galois Group of the Polynomial $x^p-2$
Problem 110. Let $p \in \Z$ be a prime number. Then describe the elements of the Galois group of the polynomial $x^p-2$.

Two Quadratic Fields $\Q(\sqrt{2})$ and $\Q(\sqrt{3})$ are Not Isomorphic
Problem 99. Prove that the quadratic fields $\Q(\sqrt{2})$ and $\Q(\sqrt{3})$ are not isomorphic.
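Problem 529 above asserts that $\F_3[x]/(x^2+1)$ is a nine-element field and asks for the inverse of $ax+b+I$. A brute-force check of this claim — no substitute for the proof, but a useful sanity test — is easy to script: represent each residue $ax+b$ by the pair $(a, b)$, multiply modulo $x^2+1$ and modulo $3$, and search for inverses.

from itertools import product

P = 3  # coefficients live in F_3; residues modulo x^2 + 1 are a*x + b

def mul(u, v):
    # (a1*x + b1)(a2*x + b2) = a1*a2*x^2 + (a1*b2 + a2*b1)*x + b1*b2, and x^2 = -1 here
    a1, b1 = u
    a2, b2 = v
    return ((a1 * b2 + a2 * b1) % P, (b1 * b2 - a1 * a2) % P)

nonzero = [e for e in product(range(P), repeat=2) if e != (0, 0)]

inverses = {}
for u in nonzero:
    for v in nonzero:
        if mul(u, v) == (0, 1):       # (0, 1) stands for the constant polynomial 1
            inverses[u] = v
            break

assert len(inverses) == len(nonzero)  # all 8 nonzero residues are invertible
print(inverses[(1, 0)])               # inverse of x is 2x, since x * 2x = 2x^2 = -2 = 1 in F_3

The same loop, extended to track powers, also confirms part (c): the powers of $x$ only reach $\{x, -1, -x, 1\}$, while the powers of $x+1$ run through all eight nonzero elements.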
{"url":"http://yutsumura.com/category/field-theory/","timestamp":"2024-11-02T10:42:40Z","content_type":"text/html","content_length":"150156","record_id":"<urn:uuid:be657bfe-8e61-4d86-9fb3-61dbc620a64f>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00506.warc.gz"}
Books I Enjoy | Peter Antonaros Jr.

The Math Book by Clifford A. Pickover
Mathematical inventions and discoveries presented through a historical lens. From the archaic "Ant Odometer" over 150 million years ago, to the "Mathematical Universe Hypothesis" in 2007, this book covers many of Math's triumphs.

Infinite Powers by Steven Strogatz
Calculus newcomers and experts alike will appreciate this book. Professor Strogatz lays out the foundations of Calculus and connects this to the world we inhabit. He makes the case that understanding Calculus allows us to shine a light on the darkest parts of the universe.

Flatland by Edwin A. Abbott
An interesting take on society and its structure, imagining our own hierarchies as dimensions of mathematical spaces. A great read to ponder and ask yourself where you may reside in Flatland.

Introduction To Graph Theory by Richard J. Trudeau
Linear Algebra Done Right by Sheldon Axler
Understanding Analysis by Stephen Abbott
The Art of Computer Programming, Fundamental Algorithms by Donald Knuth
The Wealth of Nations by Adam Smith
1984 by George Orwell
Animal Farm by George Orwell
{"url":"https://www.peterantonarosjr.com/great-books-to-read","timestamp":"2024-11-14T18:40:37Z","content_type":"text/html","content_length":"313484","record_id":"<urn:uuid:5a50fc7b-db63-4fef-9a0b-80d5d7603fd9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00563.warc.gz"}
NAEP Long-Term Trend Assessment Results: Reading and Mathematics

Scores decline again for 13-year-old students in reading and mathematics

The National Center for Education Statistics (NCES) administered the NAEP long-term trend (LTT) reading and mathematics assessments to 13-year-old students from October to December of the 2022–23 school year. The average scores for 13-year-olds declined 4 points in reading and 9 points in mathematics compared to the previous assessment administered during the 2019–20 school year. Compared to a decade ago, the average scores declined 7 points in reading and 14 points in mathematics.

Figure: Trend in NAEP long-term trend reading and mathematics average scores for 13-year-old students. * Significantly different (p < .05) from 2023. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

This Highlights report compares performance on the NAEP long-term trend reading and mathematics assessments for age 13 students during the 2022–23 school year to previous assessment results, with a focus on results obtained in the 2019–20 school year. Results reflect the performance of a nationally representative sample of 8,700 thirteen-year-olds in each subject. Performance comparisons are based on statistically significant differences between assessment years and between groups. Explore details about the long-term trend assessments and how they differ from main NAEP assessments.

I. Performance Trends by Percentiles

Reading scores decline at all selected percentiles since 2020

NAEP reports scores at five selected percentiles to show changes over time by lower- (10th and 25th percentiles), middle- (50th percentile), and higher- (75th and 90th percentiles) performing students. Percentiles are useful for understanding how overall score gains or losses are distributed across the student population and provide context for the national average score. The 2023 reading scores for age 13 students at all five selected percentile levels declined compared to 2020. The declines ranged from 3 to 4 points for middle- and higher-performing students to 6 to 7 points for lower-performing students, though the score declines of lower performers were not significantly different from those of their middle- and higher-performing peers.

Figure: Trend in NAEP long-term trend reading scores at five selected percentiles (90th, 75th, 50th, 25th, and 10th) for 13-year-old students. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

Larger declines since 2020 for lower-performing students in mathematics

The 2023 mathematics scores for age 13 students at all five selected percentile levels declined compared to 2020. The declines ranged from 6 to 8 points for middle- and higher-performing students to 12 to 14 points for lower-performing students, with larger declines for lower performers in comparison to their higher-performing peers.
Figure: Trend in NAEP long-term trend mathematics scores at five selected percentiles (90th, 75th, 50th, 25th, and 10th) for 13-year-old students. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

II. Performance Trends by Student Group

Scores decline for many student groups in reading, and for nearly all student groups in mathematics

The 2023 average scores in reading declined compared to 2020 for many student groups reported by NAEP; for example, scores were lower for both male and female 13-year-olds, for students eligible and not eligible for the National School Lunch Program (NSLP), and for students attending schools in the Northeast and the Midwest regions. In mathematics, scores declined compared to 2020 for most student groups; for example, scores were lower for Black, Hispanic, and White 13-year-olds, for students attending schools in all regions of the country, for students eligible and not eligible for the NSLP, and for students at all reported levels of parental education.

Symbols in the figure indicate the score change between two sets of assessment years: from 2012 to 2020 and from 2020 to 2023. For example, in the figure below for race/ethnicity, the gray diamonds indicate that the 2020 reading score was not significantly different from the 2012 score for any racial/ethnic group with reportable results, and the down arrows indicate that 2023 reading scores declined for White and Black students and for students of Two or More Races in comparison to 2020.

In mathematics, the 11-point score decrease for female students compared to the 7-point decrease for male students resulted in a widening of the Male−Female score gap in comparison to 2020. Also, the 13-point score decrease among Black students compared to the 6-point decrease among White students resulted in a widening of the White−Black score gap from 35 points in 2020 to 42 points in 2023. In reading, there were no statistically significant changes in these score gaps in 2023 compared to 2020.

Figure: Changes in NAEP long-term trend reading and mathematics average scores for 13-year-old students, by race/ethnicity: 2012, 2020, and 2023 (legend: score increase, no significant change, score decrease). NOTE: Results are not shown for Native Hawaiian/Other Pacific Islander students because reporting standards were not met. The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

III. Student Learning Experience

Students who took the long-term trend assessments in reading and mathematics during the 2022–23 school year also responded to a survey questionnaire. Students taking the long-term trend reading assessment were asked how often they read for fun on their own time; students taking the long-term trend mathematics assessment were asked which type of mathematics course they were currently taking; and all students were asked about the number of days they had been absent from school in the previous month.
Students’ responses to survey questions provide information with which to compare performance based on their self-reported characteristics and educational experiences. This information may be valuable in helping parents, educators, and policymakers understand what aspects of students’ experiences are related to achievement. Survey questionnaire results, however, do not establish a cause-and-effect relationship between the characteristic or experience and student achievement. NAEP is not designed to identify the causes of performance differences. Numerous factors interact to influence student achievement, including local educational policies and practices, the quality of teachers, and available resources. Such factors may change over time and vary among student groups. Percentage of students missing 5 or more days of school monthly has doubled since 2020 Students who took the 2023 long-term trend reading and mathematics assessments were asked how many days of school they had missed in the last month. Responses to the survey question for both subjects indicate a decrease in the percentages of 13-year-old students reporting having missed none to 2 days in the past month compared to 2020. Conversely, there were increases in the percentages of 13-year-old students who reported missing 3 or 4 days and students who reported missing 5 or more days in the last month. The percentage of students who reported missing 5 or more days doubled from 5 percent in 2020 to 10 percent in 2023. For both reading and mathematics, students with fewer missed school days generally had higher average scores in 2023 than students with more missed school days. Figure Percentage of 13-year-old students in NAEP long-term trend reading, by number of days student absent from school in a month: 2020 and 2023 Fourteen percent of students report reading for fun almost every day, lower than previous years In 2023, fourteen percent of students reported reading for fun almost every day. This percentage was 3 percentage points lower than 2020, and 13 percentage points lower than 2012. Overall, the percentage of 13-year-old students who reported reading for fun almost every day was lower in 2023 than in all previous assessment years. The average reading score in 2023 for those students who reported reading for fun on their own almost every day was 275, which was higher than the scores for students who reported other levels of frequency for reading on their own time. See a data table with average score results. Figure Trend in percentages of 13-year-old students in NAEP long-term trend reading, by how often they read for fun on their own time *Significantly different (p < .05) from 2023. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13. Compared to their lower-performing peers, larger percentage of higher-performing students report more frequently reading for fun Fifty-one percent of 13-year-old students scoring at or above the 75th percentile in 2023 reported that they read for fun on their own time at least once a week, whereas 28 percent of 13-year-old students scoring below the 25th percentile reported doing so. The percentage of students who reported reading for fun on their own time once or twice a month was also larger for students at or above the 75th percentile. 
Conversely, the percentages of students who reported reading less frequently—a few times a year or never or hardly ever—were larger for students performing below the 25th percentile.

Figure: Percentage of 13-year-old students in NAEP long-term trend reading, by selected percentiles and by how often they read for fun on their own time ("At least once a week," "Once or twice a month," "A few times a year," "Never or hardly ever"), for lower-performing students (below the 25th percentile) and higher-performing students (at or above the 75th percentile): 2023. * Significantly different (p < .05) from students performing at or above the 75th percentile. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

Smaller percentage of students report taking algebra compared to a decade ago, but no change from 2020

Students were asked, "What kind of mathematics are you taking this year?" and were given five response options: I am not taking mathematics this year; regular mathematics; pre-algebra; algebra; and other. Compared to 2020, there were no significant changes in the percentages of students by type of mathematics taken during the 2022–23 school year. Compared to 2012, however, the percentage of 13-year-old students in 2023 who reported they were taking regular mathematics increased from 28 to 42 percent, while the percentage of students taking pre-algebra decreased from 29 to 22 percent, and the percentage of students taking algebra dropped from 34 to 24 percent. Average scores were lower in 2023 for all types of mathematics courses presented compared to 2020. See a data table with average score results.

Figure: Trend in percentages of 13-year-old students in NAEP long-term trend mathematics, by type of mathematics taken during the school year. * Significantly different (p < .05) from 2023. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

Compared to their lower-performing peers, a larger percentage of higher-performing 13-year-old students report taking algebra

Forty-four percent of 13-year-old students scoring at or above the 75th percentile in 2023 reported that they were taking algebra during the 2022–23 school year, whereas 10 percent of students scoring below the 25th percentile reported doing so. There was no significant difference between the percentages of lower- and higher-performing 13-year-old students who reported currently taking pre-algebra. The percentage of students who reported taking regular mathematics was higher for students performing below the 25th percentile: Fifty-two percent of lower performers reported taking regular mathematics compared to 23 percent of higher-performing students who reported doing so.

Figure: Percentage of 13-year-old students in NAEP long-term trend mathematics, by selected percentiles (below the 25th percentile versus at or above the 75th percentile) and by type of mathematics taken during the school year: 2023. (# Rounds to zero.)
* Significantly different (p < .05) from students performing at or above the 75th percentile. NOTE: The NAEP long-term trend (LTT) assessment results are reported by the year in which the school year ends. For example, the age 13 assessment was administered during the fall of the 2022–23 school year and results are reported as 2023 LTT at age 13.

IV. Explore More Long-Term Trend Data

Generate custom tables for age 13 students in NAEP long-term trend reading and mathematics across all assessment years. Average Score and Percentages

V. More About the Age 13 Assessment Content and Sample

Since the 1970s, the NAEP long-term trend assessments have been administered to monitor the academic performance of students across three age levels (9-, 13-, and 17-year-old students). This report mainly focuses on the comparison of age 13 students (typically in grade 8) between 2020 and 2023. A report card summarizing results for 9- and 13-year-old students across all administrations back to the 1970s is forthcoming.
{"url":"https://www.nationsreportcard.gov/highlights/ltt/2023/","timestamp":"2024-11-02T08:34:46Z","content_type":"text/html","content_length":"257694","record_id":"<urn:uuid:4f28438e-e4ce-4e7e-af1a-2c1c87bc3f80>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00235.warc.gz"}
End to End ML Project - Fashion MNIST - Selecting the Model - Cross-Validation - Softmax Regression Please follow the below steps: Import the module cross_val_score and cross_val_predict from sklearn.model_selection from sklearn.model_selection import << your code comes here >> Import the module confusion_matrix from sklearn.metrics. from sklearn.metrics import << your code comes here >> Define a function called display_scores() which should print the score value which is passed to it as argument, and also calculate and print the 'mean' and 'standard deviation' of this score. def display_scores(scores): <<your code comes here>> Please create an instance of LogisticRegression called log_clf by passing to it the parameters - multi_class="multinomial", solver="lbfgs", C=10 and random_state=42 log_clf = LogisticRegression(<<your code comes here>>) Please call cross_val_score() function by passing following parameters to it - the model (log_clf), the scaled training dataset (X_train_scaled), y_train, cv=3 and scoring="accuracy" - and save the returned value in a variable called log_cv_scores. Call display_scores() function, by passing to it the log_cv_scores variable, to calculate and display(print) the 'accuracy' score, the mean of the 'accuracy' score and the 'standard deviation' of the 'accuracy' score. log_cv_scores = cross_val_score(<<your code comes here>>) Call mean() method on log_cv_scores object to get the mean accuracy score and store this mean accuracy score in a variable log_cv_accuracy. log_cv_accuracy = log_cv_scores.<<your code comes here>> Please call cross_val_predict() function by passing following parameters to it - the model (log_clf), the scaled training dataset (X_train_scaled), y_train, cv=3 - and save the returned value in a variable called y_train_pred. y_train_pred = cross_val_predict(<<your code comes here>>) Compute the confusion matrix by using confusion_matrix() function confusion_matrix(y_train, <<your code comes here>>) Calculate the precision score by the using the precision_score() function log_cv_precision = precision_score(y_train, <<your code comes here>>, average='weighted') Calculate the recall score by the using the recall_score() function log_cv_recall = recall_score(y_train, <<your code comes here>>, average='weighted') Calculate the F1 score by the using the f1_score() function log_cv_f1_score = f1_score(y_train, <<your code comes here>>, average='weighted') Print the above calculated values of log_cv_accuracy, log_cv_precision, log_cv_recall , log_cv_f1_score
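The exercise above deliberately leaves blanks for the learner to fill in. For reference, one way the completed pipeline might look is sketched below; it is an illustrative completion, not official course code. It assumes X_train_scaled and y_train already exist from earlier steps of the course, and the hyperparameters simply mirror the ones stated in the instructions.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

def display_scores(scores):
    # print the raw cross-validation scores plus their mean and standard deviation
    print("Scores:", scores)
    print("Mean:", scores.mean())
    print("Standard deviation:", scores.std())

# softmax (multinomial logistic) regression, as specified in the exercise
log_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)

# 3-fold cross-validated accuracy (X_train_scaled and y_train come from earlier steps)
log_cv_scores = cross_val_score(log_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
display_scores(log_cv_scores)
log_cv_accuracy = log_cv_scores.mean()

# cross-validated predictions feed the confusion matrix and the weighted metrics
y_train_pred = cross_val_predict(log_clf, X_train_scaled, y_train, cv=3)
print(confusion_matrix(y_train, y_train_pred))

log_cv_precision = precision_score(y_train, y_train_pred, average="weighted")
log_cv_recall = recall_score(y_train, y_train_pred, average="weighted")
log_cv_f1_score = f1_score(y_train, y_train_pred, average="weighted")
print(log_cv_accuracy, log_cv_precision, log_cv_recall, log_cv_f1_score)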
{"url":"https://cloudxlab.com/assessment/displayslide/2457/end-to-end-ml-project-fashion-mnist-selecting-the-model-cross-validation-softmax-regression?playlist_id=190","timestamp":"2024-11-10T08:40:23Z","content_type":"text/html","content_length":"86653","record_id":"<urn:uuid:7e462d80-6bc4-47cc-9f6a-7b539db08fe7>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00653.warc.gz"}
A User-Friendly Introduction to Lebesgue Measure and Integration
Student Mathematical Library, Volume 78; 2015; 221 pp; MSC: Primary 26; 28
Softcover ISBN: 978-1-4704-2199-1 (Product Code: STML/78; List Price: $59.00; Individual Price: $47.20)
eBook ISBN: 978-1-4704-2737-5 (Product Code: STML/78.E; List Price: $49.00; Individual Price: $39.20)
Softcover and eBook bundle (Product Code: STML/78.B; List Price: $108.00; $83.50)

A User-Friendly Introduction to Lebesgue Measure and Integration provides a bridge between an undergraduate course in Real Analysis and a first graduate-level course in Measure Theory and Integration. The main goal of this book is to prepare students for what they may encounter in graduate school, but will be useful for many beginning graduate students as well. The book starts with the fundamentals of measure theory that are gently approached through the very concrete example of Lebesgue measure. With this approach, Lebesgue integration becomes a natural extension of Riemann integration. Next, \(L^p\)-spaces are defined. Then the book turns to a discussion of limits, the basic idea covered in a first analysis course. The book also discusses in detail such questions as: When does a sequence of Lebesgue integrable functions converge to a Lebesgue integrable function? What does that say about the sequence of integrals? Another core idea from a first analysis course is completeness. Are these \(L^p\)-spaces complete? What exactly does that mean in this setting? This book concludes with a brief overview of General Measures. An appendix contains suggested projects suitable for end-of-course papers or presentations. The book is written in a very reader-friendly manner, which makes it appropriate for students of varying degrees of preparation, and the only prerequisite is an undergraduate course in Real Analysis.

Readership: Undergraduate and graduate students and researchers interested in learning and teaching real analysis.

Chapters:
Chapter 0. Review of Riemann integration
Chapter 1. Lebesgue measure
Chapter 2. Lebesgue integration
Chapter 3. $L^p$ spaces
Chapter 4. General measure theory
Ideas for projects
{"url":"https://bookstore.ams.org/STML/78","timestamp":"2024-11-05T03:33:38Z","content_type":"text/html","content_length":"92659","record_id":"<urn:uuid:cf3acff5-4249-4c53-bce3-a1b36ef02526>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00804.warc.gz"}
Wonderlic Test Prep - Wonderlic Study Guide [2024] This site is not associated or affiliated in any way with Wonderlic, Inc. Your Wonderlic test score will affect your Career Opportunities & Education Prospects The Wonderlic Tests are offered in many formats: Wonderlic Select (Formerly WonScore), Wonderlic Personnel/ Cognitive Ability Test (WPT-R/ WPT-Q), Wonderlic Basic Skills Test (WBST) and Scholastic Level Exam (SLE). Calculators are not permitted on the actual Wonderlic assessment. You can only use scratch paper to work out your answers. Wonderlic Practice Test 50 Questions / 12 Minutes Wonderlic Practice Test 30 Questions / 8 Minutes Wonderlic 50 Question Exam Wonderlic 30 Question Exam Real Reviews from Students Discover what our students said about this course. Trusted by 27576+ employees and students at leading companies and universities, including... How Wonderlic Test Prep Helps You Come Prepared Our videos will take you through all of the question types you’ll encounter on the exam. We’ll explain how to solve each and every single test question step-by-step, so you’ll know exactly what to expect. Ace the Test Painstakingly formulated and meticulously tested, our techniques will help you tactically avoid time-traps and shave 10, 20, or even 30 seconds off of your response time. Perform to Impress Boost your Wonderlic score (WonScore) up to 10 points with hundreds of sample questions modeled off of real Wonderlic tests so you can impress the boss and land the job.
{"url":"https://www.wonderlictestprep.com/","timestamp":"2024-11-09T13:19:21Z","content_type":"text/html","content_length":"434142","record_id":"<urn:uuid:57869582-268b-425f-8497-c747e9c3a143>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00897.warc.gz"}
We are pleased to report that the recent RGOSA seminar featuring the esteemed Marten Wortel was a resounding success! Marten Wortel, along with his collaborators, provided a captivating exploration into the realm of L-functional analysis, drawing upon foundational work and recent developments in the field. A Brief Recap of the Talk The seminar centered on “L-functional analysis“, where traditional scalars in functional analysis (R or C) are replaced by a (real or complex) Dedekind complete unital f-algebra L. Marten Wortel’s presentation commenced with a detailed introduction to the theoretical underpinnings of the study, highlighting its significance and the novel perspectives it introduces into the analysis using ordered structures. Marten provided insights into the collaborative research conducted with Walt van Amstel, Eder Kikianty, Miek Messerschmidt, Luan Naude, Chris Schwanke, Jan Harm van der Walt from Pretoria, and Mark Roelands from Leiden. The discussion illuminated their pioneering work on L-Banach and L-Hilbert spaces and presented comparative analyses with the classical approaches, enriching the attendees’ understanding of the subject. Access the Seminar Materials For those who were unable to attend the seminar or wish to revisit the insightful discussions, we have made both the video recording and the presentation slides available below. Machine Learning Frameworks Dr. Mohamed Kadhem Karray has just delivered his lecture at the RGOSA seminar series, titled “Machine Learning Frameworks“. The abstract of his talk is as follows: “We aim to present the essential mathematical concepts and results of machine and deep learning. In this first lecture, we define precisely the Machine Learning (ML) frameworks and give the first results on “performance guarantee” of ML algorithms in the case of a finite class of hypothesis. The following lessons extend these results to the case of infinite classes (even of infinite Dr. Mohamed Kadhem Karray also has a Youtube channel in which he explains the mathematical aspects of Machine Learning. Please find the seminar video attached, and the presentation slides below. Machine Learning Frameworks We are delighted to welcome researcher Mohamed Kadhem Karray at the RGOSA seminar, where he will be presenting a talk titled “Machine Learning Frameworks” on Thursday, October 19th, at 2:00 PM Tunisian time. Despite his position as a researcher in the industry, Dr. Mohamed Kadhem Karray holds expertise in mathematics. Consequently, his presentation will delve into mathematical aspects, and the abstract of his talk is as follows: “We aim to present the essential mathematical concepts and results of machine and deep learning. In this first lecture, we define precisely the Machine Learning (ML) frameworks and give the first results on “performance guarantee” of ML algorithms in the case of a finite class of hypothesis. The following lessons extend these results to the case of infinite classes (even of infinite Dr. Mohamed Kadhem Karray also has a Youtube channel in which he explains the mathematical aspects of Machine Learning. 
The recordings and presentation slides from the previous talks are available on the seminar’s webpage: https://rgosa.net/seminar/season-2023-2024/ And the link for the zoom conference is: Category measures, the dual of the Dedekind completion of C(K) spaces and hyper-Stonean spaces Professor Jan Harm van der Walt from the University of Pretoria has just delivered his lecture at the RGOSA seminar series, titled “Category measures, the dual of ${C(K)^\delta}$ and hyper-Stonean The abstract of the talk is here: For a compact Hausdorff space ${K}$, we give descriptions of the dual of ${C(K)^\delta}$, the Dedekind completion of the Banach lattice ${C(K)}$ of continuous, real-valued functions on ${K}$. We characterize those functionals which are ${\sigma}$-order continuous and order continuous, respectively, in terms of Oxtoby’s category measures. As applications, we give a purely topological characterization of hyper-Stonean spaces, and characterize those spaces ${K}$ for which ${C(K)}$ admits a strictly positive order continuous functional. Please find the seminar video attached, and the presentation slides below. Nonlinear Perron-Frobenius Theory: Part 2 Professor Bas Lemmens has just delivered the final part of his lecture at the RGOSA seminar series, titled “Nonlinear Perron-Frobenius Theory: Part 2.” We look forward to reconvening next week for Part 2. Please find the seminar video attached, and the presentation slides below. And the slides are here: Nonlinear Perron-Frobenius Theory: Part 1 Professor Bas Lemmens has just delivered his inaugural lecture at the RGOSA seminar series, titled “Nonlinear Perron-Frobenius Theory: Part 1.” We look forward to reconvening next week for Part 2. Please find the seminar video attached, and the presentation slides below. Nonlinear Perron-Frobenius Theory For Bas Lemmens a mathematician is Somebody who explores some relevant abstract structures. We are delighted to officially announce the commencement of the new RGOSA seminar season. We are thrilled to start this year’s series with a distinguished guest, Pr. Bas Lemmens. The seminars are scheduled to take place every Thursday at 2:00 PM Tunis time (Tunisia). Your participation and engagement are highly encouraged as we embark on this enlightening journey of knowledge The link for the seminar is below: This Thursday, September 7, 2023 and the following Thursday, Pr. Bas Lemmens will talk about: Nonlinear Perron-Frobenius Theory The abstract of his talks is: Sometimes in mathematics a simple-looking observation opens up a new road to a fertile field. Such an observation was made independently by Garrett Birkhoff and Hans Samelson, who remarked that one can use Hilbert’s (projective) metric and the contraction mapping principle to prove some of the theorems of Perron and Frobenius concerning eigenvectors and eigenvalues of nonnegative matrices. This idea has been pivotal for the development of nonlinear Perron–Frobenius theory. In the past few decades a number of strikingly detailed nonlinear extensions of Perron–Frobenius theory have been obtained. These results provide an extensive analysis of the eigenvectors and eigenvalues of various classes of order-preserving (monotone) nonlinear maps and give information about their iterative behavior and periodic orbits. They have found applications in computer science, mathematical biology, game theory and the study of dynamical systems. This two-part lecture provides an introduction to nonlinear Perron-Frobenius theory. 
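As a toy numerical companion to the abstract above (not part of the seminar material), Birkhoff's observation can be seen directly: iterating an entrywise-positive matrix contracts Hilbert's projective metric, so the normalized iterates converge to the Perron eigenvector. The matrix below is random and purely illustrative.

import numpy as np

def hilbert_projective_distance(x, y):
    # d_H(x, y) = log( max_i(x_i / y_i) / min_i(x_i / y_i) ) for strictly positive vectors
    r = x / y
    return np.log(r.max() / r.min())

rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 0.1          # entrywise positive matrix (toy example)

x = np.ones(4)
for k in range(30):
    x_new = A @ x
    x_new /= x_new.sum()              # scaling is irrelevant in projective terms
    print(k, hilbert_projective_distance(x_new, x))   # successive distances shrink geometrically
    x = x_new

lam = (A @ x) / x                     # once converged, A x is (approximately) proportional to x
print("approximate Perron eigenvalue:", lam.mean())

The contraction rate is governed by Birkhoff's contraction coefficient for the cone of positive vectors; the nonlinear theory described in the lecture replaces the linear map x ↦ Ax with more general order-preserving maps.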
On Polynomial Conjectures of Nilpotent Lie Groups Unitary Representations
This is the recording of the talk of Prof. Ali Baklouti. His talk is titled "On Polynomial Conjectures of Nilpotent Lie Groups Unitary Representations". The slides of his talk are below:

A Riesz-Frechet theorem in Riesz spaces
This is the recording of the talk of Bruce A. Watson. His talk is about "A Riesz-Frechet theorem in Riesz spaces". The slides of his talk are below:

Relative uniform convergence in vector lattices
This is the recording of the talk of Eduard Emelyanov. His talk is about "Relative uniform convergence in vector lattices". The slides of his talk are below:
{"url":"https://rgosa.net/author/admin/","timestamp":"2024-11-14T23:49:46Z","content_type":"text/html","content_length":"63490","record_id":"<urn:uuid:b63544a5-130e-4526-bd1b-24348f1f20f5>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00316.warc.gz"}
AMPL Python API AMPL Python API# amplpy is an interface that allows developers to access the features of AMPL from within Python. For a quick introduction to AMPL see Quick Introduction to AMPL. In the same way that AMPL’s syntax matches naturally the mathematical description of the model, the input and output data matches naturally Python lists, sets, dictionaries, pandas and numpy objects. All model generation and solver interaction is handled directly by AMPL, which leads to great stability and speed; the library just acts as an intermediary, and the added overhead (in terms of memory and CPU usage) depends mostly on how much data is sent and read back from AMPL, the size of the expanded model as such is irrelevant. With amplpy you can model and solve large scale optimization problems in Python with the performance of heavily optimized C code without losing model readability. The same model can be deployed on applications built on different languages by just switching the API used. Installation & minimal example# # Install Python API for AMPL $ python -m pip install amplpy --upgrade # Install solver modules (e.g., HiGHS, CBC, Gurobi) $ python -m amplpy.modules install highs cbc gurobi # Activate your license (e.g., free https://ampl.com/ce license) $ python -m amplpy.modules activate <license-uuid> # Import in Python $ python >>> from amplpy import AMPL >>> ampl = AMPL() # instantiate AMPL object You can use a free Community Edition license, which allows free and perpetual use of AMPL with Open-Source solvers. There are also free AMPL for Courses licenses that give unlimited access to all commercial solvers for teaching. # Minimal example: from amplpy import AMPL import pandas as pd ampl = AMPL() set A ordered; param S{A, A}; param lb default 0; param ub default 1; var w{A} >= lb <= ub; minimize portfolio_variance: sum {i in A, j in A} w[i] * S[i, j] * w[j]; s.t. portfolio_weights: sum {i in A} w[i] = 1; tickers, cov_matrix = # ... pre-process data in Python ampl.set["A"] = tickers ampl.param["S"] = pd.DataFrame(cov_matrix, index=tickers, columns=tickers) ampl.solve(solver="gurobi", gurobi_options="outlev=1") assert ampl.solve_result == "solved" sigma = ampl.get_value("sqrt(sum {i in A, j in A} w[i] * S[i, j] * w[j])") print(f"Volatility: {sigma*100:.1f}%") # ... post-process solution in Python
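The minimal example above ends with a placeholder comment for post-processing. As a small, hedged illustration of what that step could look like (the accessor names below follow the amplpy API as commonly documented and should be checked against the current reference), the solved values can be pulled back into pandas:

# hypothetical continuation of the example above, after ampl.solve(...) succeeds
weights = ampl.get_variable("w").get_values().to_pandas()     # optimal weights, indexed by ticker
print(weights)

variance = ampl.get_objective("portfolio_variance").value()   # optimal objective value
print("Portfolio variance:", variance, "; volatility:", variance ** 0.5)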
{"url":"https://amplpy.ampl.com/en/latest/","timestamp":"2024-11-09T04:56:22Z","content_type":"text/html","content_length":"35830","record_id":"<urn:uuid:3e220b44-54e5-45cd-99e7-d17f16cc8ff3>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00413.warc.gz"}
A Gap in Mathematics Education Zurab Janelidze December 23, 2021 No comments The process of creation of mathematics has the following hierarchically dependent components: • Coming up with a concept. • Coming up with a question dealing with a relationship between concepts (this includes formulating a hypothesis, as well as finding an example or a counterexample of a concept/phenomenon). • Answering a question dealing with a relationship between concepts (this includes proving theorems as well as solving problems without being given the recipe for solution). • Applying the answer to a question dealing with a relationship between concepts to answer another such question (this includes solving problems by applying a given recipe for solution). Modern mathematics education (both at the school and at the university levels) focuses mainly on the last two points. What is regarded as a low quality mathematics education would focus only on the last point. For a more whole mathematics education, the first two points must receive as much attention as the last two points do. It is not difficult to implement the first two points in the practice of mathematics teaching. Here is an example of the structure of a class that focuses on the second and the third points: 1. The teacher proposes one or two concepts that the pupils are familiar with (perhaps, by taking suggestions from the class). 2. The teacher then asks the pupils to explain the concepts, helping the pupils in the explanation, when necessary. 3. Then, the teachers asks the pupils to think of a question that would combine the named concepts. The teacher helps in this process. 4. After this, the teacher and the pupils engage in answering the question together. 5. If the question is too hard to answer, it should be concretized to a simpler question. If the question is too easy to answer, it should be abstracted to a more difficult question. Concepts arise in mathematics as a necessity to help one express a general phenomenon. Incorporation of the first point in a classroom can be achieved by explaining this necessity for the concepts that the pupils are already familiar with, or by taking pupils on a journey that would help them identify such a necessity and will result in (re)discovering a mathematical concept. Teaching concepts by first showing examples and then asking the pupil to develop a concept that fits those examples is another, perhaps simpler, way. The activities which ask a pupil to identify a pattern in a sequence of numbers or figures is in some sense of this type. However, these activities are sold as activities that fall under the third point, as the pupil is being convinced that the question must have one definite answer. 0 Comments:
{"url":"https://www.zurab.online/2021/12/how-to-teach-mathematical-exploration.html","timestamp":"2024-11-12T19:45:32Z","content_type":"application/xhtml+xml","content_length":"212944","record_id":"<urn:uuid:c6e1290c-bf52-444e-b76f-a6b8175fc0ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00093.warc.gz"}
Mole Fraction and Partial Pressure of the Gas - Chemistry Steps General Chemistry

In the previous post, we talked about partial pressures and the Dalton law. Today, we will see how the mole fraction and the partial pressure of a gas are related. Remember, the mole fraction (χ) is the ratio of the moles of a component over the total number of moles of all the components. So, we can visualize this as the percentage of a gas in a mixture, which helps quickly determine its partial pressure.

For example, take a mixture of gases containing 60% of gas "A" and 40% of gas "B" where the total pressure is 10 atm. What is the partial pressure of each gas in the mixture? You may have already determined that the partial pressures of A and B are 6 atm and 4 atm. We get these numbers by multiplying the total pressure by the corresponding fraction of each gas. So, for gas A, it is: 0.6 x 10 atm = 6 atm, and for gas B, it is: 0.4 x 10 atm = 4 atm. These fractions (0.6 and 0.4) are the mole fractions of gas A and B.

Now, while this calculation is straightforward because of the whole numbers, there are cases where a general formula is needed, and the formula shows that the partial pressure of a gas is equal to the product of its mole fraction and the total pressure: P(gas) = χ(gas) × P(total).

For example: A mixture of gases contains CH₄, N₂, and H₂ and exerts a total pressure of 2.65 atm. The mixture contains 0.456 mol of CH₄, 0.540 mol of N₂, and 0.730 mol of H₂. What is the partial pressure of hydrogen in atmospheres?

We first write the formula for the partial pressure of hydrogen: P(H₂) = χ(H₂) × P(total). The total pressure is given (2.65 atm), so to find the partial pressure of hydrogen, we need to calculate its mole fraction first and then use it in the equation above:

\[\chi(\mathrm{H_2}) = \frac{n(\mathrm{H_2})}{n(\mathrm{H_2}) + n(\mathrm{CH_4}) + n(\mathrm{N_2})}\]

\[\chi(\mathrm{H_2}) = \frac{0.730}{0.730 + 0.456 + 0.540} = 0.423\]

P(H₂) = 0.423 × 2.65 atm = 1.12 atm

Let's do another example where the mole fraction of a gas needs to be determined. The partial pressures of CH₄, C₃H₈, and C₄H₁₀ in a gas mixture are 270 torr, 1016 torr, and 1142 torr, respectively. What is the mole fraction of butane (C₄H₁₀)? In this case, we need to rearrange the formula correlating the partial pressure of butane and the total pressure: P(C₄H₁₀) = χ(C₄H₁₀) × P(total).

\[\chi(\mathrm{C_4H_{10}}) = \frac{P(\mathrm{C_4H_{10}})}{P_{\mathrm{total}}}\]

\[\chi(\mathrm{C_4H_{10}}) = \frac{1142}{270 + 1016 + 1142} = 0.470\]
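Because the worked examples above are simple arithmetic, they are easy to script and check. The short sketch below just re-does the two calculations with the numbers given in the text:

def mole_fractions(moles):
    total = sum(moles.values())
    return {gas: n / total for gas, n in moles.items()}

# Example 1: partial pressure of H2 in the CH4/N2/H2 mixture at 2.65 atm total
moles = {"CH4": 0.456, "N2": 0.540, "H2": 0.730}
x = mole_fractions(moles)
p_H2 = x["H2"] * 2.65
print(f"x(H2) = {x['H2']:.3f}, P(H2) = {p_H2:.2f} atm")    # about 0.423 and 1.12 atm

# Example 2: mole fraction of butane from the partial pressures (torr)
partial = {"CH4": 270, "C3H8": 1016, "C4H10": 1142}
x_butane = partial["C4H10"] / sum(partial.values())
print(f"x(C4H10) = {x_butane:.3f}")                         # about 0.470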
{"url":"https://general.chemistrysteps.com/mole-fraction-partial-pressure-gas/","timestamp":"2024-11-06T17:14:48Z","content_type":"text/html","content_length":"180044","record_id":"<urn:uuid:7ff41deb-eeb2-44d0-bd72-89c883e2cefd>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00786.warc.gz"}
Sensitivity measurements for locating microseismic events - Canadian Society of Exploration Geophysicists (2024) John C. Bancroft, Joe Wong, and Lejia Han CREWES and Geoscience Department, University of Calgary, Calgary, Alberta, Canada February 2010 | Volume 35 Issue 02 | View IssueClick Here for a PDF of this Article The first-arrival clock-times from a number of receivers are used to estimate the clock-time and location of a microseismic event. The 3D analytic solution is based on a 2D Apollonius method which requires four receivers that are non-coplanar or non-collinear. These restrictions are typically violated when receivers are placed in a large grid on the surface or in a linear array in a well. Analytic solutions are presented for these restricted cases along with an analysis that relates the accuracy of the estimated source location to clock-times at the receivers. These analytic methods are assumed to be part of a larger system where many solutions can be estimated from small subsets of data extracted from the large arrays. The sensitivity of each analytic solution was evaluated by one hundred trials in which Gaussian random noise was added to the receiver clock-times i.e. jitter. The sensitivity of a vertical array of many receivers was evaluated by adding jitter to all the receiver clock-times, then using many subsets of the receivers to estimate many analytic source locations. These many source locations provide a more accurate solution along with its standard deviation. The problem of identifying the location of a microseismic event applies to well fracturing, CO2 sequestration, or the location of impending geological hazards such as landslides or major earthquakes. Other areas for applications of these techniques are converting raypath traveltimes to gridded traveltimes, sniper locating, or global positioning. Microseismic events may be located by a number of techniques that include a three component receiver that uses the three components to estimate the arrival direction of the wavefield, then uses the difference in P- and S-wave arrival times to estimate a distance. Other techniques use first-arrival clock-times, then “search over a grid of hypothesized source locations”, (Daku et al. 2004) or the wavefield propagation (seismic migration) of many surface receivers (Chambers et al. 2008). The approach in this paper uses only the first-arrival clocktimes of P or S energy to estimate the clock-time and location of a microseismic event. Earlier papers, (e.g. Bancroft 2006) have shown how the 2D problem of locating the source can be solved by constructing a circle tangent to three other circles. These circles were centered at the receiver location and had a radius proportional to the clock-times of three noncollinear receivers. The construction of a tangent circle to three other circles was solved by Apollonius about 200 years B.C., and many algebraic solutions based on the geometrical solution have been derived. The 3D solution is based on the 2D solution (Bancroft and Du 2006 and 2007). The 3D solution required the construction of a sphere that was tangent to four other spheres, with restrictions that the receiver locations be non-coplanar or noncollinear. We present solutions for these cases where either four receivers are co-planar on the corners of a square grid at the surface, or three equally spaced collinear receivers. The solution for three collinear receivers becomes a 2D problem and is only able to solve the radial distance and depth of the source from the well. 
These three collinear receivers cannot estimate the azimuth. The sensitivity of the Apollonius, square grid, and three receiver methods will be discussed relative to the accuracy of the receiver clock-times. Each of these analytic solutions assumes the location of the receivers is known, and that the velocity is either constant or of an RMS type. The first-arrival clock-times are only relative to the source clock-time and can be very large values. Any arbitrary time could be added or subtracted to these times and will be independent of the actual source location. In practice, the minimum receiver clock-time is subtracted from all the other receiver clock-times. The source clock-time will then be negative, an aid in simplifying the choice of the correct Apollonius solution.

The sensitivity of the analytic solutions and the sensitivity of a large array of receivers are evaluated by adding Gaussian random noise to the clock-times of the receivers. This noise in the time coordinate is referred to as jitter. In testing the individual analytic solutions, one hundred trials were created with different jitter to define a distribution of the estimated sources. When testing a large array of vertical receivers, jitter was added to each receiver, then subsets or groups of receivers were extracted to compute many analytic estimates of the source location. These analytic solutions provided a more accurate solution along with an estimate of the standard deviation (SD) of the source location.

The Apollonius solutions are based on geometric constructions and require relatively simple algebraic operations (+, -, * and /) and only one square root. However, part of the computation requires a division that can go to zero if the receivers are co-planar or co-linear. We are able to overcome this Apollonius restriction for two specific geometries: one for a planar surface with four receivers on the corners of a square grid, and the other for three equally spaced collinear receivers. The four receivers on a square grid may be applicable to a large array of receivers on the surface, while the three collinear receivers are applicable to many equally spaced receivers in a well.

The traveltime equations for raypaths between a source at (x[0], y[0], z[0]) and four arbitrarily located receivers at (x[1], y[1], z[1]), (x[2], y[2], z[2]), (x[3], y[3], z[3]), and (x[4], y[4], z[4]) are:

ν (t[i] - t[0]) = √( (x[i] - x[0])² + (y[i] - y[0])² + (z[i] - z[0])² ),  i = 1, 2, 3, 4,

where ν is the (constant) velocity, t[0] is the clock-time of the source event and t[1], t[2], t[3], and t[4] are the clock-times of the event at the corresponding receivers. These equations are the starting point of the Apollonius solution (Bancroft and Du 2006 and 2007).

Four coplanar receivers on a square grid

We now restrict the four receivers to be located on a square grid with one point at the origin (0, 0, 0) and the other three points separated by the distance h, i.e. (h, 0, 0), (0, h, 0) and (h, h, 0), all located on the surface with z[1] to z[4] = 0. This simplification allows us to define the four equations as:

ν (t[1] - t[0]) = √( x[0]² + y[0]² + z[0]² )
ν (t[2] - t[0]) = √( (x[0] - h)² + y[0]² + z[0]² )
ν (t[3] - t[0]) = √( x[0]² + (y[0] - h)² + z[0]² )
ν (t[4] - t[0]) = √( (x[0] - h)² + (y[0] - h)² + z[0]² )

This simplification of the raypath equations bypasses the divide-by-zero restriction of the more general solution for planar receivers. Algebraic manipulation (Bancroft et al. 2009) allows the source clock-time t[0] to be: This solution for t[0] is independent of the size of the square h and the velocity ν. The source coordinates x[0], y[0], and z[0] are given by: The depth or z[0] is computed using a square root, providing two possible locations that are either above or below the surface, simplifying the solution.
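For readers who want to experiment with the geometry just described, the short script below sets up the same kind of problem numerically: four receivers on the corners of a square at the surface, a constant velocity, and first-arrival clock-times, solved for the source position and clock-time by a generic least-squares fit. This is only an illustrative check, not the closed-form square-grid solution derived in the paper, and the velocity, coordinates and clock-times are made-up values.

import numpy as np
from scipy.optimize import least_squares

v = 2000.0                                    # assumed constant velocity (m/s)
h = 500.0                                     # side of the square grid (m)
rx = np.array([[0.0, 0.0, 0.0],               # receivers on the corners, z = 0
               [h,   0.0, 0.0],
               [0.0, h,   0.0],
               [h,   h,   0.0]])

src_true = np.array([700.0, -30.0, -500.0])   # hypothetical source position (m)
t0_true = 0.05                                # hypothetical source clock-time (s)
t_obs = t0_true + np.linalg.norm(rx - src_true, axis=1) / v

def residuals(p):
    # Difference between modelled and observed clock-times for (x0, y0, z0, t0).
    x0, y0, z0, t0 = p
    d = np.linalg.norm(rx - np.array([x0, y0, z0]), axis=1)
    return t0 + d / v - t_obs

# Start below the surface so the solver picks the z0 < 0 mirror solution.
sol = least_squares(residuals, x0=np.array([h / 2, h / 2, -100.0, 0.0]))
print(sol.x)   # approximately (700, -30, -500, 0.05)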
Three collinear receivers

The symmetry of first-arrival clock-times of a vertical array of receivers will not allow an estimation of the source azimuth. Consequently, the problem is reduced to a 2D problem with three equally spaced receivers that we choose to be in a vertical direction. When the three receivers are separated by a distance h, the raypath equations become: If we take the traveltime between two adjacent receivers to be t[h] = h/ν, then the source clock-time is given by: The solution is in cylindrical coordinates, where the radial distance r[0] replaces x[0], and the depth of the micro-source, z[0], are given by: The three collinear receivers in a vertical well cannot establish an azimuth for the source location. However, receivers in a deviated well may not be collinear or coplanar and allow the use of the conventional Apollonius solution to identify all components of the source (x[0], y[0], z[0]).

Sensitivity of the Apollonius solution

The first example demonstrates the Apollonius solution for four receivers arbitrarily located near the surface, and the source located at a depth that matches the spread of the receivers. A source clock-time was chosen, and the clock-times at the receivers calculated. When there are no errors, the source is located within the accuracy of the computer and any error is imperceptible. Gaussian random jitter with a SD of 1.0 ms was added to the clock-time of the receivers. Using only the receiver locations and their perturbed clock-times, a source location and its clock-time were estimated. This procedure was repeated one hundred times and the mean and standard deviation of the source location estimated. Figure 1 shows four views (side from y, side from x, plan, and perspective), with the receivers drawn as green “x” symbols and the source location as a blue “+” symbol. The estimated source locations are identified by the red circles “o”. The results for Figure 1 are:

x0 = 300.00    y0 = 100.00       z0 = -500.00
xMn = 299.89   yMn = 99.856575   zMn = -500.76
xSD = 4.02     ySD = 6.195285    zSD = 50.07

where x0, y0, and z0 are the defined locations, xMn, yMn, and zMn are the mean locations of the 100 trials, and xSD, ySD, and zSD are the standard deviations of the estimated source locations.

The source location is then moved a significant distance to x[0] = 2000 m, yielding the results displayed in Figure 2. The spread of the estimated source locations has increased. The parameters for Figure 2 follow:

x0 = 2000.00   y0 = 100.00       z0 = -500.00
xMn = 2147.06  yMn = 89.243578   zMn = -545.99
xSD = 363.21   ySD = 30.858586   zSD = 128.15

The above results are misleading as the actual distribution of the estimated sources is much narrower than the standard deviations would indicate. A more accurate description of the distributed locations would include their shape and direction. The accuracy of the estimated source location is dependent on its location relative to the receivers. The error of the estimated source depth for a grid of many x and y locations was computed. At each grid point 100 trials were computed, each with different jitter of 1.0 ms. The receiver locations were the same as those in the previous example, and the source depth was maintained at 500 m. An alpha-trim mean was included to remove extreme errors in the estimates of the 100 trials. The trim was 10% off the top and bottom of the sorted estimates.
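The 10% alpha-trimmed mean described above is simple to state in code. The function below is a generic sketch (the names and the test values are illustrative, not taken from the paper): the estimates are sorted, a fraction alpha is dropped from each end, and the remainder is averaged.

import numpy as np

def alpha_trimmed_mean(estimates, alpha=0.10):
    # Sort the estimates, drop a fraction `alpha` off each end, average the rest.
    x = np.sort(np.asarray(estimates, dtype=float))
    x = x[~np.isnan(x)]                   # discard failed estimates, if any
    k = int(round(alpha * x.size))        # samples trimmed from each end
    if x.size - 2 * k <= 0:
        raise ValueError("alpha too large for this sample size")
    return x[k: x.size - k].mean()

# 100 simulated depth estimates around -500 m, with a few gross outliers
rng = np.random.default_rng(0)
z = rng.normal(-500.0, 50.0, size=100)
z[:3] = [-5000.0, 4000.0, -9999.0]        # a few wildly wrong solutions
print(alpha_trimmed_mean(z, alpha=0.10))  # close to -500, outliers suppressed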
The results of the errors in the vertical components are displayed in Figure 3, which also includes four vertical black lines that represent the location of the four receivers. Some errors in this figure are much larger than shown and are clipped at the maximum values. The receiver locations for the data in Figure 3 are: x1 = 0 y1 = 0 z1 = 0 x2 = 500 y2 = 50 z2 = -30 x3 = 30 y3 = 300 z3 = 20 x4 = 400 y4 = 450 z4 = -10. The y location of the third receiver was moved from y3 = 300 m to y3 = 600 m: x1 = 0 y1 = 0 z1 = 0 x2 = 500 y2 = 50 z2 = -30 x3 = 30 y3 = 600 z3 = 20 x4 = 400 y4 = 450 z4 = -10 producing a significant change in the error pattern shown in Figure 4. Other tests indicate that the areas with poor accuracy appear to be more dependent on the x, and y locations of the receivers, and less dependent on the depth of the receivers. Sensitivity of four coplanar receivers on a square grid Data were created for four receivers on a square with a distance of 500 m on each side of the square. One hundred trials were created with Gaussian random jitter, with a standard deviation of 1 ms, added to the receiver times. The results are shown in Figure 5. The geometry and results for the data in Figure 5 are: x0 = 700.00 y0 = -30.00 z0 = -500.00 xMn = 699.55 yMn = -29.76 zMn = -501.47 xSD = 2.52 ySD = 4.55 zSD = 76.24 The source locations were then spread over a large grid and the vertical component of the source SD displayed. The error in the estimation of the source varies considerably, however, there are areas where the noise is reasonably behaved, but there are also areas where the noise tends to extreme values. In Figure 7, the jitter is reduced to a standard deviation to 0.1 ms, or by a factor of 10. The error in the source estimate is considerably reduced, and valid results cover a much larger area. Sensitivity of three equally spaced receivers A simulation of three equally spaced and collinear receivers was conducted. The results are contained in a 2D plane, and if the receivers are in a vertical well, only the depth and radial distance from the well can be computed. A source location was defined and the traveltimes to the three receivers estimated. Using only the receiver geometry and the three receiver clock-times, the source location was estimated. One hundred trials were conducted with jitter added to the receiver clock-times. The results of one set of trials is shown in Figure 8. Some of the trials failed to produce a location, and the spread was large for a jitter of 1 ms. An alpha-trim (10%) was once again applied. The alpha-trim identified 6 failed computations in 100 tests. The trim size on the remaining samples was 9, and after the 100 estimated depth values we sorted, samples from 10 to 85 were used. The results of the raw data, and with the alpha-trim, are displayed in Figure 8. The Black “+” is the estimated location of the source. The jitter in Figure 8 (1.0 ms) may be relatively large, so a smaller jitter level of 0.1 ms was added to the receiver clock-times. The distribution of the estimated locations is much smaller in Figure 9, and there were no failed estimates in the raw or alphatrimmed data. Figure 10 illustrates the results from two sets of receivers, one set at depths of 0, 50, 100 m, and the other set at depths of 180, 200, and 220 m. The source was located at an offset of 200 m and a depth of 150 m. The results display the estimated source locations for 100 tests when a jitter of 0.1 ms was added to the receiver clock-times. 
The difference in the distribution of the solutions is mainly due to the difference in the receiver interval, i.e., 50 versus 20 m. In addition, the distributions tend to lie on a linear path running from the central receiver to the actual source location. This property, also evident in the previous examples, led to the development of a new method for refining the estimated source location when used with multiple receiver arrays. Sensitivity of a vertical array of many receivers The previous examples estimated the sensitivity for each of the analytic solution by using many trials that add jitter to the receiver clock-times. We don’t have that capability in true applications that use many receivers. However we do have the capability to produce many source location estimates by selecting many groups of three equally spaced receivers from the array of receivers then computing the analytic solutions. Consider six equally spaced receivers in a vertical well with a receiver interval h. From these six receivers, we can extract groups of three equally spaced receivers, i.e. 5 groups with interval h, four with interval 2h, three with interval 3h, and two with interval 4h. Each of these fifteen groups will provide an initial estimate of the location and then provide a final estimate of the location (r[0], z[0]) as the mean of the individual components. In addition, the number of initial estimates allows the computation of the standard deviation of the source location. A vertical array of sixteen equally spaced receivers in a vertical well will produces 56 groups of three equally spaced receivers that will be used to compute the sensitivity. The receivers extend from a depth of 100 m with an interval of 50 m to a maximum depth of 850 m. The source is located at an offset of 1000 m and a depth of 400 m. The 56 solutions are shown in Figure 11 where the receiver locations are now in blue. Figure 12a and b show a zoom of Figure 11 with (a) showing a concentration of the estimated solutions close to the actual solution. The mean of the radial and depth components provide a final estimate of the source location, along with the SD. This is referred to as the P (point) solution. Part (b) includes lines from the estimated source location to the corresponding center of the receiver group. The intersection of these vectors produces a new estimate of the source location and is referred to as the V (vector) solution. Figure 13 contains a vertical array of sixteen receivers spread from the surface to a depth of 300 m, and a vertical array of twenty one source locations at an offset of 1000 m with depths that range from the surface to 1000 m. Jitter, with a SD of 0.1 ms, was added to the clock-time of each receiver. The estimated source location and the SD are shown for the P and V solutions. In this figure, the V solution is superior. Note the deviation of the estimated location of the sources at the greater depths, and how they tend toward the receivers. This deviation becomes severe for all depths when the jitter is significantly increased to 1.0 ms, as illustrated in Figure 14. While appearing disastrous, note that the discretion of the estimated solutions tend to the true location, and can be corrected with the appropriate numerical techniques. The deviation of the source locations is due to the kurtosis of the distribution of estimates. 
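The counting of receiver groups described above is easy to reproduce. The sketch below is not from the paper; it enumerates every group of three equally spaced receivers that can be drawn from an array of n equally spaced receivers. For spacing k there are n - 2k such groups, and for n = 16 the total is the 56 groups quoted for the sixteen-receiver well.

def equally_spaced_triples(n):
    # All index triples (i, i+k, i+2k) that fit inside 0..n-1,
    # i.e. every group of three equally spaced receivers.
    triples = []
    for k in range(1, (n - 1) // 2 + 1):   # spacing, in units of the receiver interval
        for i in range(n - 2 * k):
            triples.append((i, i + k, i + 2 * k))
    return triples

print(len(equally_spaced_triples(16)))     # 56 groups for sixteen receivers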
Figures 15 and 16 contain a new configuration of source and receiver locations that illustrate the difference in accuracy when using the first-arrival times of P- and S-waves. A jitter error of 1.0 ms, the same as in Figure 14, is used. Now eight receivers are spread from the surface with increments of 80 m to a maximum depth of 560 m, and the sources are located with an offset of 500 m and depths that range from the surface to 1000 m. Figure 15 contains the P solution for (a) the P-wave velocities and (b) the S-wave velocities, and Figure 16 the corresponding V solutions. The improvements in Figures 15 and 16 over Figure 13 are due to:

• a larger spread of the receiver array, even though there are fewer receivers,
• a smaller radial displacement between the source and receiver arrays,
• use of the V rather than the P solution type, and
• a lower velocity for the S-waves.

The above discussion of the multiple receivers in a vertical well is applicable to a large grid of receivers, where multiple groups of four receivers on the corners of a square can be grouped for an analytic solution to estimate the source location and SD of the grid.

It is not possible to pick the exact first-arrival time of the P- and S-waves, but picking the relative first-arrival time between different receivers can be more accurate. An example would be the first peak of the arriving S-wave. Interpolation techniques allow picking a peak with an accuracy greater than the sampling interval; however, amplitude noise that is independent from receiver to receiver may distort the location of the desired peak. We feel that a range of jitter from 0.1 to 1.0 ms is reasonable and that the above examples do provide a reasonable expectation of the relative accuracies for the given configurations.

Three-component receivers in a well do allow the azimuthal component of the microseismic event to be estimated. However, greater accuracy would be achieved when using multiple wells. When using only the first-arrival times, less expensive receivers, such as hydrophones, could be deployed in multiple wells, which may rival the quality and expense of deploying the multicomponent receivers.

Three analytic methods for computing the location of microseismic hypocenters were presented. Each method only uses the locations and first-arrival clock-times of the receivers. The first used the Apollonius solution for four receivers in an arbitrary configuration. This method fails if the receivers are collinear or coplanar, so two additional methods were presented: four coplanar receivers on a square grid, and three collinear, equally spaced receivers. The medium was assumed to have a constant velocity, or a geometry where RMS-type velocities could be used. The sensitivity of the analytic solutions was evaluated by adding jitter to the receiver clock-times. The analytic solutions were assumed to be part of a larger array of receivers, from which many groups of receivers could be extracted to compute numerous source locations. Jitter was added to the receiver clock-times, which enabled an accurate estimate of the location and standard deviation of the source. A vector method was introduced that improved the accuracy of larger receiver arrays. The accuracy of estimated locations varied with the location of the source relative to the receivers and with the distance from the source to the receivers. The accuracy also varied with the amount of noise added to the receiver clock-times.

Author Biography

John Bancroft obtained his B.Sc.
from the University of Calgary and a Ph.D. from BYU. He has been working in the Calgary geophysical industry since 1980 and has developed software for the seismic processing, specializing in the areas of statics analysis, velocity estimation, and seismic imaging. John is an Adjunct Faculty member in the Department of Geoscience, University of Calgary, a Senior Research Scientist with the CREWES consortium, and an instructor for the SEG. His current interests include microseismic, seismic imaging and inversion, and source wavelets. Lejia Han received her B.Sc. degree in applied geophysics from Chengdu University of Technology in China, and worked in the oil industry in China. Subsequently, she emigrated to Canada and obtained her B.Sc. degree in computer science from McGill University. She is currently a M.Sc. student with CREWES at the University of Calgary, where she is involved in analysis of full-waveform sonic well logs, microseismic data, and shallow reflection data. Joe Wong attended Queen’s University in Kingston, Ontario, where he obtained his B.Sc. in Physics and Applied Mathematics. He received M.Sc. and Ph.D. degrees in Applied Geophysics from the University of Toronto. After spending four years as a Research Geophysicist with the University of Toronto, he worked for many years as consulting geophysicist in the mining exploration and geotechnical engineering fields, specializing in crosshole seismic surveys. In March 2006, he joined CREWES at the University of Calgary as Senior Research Geophysicist. His research interests are scaled-down physical modeling of seismic methods, near-surface seismology, VSP, crosswell seismology, and analysis of microseismic data. We wish to thank NSERC and the sponsors of the CREWES consortium for supporting this research. CREWES: The Consortium for Research in Elastic Wave Exploration Seismology. We wish to thank NSERC and the sponsors of the CREWES consortium for supporting this research. A special thanks to Kevin Hall for making the graphics printable. Bancroft, J.C., Wong, J., and Han, L., 2009, Sensitivity measurements for locating microseismic events, CREWES Research Report, Vol. 21 Bancroft, J.C., 2007, Visualization of spherical tangency solutions for locating a source point from the clock-time at four receiver locations, CREWES Research Report Vol. 19, Ch. 44. Bancroft, J.C. and Du, X., 2007, Traveltime computations for locating the source of micro seismic events and for forming gridded traveltime maps, 69th EAGE Conference, London, England Bancroft, J.C. and Du, X., 2007b, Traveltime computations for locating the source of micro seismic events and for forming gridded traveltime maps, CSPG CSEG Convention, Calgary, Alberta, Canada Bancroft, John C., 2006, Locating microseismic events and traveltime mapping using locally spherical wavefronts, CREWES Research Report Bancroft, J.C., and Du, X., 2006, The computations of traveltimes when assuming locally circular or spherical wavefronts, SEG International Convention, New Orleans Chambers, K., Brandsberg-Dahl, S., Kendall, M., and Rueda, J., 2008, Testing the ability of surface arrays to locate microseismicity, SEG National Convention. Daku, B., Salt, J., Sha, L., and Prugger, A.F., 2004, An Algorithm for Locating Microseismic Events, Proceedings of IEEE Canadian Conference on Electrical and Computer Engineering, May 2-5, Niagara Falls, Canada, Vol. 4, pp. 2311-2314.
{"url":"https://isarblick.net/article/sensitivity-measurements-for-locating-microseismic-events-canadian-society-of-exploration-geophysicists","timestamp":"2024-11-03T01:25:49Z","content_type":"text/html","content_length":"92455","record_id":"<urn:uuid:b16d240f-1bee-48d3-8e03-6f4d54abae73>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00548.warc.gz"}
A Difference: LEaRning can bE fUn! LEaRning can bE fUn! 1/24/2006 10:45:00 pm Bud Hunt wrote (Lost) a post that got me thinking. (Y'know what they call a mathematician who goes south for the winter? .... a Tan Gent. ;-)) We're planning our next Pi Day (March 14, 3.14) celebration (we eat pie at 1:59 and 26 seconds; 3.1415926.... and tell bad math jokes -- like the one up there^) around the Mystery Coin Hunt. I got the idea from MIT last year. We've got 6 students on the committee already, one of whom helped out last year. This year we'll run two hunts concurrently; one for students and one for teachers and alumni. (Last year a teacher team won and the students felt cheated.) I've been tossing around another idea based on Dan Brown's website. The idea would be to have a group of students design a web hunt similar to Dan Brown's where the solver has to solve a problem from each unit we've studied in the course. Each solution would lead to the next problem until the entire course had been reviewed. I've also been quietly lurking on this blog. I hope to one day find the time to figure out a way to apply to my math classes the kind of educational gaming that Jean-Claude Bradley does with his Chemistry classes at the University of Drexel. (Drexel is fairly well known in math circles.) Just yesterday I stumbled across Bill Kerr's (a colleague from nonscholae.org) website where he's posted resources that he uses to teach computer programming in the context of game design. Something else for me to explore when I have the time. I've been blogging with my classes for about a year now. A Difference went online about 11 months ago. This is my 100^th post at A Difference so I thought I'd do something special. The title is a puzzle. I will send something "nice" to the first person who emails me the correct solution. Your challenge: How is the title tangentially related to π Day? Happy hunting! ;-) The name of one of my mathematical heros has "the end in the beginning and the beginning in the end." (There's a spoiler in the comments below.) How is my hero connected to π? 6 comments 1. I would be happy to work with you to create a math application for EduFrag. The game is designed in a modular way so that the content creators do not have to know anything about map building in Unreal Tournament. All you have to do is create 256x256 pixel bitmaps (256 colors) that are either true or false. Paint is a very convenient program for doing this. I think this would work extremely well with math because, like organic chemistry, it is easily represented graphically. 2. Darren, Here's something I found trying to solve your question. It's from the Missourri Council of Math Teachers. "The First Use of PI" Did you ever have a student ask Where did p come from? I often answered with a little history back to the Babylonians. But another spin to the question might be that of who first used the symbol p for the all-familiar value of pi. Asking math students to do a little historical research can easily result in more than a few moans. You can motivate their interest by examining The Pi Trivia Game by Eve Anderson at http://www.eveandersson.com/trivia/to create a competitive classroom challenge, and knowing your pi history is a helpful edge in this trivia game. High school students will find that this website challenges their knowledge of pi. You will certainly want to add this website to your activities on March 14, Pi Day. 
The sixteenth letter of the Greek alphabet, π, was first used for the familiar value 3.1415… in the publication “Synopsis Palmariorium Mathesios”, authored by William Jones in 1706. The selection of π was from the Greek word perimetros, meaning surrounding perimeter. Synopsis Palmariorium Mathesios was a text that included some lessons related to Newton's fluxions among several other mathematical topics. William Jones was born on a small farm in Wales in 1675. Little is known of his formal education. However, it is known that he taught mathematics on a British ship in the Indies, and later tutored the future President of the Royal Society, Thomas Parker. Jones also published Newton's De Analysis (an analysis of Newton's work), A New Compendium of the Whole Art of Navigation, and Introduction to the Mathematics. He was a friend to Newton, and it is believed that he reviewed and edited some of Newton's manuscripts. After being elected Vice-president of the Royal Society, he was instrumental in settling the dispute between Sir Isaac Newton and Baron Gottfried von Leibnitz regarding the property claims to the authorship of calculus. We need to move beyond the formal curriculum with our joy of mathematics in as many different pedagogical venues as possible. Pi Day is one such opportunity that allowed me to see my students in a different light. If the spark goes out, then what?"

It's from their pdf version of their journal. The link in the article above had changed so I have updated it. Will follow your efforts till I get an answer to your question.

3. Jean-Claude: Thank you! I will definitely take you up on that offer as soon as this semester ends, in about a week or two. Mr. J. Evans: Thanks for the pointers! Pi has a long and storied history. It even makes an appearance in the bible. Another fellow pre-dates Jones' use of the symbol π by about 60 years. ;-)

The first step to solving the puzzle is decoding the title of the post. If no one figures it out I'll post a hint at the end of the day. ;-)

5. ********S P O I L E R******** An excellent beginning! Euler is one of my mathematical heroes. He had 13 children, only 5 of whom survived infancy. My favourite quote: "I have made some of my greatest mathematical discoveries holding a baby in my arms while the other children played round my feet." Now, on to the rest of the puzzle.....

6. Pi day -- I love it! I passed it on to our math department! Every time you engage more senses, students learn more. It is great to have things to be excited about!
{"url":"https://adifference.blogspot.com/2006/01/learning-can-be-fun.html","timestamp":"2024-11-05T04:05:55Z","content_type":"application/xhtml+xml","content_length":"176086","record_id":"<urn:uuid:23b88e1f-54bc-4503-ac09-cda5419aacf3>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00495.warc.gz"}
Who can solve my Linear Programming homework? | Linear Programming Assignment Help Who can solve my Linear Programming homework? Hi Peter-o, One day I want to do a linear programming task for my child. I have only three lines, I have the following: I will type each line by its correct place; when the programming task comes to a terminal, type -c to a terminal When you enter the command, type the command program main as usual and type the command programs it. When you type the exit command, type the command main as usual and it will be said that the command is finished and you are done processing. Once done, it will come back to screen and type the command main as usual and type run as normal with no output. Finally, if I enter the below command again after the program has finished, it will return to screen and say that it finished processing. run again after the program has finished I would greatly appreciate this help, so please bear this in mind. Take what my child will do in a few pieces / 6 loops How to do this? Use the command blog here Use -i Do not use -e and when I enter the command, the loop line will be terminated for example -e –index 3 and type my simple pattern This is my interactive line mode. It will perform the last line of Get More Information program by typing an or -e command, and will not be the last line of the last program executed. Bouncin is also a type of block user. You load the code and display it for the client. You may replace your user with something more convenient. If you want to be extra friendly, use give my child any input, or even just an -c. This will make the line output more obvious, but it is not for everyone. A lot of variables are already stored in the child script’s file, so if you import the line into your child script, the child script wont be able to be linked with the parent script. I know this part of the code isnt too difficult, but the code for this is certainly not too complex. Many of the people claiming this function are learning at the same time, but they don’t understand the system you need. If someone steps outside the normal range of my programming tasks, I would think this is an annoying piece of code. what my child will do in a few pieces / 6 loops… Pay Someone With Apple Pay How to do this I would greatly appreciate this help, so please bear this in mind. They just need a simple procedure for doing this first. This is why I want to teach. Why would I teach? because everything should be done below the line “this is what I care about” In this second command, I type: SELECT * FROM table WHERE itemindex IS NOT NULL() So if my child thinks my app is justWho can solve my Linear Programming homework? | Linear Programming And No Limit at Every Page Of Censuses In Programming Routine In RAs 2 – RBooks 2013 (April) | RAS 3-6 This Math And Appetitions Show how to solve Linear Programming For Censuses In RAs | RBooks 2013 These Math Help You Find How To Use In This Math For Students In Their Primary Drawing | RAS 7 No Limit or No Limit in The List Of A Simple Math Classes Math But No Limits But No Limit: The Math Of The Homework. It also Show How To Solve Mathematical Puzzles Like In this Linear Programming For Basic Math Pro Suite On Top of RAs In RAs 2 – RBooks 3 – 10 Math For You : To See Why Try The Math But Do Not Discuss In This Math And No Limit It That It Takes 3 And 1 In An On Top of RAs IN RAs From Binder B) I studied your math homework on math site calculator. com for reading the homework for your math homework. 
It is a much better than math. Free math and reading simple and advanced subject you can do, I share this article and guide everyone on math. if you subscribe and you already own website and to school? you can also obtain it from my site. I can do this program for online way too… I can help you to solve your homework for your friends it may be a way to read a few math problems online or to learn …. This article is more about making The Web site more attractive. Just 1 Min(0.5s) to 7 s, I’m open and want to help you some. The search engines that are easy to mine these can be said to be a great new site to find useful solutions. A quick response to your query is something like: 4 x 10 = 0.1231 s, x of which is for correct answers plus find min(0.05s) to 7 s. I have installed this site and I loveWho can solve my Linear Programming homework? I’ve seen somebody explain some linear rules for linear units. But I think that is my question. If I solve the following linear-point math problem: Find rational points 1 and 2 b on C 3 which are independent? A rational point b for example on C4, for example on C3 or C2 which are equal to c? Would my solution be able to answer his question? A: No. We Do Your Math Homework Any rational point you need to solve must be integer from $1$-to-1. The general statement is that a rational point satisfies at least 2 rational points, but adding those rational points together will be a good idea. Since (as you saw in my comment on another question) it works. Note that this is not the whole story. A rational point can’t be in $\infty$, but it can be in any other $-\ infty$-plane, so all rational points at most have the same “position”. So if you use it like this: $\forall x>0,\ 2x+y>0,\ 2y\cos x-2\cos x >0,\ 2x>0$ you get points where $x^p$ and $y^p>0$ differ by $\frac{x^2+y^2}{x^2+y^2}$ (the point is in at least $\frac 17$-eigenvalues) so the point is here $x^p$. So a solution of the linear-point math problem on the above plane is a rational point, e.g. the equation $2+y=2$.
{"url":"https://linearprogramminghelp.com/who-can-solve-my-linear-programming-homework","timestamp":"2024-11-07T01:02:18Z","content_type":"text/html","content_length":"114315","record_id":"<urn:uuid:d0bcd35b-84ae-47e7-9331-deb25fb8927c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00396.warc.gz"}
Logic seminar: Lorna Gregory (University of East Anglia)
Dates: 14 February 2024
Times: 15:15 - 16:15
What is it: Seminar
Organiser: Department of Mathematics
Who is it for: University staff, External researchers, Adults, Alumni, Current University students
Title: Pseudofinite-dimensional Modules over Finite-dimensional Algebras

The representation type of a finite-dimensional k-algebra is an algebraic measure of how hard it is to classify its finite-dimensional indecomposable modules. Intuitively, a finite-dimensional k-algebra is of tame representation type if we can classify its finite-dimensional modules, and of wild representation type if its module category contains a copy of the category of finite-dimensional modules of all other finite-dimensional k-algebras. An archetypical (although not finite-dimensional) tame algebra is k[x]. The structure theorem for finitely generated modules over a PID describes its finite-dimensional modules. Drozd’s famous dichotomy theorem states that all finite-dimensional algebras are either wild or tame. A long-standing conjecture of Mike Prest claims that a finite-dimensional algebra has decidable theory of modules if and only if it is of tame representation type.

Most representation theorists are principally interested in finite-dimensional modules. A module over a k-algebra is pseudofinite-dimensional if it is a model of the common theory of all finite-dimensional modules. In this talk we will present work in progress around and in support of the following variant of Prest's conjecture: a finite-dimensional algebra has decidable theory of pseudofinite-dimensional modules if and only if it is tame.

Venue: Frank Adams 1 (and zoom, link in email), Alan Turing Building
{"url":"https://events.manchester.ac.uk/event/event:y2q5-ls07sfbz-f89va/logic-seminar-lorna-gregory-university-of-east-anglia","timestamp":"2024-11-02T05:56:04Z","content_type":"text/html","content_length":"18747","record_id":"<urn:uuid:701dc6f5-f7d9-44b5-a7ba-246341978623>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00761.warc.gz"}
Practical Dependent Types: Type-Safe Neural Networks Kiev Functional Programming, Aug 16, 2017 The Big Question The big question of Haskell: What can types do for us? Dependent types are simply the extension of this question, pushing the power of types further. Artificial Neural Networks Feed-forward ANN architecture Parameterized functions Each layer receives an input vector, x:ℝ^n, and produces an output y:ℝ^m. They are parameterized by a weight matrix W:ℝ^m×n (an m×n matrix) and a bias vector b:ℝ^m, and the result is: (for some activation function f) A neural network would take a vector through many layers. Networks in Haskell data Weights = W { wBiases :: !(Vector Double) -- n , wNodes :: !(Matrix Double) -- n x m } -- "m to n" layer data Network :: Type where O :: !Weights -> Network (:~) :: !Weights -> !Network -> Network infixr 5 :~ Generating them randomWeights :: MonadRandom m => Int -> Int -> m Weights randomWeights i o = do seed1 :: Int <- getRandom seed2 :: Int <- getRandom let wB = randomVector seed1 Uniform o * 2 - 1 wN = uniformSample seed2 o (replicate i (-1, 1)) return $ W wB wN randomNet :: MonadRandom m => Int -> [Int] -> Int -> m Network randomNet i [] o = O <$> randomWeights i o randomNet i (h:hs) o = (:~) <$> randomWeights i h <*> randomNet h hs o Haskell Heart Attacks • What if we mixed up the dimensions for randomWeights? • What if the user mixed up the dimensions for randomWeights? • What if layers in the network are incompatible? • How does the user know what size vector a network expects? • Is our runLayer and runNet implementation correct? Backprop (Outer layer) go :: Vector Double -- ^ input vector -> Network -- ^ network to train -> (Network, Vector Double) -- handle the output layer go !x (O w@(W wB wN)) = let y = runLayer w x o = logistic y -- the gradient (how much y affects the error) -- (logistic' is the derivative of logistic) dEdy = logistic' y * (o - target) -- new bias weights and node weights wB' = wB - scale rate dEdy wN' = wN - scale rate (dEdy `outer` x) w' = W wB' wN' -- bundle of derivatives for next step dWs = tr wN #> dEdy in (O w', dWs) Backprop (Inner layer) -- handle the inner layers go !x (w@(W wB wN) :~ n) = let y = runLayer w x o = logistic y -- get dWs', bundle of derivatives from rest of the net (n', dWs') = go o n -- the gradient (how much y affects the error) dEdy = logistic' y * dWs' -- new bias weights and node weights wB' = wB - scale rate dEdy wN' = wN - scale rate (dEdy `outer` x) w' = W wB' wN' -- bundle of derivatives for next step dWs = tr wN #> dEdy in (w' :~ n', dWs) Compiler, O Where Art Thou? • Haskell is all about the compiler helping guide you write your code. But how much did the compiler help there? • How can the “shape” of the matrices guide our programming? • We basically rely on naming conventions to make sure we write our code correctly. Haskell Red Flags • How many ways can we write the function and have it still typecheck? • How many of our functions are partial? A Typed Alternative An o x i layer A Typed Alternative From HMatrix: An R 3 is a 3-vector, an L 4 3 is a 4 x 3 matrix. Data Kinds With -XDataKinds, all values and types are lifted to types and kinds. In addition to the values True, False, and the type Bool, we also have the type 'True, 'False, and the kind Bool. In addition to : and [] and the list type, we have ': and '[] and the list kind. 
A Typed Alternative

data Network :: Nat -> [Nat] -> Nat -> Type where
    O    :: !(Weights i o)
         -> Network i '[] o
    (:~) :: KnownNat h
         => !(Weights i h)
         -> !(Network h hs o)
         -> Network i (h ': hs) o
infixr 5 :~

runLayer :: (KnownNat i, KnownNat o)
         => Weights i o
         -> R i
         -> R o
runLayer (W wB wN) v = wB + wN #> v

runNet :: (KnownNat i, KnownNat o)
       => Network i hs o
       -> R i
       -> R o
runNet (O w)     !v = logistic (runLayer w v)
runNet (w :~ n') !v = let v' = logistic (runLayer w v)
                      in  runNet n' v'

Exactly the same! No loss in expressivity! Much better! Matrices and vector lengths are guaranteed to line up! Also, note that the interface for runNet is better stated in its type. No need to rely on documentation. The user knows that they have to pass in an R i, and knows to expect an R o.

randomWeights :: (MonadRandom m, KnownNat i, KnownNat o)
              => m (Weights i o)
randomWeights = do
    s1 :: Int <- getRandom
    s2 :: Int <- getRandom
    let wB = randomVector s1 Uniform * 2 - 1
        wN = uniformSample s2 (-1) 1
    return $ W wB wN

No need for explicit arguments! User can demand i and o. No reliance on documentation and parameter orders. But, for generating nets, we have a problem:

randomNet :: forall m i hs o. (MonadRandom m, KnownNat i, KnownNat o)
          => m (Network i hs o)
randomNet = case hs of
    [] -> ??

Pattern matching on types

The solution for pattern matching on types: singletons.

Pattern matching on types

Implicit passing

Explicitly passing singletons can be ugly.

Implicit passing

randomNet :: forall i hs o m. (MonadRandom m, KnownNat i, SingI hs, KnownNat o)
          => m (Network i hs o)
randomNet = randomNet' sing

Now the shape can be inferred from the functions that use the Network.

train :: forall i hs o. (KnownNat i, KnownNat o)
      => Double            -- ^ learning rate
      -> R i               -- ^ input vector
      -> R o               -- ^ target vector
      -> Network i hs o    -- ^ network to train
      -> Network i hs o
train rate x0 target = fst . go x0
  where
    go :: forall j js. KnownNat j
       => R j               -- ^ input vector
       -> Network j js o    -- ^ network to train
       -> (Network j js o, R j)
    -- handle the output layer
    go !x (O w@(W wB wN)) =
        let y    = runLayer w x
            o    = logistic y
            -- the gradient (how much y affects the error)
            -- (logistic' is the derivative of logistic)
            dEdy = logistic' y * (o - target)
            -- new bias weights and node weights
            wB'  = wB - konst rate * dEdy
            wN'  = wN - konst rate * (dEdy `outer` x)
            w'   = W wB' wN'
            -- bundle of derivatives for next step
            dWs  = tr wN #> dEdy
        in  (O w', dWs)
    -- handle the inner layers
    go !x (w@(W wB wN) :~ n) =
        let y          = runLayer w x
            o          = logistic y
            -- get dWs', bundle of derivatives from rest of the net
            (n', dWs') = go o n
            -- the gradient (how much y affects the error)
            dEdy       = logistic' y * dWs'
            -- new bias weights and node weights
            wB'  = wB - konst rate * dEdy
            wN'  = wN - konst rate * (dEdy `outer` x)
            w'   = W wB' wN'
            -- bundle of derivatives for next step
            dWs  = tr wN #> dEdy
        in  (w' :~ n', dWs)

Surprise! It’s actually identical! No loss in expressivity. Also, typed holes can help you write your code in a lot of places. And shapes are all verified.
By the way, still waiting for linear types in GHC :) Type-Driven Development The overall guiding principle is: 1. Write an untyped implementation. 2. Realize where things can go wrong: □ Partial functions? □ Many, many ways to implement a function incorrectly with the current types? □ Unclear or documentation-reliant API? 3. Gradually add types in selective places to handle these. I recommend not going the other way (use perfect type safety before figuring out where you actually really need them). We call that “hasochism”.
{"url":"https://talks.jle.im/kievfprog/dependent-types.html","timestamp":"2024-11-05T07:36:50Z","content_type":"text/html","content_length":"55568","record_id":"<urn:uuid:081ea241-4b9d-487a-b1d7-3ca881e73c4f>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00873.warc.gz"}
Inspiring Drawing Tutorials Things To Draw Inside A Circle Things To Draw Inside A Circle - 100 things to draw with a circle. Get inspired to create your own unique designs and bring a touch of whimsy. You can get creative outside the confines of it as well. Web circle as a base. Web whether you’re a beginner or an experienced artist, mastering the art of drawing a perfect circle is essential. All you need is a. For instance, take the circle as a. All you need is a. Here's some easy ways to draw circles. Can you draw aperfect circle? Web how to draw a circle. A line that cuts the circle at two points is called a secant. How to draw a circle! To learn what items are shaped like a circle. Web a line that just touches the circle as it passes by is called a tangent. In this article, i will walk you through the process of. A line that cuts the circle at two points is called a secant. Parks are great sources of inspiration for drawing. Web circles are so simple but so tricky to draw! Afterwards, i use a pen and go over that line. Web a line that just touches the circle as it passes by is called a tangent. Web circles are so simple but so tricky to draw! Web draw a perfect circle ⭕️💯. To learn what items are shaped like a circle. Try to draw a perfect circle and see how. Web whether you’re a beginner or an experienced artist, mastering the art of drawing a perfect circle is essential. How to draw a circle! 1k views 2 weeks ago. Here's some easy ways to draw circles. To learn what items are shaped like a circle. Web whether you’re a beginner or an experienced artist, mastering the art of drawing a perfect circle is essential. Can you draw aperfect circle? A foundational shape that's easy to explain,. A game that tests your circle drawing skills. In this video i'll show you three ways to draw circles and ellipses: Web circle as a base. Squares are a lot easier to eyeball since they require only 4 straight lines. Explore cute and charming circle drawings that will spark your imagination. Grab a piece of paper, pencil, pen or marker and start drawing your first circle! Afterwards, i use a pen and go over that line. A foundational shape that's easy to explain,. In this article, i will walk you through the process of. Grab your sketchpad and draw with me! To learn what items are shaped like a circle. Afterwards, i use a pen and go over that line. For drawing circles, i personally use a roll of washi tape and trace around it with a pencil first. Can you draw aperfect circle? 30 easy circle drawing ideas. A line that cuts the circle at two points is called a secant. You don’t have to limit your imagination within the circle. Squares are a lot easier to eyeball since they require only 4 straight lines. A game that tests your circle drawing skills. We can estimate easily if a square is. Get inspired to create your own unique designs and bring a touch of whimsy. Try to draw a perfect circle and see how. You can use this worksheet to help teach kids what these items are. All you need is a. 100 things to draw with a circle. Squares are a lot easier to eyeball since they require only 4 straight lines. Web a line that just touches the circle as it passes by is called a tangent. Web how to draw a circle. Explore cute and charming circle drawings that will spark your imagination. In this article, i will walk you through the process of. Web make up your own little worlds by adding a few extraterrestrial elements inside a triangle or circle. A foundational shape that's easy to explain,. 
You can get creative outside the confines of it as well. Things To Draw Inside A Circle - For instance, take the circle as a. We can estimate easily if a square is. 30 easy circle drawing ideas. A game that tests your circle drawing skills. Short arcs 02:09 method 2: Get inspired to create your own unique designs and bring a touch of whimsy. Here's some easy ways to draw circles. In this video i'll show you three ways to draw circles and ellipses: To learn what items are shaped like a circle. Web circle as a base. Web how to draw a circle. Web circle as a base. Get inspired to create your own unique designs and bring a touch of whimsy. I’ve been on a circle drawing craze, so here is a collection of 30 circle drawings i created. Try to draw a perfect circle and see how. You don’t have to limit your imagination within the circle. Here's some easy ways to draw circles. Try to draw a perfect circle and see how. First, we need to draw a square. How to draw a circle! Grab your sketchpad and draw with me! Short arcs 02:09 method 2: Web whether you’re a beginner or an experienced artist, mastering the art of drawing a perfect circle is essential. How to draw with circles | easy drawing with circles | simple drawing technique for kids | circle drawings | drawing from circles | art for kids | drawing id. You can make as many of these as you’d like, finding new. For Instance, Take The Circle As A. Snap a few of your own reference photos of monuments, benches, and scenes that. You can use this worksheet to help teach kids what these items are. How to draw with circles | easy drawing with circles | simple drawing technique for kids | circle drawings | drawing from circles | art for kids | drawing id. Web whether you’re a beginner or an experienced artist, mastering the art of drawing a perfect circle is essential. All You Need Is A. We can estimate easily if a square is. Download the drawing pictures using circles free sample pages. Grab your sketchpad and draw with me! How to draw a circle! Web Circles Are So Simple But So Tricky To Draw! Web how to draw a circle. Here's some easy ways to draw circles. Parks are great sources of inspiration for drawing. Explore cute and charming circle drawings that will spark your imagination. You Don’t Have To Limit Your Imagination Within The Circle. In this article, i will walk you through the process of. 100 things to draw with a circle. Try to draw a perfect circle and see how. Afterwards, i use a pen and go over that line.
{"url":"https://one.wkkf.org/art/drawing-tutorials/things-to-draw-inside-a-circle.html","timestamp":"2024-11-09T00:20:45Z","content_type":"text/html","content_length":"32455","record_id":"<urn:uuid:2b681dae-ace5-4cbe-9007-35df16750a73>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00552.warc.gz"}
2014 A-level H2 Mathematics (9740) Paper 2 Question 2 Suggested Solutions - The Culture SG

All solutions here are SUGGESTED. Mr. Teng will hold no liability for any errors. Comments are entirely personal opinions.

Using the partial fractions formula found in MF15,
Using GC,

Personal Comments: Firstly, this question is worth a lot of marks! It is quite a standard tutorial question that tests students on all their integration techniques. I think that they combined a system of linear equations here too, which is really neat. Students could also have solved the partial fractions using the substitution (cover-up) method. Do use the MF15 for the partial fractions and integration formulae! Finally, the answers are to be left in rational form, which means to be expressed as a fraction of two integers!
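The worked solution itself is not reproduced above, so as a generic illustration of the substitution (cover-up) method mentioned in the comments — not the actual 2014 question — consider a simple rational function:

\[\frac{5x+1}{(x-1)(x+2)} = \frac{A}{x-1} + \frac{B}{x+2}\]

Covering up (x - 1) and substituting x = 1 gives A = (5(1) + 1)/(1 + 2) = 2; covering up (x + 2) and substituting x = -2 gives B = (5(-2) + 1)/(-2 - 1) = 3. Hence

\[\int \frac{5x+1}{(x-1)(x+2)}\,dx = 2\ln|x-1| + 3\ln|x+2| + C.\]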
{"url":"https://theculture.sg/2015/07/2014-h2-mathematics-9740-paper-2-question-2-suggested-solutions/","timestamp":"2024-11-02T18:55:57Z","content_type":"text/html","content_length":"108891","record_id":"<urn:uuid:580627e4-5c4e-4725-9d49-d54681621363>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00465.warc.gz"}
When zombies attack: Students predict the outcome of apocalypse | Penn State University

UNIVERSITY PARK, Pa. — Last semester, mechanical engineering students in Alok Sinha’s M E 450 Modeling of Dynamic Systems course conducted high-level mathematical calculations to determine who would prevail in a hypothetical zombie attack — the humans or zombies. Based on the results of their “zombie math,” the equation (bN)(S/N)Z = bSZ proves the zombies will likely always win. That's because the calculation shows that in a zombie apocalypse, the rate at which humans would be infected by the walking dead is insurmountable.

“Brainz, brainzz!” cried the zombies.

Matt Sherman heard them, while briefly catching his breath behind a large elm tree on campus. Sherman, who had two plastic Nerf blasters strapped to his chest and an arsenal of rolled-up socks and marshmallows stuffed in a drawstring bag dangling from his waist, had been running from the zombie horde all day. The groaning sounds intensified as the zombies swarmed around him. “Brainz, Graagh, Brainzzz, Graaaagh!!”

"Oh no," he thought to himself, "Am I going to make it out of this alive?"

Probably not, according to some Penn State mechanical engineering students.

Sherman, a junior in the College of Information Sciences and Technology, was taking part in a Humans vs. Zombies (HvZ) event hosted by the Penn State Urban Gaming Club (UGC), a student-run organization that coordinates such campus-wide Urban Gaming events as Capture the Flag, Assassins and HvZ, where participants gather to play a huge game of tag. All HvZ events take place in pre-defined areas within campus, where the play space is on a large, human scale using Nerf blasters, rolled-up socks and marshmallows as common “weapons” against the zombies. And that means Sherman’s sock bullets, no matter how he uses them, will be no match for those brain-hungry zombies.

Sinha's 400-level course focuses on modeling and analysis of dynamic interactions in engineering systems and helps mechanical engineering students gain a better understanding of complex mathematical concepts used in predicting system outcomes. Tanmay Mathur, a graduate student majoring in mechanical engineering and a teaching assistant for the course, presented the zombie assignment as a fun way to get students interested in learning the mathematical tools they’ll need to use in real life.

“If they are building an engine control unit, which controls a series of actuators on an internal combustion engine, they need to know how to go about modeling different state variables and algorithms that would help them design the controller,” said Mathur.

The equation the students used to determine the outcome of a zombie attack was originally created by a mathematics professor at the University of Ottawa to model the rate at which humans would become zombies if they were infected by the living dead. Mathur refined the model using different scenarios. As part of the assignment, the students used biological assumptions based on popular zombie movies and SIMULINK — a graphical programming language tool for designing, simulating and analyzing dynamic systems — to model a zombie infection and illustrate the outcomes with numerical solutions. Most of the time the zombies were the victors.

“The assignment basically used the state-space framework for modelling an infectious disease,” said Mathur. “But what made it effective for the students, I think, was the coolness quotient associated with using zombies.”

The Urban Gamers would be wise to study their results.
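The article does not reproduce the students' SIMULINK diagrams, but the kind of model it describes can be sketched in a few lines. The script below integrates a basic susceptible/zombie/removed (SZR) system in which b*S*Z is the transmission term quoted above ((bN)(S/N)Z = bSZ). The parameter values, the simple Euler integration, and the omission of the quarantine and zombie-slayer classes mentioned later in the article are all simplifications of my own, chosen only to show how the outcome flips when zombies are destroyed fast enough.

def simulate(S0=500.0, Z0=1.0, R0=0.0, b=0.0095, a=0.005, zeta=0.0001,
             dt=0.01, t_end=30.0):
    # S: healthy humans, Z: zombies, R: removed (destroyed) zombies.
    S, Z, R = S0, Z0, R0
    for _ in range(int(t_end / dt)):
        infections   = b * S * Z     # humans bitten and turned (the bSZ term)
        kills        = a * S * Z     # zombies destroyed by humans
        resurrection = zeta * R      # removed zombies coming back
        S = max(S - infections * dt, 0.0)
        Z = max(Z + (infections + resurrection - kills) * dt, 0.0)
        R = max(R + (kills - resurrection) * dt, 0.0)
    return S, Z, R

print(simulate())          # b > a per encounter: the zombies overrun the humans
print(simulate(a=0.05))    # much more aggressive destruction: the humans persist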
With this model, the students could predict an apocalypse by controlling and deciding the factors that affect the populations of zombies and healthy humans. Fortunately for the humans, the outcome wasn’t always doom and gloom. Mathur says that in one scenario, the students were able to wipe out the zombies by introducing 10 zombie slayers into a population of 1,000 zombies (whereas the model with only five zombie slayers ended in the doomsday scenario). However, the models ultimately show that only quick and aggressive attacks by the zombie slayers could stave off the doomsday scenario in each application of the equation.

Mathur says that some of the students were familiar with SIMULINK before coming into the course. But for most, the program and the zombie assignment were something new.

“The students were flabbergasted at first because they weren’t expecting to be modelling a zombie apocalypse in a 400-level engineering course,” Mathur laughed. “But as the assignment progressed and they were able to make sense of it, they were quite receptive and excited to see the results of the mathematical model.”

Once the students had a mental picture of the diagrammatic relationship between variables and how things relate to each other in a dynamic system, the SIMULINK program enabled them to draw block diagrams and define parameters to plot the variations of different human and zombie populations with time. Mathur says that unless humans take the time to protect the population in an enclosure or build quarantine zones and bring in more zombie slayers, the models always show that the doomsday scenario would dominate as the most plausible outcome.

But the computerized results don’t resonate much with Sherman and other students taking part in UGC HvZ events on campus.

“I think humans would definitely win in a real-world scenario because of our military prowess and advanced weaponry. The Department of Defense actually has a full-fledged fictional zombie apocalypse response plan, which it uses to teach military mass disaster planning to trainees, so I think the threat would get stomped out hard before it ever really got as big as in the movies,” says Sherman, who’s been participating in UGC events since his freshman year.

As UGC secretary, Megan Lamb has seen her share of zombie missions. “Without limitations and having more resources, humans can definitely get crafty and I think that even if a ‘patient zero’ had some sense left, the zombies still would not be able to be as innovative. Humans can team up and strategize while zombies are just... zombies. Not to say that's a label to take lightly. I'm just saying you don't see zombies winning any of those extreme marathons.”

Fellow UGC administrator and senior criminology major Nicole Solano agrees that humans could win in a hypothetical zombie apocalypse. But only if there were certain variables brought into the equation. “If the attack happened on a Penn State campus, I’m sure our club would be able to rally the students together to handle the situation,” Solano grins.

Rallying to survive was not the case for Sherman as the zombie horde closed in on him. But, according to the mission details, if he could hold them off long enough for one of the human players stranded nearby to be escorted back to the starting point at Chambers Building untagged, the humans would win. With that in mind, he darted out from behind the elm tree and heroically sacrificed himself by rushing into the horde while firing darts from both Nerf blasters.
He and most of the nearby human group were zombified in the mad scramble that lasted mere minutes. But two human survivors managed to get away. As they reached the steps of Chambers Building they were swarmed by zombies and tagged at the threshold of their victory.

Just as the mathematical models predicted, Sherman didn’t survive the game, but he had a fair hand in ending it. He plans to try his luck again in the next Penn State UGC HvZ event, taking place on Nov. 1.

For more IT stories at Penn State, visit http://news.it.psu.edu.
{"url":"https://www.psu.edu/news/academics/story/when-zombies-attack-students-predict-outcome-apocalypse","timestamp":"2024-11-10T11:43:16Z","content_type":"text/html","content_length":"251990","record_id":"<urn:uuid:dbe8836b-de77-4470-99d5-598a958aeac5>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00348.warc.gz"}
EPE Journal Volume 22-2

EPE Journal Volume 22-2 - Editorial

Editorial: EPE-PEMC 2012 ECCE Europe, a summary
By V. Katic
Editorial: EPE-PEMC 2012 ECCE Europe, a summary, written by Vladimir Katić, University of Novi Sad, General Chairman

EPE Journal Volume 22-2 - Papers

New Junction Temperature Balancing Method for a Three Level Active NPC Converter
By E. Hauk; R. Álvarez; J. Weber; St. Bernet; D. Andler; J. Rodríguez
The three-level Neutral Point Clamped Voltage Source Converter (3L-NPC VSC) has many attractive features like its high reliability and availability. Nowadays, this technology is mature and can be found in several industrial applications [1], [2]. Some common applications are pumps, fans, compressors, mixers, extruders, crushers, rolling mills, mine hoist drives and excavators. The three-level Active NPC VSC (3L-ANPC VSC) was introduced in 2001 to overcome the drawbacks of the conventional 3L-NPC VSC [3]. The 3L-ANPC VSC additionally includes active switches in parallel with the NPC diodes for clamping the neutral tap of the converter. It features 3 extra switch states in contrast to the conventional 3L-NPC VSC, which enable the possibility to reduce the temperature imbalance and/or to increase the output frequency or the converter power. In order to use the potential of the 3L-ANPC VSC and to balance the losses among the semiconductors, the implementation of a temperature balancing strategy is necessary [4], [5]. The medium voltage 3L-ANPC VSC is especially advantageous in the following applications:
– High power applications, where the required output power cannot be achieved without a serial/parallel connection of devices.
– Medium voltage converters, where the switching frequency should be increased without decreasing the converter power (e.g. applications which require a sine filter or high speed applications).
– Applications where the nominal converter current is required at low modulation index and low fundamental frequencies (e.g. zero speed operating points, hot and cold rolling mill applications, converters for doubly fed induction generators, etc.).
The starting point for the derivation of the new balancing algorithm was the Active Loss Balancing (ALB) system reported in [4] and [5]. The new temperature balancing scheme profits from the features of Predictive Control [6] in order to precalculate the conduction and switching losses, and finally the temperature of each semiconductor, up to the future commutation to the zero state, i.e. over the next period of the switching frequency. The precalculated maximal junction temperature is then used by the control strategy in order to determine the optimal switch state to be applied. Some additional features of the new balancing algorithm with respect to the ALB algorithm are: consideration of the conduction losses fed into the balancing algorithm, regard for the temperature ripple between two consecutive commutations, and the use of a control criterion (cost function) in the selection of the zero state [6]. The algorithm is implemented in Matlab and compared with the 3L-NPC VSC using experimental data of a 4.5 kV press-pack IGBT and diode for the calculation of the losses. The comparison shows a potential to increase the output power of the 3L-ANPC VSC.
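The abstract describes precalculating the losses and junction temperatures each semiconductor would see for every candidate zero state and then letting a cost function pick the state to apply. A minimal, hypothetical sketch of just that selection step is given below; the device labels, loss figures and the first-order thermal model are invented stand-ins for the measured 4.5 kV press-pack IGBT data and the detailed loss model used in the paper.

# Hedged sketch of the selection idea described in the abstract: for each
# candidate zero state, precalculate the losses each semiconductor would see
# over the next switching period, propagate them through a simple first-order
# thermal model, and pick the state whose predicted maximum junction
# temperature (the cost) is lowest. All device names, loss numbers and
# thermal constants below are invented for illustration.

from typing import Dict, List

ZERO_STATES: List[str] = ["0U2", "0L2"]   # placeholder labels for candidate zero states

# Invented per-device losses (W) in each candidate zero state over one period
LOSSES: Dict[str, Dict[str, float]] = {
    "0U2": {"T1": 120.0, "T2": 300.0, "T5": 80.0, "T6": 40.0},
    "0L2": {"T1": 250.0, "T2": 150.0, "T5": 40.0, "T6": 90.0},
}

R_TH = 0.02        # K/W, junction-to-heatsink thermal resistance (invented)
TAU = 0.5          # s, thermal time constant (invented)
T_SINK = 60.0      # degC, heatsink temperature
DT = 1.0 / 1000.0  # s, one switching period at 1 kHz


def predict_tj(tj_now: float, loss_w: float) -> float:
    """One first-order step of the junction temperature toward its steady-state value."""
    tj_ss = T_SINK + R_TH * loss_w
    return tj_now + (tj_ss - tj_now) * (DT / TAU)


def select_zero_state(tj: Dict[str, float]) -> str:
    """Return the zero state whose predicted hottest device is coolest."""
    def cost(state: str) -> float:
        return max(predict_tj(tj[dev], p) for dev, p in LOSSES[state].items())
    return min(ZERO_STATES, key=cost)


if __name__ == "__main__":
    junction_temps = {"T1": 95.0, "T2": 110.0, "T5": 80.0, "T6": 75.0}
    print("next zero state:", select_zero_state(junction_temps))

A real controller of this kind would refresh the loss and temperature predictions every switching period and could fold further terms, such as switching-loss penalties, into the cost function.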
Straightforward Current Control - One Step Controller based on Current Slope Detection
By Fr. Becker; H. Ennadifi; M. Braun
In order to control the speed or the position of an electrical drive, the torque of the machine has to be set accordingly. The torque is a function of the machine current, which is commonly set by means of a voltage source power converter. Because of the simplification of the set-up procedure, cascaded control structures are used. For stable operation it is important that the speed of the inner control loop is much higher than that of the outer ones. Hence the speed of the current control limits the dynamics of the drive control. This means that the speed and the accuracy of the current control are decisive for the overall performance of the drive control. In this paper a new control method is presented which is capable of reaching the setpoint value in the shortest possible time of only one control period (cf. Fig. 7), but without knowledge of the machine parameters. This is the fastest possible response for digital controls, which can theoretically be obtained by deadbeat controllers. However, high dynamics can hardly be achieved by deadbeat controls in practice because they are very sensitive to variations of the control parameters. Therefore the machine parameters, like the inductance and the resistance, as well as the course of the induced voltage, have to be determined by measurement for proper operation. Furthermore the control has to be adjusted manually by the user. This leads to a high effort for the set-up procedure. In addition, these parameters may vary during operation, because of warming for instance, which can lead to a maladjustment of the control. The new control adapts to the load automatically, which leads to a significant simplification of the set-up procedure. The identification of the control path is based on the detection of the current slopes of the load current ripple caused by the alternating switching states of the power converter. Hence no additional test pulses are necessary for the identification of the system behaviour. This permanent adaptation leads to an optimal dynamic behaviour of the drive system. In contrast to many model predictive control methods, the new control has a low computation effort and no extensive model of the control path is necessary. The principle of this new control approach is explained and verified for an armature current control of a d.c. drive system. However, the principles of this control can also be applied to three-phase applications, which is demonstrated by experimental results. (A short illustrative sketch of this one-step idea follows the issue listing below.)

Thermal Modeling of a High-Speed Switched Reluctance Machine with Axial Air-gap Flow for Vacuum Cleaners
By H. J. Brauer; R. W. De Doncker
Knowing the precise thermal behavior of switched reluctance machines (SRM) is important for increasing the power density of such machines. Up to now, the literature has lacked detailed models of switched reluctance machines at high speed with axial air-gap flow. The aim of this paper is to present a model showing the effects of varied air-gap flow on temperature distribution in vacuum cleaner machines with a power of 1 kW at 60,000 rpm. First, a simulation model was set up, illustrating various operating points of the drive. Then the results of this model were verified on a test bench.
In this way, a simulation model was obtained for high-speed switched reluctance machines that closely reflects the temperature distribution within the machine and also depicts the effects of changing axial air-gap flow. In conclusion, the presented model indicates that even at high speed and with reduced air-gap flow, these switched reluctance machines can be operated within established temperature limits. Ultimately, this model is well suited for predicting the thermal behavior of similar switched reluctance machines with air-gap flow.

Fault Ride Through Capability for Solar Inverters
By K. Fujii; N. Kanao; N. T. Yamada; Y. Okuma
This paper presents a utility-scale solar inverter with fault ride through (FRT) capability, a requirement now under discussion in Japan and similar to requirements in the U.S.A. and Europe. The solar inverter consists of a boost chopper and a three-phase 2-level inverter, and its capacity range covers 20 kW to 600 kW. The paper first describes the FRT capability. Second, the control of the boost chopper and the inverter is shown. In particular, a new inverter current control to achieve FRT capability and dynamic voltage support (DVS) during a grid fault is proposed. After that, several results using an experimental model (a 5-kW solar inverter) are shown. Finally, results of a prototype model (20 kW), which has been installed in an actual grid, are presented.

EPE Journal Volume 22-2: Other

In Memoriam: Alfio Consoli
By F. Profumo
In memoriam: Alfio Consoli

Brief report from EPE Joint Wind Energy and T&D Chapters Seminar 2012
By C. Oates; R. Teodorescu; P.C. Kjaer
Brief report from EPE Joint Wind Energy and T&D Chapters Seminar, held at the Utzon Centre & Aalborg University in Aalborg, Denmark, Thursday-Friday 28th-29th June 2012.
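Returning to the one-step current controller summarized above: the method rests on two steps, estimating the load behaviour from the current slopes produced by the converter's own switching ripple, and then computing the single voltage that drives the current error to zero within one control period. The following is a minimal, hypothetical sketch of that calculation for an ideal R-L load with back-EMF and noise-free slope measurements; the variable names and numbers are invented and the paper's actual algorithm differs in detail.

# Hedged sketch of the core idea: identify the load from the current slopes
# produced by two different converter switching states, then compute the
# voltage that moves the current to its setpoint within one control period.
# Assumes an ideal R-L load with back-EMF; not the authors' implementation.

def one_step_voltage(i_now: float, i_ref: float, t_ctrl: float,
                     v1: float, slope1: float,
                     v2: float, slope2: float) -> float:
    """
    v1, slope1: applied voltage and measured di/dt in switching state 1
    v2, slope2: applied voltage and measured di/dt in switching state 2
    Returns the voltage command for the next control period.
    """
    # di/dt = (v - R*i - e) / L, so two slope measurements give L and (R*i + e)
    inductance = (v1 - v2) / (slope1 - slope2)
    back_term = v1 - inductance * slope1        # = R*i + e at this instant
    # Deadbeat-style step: choose v so the current error vanishes after t_ctrl
    return back_term + inductance * (i_ref - i_now) / t_ctrl


if __name__ == "__main__":
    # Example: R = 0.5 ohm, L = 2 mH, e = 10 V, i = 4 A  ->  R*i + e = 12 V
    # State 1 applies 24 V: slope = (24 - 12) / 0.002 =  6000 A/s
    # State 2 applies  0 V: slope = ( 0 - 12) / 0.002 = -6000 A/s
    v_cmd = one_step_voltage(i_now=4.0, i_ref=6.0, t_ctrl=100e-6,
                             v1=24.0, slope1=6000.0,
                             v2=0.0, slope2=-6000.0)
    print(f"voltage command: {v_cmd:.1f} V")

In this example the two slope measurements recover L = 2 mH and R*i + e = 12 V, and the returned 52 V command would close the 2 A current error within one 100 microsecond period; no machine parameters had to be supplied in advance, which is the point of the slope-detection approach.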
{"url":"http://www.epe-association.org/epe/documents.php?current=2410","timestamp":"2024-11-08T11:47:46Z","content_type":"text/html","content_length":"13318","record_id":"<urn:uuid:40040dd8-312b-4bb9-a831-0e5a8c058322>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00350.warc.gz"}
Well David, I have a lot of ideas and throw away the bad ones. ~Linus Pauling, upon being asked by David Harker, his student, how he had so many good ideas.

The 59th day of the year; 59 is the center prime number in a 3x3 prime magic square that has the smallest possible total for each row, column and diagonal, 177.

In 1913, English puzzle writer Henry Dudeney gave an order 3 prime magic square that used the number 1. Although it was commonly included as a prime then, present day convention no longer considers it a prime.

59 divides the smallest composite Euclid number, 13# + 1 = 13*11*7*5*3*2 + 1 = 30031 = 59*509 (the symbol for a primorial, n#, means the product of all primes from n down to 2). Euclid used numbers of the form n# + 1 in his proof that there are an infinite number of primes.

And at the right is one of the 59 stellations of the icosahedron.

Now for some nice observations from Derek Orr @MathYearRound:
5^59 - 4^59 is prime.
4^59 - 3^59 is prime.
3^59 - 2^59 is prime.
The first 59 digits of 58^57 form a prime.

1678 In a letter to Robert Boyle, Isaac Newton explained his concept of ether. “I suppose that there is diffused through all places an ethereal substance capable of contraction and dilation, strongly elastic and, in a word, much like air in all respects, but far more subtil.” He thought it was in all bodies of matter, but "rarer in the pores than in free spaces." This he suspects is the cause of light being refracted towards the perpendicular. *Rigaud, Letters of Scientific Men, vol. 2, p. 407

1695 Leibniz writes to Johann Bernoulli encouraging him to use the term calculus summatorius, which Leibniz used for integration.

1825 Cauchy presented to the Académie a paper on integrals of complex-valued functions where the limits of integration were allowed to be complex. Previously, he had done much work on such integrals when the limits were real. [Grattan-Guinness, 1990, p. 766] *VFR

1928 Chandrasekhara Venkata Raman led experiments at the Indian Association for the Cultivation of Science with collaborators, including K. S. Krishnan, on the scattering of light, when he discovered what now is called the Raman effect. Raman would win the Nobel Prize for this work. He was the first (and still the only) Indian scientist to win the Prize while a citizen of India. He was the first Asian and first non-white to receive any Nobel Prize in the sciences. He broke down at the presentation, in his own words because, "I turned round and saw the British Union Jack under which I had been sitting and it was then that I realised that my poor country, India, did not even have a flag of her own - and it was this that triggered off my complete breakdown." India celebrates National Science Day on 28 February of every year to commemorate the discovery. *Wik

1953 James Watson, from early on this Saturday, spent his time at the Cavendish Laboratory in Cambridge, shuffling cardboard cutout models of the molecules of the DNA bases: adenine (A), guanine (G), cytosine (C) and thymine (T). After a while, in a spark of ingenuity, he discovered their complementary pairing. He realized that A joined with T had a close resemblance to C joined with G, and that each pair could hold together with hydrogen bonds. Such pairs could also neatly fit like rungs meeting at right-angles between two anti-parallel helical sugar-phosphate backbones of DNA wound around a common axis. Such structure was consistent with the known X-ray diffraction pattern evidence.
Each separated helix with its half of the pairs could form a template for reproducing the molecule. The secret of life First announcement by Francis Crick and James Watson that they had reached their conclusion about the double helix structure of the DNA molecule. Their paper, A Structure for Deoxyribose Nucleic Acid, was published in the 25 Apr 1953 issue of journal Nature. *TIS 1956 Jay Forrester at MIT is awarded a patent for his coincident current magnetic core memory. Forrester's invention, given Patent No. 2,736,880 for a "multicoordinate digital information storage device," became the standard memory device for digital computers until supplanted by solid state (semiconductor) RAM in the mid-1970s. *CHM 2001 With a length of 350 feet 6.6 inches and currently the World's Longest documented Slide Rule, The Texas Magnum by Skip Solberg and Jay Francis,was demonstrated on February 28, 2001 in the Lockeed-Martin Aircraft Assembly Facility at Air Force Plant 4 in Fort Worth, Texas. The Texas Magnum holds the world's record for the longest linear slide rule. The Texas Magnum was designed as a traditional Mannheim style slide rule. The A, C, D and L scales are included on the slide rule *International Slide Rule Museum 1552 Joost Bürgi (28 Feb 1552, 31 Jan 1632) Swiss watchmaker and mathematician who invented logarithms independently of the Scottish mathematician John Napier. He was the most skilful, and the most famous, clockmaker of his day. He also made astronomical and practical geometry instruments (notably the proportional compass and a triangulation instrument useful in surveying). This led to becoming an assistant to the German astronomer Johannes Kepler. Bürgi was a major contributor to the development of decimal fractions and exponential notation, but his most notable contribution was published in 1620 as a table of antilogarithms. Napier published his table of logarithms in 1614, but Bürgi had already compiled his table of logarithms at least 10 years before that, and perhaps as early as 1588. *TIS 1704 Louis Godin (28 February 1704 Paris – 11 September 1760 Cadiz) was a French astronomer and member of the French Academy of Sciences. He worked in Peru, Spain, Portugal and France. He was graduated at the College of Louis le Grand, and studied astronomy under Joseph-Nicolas Delisle. His astronomical tables (1724) gave him reputation, and the French Academy of Sciences elected him a pensionary member. He was commissioned to write a continuation of the history of the academy, left uncompleted by Bernard le Bovier de Fontenelle, and was also authorized to submit to the minister, Cardinal André-Hercule de Fleury, the best means of discovering the truth in regard to the figure of the earth, and proposed sending expeditions to the equator and the polar sea. The minister approved the plan and appropriated the necessary means, the academy designating Charles Marie de La Condamine, Pierre Bouguer, and Godin to go to Peru in 1734. When they had finished their task in 1738, at the invitation of the Viceroy of Peru, Godin accepted the professorship in mathematics in Lima, where he also established a course of astronomical lectures. When in 1746 an earthquake destroyed the greater part of Lima, he took valuable seismological observations, assisted the sufferers, and made plans by the use of which the new buildings would be less exposed to danger from renewed shocks. 
In 1751 he returned to Europe, but found that he had been nearly forgotten, and superseded as pensioner of the academy; and, as his fortune had been lost in unfortunate speculations, he accepted the presidency of the college for midshipmen in Cadiz in 1752. During the earthquake of Lisbon, 1755, which was distinctly felt at Cadiz, he took observations and did much to allay the apprehensions of the public, for which he was ennobled by the king of Spain. In 1759 he was called to Paris and reinstated as pensionary member of the academy, but he died on his return to Cadiz. *Wik

1735 Alexandre-Théophile Vandermonde (28 Feb 1735 in Paris, France - 1 Jan 1796 in Paris, France) was a French mathematician best known for his work on determinants. *SAU
In 1772 Vandermonde used [P] to represent the product of the n factors p(p-1)(p-2)... (p-n+1). With such a notation [P] would represent what we would now write as p!, but I can imagine this becoming, over time, just [p] (De Morgan would do just such a thing in his 1838 essays on probability). Vandermonde seems to have been the first to consider [p] (or 0!) and determined it was (as we now do) equal to one. Vandermonde's notation included a method for skipping numbers, so that [p/3] would indicate p(p-3)(p-6)... (p-3(n-1)). (This method seems better to me than the present method for factorials which skip terms.) It even allowed for negative exponents.

1859 Florian Cajori (born 28 Feb 1859) Swiss-born U.S. educator and mathematician whose works on the history of mathematics were among the most eminent of his time. *TIS
While at times Cajori's work lacked the scholarship which one would expect of such an eminent scientist, we must not give too negative an impression of this important figure. He almost single-handedly created the history of mathematics as an academic subject in the United States and, particularly with his book on the history of mathematical notation, he is still one of the most quoted historians of mathematics today. *SAU

1878 Pierre Joseph Louis Fatou (28 Feb 1878 in Lorient, France - 10 Aug 1929 in Pornichet, France) was a French mathematician working in the field of complex analytic dynamics. He entered the École Normale Supérieure in Paris in 1898 to study mathematics and graduated in 1901, when he was appointed to an astronomy post in the Paris Observatory. Fatou continued his mathematical explorations and studied iterative and recursive processes such as \(z \to z^2 + C\). The Julia set and the Fatou set are two complementary sets defined from a function. Fatou wrote many papers developing a fundamental theory of iteration in 1917, which he published in the December 1917 part of Comptes Rendus. His findings were very similar to those of Gaston Maurice Julia, who submitted a paper to the Académie des Sciences in Paris for their 1918 Grand Prix on the subject of iteration from a global point of view. Their work is now commonly referred to as the generalised Fatou–Julia theorem. *Wik
Fatou dust is a term applied to certain iteration sets that have zero area and an infinite number of disconnected components. (See image at top of page.)

1901 Linus Carl Pauling (28 Feb 1901; 19 Aug 1994 at age 93) an American chemist, physicist and author who applied quantum mechanics to the study of molecular structures, particularly in connection with chemical bonding. Pauling was awarded the Nobel Prize for Chemistry in 1954 for charting the chemical underpinnings of life itself. Because of his work for nuclear peace, he received the Nobel Prize for Peace in 1962.
He is remembered also for his strong belief in the health benefits of large doses of vitamin C. *TIS

1925 Louis Nirenberg (28 February 1925, Hamilton, Ontario, Canada - ) is a Canadian-born American mathematician, and one of the outstanding analysts of the twentieth century. He has made fundamental contributions to linear and nonlinear partial differential equations and their application to complex analysis and geometry. *Wik

1930 Leon N. Cooper (28 Feb 1930 - ) American physicist who shared (with John Bardeen and John Robert Schrieffer) the 1972 Nobel Prize in Physics, for his role in developing the BCS (for their initials) theory of superconductivity. The concept of Cooper electron pairs was named after him. *Wik

1939 Daniel C. Tsui (28 Feb 1939 - ) Chinese-American physicist who shared (with Horst L. Störmer and Robert B. Laughlin) the 1998 Nobel Prize for Physics for the discovery and explanation that the electrons in a powerful magnetic field at very low temperatures can form a quantum fluid whose particles have fractional electric charges. This effect is known as the fractional quantum Hall effect.

1954 Jean Bourgain (28 Feb 1954 - ) Belgian mathematician who was awarded the Fields Medal in 1994 for his work in analysis. His achievements in several fields included the problem of determining how large a section of a Banach space of finite dimension n can be found that resembles a Hilbert subspace; a proof of Luis Antonio Santaló's inequality; a new approach to some problems in ergodic theory; results in harmonic analysis and classical operators; and nonlinear partial differential equations. Bourgain's work was noteworthy for the versatility it displayed in applying ideas from wide-ranging mathematical disciplines to the solution of diverse problems. *TIS

1691 Joseph Moxon (8 August 1627 - February 1691 (Royal Society archives state his death date as 28 February; the Oxford Dictionary of National Biography states that he was buried on 15 February??? {I hope one of them was wrong}), hydrographer to Charles II, was an English printer of mathematical books and maps, a maker of globes and mathematical instruments, and mathematical lexicographer. He produced the first English language dictionary devoted to mathematics, "Mathematicks made easie, or a mathematical dictionary, explaining the terms of art and difficult phrases used in arithmetick, geometry, astronomy, astrology, and other mathematical sciences". In November 1678, he became the first tradesman to be elected as a Fellow of the Royal Society. *Wik
Thony Christie has written that he was one of the first English printers to print tables of logarithms.

1742 Willem 's Gravesande (26 September 1688 – 28 February 1742) was a Dutch mathematician who expounded Newton's philosophy in Europe. In 1717 he became professor in physics and astronomy in Leiden, and introduced the works of his friend Newton in the Netherlands. His main work is Physices elementa mathematica, experimentis confirmata, sive introductio ad philosophiam Newtonianam or Mathematical Elements of Natural Philosophy, Confirm'd by Experiments (Leiden 1720), in which he laid the foundations for teaching physics. Voltaire and Albrecht von Haller were in his audience, and Frederic the Great invited him in 1737 to come to Berlin. His chief contribution to physics involved an experiment in which brass balls were dropped with varying velocity onto a soft clay surface.
His results were that a ball with twice the velocity of another would leave an indentation four times as deep, that three times the velocity yielded nine times the depth, and so on. He shared these results with Émilie du Châtelet, who subsequently corrected Newton's formula E = mv to E = mv². (Note that though we now add a factor of 1/2 to this formula to make it work with coherent systems of units, the formula as expressed is correct if you choose units to fit it.) *Wik

1863 Jakob Philipp Kulik (1 May 1793 in Lemberg, Austrian Empire (now Lviv, Ukraine) - 28 Feb 1863 in Prague, Czech Republic) Austrian mathematician known for his construction of a massive factor table. Kulik was born in Lemberg, which was part of the Austrian empire, and is now Lviv, located in Ukraine. In 1825, Kulik mentioned a table of factors up to 30 millions, but this table no longer seems to exist. It is also not clear if it had really been completed. From about 1825 until 1863 Kulik produced a factor table of numbers up to 100330200 (except for numbers divisible by 2, 3, or 5). This table basically had the same format as the table to 30 millions, and it is therefore most likely that the work on the "Magnus canon divisorum" spanned from the mid 1820s to Kulik's death, at which time the tables were still unfinished. These tables fill eight volumes totaling 4212 pages, and are kept in the archives of the Academy of Sciences in Vienna. Volume II of the 8 volume set has been lost. *Wik

1956 Frigyes Riesz (22 Jan 1880; 28 Feb 1956) Hungarian mathematician and pioneer of functional analysis, which has found important applications to mathematical physics. His theorem, now called the Riesz-Fischer theorem, which he proved in 1907, is fundamental in the Fourier analysis of Hilbert space. It was the mathematical basis for proving that matrix mechanics and wave mechanics were equivalent. This is of fundamental importance in early quantum theory. His book Leçons d'analyse fonctionnelle (written jointly with his student B. Szökefalvi-Nagy) is one of the most readable accounts of functional analysis ever written. Beyond any mere abstraction for the sake of a structure theory, he was always turning back to the applications in some concrete and substantial situation. *TIS

2013 Donald A. Glaser (21 Sep 1926, 28 Feb 2013) American physicist, who was awarded the Nobel Prize for Physics in 1960 for his invention of the bubble chamber in which the behaviour of subatomic particles can be observed by the tracks they leave. A flash photograph records the particle's path. Glaser's chamber contains a superheated liquid maintained in an unstable state without boiling. A piston causing a rapid decrease in pressure creates a tendency to boil at the slightest disturbance in the liquid. Then any atomic particle passing through the chamber leaves a track of small gas bubbles caused by an instantaneous boiling along its path where the ions it creates act as bubble-development centers. *TIS
With the freedom that accompanies a Nobel Prize, he soon began to explore the new field of molecular biology, and in 1971 joined two friends, Ronald E. Cape and Peter Farley, to found the first biotechnology company, Cetus Corp., to exploit new discoveries for the benefit of medicine and agriculture. The company developed interleukin and interferon as cancer therapies, but was best known for producing a powerful genetic tool, the polymerase chain reaction, to amplify DNA. In 1991, Cetus was sold to Chiron Corp., now part of Novartis.
Glaser died in his sleep Thursday morning, Feb. 28, at his home in Berkeley. He was 86. *Philosophy of Science Portal

Credits: *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Andromeda Galaxy, which Hubble measured to be 300,000 parsecs away.

Mathematical Knowledge adds a manly Vigour to the Mind, frees it from Prejudice, Credulity, and Superstition. ~John Arbuthnot

The 58th day of the year; 58 is the sum of the first seven prime numbers.

It is the fourth smallest Smith number. (Find the first three. A Smith number is a composite number for which the sum of its digits equals the sum of the digits in its prime factorization, including repetition. 58 = 2*29, and 5+8 = 2+2+9.) Smith numbers were named by Albert Wilansky of Lehigh University. He noticed the property in the phone number (493-7775) of his brother-in-law Harold Smith.

If you take the number 2, square it, and continue to take the sum of the squares of the digits of the previous answer, you get the sequence 2, 4, 16, 37, 58, 89, 145, 42, 20, 4, and then it repeats. See what happens if you start with other values than 2, and see if you can find one that doesn't produce 58.

The Greeks knew 220 and 284 were amicable in 300 BCE. By 1638 two more pairs had been added. Then, in 1750, in a single paper, Euler added 58 more.

425 The "University", or Pandidakterion, of Constantinople was founded on this date by Theodosius II. It is described as "the first deliberate effort of the Byzantine state to impose its control on matters relating to higher education." *Wik *Medieval History @medievalhistory

1477 Founding of the University of Uppsala, a research university in Uppsala, Sweden, and the oldest university in Sweden and Northern Europe. It ranks among the best universities in Northern Europe and is generally considered one of the most prestigious institutions of higher learning in Europe. Prominent students include Carolus Linnaeus, the father of taxonomy; Anders Celsius, inventor of the centigrade scale; and Niklas Zennström, co-founder of KaZaA and Skype. *Wik

In 1611, Johannes Fabricius, a Dutch astronomer, observed the rising sun through his telescope and saw several dark spots on it. This was one of the earliest observations of sunspots through a telescope. (Harriot, Galileo, and Christoph Scheiner all observed sunspots in the 1610-1611 period.) He called his father to investigate this new phenomenon with him. The brightness of the Sun's center was very painful, and the two quickly switched to a projection method by means of a camera obscura. Johannes was the first to publish information on such observations. He did so in his Narratio de maculis in sole observatis et apparente earum cum sole conversione ("Narration on Spots Observed on the Sun and their Apparent Rotation with the Sun"), the dedication of which was dated 13 Jun 1611. *TIS

1665 Huygens writes a letter to Robert Moray at the Royal Society asking him to pass on his "miraculous" observation of a synchronizing of his pendulum clocks. (See Feb 25). *Steven Strogatz, Synch

1851 George Merryweather gave a nearly three-hour essay to members of the Philosophical Society entitled "Essay explanatory of the Tempest Prognosticator."
The tempest prognosticator, also known as the leech barometer, is a 19th-century invention by Merryweather in which leeches are used in a barometer. The twelve leeches are kept in small bottles inside the device; when they become agitated by an approaching storm they attempt to climb out of the bottles and trigger a small hammer which strikes a bell. The likelihood of a storm is indicated by the number of times the bell is struck. Merryweather was inspired by two lines from Edward Jenner's poem Signs of Rain: "The leech disturbed is newly risen; Quite to the summit of his prison." Merryweather spent much of 1850 developing his ideas and came up with six designs; the most expensive design, which took inspiration from the architecture of Indian temples, was made by local craftsmen and shown in the 1851 Great Exhibition at The Crystal Palace in London. Merryweather stated in his essay the great success that he had had with the device. It was never very popular, although on its centennial there was a brief rush of renewed interest. *Wik

1890 Dedekind's second letter to Keferstein. Hans Keferstein had published a paper on the notion of number with comments and suggestions for change of Dedekind's 1888 book. Dedekind first responded on February 9, and on February 14 announced that he would push the publication by the "Society". It was in the letter of February 27 that Dedekind gives what is called "a brilliant presentation of the development of his ideas on the notion of natural number." *Jean Van Heijenoort, From Frege to Gödel: a source book in mathematical logic, 1879-1931, pg 98. The text of the letter is available on-line at Google Books.

1924, Harlow Shapley replied to a letter from Edwin Hubble which presented the measurement of 300,000 parsecs as the distance to the Andromeda nebula. That was the first proof that the nebula was far outside the Milky Way, in fact a separate galaxy. When Shapley had debated Heber Curtis on 26 Apr 1920, he presented his firm, life-long conviction that the Milky Way represented the known universe (and, for instance, that the Andromeda nebula was part of the Milky Way). On receipt of the letter, Shapley showed it to Payne-Gaposchkin and said "Here is the letter that has destroyed my universe." In his reply, Shapley said sarcastically that Hubble's letter was "the most entertaining piece of literature I have seen for a long time." Hubble sent more data in a paper to the AAS meeting, read on 1 Jan 1925. *TIS

1936 France issued a stamp with a portrait (by Louis Boilly) of André-Marie Ampère (1775–1836) to honor the centenary of his death. [Scott #306] *VFR

1940 Carbon-14 was discovered on 27 February 1940, by Martin Kamen and Sam Ruben at the University of California Radiation Laboratory in Berkeley, California. Its existence had been suggested by Franz Kurie in 1934. There are three naturally occurring isotopes of carbon on Earth: 99% of the carbon is carbon-12, 1% is carbon-13, and carbon-14 occurs in trace amounts, i.e., making up about 1 or 1.5 atoms per 10^12 atoms of the carbon in the atmosphere. The half-life of carbon-14 is 5,730±40 years. Radiocarbon dating is a radiometric dating method that uses carbon-14 to determine the age of carbonaceous materials up to about 60,000 years old. The technique was developed by Willard Libby and his colleagues in 1949. *Wik

1942, J.S. Hey discovered radio emissions from the Sun.
*TIS Several prior attempts were made to detect radio emission from the Sun by experimenters such as Nikola Tesla and Oliver Lodge, but those attempts were unable to detect any emission due to technical limitations of their instruments. Jansky first thought the radio signals he picked up from space were from the sun. *Wik 1989 In a review of Einstein–Bessso correspondence in the New Yorker, Jeremy Bernstein wrote: “In 1909, Einstein accepted a job as an associate professor at the University of Zurich, ... Einstein makes a familiar academic complaint—that because of his teaching duties he has less free time than when he was examining patents for eight hours a day.” *VFR 1547 Baha' ad-Din al-Amili (27 Feb 1547 in Baalbek, now in Lebanon - 30 Aug 1621 in Isfahan, Iran) was a Lebanese-born mathematician who wrote influential works on arithmetic, astronomy and grammar. Perhaps his most famous mathematical work was Quintessence of Calculation which was a treatise in ten sections, strongly influenced by The Key to Arithmetic (1427) by Jamshid al-Kashi. *SAU 1881 L(uitzen) E(gbertus) J(an) Brouwer (27 Feb 1881, 2 Dec 1966) was a Dutch mathematician who founded mathematical Intuitionism (a doctrine that views the nature of mathematics as mental constructions governed by self-evident laws). He founded modern topology by establishing, for example, the topological invariance of dimension and the fixpoint theorem. (Topology is the study of the most basic properties of geometric surfaces and configurations.) The Brouwer fixed point theorem is named in his honor. He proved the simplicial approximation theorem in the foundations of algebraic topology, which justifies the reduction to combinatorial terms, after sufficient subdivision of simplicial complexes, the treatment of general continuous mappings. *TIS He denies the law of the excluded middle. *VFR 1897 Bernard(-Ferdinand) Lyot (27 Feb 1897; 2 Apr 1952 at age 55) French astronomer who invented the coronagraph (1930), an instrument which allows the observation of the solar corona when the Sun is not in eclipse. Earlier, using his expertise in optics, Lyot made a very sensitive polariscope to study polarization of light reflected from planets. Observing from the Pic du Midi Observatory, he determined that the lunar surface behaves like volcanic dust, that Mars has sandstorms, and other results on the atmospheres of the other planets. Modifications to his polarimeter created the coronagraph, with which he photographed the Sun's corona and its analyzed its spectrum. He found new spectral lines in the corona, and he made (1939) the first motion pictures of solar 1910 Joseph Doob (27 Feb 1910 in Cincinnati, Ohio, USA - 7 June 2004 in Clark-Lindsey Village, Urbana, Illinois, USA) American mathematician who worked in probability and measure theory. *SAU After writing a series of papers on the foundations of probability and stochastic processes including martingales, Markov processes, and stationary processes, Doob realized that there was a real need for a book showing what is known about the various types of stochastic processes. So he wrote his famous "Stochastic Processes" book. It was published in 1953 and soon became one of the most influential books in the development of modern probability theory. *Wik 1942 Robert (Bob) Howard Grubbs (b. 27 February 1942 near Possum Trot, Kentucky, ) is an American chemist and Nobel laureate. Grubbs's many awards have included: Alfred P. 
Sloan Fellow (1974–76), Camille and Henry Dreyfus Teacher-Scholar Award (1975–78), Alexander von Humboldt Fellowship (1975), ACS Benjamin Franklin Medal in Chemistry (2000), ACS Herman F. Mark Polymer Chemistry Award (2000), ACS Herbert C. Brown Award for Creative Research in Synthetic Methods (2001), the Tolman Medal (2002), and the Nobel Prize in Chemistry (2005). He was elected to the National Academy of Sciences in 1989 and to a fellowship in the American Academy of Arts and Sciences in 1994. Grubbs received the 2005 Nobel Prize in Chemistry, along with Richard R. Schrock and Yves Chauvin, for his work in the field of olefin metathesis. *Wik

1735 John Arbuthnot (baptized 29 Apr 1667, 27 Feb 1735 at age 67), fellow of the Royal College of Physicians. In 1710, his paper "An argument for divine providence taken from the constant regularity observ'd in the births of both sexes" gave the first example of statistical inference. In his day he was famous for his political satires, from which we still know the character John Bull. *VFR
He inspired both Jonathan Swift's Gulliver's Travels book III and Alexander Pope's Peri Bathous, Or the Art of Sinking in Poetry, Memoirs of Martin Scriblerus. He also translated Huygens' "De ratiociniis in ludo aleae" in 1692 and extended it by adding a few further games of chance. This was the first work on probability published in English. *SAU
A nice blog about Arbuthnot and his work is at this post by *RMAT.

1867 James Dunwoody Brownson DeBow (1820 – February 27, 1867) was an American publisher and statistician, best known for his influential magazine DeBow's Review, who also served as head of the U.S. Census from 1853-1857. *Wik

1906 Samuel Pierpont Langley (22 Aug 1834; 27 Feb 1906) American astronomer, physicist, and aeronautics pioneer who built the first heavier-than-air flying machine to achieve sustained flight. He launched his Aerodrome No.5 on 6 May 1896 using a spring-actuated catapult mounted on top of a houseboat on the Potomac River, near Quantico, Virginia. He also researched the relationship of solar phenomena to meteorology. *TIS

1915 Nikolay Yakovlevich Sonin (February 22, 1849 – February 27, 1915) was a Russian mathematician. Sonin worked on special functions, in particular cylindrical functions. He also worked on the Euler–Maclaurin summation formula. Other topics Sonin studied include Bernoulli polynomials and approximate computation of definite integrals, continuing Chebyshev's work on numerical integration. Together with Andrey Markov, Sonin prepared a two volume edition of Chebyshev's works in French and Russian. He died in St. Petersburg. *Wik

1975 Hyman Levy (28 Feb 1889 in Edinburgh, Scotland - 27 Feb 1975 in Wimbledon, London, England) graduated from Edinburgh and went on to study in Göttingen. He was forced to leave Germany on the outbreak of World War I and returned to work at Oxford and at the National Physical Laboratory. He held various posts in Imperial College London, finishing as Head of the Mathematics department. His main work was in the numerical solution of differential equations. He published Numerical Studies in Differential Equations (1934), Elements of the Theory of Probability (1936), and Finite Difference Equations (1958). However, Levy was more than a mathematician. He was a philosopher of science and also a political activist. *SAU

Credits: *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ.
Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Euler calculated without effort, just as men breathe, as eagles sustain themselves in the air. ~François Arago

The 57th day of the year; 57 (base ten) is written with all ones in base seven. It is the last day this year that can be written in base seven with all ones. (What is the last day of the year that can be written with all ones in base two,... base three?)

57 is the maximum number of regions inside a circle formed by chords connecting 7 points on the circle. Students might ask themselves why this is the same as the sum of the first five numbers in the sixth row of Pascal's triangle.

57 is the number of permutations of the numbers 1 to 6 in which exactly 1 element is greater than the previous element (called permutations with 1 ascent).

57 is the maximum number of possible interior regions formed by 8 intersecting circles.

The number of ways of coloring the faces of a cube with 3 different colors is 57. For coloring a cube with n colors, the number of possible colorings is given by \( (n^6 + 3n^4 + 12n^3 + 8n^2)/24 \).

57 is sometimes known as Grothendieck's prime. The explanation is given in Amir D. Aczel's last book, Finding Zero. Grothendieck had used primes as a framework on which to build some more general result when, asked to make the argument concrete by naming a particular prime, he is said to have replied, "You mean an actual number? All right, take 57."

1616 Galileo is warned to abandon Copernican views. On February 19, 1616, the Inquisition had asked a commission of theologians, known as qualifiers, about the propositions of the heliocentric view of the universe after Niccolò Lorini had accused Galileo of heretical remarks in a letter to his former student, Benedetto Castelli. On February 24 the Qualifiers delivered their unanimous report: the idea that the Sun is stationary is "foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture..."; while the Earth's movement "receives the same judgement in philosophy and ... in regard to theological truth it is at least erroneous in faith." At a meeting of the cardinals of the Inquisition on the following day, Pope Paul V instructed Bellarmine to deliver this result to Galileo, and to order him to abandon the Copernican opinions; should Galileo resist the decree, stronger action would be taken. On February 26, Galileo was called to Bellarmine's residence, and accepted the orders. *Wik
A transcript filed by the 1633 Inquisition indicates he was also enjoined from either speaking or writing about his theory. Yet Galileo remained in conflict with the Church. He was eventually interrogated by the Inquisition in Apr 1633. On 22 Jun 1633, Galileo was sentenced to prison indefinitely, with seven of ten cardinals presiding at his trial affirming the sentencing order. Upon signing a formal recantation, the Pope allowed him to live instead under house-arrest. From Dec 1633 to the end of his life on 8 Jan 1642, he remained in his villa at Florence. *TIS
In 1992, the Vatican officially declared that Galileo had been the victim of an error.

1665 A letter from Christiaan Huygens to his father, Constantyn Huygens, describes the discovery of synchronization between two pendulum clocks in his room. While I was forced to stay in bed for a few days and made observations on my two clocks of the new workshop, I noticed a wonderful effect that nobody could have thought of before.
The two clocks, while hanging [on the wall] side by side with a distance of one or two feet between, kept in pace relative to each other with a precision so high that the two pendulums always swung together, and never varied. While I admired this for some time, I finally found that this happened due to a sort of sympathy: when I made the pendulums swing at differing paces, I found that half an hour later, they always returned to synchronism and kept it constantly afterwards, as long as I let them go.

1849 Prince Albert visited the RI for the 1st time to hear a lecture by Faraday. *Royal Institution @ri_science

1855 Carl F. Gauss' body lay in state under the dome in the rotunda of the observatory in Göttingen two days after his death. At nine o'clock a group of 12 students of science and mathematics, including Dedekind, carried the coffin out of the observatory and to his final resting place in St. Alben's Church Cemetery. After the casket was lowered it was covered with palms and laurel.

1885 "The Burroughs Company brought out their first adding machine and announced that it would sell for $27.75 plus $1.39 shipping charges, for a total of whatever that came to." *Tom Koch, 366 Dumb Days in History

1962 A new teaching method based on "how and why things happen in mathematics rather than on traditional memorization of rules" is announced by the Educational Research Council of Greater Cleveland. This became the Cleveland Program of the New Math. *VFR

In 1896, Henri Becquerel stored a wrapped photographic plate in a closed desk drawer, with a phosphorescent uranium compound laid on top, awaiting a bright day to test his idea that sunlight would make the phosphorescent uranium emit rays. It remained there several days. Thus by sheer accident, he created a new experiment, for when he developed the photographic plate on 1 Mar 1896, he found a fogged image in the shape of the rocks. The material was spontaneously generating and emitting energetic rays totally without the external sunlight source. This was a landmark event. The new form of penetrating radiation was the discovery of the effect of radioactivity. He had in fact reported an earlier, related experiment to the French Academy on 24 Feb 1896, though at that time he thought phosphorescence was the cause. *TIS

1935 The first test of the ideas presented in Robert Watson-Watt's earlier memo, "Detection and location of aircraft by radio methods", was conducted on this date in a field near Upper Stowe, about three miles south of Weedon Bec in Northamptonshire. The tests were successful, and on several occasions a clear signal from a Handley Page Heyford bomber, flown around the site by Bobby Blucke (who would later become Air Vice-Marshal Blucke), was seen on the oscilloscopes hidden in the back of an ambulance. The tests were so secret that only three people were allowed to witness them: Watson-Watt, his colleague Arnold Wilkins, and a single member of the Air Ministry, A. P. Rowe. *Wik

1996 Silicon Graphics Inc. buys Cray Research for $767 million, becoming the leading supplier of high-speed computing machines in the U.S. Over a forty year career, Cray founder Seymour Cray consistently produced most of the fastest computers in the world: innovative, powerful supercomputers used in defense, meteorological, and scientific investigations.
*CHM 2012 New world record distance for paper airplane throw: Joe Ayoob, a former Cal Quarterback, throws a John Collins paper airplane design, (which was named Suzanne), officially breaking the world record by 19 feet, 6 inches. The new world record was 226 feet, 10 inches. The previous record is 207 feet and 4 inches set by Stephen Kreiger in 2003. *ESPN 1585 Federico Cesi (26 Feb OR 13 Mar 1585 (sources differ, but Thony Christie did some research to suggest the Feb date is the correct one); 1 Aug 1630 at age 45) Italian scientist who founded the Accademia dei Lincei (1603, Academy of Linceans or Lynxes), often cited as the first modern scientific society, and of which Galileo was the sixth member (1611). Cesi first announced the word telescope for Galileo's instrument. At an early age, while being privately educated, Cesi became interested in natural history and that believed it should be studied directly, not philosophically. The name of the Academy, which he founded at age 18, was taken from Lynceus of Greek mythology, the animal Lynx with sharp sight. He devoted the rest of his life to recording, illustrating and an early classification of nature, especially botany. The Academy was dissolved when its funding by Cesi ceased upon his sudden death(at age 45). *TIS It was revived in its currently well known form of the Pontifical Academy of Sciences, by the Vatican, Pope Pius IX in 1847. 1664 Nicolas Fatio de Duillier (alternative names are Facio or Faccio;) (26 February 1664 – 12 May 1753) was a Swiss mathematician known for his work on the zodiacal light problem, for his very close (some have suggested "romantic" ) relationship with Isaac Newton, for his role in the Newton v. Leibniz calculus controversy , and for originating the "push" or "shadow" theory of gravitation. [Le Sage's theory of gravitation is a kinetic theory of gravity originally proposed by Nicolas Fatio de Duillier in 1690 and later by Georges-Louis Le Sage in 1748. The theory proposed a mechanical explanation for Newton's gravitational force in terms of streams of tiny unseen particles (which Le Sage called ultra-mundane corpuscles) impacting all material objects from all directions. According to this model, any two material bodies partially shield each other from the impinging corpuscles, resulting in a net imbalance in the pressure exerted by the impact of corpuscles on the bodies, tending to drive the bodies together.] He also developed and patented a method of perforating jewels for use in clocks. When Leibniz sent a set of problems for solution to England he mentioned Newton and failed to mention Faccio among those probably capable of solving them. Faccio retorted by sneering at Leibniz as the ‘second inventor’ of the calculus in a tract entitled ‘Lineæ brevissimæ descensus investigatio geometrica duplex, cui addita est investigatio geometrica solidi rotundi in quo minima fiat resistentia,’ 4to, London, 1699. Finally he stirred up the whole Royal Society to take a part in the dispute (Brewster, Memoirs of Sir I. Newton, 2nd edit. ii. 1–5). In 1707, Fatio came under the influence of a fanatical religious sect, the Camisards, which ruined Fatio's reputation. He left England and took part in pilgrim journeys across Europe. After his return only a few scientific documents by him appeared. He died in 1753 in Maddersfield near Worcester, England. After his death his Geneva compatriot Georges-Louis Le Sage tried to purchase the scientific papers of Fatio. 
These papers together with Le Sage's are now in the Library of the University of Geneva. Eventually he retired to Worcester, where he formed some congenial friendships, and busied himself with scientific pursuits, alchemy, and the mysteries of the cabbala. In 1732 he endeavoured, but it is thought unsuccessfully, to obtain through the influence of John Conduitt [q. v.], Newton's nephew, some reward for having saved the life of the Prince of Orange. He assisted Conduitt in planning the design, and writing the inscription for Newton's monument in Westminster Abbey. *Wik

1786 Dominique François Jean Arago (26 Feb 1786, 2 Oct 1853) was a French physicist and astronomer who discovered the chromosphere of the sun (the lower atmosphere, primarily composed of hydrogen gas) and is noted for his accurate estimates of the diameters of the planets. Arago found that a rotating copper disk deflects a magnetic needle held above it, showing the production of magnetism by rotation of a nonmagnetic conductor. He devised an experiment that proved the wave theory of light, showed that light waves move more slowly through a dense medium than through air, and contributed to the discovery of the laws of light polarization. Arago entered politics in 1848 as Minister of War and Marine and was responsible for abolishing slavery in the French colonies. *TIS
A really great blog about Arago, with the catchy title "François Arago: the most interesting physicist in the world!", is posted here. Read this introduction, and you will not be able to resist:
When he was seven years old, he tried to stab a Spanish soldier with a lance
When he was eighteen, he talked a friend out of assassinating Napoleon
He once angered an archbishop so much that the holy man punched him in the face
He has negotiated with bandits, been chased by a mob, broken out of prison
He is: François Arago, the most interesting physicist in the world

1799 Benoit Clapeyron (26 Feb 1799, 28 Jan 1864) French engineer who expressed Sadi Carnot's ideas on heat analytically, with the help of graphical representations. While investigating the operation of steam engines, Clapeyron found there was a relationship (1834) between the heat of vaporization of a fluid, its temperature and the increase in its volume upon vaporization. Made more general by Clausius, it is now known as the Clausius-Clapeyron formula. It provided the basis of the second law of thermodynamics. In engineering, Clapeyron designed and built locomotives and metal bridges. He also served on a committee investigating the construction of the Suez Canal and on a committee which considered how steam engines could be used in the navy. *TIS

1842 Nicolas Camille Flammarion (26 Feb 1842; 3 Jun 1925 at age 83) was a French astronomer who studied double and multiple stars, the moon and Mars. He is best known as the author of popular, lavishly illustrated, books on astronomy, including Popular Astronomy (1880) and The Atmosphere (1871). In 1873, Flammarion (wrongly) attributed the red color of Mars to vegetation when he wrote "May we attribute to the color of the herbage and plants which no doubt clothe the plains of Mars, the characteristic hue of that planet..." He supported the idea of canals on Mars, and intelligent life, perhaps more advanced than earth's. Flammarion reported changes in one of the craters of the moon, which he attributed to growth of vegetation. He also wrote novels, and late in life he turned to psychic research.
*TIS 1843 Karl Friedrich Geiser (26 Feb 1843 in Langenthal, Bern, Switzerland, 7 May 1934 in Küsnacht, Zürich, Switzerland) Swiss mathematician who worked in algebraic geometry and minimal sufaces. He organised the first International Mathematical Congress in Zurich.*SAU 1864 John Evershed (26 Feb 1864, 17 Nov 1956) English astronomer who discovered (1909) the Evershed effect - the horizontal motion of gases outward from the centres of sunspots. While photographing solar prominences and sunspot spectra, he noticed that many of the Fraunhofer lines in the sunspot spectra were shifted to the red. By showing that these were Doppler shifts, he proved the motion of the source gases. This discovery came to be known as the Evershed effect. He also gave his name to a spectroheliograph, the Evershed spectroscope.*TIS 1946 Ahmed Hassan Zewail (February 26, 1946 – August 2, 2016) was an Egyptian-American scientist, known as the "father of femtochemistry". He was awarded the 1999 Nobel Prize in Chemistry for his work on femtochemistry and became the first Egyptian and the first Arab to win a Nobel Prize in a scientific field. He was the Linus Pauling Chair Professor of Chemistry, Professor of Physics, and the director of the Physical Biology Center for Ultrafast Science and Technology at the California Institute of Technology. Zewail died aged 70 on the evening of August 2, 2016, after a long battle with cancer. *Wik 1638 Claude-Gaspar Bachet de M´eziriac (9 Oct 1581, 26 Feb 1638), noted for his work in number theory and mathematical recreations. He published the Greek text of Diophantus’s Arithmetica in 1621. He asked the first ferrying problem: Three jealous husbands and their wives wish to cross a river in a boat that will only hold two persons, in such a manner as to never leave a woman in the company of a man unless her husband is present. (With four couples this is impossible.)*VFR (I admit that I don't know how this differs from the similar river crossings problems of Alcuin in the 800's, Help someone?)His books on mathematical puzzles formed the basis for almost all later books on mathematical recreations.*SAU 1693 Sir Charles Scarborough MP FRS FRCP (19 December 1615 – 26 February 1693) was an English physician and mathematician. He was born in St. Martin's-in-the-Fields, London in 1615, the son of Edmund Scarburgh, and was sent to St. Paul's School, whence he proceeded to Caius College, Cambridge, and educated at St Paul's School, Gonville and Caius College, Cambridge (BA, 1637, MA, 1640) and Merton College, Oxford (MD, 1646). While at Oxford he was a student of William Harvey, and the two would become close friends. Scarborough was also tutor to Christopher Wren, who was for a time his assistant. Following the Restoration in 1660, Scarborough was appointed physician to Charles II, who knighted him in 1669; Scarborough attended the king on his deathbed, and was later physician to James II and William and Mary. During the reign of James II, Scarborough served (from 1685 to 1687) as Member of Parliament for Camelford in Cornwall. Scarborough was an original fellow of the Royal Society and a fellow of the Royal College of Physicians, author of a treatise on anatomy, Syllabus Musculorum, which was used for many years as a textbook, and a translator and commentator of the first six books of Euclid's Elements (published in 1705). He also was the subject of a poem by Abraham Cowley, An Ode to Dr Scarborough. Scarborough died in London in 1693. 
He was buried at Cranford, Middlesex, where there is a monument to him in the parish church erected by his widow. *Wik 1878 Pietro Angelo Secchi (18 Jun 1818, 26 Feb 1878 at age 59) Italian Jesuit priest and astrophysicist, who made the first survey of the spectra of over 4000 stars and suggested that stars be classified according to their spectral type. He studied the planets, especially Jupiter, which he discovered was composed of gasses. Secchi studied the dark lines which join the two hemispheres of Mars; he called them canals as if they where the works of living beings. (These studies were later continued by Schiaparelli.) Beyond astronomy, his interests ranged from archaeology to geodesy, from geophysics to meteorology. He also invented a meteorograph, an automated device for recording barometric pressure, temperature, wind direction and velocity, and rainfall.*TIS 1985 Tjalling Charles Koopmans (August 28, 1910 – February 26, 1985) was the joint winner, with Leonid Kantorovich, of the 1975 Nobel Memorial Prize in Economic Sciences. Koopmans' early works on the Hartree–Fock theory are associated with the Koopmans' theorem, which is very well known in quantum chemistry. Koopmans was awarded his Nobel prize (jointly with Leonid Kantorovich) for his contributions to the field of resource allocation, specifically the theory of optimal use of resources. The work for which the prize was awarded focused on activity analysis, the study of interactions between the inputs and outputs of production, and their relationship to economic efficiency and prices.*SAU Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell Cathedral Church of St Paul the Apostle *Wik People must understand that science is inherently neither a potential for good nor for evil. It is a potential to be harnessed by man to do his bidding. ~Glenn T. Seaborg The 56th day of the year; There are 56 normalized 5x5 Latin Squares (First row and column have 1,2,3,4,5; and no number appears twice in a row or column. There are a much smaller number of 4x4 squares, try them first) 56 is the sum of the first six triangular numbers (56= 1 + 3 + 6 + 10 + 15 + 21) and thus the sixth tetrahedral number. It is also the sum of six consecutive primes. 3 + 5 + 7 + 11 + 13 + 17 56 letters are required to write the famous prime number 6700417 in English. The number was one of the factors of \(F(5)=2^{2^5}+1 \) Fermat had conjectured that all such "Fermat Numbers" were prime. In 1732, Euler showed that F(5) was the product or 641 times 6700417. Euler never stated that both numbers were prime, and historians still disagree about whether he knew, or even suspected, that it Fifty-Six is a city in Stone County, Arkansas, United States. As of the 2010 census, the city had a total population of 173, an increase of 10 persons from 2000. When founding the community in 1918, locals submitted the name "Newcomb" for the settlement. This request was rejected, and the federal government internally named the community for its school district number (56) The Aubrey holes are a ring of fifty-six Chalk pits at Stonehenge, named after the seventeenth-century antiquarian John Aubrey. They date to the earliest phases of Stonehenge in the late fourth and early third millennium BC. 
Their purpose is still unknown. *Wik 1598 John Dee demonstrates the solar eclipse by viewing an image through a pinhole. Two versions from Ashmole and Aubrey give different details of who was present. Dee's Diary only contains the notation, "the eclips. A clowdy day, but great darkness about 9 1/2 maine " *Benjamin Wooley, The Queen's Conjuror 1606 Henry Briggs sends a Letter to Mr. Clarke, of Gravesend, dated from Gresham College, with which he sends him the description of a ruler, called Bedwell's ruler, with directions how to use it. (it seems from the letter to be a ruler for measuring the volume of timber. If you have information on where I could see a picture or other image of the device, please advise) *Augustus De Morgan, Correspondence of scientific men of the seventeenth century 1672 (NS) John Wallis collects his work on tangents in a letter to Oldenburg for publication in the Philosophical Transactions. According to a letter from Collins to James Gregory, "I mentioned Slusius (René-François de Sluse) his intent to publish his method de maximis et minimis et tangentibus, which Dr. Wallis hearing of hath sent up his owne Notations about the same, which should have been printed in the last Transactions, but is deferred to the next one newly come out." *John Wallis, Philip Beeley, Christoph J. Scriba, Correspondence of John Wallis (1616-1703) 1939 Appropriately, it was an astronomer who coined the term photography, but the question is, which one. Some credit Johann Heinrich von Madler for combining “photo” (from the Greek word for “light”) and “graphy” (“to write”). *APS.org Madler's claim rests on a paper supposedly written on 25 February 1839 in the German newspaper Vossische Zeitung. Many still credit Sir John Herschel both for coining the word and for introducing it to the public. His uses of it in private correspondence prior to 25 February 1839 and at his Royal Society lecture on the subject in London on 14 March 1839 have long been amply documented and accepted as settled facts. *Wik 1870 Hermann Amandus Schwarz sent his friend Georg Cantor a letter containing the first rigorous proof of the theorem that if the derivative of a function vanishes then the function is constant. See H. Meschkowski, Ways of Thought of Great Mathematicians, pp. 87–89 for an English translation of the letter. *VFR 1959 The APT Language is Demonstrated: The Automatically Programmed Tools language is demonstrated. APT is an English-like language that tells tools how to work and is mainly used in computer-assisted manufacturing. NEW YORKER: Cambridge, Mass. - Feb. 25: The Air Force announced today that it has a machine that can receive instructions in English - figure out how to make whatever is wanted- and teach other machines how to make it. An Air Force general said it will enable the United States to build a war machine that nobody would want to tackle. Today it made an ashtray. *CHM 1976 Romania issued a stamp picturing the mathematician Anton Davidoglu (1876–1958). [Scott #2613] *VFR 1670 Maria Winckelmann (Maria Margarethe Winckelmann Kirch (25 Feb 1670 in Panitzsch, near Leipzig, Germany - 29 Dec 1720 in Berlin, Germany) was a German astronomer who helped her husband with his observations. She was the first woman to discover a comet.*SAU "German astronomer Maria Kirch (1670 – 1720). Kirch was original educated by her father and her uncle who believed that girls should receive the same education as boys. 
From them she learnt mathematics and astronomy going on to study with and work together with the amateur astronomer Christoph Arnold. Through Arnold she got to know the astronomer Gottfried Kirch and despite the fact that he was 30 years older than her they married. Kirch was official astronomer of the Berlin Royal Academy of Science and he and Maria ran the Academy’s observatory together for many years. In 1702 she became the first woman to discover a comet but the credit for the discovery was given to her husband. When Gottfried died in 1710 Maria applied for his position arguing correctly that she had done half of the work in the past. Despite her having published independently and having an excellent reputation as well as the active support of Leibniz the Academy refused to award her the post. She worked in various other observatories until 1717 when her son was appointed to his fathers post, Maria once again becoming the assistant. Despite having more than proved her equality to any male astronomer Maria never really received the recognition she deserved." From Thony Christie's Renaissance Mathematicus blog on Daughters of 1827 Henry William Watson (25 Feb 1827 in Marylebone, London, England - 11 Jan 1903 in Berkswell (near Coventry), England) was an English mathematician who wrote some influential text-books on electricity and magnetism. *SAU 1902 Kenjiro Shoda (February 25, 1902 – March 3, 1977 *SAU gives March 20 for death) was a Japanese mathematician. He was interested in group theory, and went to Berlin to work with Issai Schur. After one year in Berlin, Shoda went to Göttingen to study with Emmy Noether. Noether's school brought a mathematical growth to him. In 1929 he returned to Japan. Soon afterwards, he began to write Abstract Algebra, his mathematical textbook in Japanese for advanced learners. It was published in 1932 and soon recognised as a significant work for mathematics in Japan. It became a standard textbook and was reprinted many times.*Wik 1922 Ernst Gabor Straus (February 25, 1922 – July 12, 1983) was a German-American mathematician who helped found the theories of Euclidean Ramsey theory and of the arithmetic properties of analytic functions. His extensive list of co-authors includes Albert Einstein and Paul Erdős as well as other notable researchers including Richard Bellman, Béla Bollobás, Sarvadaman Chowla, Ronald Graham, László Lovász, Carl Pomerance, and George Szekeres. It is due to his collaboration with Straus that Einstein has Erdős number 2. *Wik 1926 Masatoşi Gündüz İkeda (25 February 1926, Tokyo. - 9 February 2003, Ankara), was a Turkish mathematician of Japanese ancestry, known for his contributions to the field of algebraic number theory. 1723 Sir Christopher Wren (20 Oct 1632; 25 Feb 1723) Architect, astronomer, and geometrician who was the greatest English architect of his time (Some may suggest Hooke as an equal) whose famous masterpiece is St. Paul's Cathedral, among many other buildings after London's Great Fire of 1666. Wren learned scientific skills as an assistant to an eminent anatomist. Through astronomy, he developed skills in working models, diagrams and charting that proved useful when he entered architecture. He inventing a "weather clock" similar to a modern barometer, new engraving methods, and helped develop a blood transfusion technique. He was president of the Royal Society 1680-82. His scientific work was highly regarded by Sir Isaac Newton as stated in the Principia. 
*TIS Thony Christie points out that, "Most people don’t realise that as well as being Britain’s most famous 17th century architect, Wren was also a highly respected mathematician. In fact Isaac Newton named him along with John Wallace and William Oughtred as one of the three best English mathematicians of the 17th century. As a young man he was an active astronomer and was a highly vocal supporter of the then still relatively young elliptical astronomy of Johannes Kepler." (I love the message on his tomb in the Crypt of St. Pauls: Si monumentum requiris circumspice ...."Reader, if you seek his monument, look about you." Lisa Jardine's book is excellent 1775 William Small (13 October 1734; Carmyllie, Angus, Scotland – 25 February 1775; Birmingham, England). He attended Dundee Grammar School, and Marischal College, Aberdeen where he received an MA in 1755. In 1758, he was appointed Professor of Natural Philosophy at the College of William and Mary in Virginia, then one of Britain’s American colonies. Small is known for being Thomas Jefferson's professor at William and Mary, and for having an influence on the young Jefferson. Small introduced him to members of Virginia society who were to have an important role in Jefferson's life, including George Wythe a leading jurist in the colonies and Francis Fauquier, the Governor of Virginia. Recalling his years as a student, Thomas Jefferson described Small as: "a man profound in most of the useful branches of science, with a happy talent of communication, correct and gentlemanly manners, and a large and liberal mind... from his conversation I got my first views of the expansion of science and of the system of things in which we are placed." In 1764 Small returned to Britain, with a letter of introduction to Matthew Boulton from Benjamin Franklin. Through this connection Small was elected to the Lunar Society, a prestigious club of scientists and industrialists. In 1765 he received his MD and established a medical practice in Birmingham, and shared a house with John Ash, a leading physician in the city. Small was Boulton's doctor and became a close friend of Erasmus Darwin, Thomas Day, James Keir, James Watt, Anna Seward and others connected with the Lunar Society. He was one of the best-liked members of the society and an active contributor to their Small died in Birmingham on 25 February 1775 from malaria contracted during his stay in Virginia. He is buried in St. Philips Church Yard, Birmingham. The William Small Physical Laboratory, which houses the Physics department at the College of William & Mary, is named in his honor. *Wik 1786 Thomas Wright (22 September 1711 – 25 February 1786) was an English astronomer, mathematician, instrument maker, architect and garden designer. He was the first to describe the shape of the Milky Way and speculate that faint nebulae were distant galaxies.*Wik 1947 Louis Carl Heinrich Friedrich Paschen (22 Jan 1865; 25 Feb 1947) was a German physicist who was an outstanding experimental spectroscopist. In 1895, in a detailed study of the spectral series of helium, an element then newly discovered on earth, he showed the identical match with the spectral lines of helium as originally found in the solar spectrum by Janssen and Lockyer nearly 40 years earlier. He is remembered for the Paschen Series of spectral lines of hydrogen which he elucidated in 1908. 
*TIS 1950 Nikolai Nikolaevich Luzin, (also spelled Lusin) (9 December 1883, Irkutsk – 28 January 1950, Moscow), was a Soviet/Russian mathematician known for his work in descriptive set theory and aspects of mathematical analysis with strong connections to point-set topology. He was the eponym of Luzitania, a loose group of young Moscow mathematicians of the first half of the 1920s. They adopted his set-theoretic orientation, and went on to apply it in other areas of mathematics.*Wik 1972 Władysław Hugo Dionizy Steinhaus (January 14, 1887 – February 25, 1972) was a Polish mathematician and educator. Steinhaus obtained his PhD under David Hilbert at Göttingen University in 1911 and later became a professor at the University of Lwów, where he helped establish what later became known as the Lwów School of Mathematics. He is credited with "discovering" mathematician Stefan Banach, with whom he gave a notable contribution to functional analysis through the Banach-Steinhaus theorem. After World War II Steinhaus played an important part in the establishment of the mathematics department at Wrocław University and in the revival of Polish mathematics from the destruction of the war. Author of around 170 scientific articles and books, Steinhaus has left its legacy and contribution on many branches of mathematics, such as functional analysis, geometry, mathematical logic, and trigonometry. Notably he is regarded as one of the early founders of the game theory and the probability theory preceding in his studies, later, more comprehensive approaches, by other scholars. *Wik His Mathematical Snapshots is a delight to read, but get the first English edition if you can—there are lots of surprises there. *VFR "When Steinhaus failed to attend an important meeting of the Committee of the Polish Academy of Sciences in 1960, he received a letter chiding him for "not having justified his absence." He immediately wired the President of the Academy that "as long as there are members who have not yet justified their presence, I do not need to justify my absence." [ Told by Mark Kac in "Hugo Steinhaus -- A Remembrance and a Tribute," Amer. Math. Monthly 81 (June-July 1974) 578. ] * http://komplexify.com 1988 Kurt Mahler (26 July 1903, Krefeld, Germany – 25 February 1988, Canberra, Australia) was a mathematician and Fellow of the Royal Society. Mahler proved that the Prouhet–Thue–Morse constant and the Champernowne constant 0.1234567891011121314151617181920... are transcendental numbers. He was a student at the universities in Frankfurt and Göttingen, graduating with a Ph.D. from Johann Wolfgang Goethe University of Frankfurt am Main in 1927. He left Germany with the rise of Hitler and accepted an invitation by Louis Mordell to go to Manchester. He became a British citizen in 1946. He was elected a member of the Royal Society in 1948 and a member of the Australian Academy of Science in 1965. He was awarded the London Mathematical Society's Senior Berwick Prize in 1950, the De Morgan Medal, 1971, and the Thomas Ranken Lyle Medal, 1977. *Wik 1999 Glenn Theodore Seaborg (April 19, 1912,Ishpeming, Michigan – February 25, 1999) was an American scientist who won the 1951 Nobel Prize in Chemistry for "discoveries in the chemistry of the transuranium elements", contributed to the discovery and isolation of ten elements, and developed the actinide concept, which led to the current arrangement of the actinoid series in the periodic table of the elements. 
He spent most of his career as an educator and research scientist at the University of California, Berkeley where he became the second Chancellor in its history and served as a University Professor. Seaborg advised ten presidents from Harry S. Truman to Bill Clinton on nuclear policy and was the chairman of the United States Atomic Energy Commission from 1961 to 1971 where he pushed for commercial nuclear energy and peaceful applications of nuclear science. The element seaborgium was named after Seaborg by Albert Ghiorso, E. Kenneth Hulet, and others, who also credited Seaborg as a co-discoverer. It was so named while Seaborg was still alive, which proved controversial. He influenced the naming of so many elements that with the announcement of seaborgium, it was noted in Discover magazine's review of the year in science that he could receive a letter addressed in chemical elements: seaborgium, lawrencium (for the Lawrence Berkeley Laboratory where he worked), berkelium, californium, americium (Once when being aggressively cross-examined during testimony on nuclear energy for a senate committee, the Senator asked, “How much do you really know about Plutonium.” Seaborg quietly answered, “Sir, I discovered it.” , Which he did as part of the team at the Manhattan Project. *Wik Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell 3D Lichtenberg Figures *Wik Information is the resolution of uncertainty. ~Claude Shannon The 55th day of the year; 55 is the largest triangular number that appears in the Fibonacci Sequence. (Is there a largest square number?) 55 is also a Kaprekar Number: 55² = 3025 and 30 + 25 = 55 (Thanks to Jim Wilder) And speaking of 5^2, Everyone knows that 3^2 + 4^2 = 5^2, but did you know that 33^2 + 44^2 = 55^2 But after that, there could be no more.... right? I mean, that's just too improbable, so why is he stil l going on like this? You don't think......Nah. 55 is the only year day that is both a non-trivial base ten palindrome and also a palindrome in base four. 1582 Pope Gregory XIII promulgated his calendar reform in the papal bull Inter gravissimus (Of the gravest concern). It took effect (in Italy and some other Catholic countries) October 5, 1582 (Julian Thursday, 4 October 1582, being followed by Gregorian Friday, 15 October 1582) 1616 Inquisition qualifiers deny teaching of Heliocentric view . On February 19, 1616, the Inquisition had asked a commission of theologians, known as qualifiers, about the propositions of the heliocentric view of the universe. On February 24 the Qualifiers delivered their unanimous report: the idea that the Sun is stationary is "foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture..."; while the Earth's movement "receives the same judgement in philosophy and ... in regard to theological truth it is at least erroneous in faith."At a meeting of the cardinals of the Inquisition on the following day, Pope Paul V instructed Bellarmine to deliver this result to Galileo, and to order him to abandon the Copernican opinions; should Galileo resist the decree, stronger action would be taken. On February 26, Galileo was called to Bellarmine's residence, and accepted the orders. 
*Wik 1755 William Hogarth’s satirical print, “An Election Entertainment,” was published. It contains a Tory sign bearing the inscription “Give us our eleven days.” This refers to the fact that eleven dates were removed from the calendar when England converted to the Gregorian calendar on September 14, 1752. *VFR Image here 1772 Lagrange, in a letter to d’Alembert, called higher mathematics “decadent.” *Grabiner, Origins of Cauchy’s Rigorous Calculus, pp. 25, 185 1842 Sylvester resigned his position at the University of Virginia (after only four months), after a dispute with a student who was reading a newspaper in class. Persistent rumors that he killed the student are unfounded. *VFR 1881 Cambridge University in England allowed women to officially take university examinations and to have their names posted along with those of the male students. Previously some women were given special permission to take the Tripos Exam. One of these was Charlotte Agnes Scott, who did quite well on the exam. At the award ceremony “The man read out the names and when he came to ‘eighth,’ before he could say the name, all the undergraduates called out ‘Scott of Girton,’ and cheered tremendously, shouting her name over and over again with tremendous cheers and wavings of hats.” [Women of Mathematics. A Biobibliographic Sourcebook (1987), edited by Louise S. Grinstein and Paul J. Campbell, 194-195] *VFR 1896 Henri Becquerel read a report to the French Academy of Sciences of his investigation of the phosphorescent rays of some “double sulfate of uranium and potassium” crystals. He reported that he placed the crystals on the outside of a photographic plate wrapped in sheets of very thick black paper and exposed the whole to the sun for several hours. When he developed the photographic plate, he saw a black silhouette of the substance exposed on the negative. When he placed a coin or metal screen between the uranium crystals and the wrapped plate, he saw images of those objects on the negative. He did not yet know yet that the sun is not necessary to initiate the rays, nor did he yet realize that he had accidentally discovered radioactivity. He would learn more from a further accidental discovery on 26 Feb 1896.*TIS 1920 As part of the National Education Association’s annual meeting, 127 mathematics teachers from 20 states met in Cleveland, Ohio, for the “purpose of organizing a National Council of Mathematics Teachers.” *VFR 1931, the Fields Medal was established to recognize outstanding contributions to mathematics. It was conceived since there was no Nobel Prize for mathematicians. Although John Charles Fields probably thought of the medal at some earlier time, the first recorded mention of it was made on 24 Feb 1931 in minutes of a committee meeting. He was chairman of the Committee of the International Congress which had been set up by the University of Toronto to organize the 1924 Congress in Toronto. After the event, Fields proposed that income of $2,500 remaining from that convention would be designated for two medals to be awarded at future International Mathematical Congresses. In 1936, the first awards were made in Oslo.*TIS In 1968, Nature carried the announcement of the discovery of a pulsar (a pulsating radio source). The first pulsar was discovered by a graduate student, Jocelyn Bell, on 28 Nov 1967, then working under the direction of Prof. Anthony Hewish. The star emitted radio pulses with clock-like precision. 
It was observed at the Mullard Radio Astronomy Observatory, Cambridge University, England. A special radio telescope, was used with 2,048 antennae arrayed across 4.4 acres. Pulsars prompted studies in quantum-degenerate fluids, relativistic gravity and interstellar magnetic fields. *TIS [Before the nature of the signal was determined, the researchers, Bell and her Ph.D supervisor Antony Hewish, somewhat seriously considered the possibility of extraterrestrial life, "We did not really believe that we had picked up signals from another civilization, but obviously the idea had crossed our minds and we had no proof that it was an entirely natural radio emission. It is an interesting problem - if one thinks one may have detected life elsewhere in the universe how does one announce the results responsibly? Who does one tell first?" The observation was given the half-humorous designation Little green men 1, until researchers Thomas Gold and Fred Hoyle correctly identified these signals as rapidly rotating neutron stars with strong magnetic fields.] Read the details in her own words here. 2009 Comet Lulin, a non-periodic comet, makes its closest approach to Earth, peaking in brightness between magnitude +4 and magnitude +6. *Wik 1663 Thomas Newcomen (24 Feb 1663 (Newcomen was baptised OTD unfortunately there is no mention of his birth date in the baptism record); 5 Aug 1729 at age 66) English engineer and inventor of the the world's first successful atmospheric steam engine. His invention of c.1711 came into use by 1725 to pump water out of coal mines or raise water to power water-wheels. On each stroke, steam filled a cylinder closed by a piston, then a spray of water chilled and condensed the steam in the cylinder creating a vacuum, then atmospheric pressure pushed the piston down. A crossbeam transferred the motion of the piston to operating the pump. This was wasteful of fuel needed to reheat the cylinder for the next stroke. Despite being slow and inefficient, Newcomen's engine was relied on for the first 60 years of the new steam age it began, perhaps the single most important invention of the Industrial Revolution. *TIS 1709 Jacques de Vaucanson (24 Feb 1709; 21 Nov 1782 at age 73) French inventor of automata - robot devices of later significance for modern industry. In 1737-38, he produced a transverse flute player, a pipe and tabor player, and a mechanical duck, which was especially noteworthy, not only imitating the motions of a live duck, but also the motions of drinking, eating, and "digesting." He made improvements in the mechanization of silk weaving, but his most important invention was ignored for several decades - that of automating the loom by means of perforated cards that guided hooks connected to the warp yarns. (Later reconstructed and improved by J.-M. Jacquard, it became one of the most important inventions of the Industrial Revolution.) He also invented many machine tools of permanent importance. *TIS 1804 Heinrich Friedrich Emil Lenz (24 Feb 1804, 10 Feb 1865 at age 61) was the Russian physicist who framed Lenz's Law to describe the direction of flow of electric current generated by a wire moving through a magnetic field. Lenz worked on electrical conduction and electromagnetism. In 1833 he reported investigations into the way electrical resistance changes with temperature, showing that an increase in temperature increases the resistance (for a metal). He is best-known for Lenz's law, which he discovered in 1834 while investigating magnetic induction. 
It states that the current induced by a change flows so as to oppose the effect producing the change. Lenz's law is a consequence of the, more general, law of conservation of energy. *TIS 1868 James Ireland Craig (24 Feb 1868 in Buckhaven, Fife, Scotland - 26 Jan 1952 in Cairo, Egypt) graduated from Edinburgh and Cambridge. He taught at Eton and Winchester and then went to work on the Nile Survey for the Egyptian government. He made some significant inventions in map projections. He was killed when a mob attacked the Turf Club in Cairo.*SAU 1878 Felix Bernstein born. In 1895 or 1896, while still a Gymnasium student, he volunteered to read the proofs of a paper of Georg Cantor on set theory. In the process of doing this the idea came to him one morning while shaving of how to prove what is now called the Cantor/Bernstein theorem: If each of two sets is equivalent to a subset of the other, then they are equivalent. *VFR He also worked on transfinite ordinal numbers.*SAU 1909 Max Black (24 February 1909, 27 August 1988) was a British-American philosopher and a leading influence in analytic philosophy in the first half of the twentieth century. He made contributions to the philosophy of language, the philosophy of mathematics and science, and the philosophy of art, also publishing studies of the work of philosophers such as Frege. His translation (with Peter Geach) of Frege's published philosophical writing is a classic text. *Wik 1920 K C Sreedharan Pillai (1920–1985) was an Indian statistician who was known for his works on multivariate analysis and probability distributions. Pillai was honoured by being elected a Fellow of the American Statistical Association and a Fellow of the Institute of Mathematical Statistics. He was an elected member of the International Statistical Institute. *Wik Perhaps his best known contribution is the widely used multivariate analysis of variance test which bears his name.*SAU 1946 Gregori Aleksandrovich Margulis (24 Feb 1946 - )Russian mathematician who was awarded the Fields Medal in 1978 for his contributions to the theory of Lie groups, though he was not allowed by the Soviet government to travel to Finland to receive the award. In 1990 Margulis immigrated to the United States. Margulis' work was largely involved in solving a number of problems in the theory of Lie groups. In particular, Margulis proved a long-standing conjecture by Atle Selberg concerning discrete subgroups of semisimple Lie groups. The techniques he used in his work were drawn from combinatorics, ergodic theory, dynamical systems, and differential geometry.*TIS The napkin folding problem is a problem in geometry and the mathematics of paper folding that explores whether folding a square or a rectangular napkin can increase its perimeter. The problem is known under several names, including the Margulis napkin problem, suggesting it is due to Grigory Margulis *Wik 1955 Steven Paul Jobs (24 Feb 1955; 5 Oct 2011 at age 56) U S inventor and entrepreneur who, in 1976, co-founded Apple Inc. with Steve Wozniak to manufacture personal computers. During his life he was issued or applied for 338 patents as either inventor or co-inventor of not only applications in computers, portable electronic devices and user interfaces, but also a number of others in a range of technologies. From the outset, he was active in all aspects of the Apple company, designing, developing and marketing. 
After the initial success of the Apple II series of personal computers, the Macintosh superseded it with a mouse-driven graphical interface. Jobs kept Apple at the forefront of innovative, functional, user-friendly designs with new products including the iPad tablet and iPhone. Jobs was also involved with computer graphics movies through his purchase (1986) of the company that became Pixar *TIS 1967 Brian Paul Schmidt AC, FRS (February 24, 1967, ) is a Distinguished Professor, Australian Research Council Laureate Fellow and astrophysicist at The Australian National University Mount Stromlo Observatory and Research School of Astronomy and Astrophysics and is known for his research in using supernovae as cosmological probes. He currently holds an Australia Research Council Federation Fellowship and was elected to the Royal Society in 2012.[2] Schmidt shared both the 2006 Shaw Prize in Astronomy and the 2011 Nobel Prize in Physics with Saul Perlmutter and Adam Riess for providing evidence that the expansion of the universe is accelerating. *Wik 1728 Charles René Reyneau (11 June 1656 in Brissac, Maine-et-Loire, France - 24 Feb 1728 in Paris, France) was a French mathematician who published an influential textbook on the newly invented calculus.*SAU (He) "undertook to reduce into one body, for the use of his scholars, the principal theories scattered here and there in Newton, Descartes, Leibnitz, Bernoulli, the Leipsic Acts, the Memoirs of the Paris Academy, and in other works; treasures which by being so widely dispersed, proved much less useful than they otherwise might have been. The fruit of this undertaking, was his “Analyse Demontree,” or Analysis Demonstrated, which he published in 1708. He gave it the name of “Analysis Demonstrated,” because he demonstrates in it several methods which had not been handled by the authors of them, with sufficient perspicuity and exactness. The book was so well approved, that it soon became a maxim, at least in France, that to follow him was the best, if not the only way, to make any extraordinary progress in the mathematics and he was considered as the first master, as the Euclid of the sublime geometry." (From the 1812 Chalmer's Biography, vol. 26, p. 151) 1799 Georg Christoph Lichtenberg (1 Jul 1742, 24 Feb 1799 at age 56). German physicist and satirical writer, best known for his aphorisms and his ridicule of metaphysical and romantic excesses. At Göttingen University, Lichtenberg did research in a wide variety of fields, including geophysics, volcanology, meteorology, chemistry, astronomy, and mathematics. His most important were his investigations into physics. Notably, he constructed a huge electrophorus and, in the course of experimentations, discovered in 1777 the basic principle of modern xerographic copying; the images that he reproduced are still called "Lichtenberg figures." These are radial patterns formed when sharp, pointed conducting bodies at high voltage get near enough to insulators to discharge electrically, or seen on persons struck by lightning. *TIS 1810 Henry Cavendish (10 Oct 1731; 24 Feb 1810) English chemist and physicist who conducted experiments with diverse interests in his private laboratory. Most notably, he determined the mass and density of the Earth. He investigated the properties of hydrogen and carbon dioxide, including comparing their density to that of air. Cavendish also showed that water was a compound and measured the specific heat of various substances. 
His manuscripts (published 1879) revealed discoveries he made in electrostatics before Coulomb, Ohm and Faraday - including deducing the inverse square law of electrostatic attraction and repulsion. He also found specific inductive capacity. His family name is attached to the Cavendish Laboratory (founded 1871, funded by a later family member) at Cambridge University. *TIS Cavendish was supposedly so shy that for his only portrait the artist painted his coat from a hook in the hall, then painted Cavendish body from memory. *"Shock and Awe", BBC broadcast on the history of electricity 1812 Étienne-Louis Malus (23 Jun 1775, 24 Feb 1812 at age 36) He served in Napoleon's corps of engineers, fought in Egypt, and contracted the plague during Napoleon's aborted campaign in Palestine. Posted to Europe after 1801, he began research in optics. In 1808, he discovered that light rays may be polarized by reflection, while looking through a crystal of Iceland spar at the windows of a building reflecting the rays of the Sun. He noticed that on rotating the crystal the light was extinguished in certain positions. Applying corpuscular theory, he argued that light particles have sides or poles and coined the word "polarization." *TIS He studied geometric systems called ray systems, closely connected to Julius Plücker's line geometry. He conducted experiments to verify Christiaan Huygens' theories of light and rewrote the theory in analytical form. His discovery of the polarization of light by reflection was published in 1809 and his theory of double refraction of light in crystals, in 1810. Malus attempted to identify the relationship between the polarising angle of reflection that he had discovered, and the refractive index of the reflecting material. While he deduced the correct relation for water, he was unable to do so for glasses due to the low quality of materials available to him (most glasses at that time showing a variation in refractive index between the surface and the interior of the glass). It was not until 1815 that Sir David Brewster was able to experiment with higher quality glasses and correctly formulate what is known as Brewster's law. Malus is probably best remembered for Malus' law, giving the resultant intensity, when a polariser is placed in the path of an incident beam. His name is one of the 72 names inscribed on the Eiffel 1844 Antoine-André-Louis Reynaud (12 Sept 1771, 24 Feb 1844) Reynaud published a number of extremely influential textbooks. He published a mathematics manual for surveyors as well as Traité d'algèbre, Trigonométrie rectiligne et sphérique, Théorèmes et problèmes de géométrie and Traité de statistique. His best known texts, however, were his editions of Bézout's Traité d'arithmétique which appeared in at least 26 versions containing much original work by Reynaud. It appears that Reynaud became interested in algorithms when he was working with de Prony. At this time de Prony was very much involved in trying to get his logarithmic and trigonometric tables published and it seems to have made Reynaud think about analysing algorithms. Certainly Reynaud, although his results in this area were rather trivial, must get the credit for being one of the first people to give an explicit analysis of an algorithm, an area of mathematics which is of major importance today. 
*SAU 1856 Nikolai Ivanovich Lobachevsky (December 1, 1792 – February 24, 1856 (N.S.); November 20, 1792 – February 12, 1856 (O.S.)) was a Russian mathematician and geometer, renowned primarily for his pioneering works on hyperbolic geometry, otherwise known as Lobachevskian geometry. William Kingdon Clifford called Lobachevsky the "Copernicus of Geometry" due to the revolutionary character of his work. *Wik A yahoo recording of the classic Tom Lehrer song about Lobachevsky is here with lyrics. Lehrer has stated there is no accusation of Lobachevsky plagiarizing anything, and his name was chosen for the rhythmic characteristics. 1871 Julius Ludwig Weisbach (10 August 1806 in Mittelschmiedeberg (now Mildenau Municipality), Erzgebirge, 24 February 1871, Freiberg) was a German mathematician and engineer. He studied with Carl Friedrich Gauss in Göttingen and with Friedrich Mohs in Vienna. He wrote an influential book for mechanical engineering students, called Lehrbuch der Ingenieur- und Maschinenmechanik, which has been expanded and reprinted on numerous occasions between 1845 and 1863. *Wik He wrote fourteen books and 59 papers he wrote on mechanics, hydraulics, surveying, and mathematics. It is in hydraulics that his work was most influential, with his books on the topic continuing to be of importance well into the 20th century. *SAU 1923 Edward Williams Morley (29 Jan 1838; 24 Feb 1923) American chemist who is best known for his collaboration with the physicist A.A. Michelson in an attempt to measure the relative motion of the Earth through a hypothetical ether (1887). He also studied the variations of atmospheric oxygen content. He specialized in accurate quantitative measurements, such as those of the vapor tension of mercury, thermal expansion of gases, or the combining weights of hydrogen and oxygen. Morley assisted Michelson in the latter's persuit of measurements of the greatest possible accuracy to detect a difference in the speed of light through an omnipresent ether. Yet the ether could not be detected and the physicists had seriously to consider that the ether did not exist, even questioning much orthodox physical theory. *TIS 1933 Eugenio Bertini (8 Nov 1846 in Forli, Italy - 24 Feb 1933 in Pisa, Italy) was an Italian mathematician who worked in projective and algebraic geometry. His work in algebraic geometry extended Cremona's work. He studied geometrical properties invariant under Cremona transformations and used the theory to resolve the singularities of a curve. A paper by Kleiman studies what the authors calls the two fundamental theorems of Bertini. These two fundamental theorems are among the ones most used in algebraic geometry. The first theorem is a statement about singular points of members of a pencil of hypersurfaces in an algebraic variety. The second theorem is about the irreducibility of a general member of a linear system of hypersurfaces. *SAU 2001 Claude Shannon (30 April 1916 in Petoskey, Michigan, USA - 24 Feb 2001 in Medford, Massachusetts, USA) founded the subject of information theory and he proposed a linear schematic model of a communications system. His Master's thesis was on A Symbolic Analysis of Relay and Switching Circuits on the use of Boole's algebra to analyse and optimise relay switching circuits. *SAU While working with John von Neumann on early computer designs, (John) Tukey introduced the word "bit" as a contraction of "binary digit". The term "bit" was first used in an article by Claude Shannon in 1948. 
Among several statues to Shannon, one is erected in his hometown of Gaylord, Michigan. The statue is located in Shannon Park in the center of downtown Gaylord, which was Shannon's boyhood home. Shannon Park is the former site of the Shannon Building, built and owned by Claude Shannon's father. Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
{"url":"https://pballew.blogspot.com/2017/02/","timestamp":"2024-11-08T21:49:10Z","content_type":"application/xhtml+xml","content_length":"253853","record_id":"<urn:uuid:42ba8797-a940-468c-afd8-9de9b70c4c2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00017.warc.gz"}
Category: Telecommunication

• Does SGD perform better than GD? Read more to see its effect on the MNIST dataset.
• Wanted to implement SGD on MNIST? Read more to find out!
• This article explains the representation of the lowpass equivalent of a channel over its bandpass equivalent.
• This article explains the mathematical relationship of up-conversion and down-conversion used in Software-Defined Radios.
• This article describes the mathematical relationship between bandpass and baseband signals.
• Earlier we discussed the introduction of a continuous random variable and cumulative distribution...
• The problem with discrete random variables was that they can't be used for continuous data. In...
• A few practice problems related to probability theory are given here. These problems are adapted...
• Random variables are one of the foundational topics in understanding probability. Read more to find out!
• In probability theory, we came across some problems in which it is difficult to compute...
• This article is in continuation of the previous article: Introduction to Probability. As we defined...
• These problems and proofs are adapted from the textbook: Probability and Random Process by Scott... 
{"url":"https://bravelearn.com/category/telecommunication/","timestamp":"2024-11-04T21:42:36Z","content_type":"text/html","content_length":"109962","record_id":"<urn:uuid:bd4a9f88-c566-4572-abd1-9a50b64d30e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00632.warc.gz"}
formula for the surface area of a sphere (idea)

The volume of a sphere can be approximated by a number of pyramids with n-gonal bases that tile the sphere's surface, each with its apex at the center of the sphere; the approximation becomes exact as the number of pyramids grows and their bases shrink. The volume of the sphere is then

V = n×(1/3)br

where n is the number of pyramids, b is the area of one pyramid's base, and r is the radius of the sphere (each pyramid's height approaches r). This equation can be rearranged to read:

V = n×b×(1/3)r

But what is n×b equal to? The surface area of the sphere! Thus, we can write:

V = SA×(1/3)r

where SA is the surface area. Now it's time to start solving. Substitute V = (4/3)πr³ and divide both sides by r:

(4/3)πr³ = SA×(1/3)r
(4/3)πr² = (1/3)SA
SA = 4πr²

Of course, if you know the calculus, or are a smartass, or both, then you could just show that dV/dr = SA, and ∫SA dr = V. But where's the fun in that? 
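As a quick sanity check of the pyramid argument above, the same relationships can be verified symbolically. The sketch below is not part of the original write-up; it simply uses Python with the SymPy library (my choice, not the author's) to confirm that dV/dr equals the surface area and that integrating 4πr² from 0 to R recovers the volume.

    import sympy as sp

    r, R = sp.symbols('r R', positive=True)

    V = sp.Rational(4, 3) * sp.pi * R**3   # volume of a sphere of radius R
    SA = 4 * sp.pi * R**2                  # claimed surface area

    # dV/dR should equal the surface area
    assert sp.simplify(sp.diff(V, R) - SA) == 0

    # integrating 4*pi*r^2 from 0 to R should recover the volume
    assert sp.simplify(sp.integrate(4 * sp.pi * r**2, (r, 0, R)) - V) == 0

    print("dV/dR =", sp.diff(V, R))   # prints 4*pi*R**2

Both assertions pass, matching the calculus shortcut mentioned at the end of the derivation.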
{"url":"https://everything2.com/user/DrSeudo/writeups/formula+for+the+surface+area+of+a+sphere","timestamp":"2024-11-15T01:08:30Z","content_type":"text/html","content_length":"28581","record_id":"<urn:uuid:4f8f9d0d-bda3-428c-884b-f54f41d7a86c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00744.warc.gz"}
AW: [Mono-list] Maths gabor gabor@z10n.net Tue, 10 Feb 2004 09:56:31 +0100 On Tue, 2004-02-10 at 09:36, Jochen Wezel wrote: > Well, I hadn't imagined that there are several rounding standards. > Here in Germany I only learned to round up each something.5 to the next integer at school. > Now, I've seen the bank rounding which round up and down. > But what I want to do is not bank rounding but scientific rounding: always round up every .5 to the next greater integer. Does anybody know how to do that? System.Math.Round doesn't support any flags to set up the rounding standard :( hmm, i'm not a dotnet expert, but this is usually solved by adding 0.5 to it and truncating it to an integer. something like: Convert.ToInt32( x + 0.5) p.s: pay attention to negative numbers. i'm not sure what behaviour you want for negative numbers.
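The add-0.5-and-truncate trick suggested in the reply, and its negative-number caveat, are easy to see in any language. The sketch below uses Python purely to illustrate the arithmetic; it is not the .NET API the thread is about, and the function names are my own. floor(x + 0.5) rounds every .5 upward (toward +infinity), while the second variant rounds .5 away from zero, which may be what is actually wanted for negative inputs.

    import math

    def round_half_up(x):
        # always rounds .5 toward +infinity: 2.5 -> 3, but -2.5 -> -2
        return math.floor(x + 0.5)

    def round_half_away_from_zero(x):
        # rounds .5 away from zero: 2.5 -> 3 and -2.5 -> -3
        return math.copysign(math.floor(abs(x) + 0.5), x)

    for x in (2.5, 3.5, -2.5, -3.5):
        print(x, round(x), round_half_up(x), round_half_away_from_zero(x))
    # Python's built-in round() uses banker's rounding: round(2.5) == 2, round(3.5) == 4

Comparing the three columns for the negative inputs makes the point of the "p.s." concrete: the behaviour you get depends on which rounding rule you meant.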
{"url":"https://mono.github.io/mail-archives/mono-list/2004-February/018285.html","timestamp":"2024-11-09T19:10:23Z","content_type":"text/html","content_length":"3219","record_id":"<urn:uuid:6aead4fa-83df-4ed3-b56e-a684d25d6acc>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00715.warc.gz"}
Mathematical Sciences Research Institute

Phi-gamma modules and p-adic Hodge theory
August 22, 2014 (03:30 PM PDT - 04:30 PM PDT)
Speaker(s): Gabriel Dospinescu (École Normale Supérieure de Lyon)
Location: SLMath: Eisenbud Auditorium

This series of lectures, which builds on Jared Weinstein's talks, will be a light introduction to the theory of phi-gamma modules and their interactions with p-adic Hodge theory. We will discuss Fontaine's equivalence of categories, give examples of phi-gamma modules, and present Berger's fundamental results which link phi-gamma modules and p-adic Hodge theory. Depending on time, we may say a few words about the applications of phi-gamma modules to Galois cohomology and to the p-adic Langlands correspondence for GL_2(Q_p). 
{"url":"https://legacy.slmath.org/workshops/710/schedules/18773","timestamp":"2024-11-13T08:37:54Z","content_type":"text/html","content_length":"37685","record_id":"<urn:uuid:b2305fa5-8621-4227-b3de-996ffcfb3b82>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00306.warc.gz"}
Modern Farmhouse 14 | Filming and Photography Location | Valley Village Modern Farmhouse Located Near Studio City Nestled in the heart of Valley Village, this home combines modern farmhouse elegance with a sleek, minimalistic design. The exterior showcases a gable roof, expansive large windows, and neutral tones that exude contemporary sophistication. Enjoy the great room that serves as the centerpiece of the home. The open-concept layout features a stunning kitchen on one end and a spacious living room on the other, all connected by a large sliding glass door that invites natural light and opens to a private backyard oasis with a pool, and an attached jacuzzi.
{"url":"https://imagelocations.com/modern-farmhouse-14","timestamp":"2024-11-03T03:37:33Z","content_type":"text/html","content_length":"396176","record_id":"<urn:uuid:b37075e0-cd9f-48ab-953e-0643f4225759>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00583.warc.gz"}
COMPONENTS OF POWER SYSTEM MODEL

To understand the components of power system models, we will first discuss the power system itself. A power system is defined as the system whose goal is to generate electric power and to transmit it through the transmission system to the electric power user. This electric power is provided in two different forms:
• Alternating current (AC)
• Direct current (DC)
where the voltages and currents are functions of the time variable t.

Two assumptions are made for the power flow model:
• All quantities such as power, current and voltage are assumed to have sinusoidal waveforms of constant amplitude; such behaviour is called steady-state power system operation.
• Symmetric operation of the three-phase power system is assumed.

The main components of the power system that should be modeled, under the assumption of predicted loads and under steady-state, symmetric operation, are the following:
• Overhead transmission lines
• Underground cables
• Transformers
• Shunt elements

All of the passive power system transmission elements are modelled as two-port mathematical elements located between electrical nodes i and j (shunt elements are associated with only one electrical node i). The power system model is therefore composed of many passive elements situated between nodes i and j, giving a network of branch elements. Some properties of this passive network are given below.

Each electrical node is connected through passive elements to an average of about 2 or 3 other nodes, so the resulting matrix is very sparse: a matrix element (i, j) is non-zero only if there is a connection between nodes i and j (a small numerical sketch of this sparsity is given after the article). Because the power system is divided into many control areas, each operated by an electric utility, each utility can model its own control area with high accuracy. To reduce the size and complexity of the power system model, the individual utilities split it into sub-models; this splitting is possible at the lower high-voltage levels, i.e. below 60 kV. All of the split ports are connected through a transformer, and hence the splitting does not lead to very high modelling inaccuracies.

1 - Define power system.
Ans - A power system is defined as the system whose goal is to generate electric power and to transmit it through the transmission system to the electric power user.

2 - Write the different ways in which electric power is provided.
Ans - Electric power is provided in two different forms:
• Alternating current (AC)
• Direct current (DC)
where the voltages and currents are functions of the time variable t.

3 - What are the assumptions for power system models?
Ans - There are two assumptions:
• All quantities such as power, current and voltage are assumed to have sinusoidal waveforms of constant amplitude; such behaviour is called steady-state power system operation.
• Symmetric operation of the three-phase power system is assumed.

4 - Write down the components of the power system model.
Ans - The components of the power system model are as follows:
• Overhead transmission lines
• Underground cables
• Transformers
• Shunt elements

5 - Describe power system modelling.
Ans - All of the passive power system transmission elements are modeled as two-port mathematical elements located between electrical nodes i and j (shunt elements are associated with only one electrical node i). The power system model is therefore composed of many passive elements situated between nodes i and j, giving a network of branch elements. Each electrical node is connected through passive elements to an average of about 2 or 3 other nodes, so the resulting matrix is very sparse: a matrix element (i, j) is non-zero only if there is a connection between nodes i and j. Because the power system is divided into many control areas, each operated by an electric utility, each utility can model its own control area with high accuracy. To reduce the size and complexity of the power system model, the individual utilities split it into sub-models; this splitting is possible at the lower high-voltage levels, i.e. below 60 kV. All of the split ports are connected through a transformer, and hence the splitting does not lead to very high modelling inaccuracies.

Tell us Your Queries, Suggestions and Feedback

One Response to COMPONENTS OF POWER SYSTEM MODEL
1. The article is about an electric power system, which is a network of electrical components used to supply, transmit and use electric power. An example of an electric power system is the network that supplies a region’s homes and industry with power – for sizable regions, this power system is known as the grid and can be broadly divided into the generators that supply the power, the transmission system that carries the power from the generating centres to the load centres and the distribution system that feeds the power to nearby homes and industries.
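Returning to the sparsity property described in the article above: because each bus (node) is connected by branch elements to only a few neighbours, the bus-to-bus matrix of the network is mostly zeros. The following minimal Python sketch uses an invented six-bus example — the branch list and admittance values are made up for illustration, and complex impedances and shunt elements are ignored — to show how that connectivity pattern translates into a sparse matrix.

    import numpy as np

    n_bus = 6
    # (from_bus, to_bus, series admittance magnitude) for each branch element (invented data)
    branches = [(0, 1, 4.0), (1, 2, 5.0), (1, 3, 2.5), (3, 4, 3.0), (4, 5, 4.5), (2, 5, 1.5)]

    Y = np.zeros((n_bus, n_bus))
    for i, j, y in branches:
        Y[i, j] -= y          # off-diagonal entry exists only if a branch connects buses i and j
        Y[j, i] -= y
        Y[i, i] += y          # diagonal entry collects everything attached to bus i
        Y[j, j] += y

    nonzero = np.count_nonzero(Y)
    print(f"{nonzero} of {n_bus * n_bus} entries are non-zero ({nonzero / (n_bus * n_bus):.0%})")
    # With only 2-3 branches per bus, the fraction of non-zero entries shrinks rapidly
    # as the number of buses grows, which is why sparse storage is used in practice.

This is only a connectivity sketch, not a power flow model; a real bus admittance matrix would use complex admittances and include shunt and transformer models.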
{"url":"https://blog.oureducation.in/components-of-power-system-model/","timestamp":"2024-11-10T12:26:41Z","content_type":"text/html","content_length":"75952","record_id":"<urn:uuid:a1e868aa-5bdc-4cff-8e36-10adf0586bd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00045.warc.gz"}
What are FIR filters? In signal processing, filtering is an essential process that removes unwanted components from a signal. One special class of filters are finite impulse response (FIR) filters, which we will discuss in this post in more detail. After a short description of digital filters in general, the structure and function of FIR filters will be discussed. Finally, we give an overview of the implementation of FIR filters in our measurement software OXYGEN. What are digital filters in general? A digital filter is a mathematical algorithm for manipulating a signal to extract information and remove unwanted information, such as blocking or passing a certain frequency range. Thus, it is a digital system that converts an input sequence with a transformation process into an output sequence. There are various different classes of filters. However, based on the length of their impulse response, we can categorize digital filters into the following: • Infinite impulse response (IIR) • Finite impulse response (FIR) Unlike analog filters, which are implemented with electronic components such as capacitors, coils, resistors, etc., digital filters are implemented with logic devices such as ASICs, FPGAs or in the form of a sequential program with a signal processor. What is the difference between IIR and FIR filters? In general, IIR and FIR filters differ in their response of a filter to an input impulse. If the impulse response of the filter drops to zero after a finite time has elapsed, it is referred to as an FIR filter (Finite Impulse Response). On the other hand, if the impulse response is unlimited in time, it is an IIR filter (Infinite Impulse Response). Whether the impulse response of a digital filter drops to zero after a finite time depends on how the output values are calculated. With FIR filters, the output values depend only on the current and preceding input values, while with IIR filters the output values depend additionally on the preceding output values. The advantage of IIR filters over FIR filters is that IIR filters typically require fewer coefficients to perform comparable filtering operations, operate faster, and require less RAM. However, a big disadvantage of IIR filters is their non-linear phase response. For applications that do not require phase information, such as monitoring signal amplitude, IIR filters are well suited. But, for applications that require a linear phase response, FIR filters are generally better suited. How do FIR filters work? Fig. 1 demonstrates the functional operation of an FIR filter. At the input, the data/values x(n) are applied by the A/D converter clock by clock (sample by sample). In the upper row, there are shift elements (z^-1) which shift the data/values applied to the input by one step for each clock cycle. This means that at the end of the following example, the value x (n-3) is three clocks prior to the current value x(n). In the center are the FIR coefficients k[0] – k[m]. These coefficients represent an amplifier that multiplies the input value by the gain „k“. The bottom row is the summation branch which adds up the results of all multiplications (integration). The output y(n) is now the processed signal according to the FIR coefficients and can be represented by the following mathematical expression: $$ y(n)=\sum\limits_{i = 0}^m k(i)*x(n-i) $$ Fig. 1: Calculation process of a FIR filter In our whitepaper, we further present a detailed example of how to determine the filter coefficients of an FIR low-pass filter. 
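The summation above translates directly into code. Below is a minimal, self-contained C sketch of the direct-form structure of Fig. 1; the moving-average coefficients are arbitrary placeholders, not coefficients produced by OXYGEN or taken from the whitepaper.

```c
#include <stdio.h>

#define TAPS 4   /* m + 1 coefficients */

/* One output sample of a direct-form FIR filter:
 *   y(n) = sum_{i=0}^{m} k[i] * x(n - i)
 * state[0..TAPS-2] holds x(n-1) ... x(n-m), i.e. the z^-1 delay line of Fig. 1. */
static double fir_step(const double k[TAPS], double state[TAPS - 1], double x_n)
{
    double y = k[0] * x_n;
    for (int i = 1; i < TAPS; ++i)
        y += k[i] * state[i - 1];        /* multiply-accumulate over delayed samples */

    for (int i = TAPS - 2; i > 0; --i)   /* shift the delay line by one sample */
        state[i] = state[i - 1];
    state[0] = x_n;

    return y;
}

int main(void)
{
    /* Placeholder coefficients: a 4-tap moving average (a crude low pass). */
    const double k[TAPS] = { 0.25, 0.25, 0.25, 0.25 };
    double state[TAPS - 1] = { 0.0, 0.0, 0.0 };

    /* Feed a unit step and print the response sample by sample. */
    for (int n = 0; n < 8; ++n)
        printf("y(%d) = %f\n", n, fir_step(k, state, 1.0));
    return 0;
}
```

Since each output depends only on the current and the m preceding inputs, the impulse response is zero after m + 1 samples, which is exactly the finite impulse response property discussed above.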
FIR filters in OXYGEN OXYGEN is our intuitive test and measurement software. It is an all-in-one software for measurement, visualization, and analysis for various applications. Therefore, it includes a huge variety of features, among others FIR filters. It is an easy-to-use tool, which allows you to choose between four different filter types: • Low pass • High pass • Bandpass • Band stop Once selected, simply enter the filter length, the desired window function, and whether or not you want to compensate for signal delay, and you’re ready to go. For a more detailed guide on how to set up an FIR filter in OXYGEN see our whitepaper. In a nutshell Digital filters are mathematical algorithms for manipulating signals to extract and/or remove unwanted information. FIR filters are a subclass whose impulse response is of finite length, as it settles to zero in finite time. In comparison to IIR filters, FIR filters are fundamentally more stable and can be designed to have a linear phase.
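For readers who want to see how such coefficients can be produced outside of a measurement package, here is a generic, textbook-style C sketch of one common design method (windowed-sinc low pass with a Hamming window). It is an illustration only and is not necessarily the method used by OXYGEN or by the whitepaper mentioned above.

```c
#include <stdio.h>
#include <math.h>

#define N 21   /* filter length (number of taps), odd for symmetry */

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double fc = 0.1;   /* cutoff as a fraction of the sample rate */
    double h[N], sum = 0.0;

    for (int n = 0; n < N; ++n) {
        double m = n - (N - 1) / 2.0;
        /* ideal low-pass impulse response (sinc), taking the limit at m = 0 */
        double ideal = (m == 0.0) ? 2.0 * fc : sin(2.0 * pi * fc * m) / (pi * m);
        /* Hamming window to truncate the infinite sinc gracefully */
        double w = 0.54 - 0.46 * cos(2.0 * pi * n / (N - 1));
        h[n] = ideal * w;
        sum += h[n];
    }
    for (int n = 0; n < N; ++n) {
        h[n] /= sum;                        /* normalise for unity gain at DC */
        printf("k[%2d] = %+f\n", n, h[n]);
    }
    return 0;
}
```

Feeding these k[n] values into the fir_step() routine sketched earlier gives a working low-pass filter with the linear-phase property noted in the summary above.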
{"url":"https://www.dewetron.com/2023/07/fir-filter-whitepaper/","timestamp":"2024-11-02T09:24:37Z","content_type":"text/html","content_length":"168040","record_id":"<urn:uuid:4d11f9a7-e2a4-47f7-bb89-91315967a4f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00202.warc.gz"}
Bar Bending Schedule for Tie Beams/Strap Beams | BBS of Tie Beam Bar Bending Schedule for Tie Beam or Strap Beam | Steel Quantity of Tie Beam | Estimation of Reinforcement in Strap Beam In this Article today we will talk about the Bar Bending Schedule for Tie Beam or Strap Beam | Steel Quantity for Tie Beam | Estimation of Reinforcement in Strap Beam | BBS of Tie Beam | Steel Estimation for Tie Beam What is bar bending Schedule ? Bar Bending Schedule (BBS) is basically the representation of bend shapes and cut length of bars as per structure drawings. BBS is prepared from construction drawings. For each member separate BBS is prepared because bars are bended in various shapes depending on the shape of member. Bar Bending Schedule for Tie Beam / Strap Beam: Bar Bending Schedule Plays a vital role in finding the quantities of the reinforcement required for the building. Well, In order to understand the tie beam/Strap beam reinforcement in Substructure, I refer you to learn the Bar Bending Schedule for footings. Tie Beam and Strap Beam: Tie Beam (Straight beam) is a beam which connects the two footings in the substructure. Tie beam is provided when the two footings are in the same line. Strap Beam (inclined beam) is similar to tie beam but it connects two footings at a certain angle. Strap beam is laid when two footings are in different levels. Tie beam/ Strap beam are specifically located between pile caps and shallow foundations. their primary function is to force all shallow foundations or pile caps to have approximately the same settlements. Quantity of reinforcement (steel) required for Tie Beam/ Strap beams or Bar bending schedule for Tie Beam/ Strap Beam: In this post, I am finding out the Estimation of Steel reinforcement in Tie Beam or Strap Beam / Bar Bending Schedule for Tie Beam/ Strap beam. For this, I considered a plan as shown below. The horizontal bars which ties one footing to the other footing are main bars and the vertical bars are called stirrups. Stirrups helps in framing the main bars in correct position. Before getting into this article I recommend you to remember these Important Points to understand the reinforcement in tie beams: 1. Main Bars (Top bar, Bottom bar, Side bar) are tied to the center the of one footing to the center of another footing. 2. Whereas stirrups starts from one face of footing to the another face of footing. Refer the below image for clear view how reinforcement is tied in Tie beam. Steps to be followed while finding out the total wt. of steel required for constructing Tie beams / Strap beams: Tie beam reinforcement calculation is divided into two parts Main bars and stirrups. Part-I:- Main Bars 1. Check the Length of Main bars in top, bottom, side bars. 2. Then Check the No. of Main bars in top, bottom, side bars 3. Check the Diameter of Main bars in top, bottom, side bars 4. Calculate the total length of Main bars in top, bottom and side direction. 5. Find the total wt of Main bars. BBS of Tie Beam Part-II:- Stirrups 1. Deduct the concrete cover from all sides of tie and find out the length of stirrup. 2. Calculate the length of stirrup including hook. 3. Calculate the total no. of stirrups. 4. Find the total length of stirrups 5. Then Calculate the total wt. of stirrups. Length, Dia and No. of Bars are adopted and designed by the structural engineer by executing the load analysis. 
Consider the figure shown below. Assume for the calculation: dia of top bars = 10 mm, dia of bottom bars = 10 mm, dia of side bars = 8 mm, dia of stirrups = 6 mm, spacing between ties = 0.1 m; no. of top main bars = 4, no. of bottom main bars = 4, no. of side main bars = 2.

Calculation for the quantity of tie beams (main bars): Hyp^2 = Adj^2 + Opp^2, where the hypotenuse is the length of the strap beam. From the figure, Part I: calculate the total weight required for the main bars. Apply the above method to all the tie beams on the horizontal and the vertical axes; the results for all the tie beams have been entered in the table below.

Calculation for the quantity of tie beams (stirrups): As already mentioned, stirrups run from one face of a footing to the other face of the footing. BBS of tie beam on Axis I between A-B: below are the steps for finding the quantities of the tie beam (stirrups).
1. Deduct the concrete cover from all sides of the tie beam to find the length of each tie. From the figure, the reinforcement details of the tie beams on the horizontal axis and on the vertical axis are different. Accordingly, deduct a concrete cover of 0.05 m from all sides of the stirrups for horizontal-axis tie beams and 0.025 m from all sides of the stirrups for vertical-axis tie beams. Apply the above method to the remaining tie beams; the results are given below. Check your result against the table below.

Abstract for finding the total quantity of steel reinforcement required for the tie beams/strap beams of the given plan: a total steel weight of 512.81 kg is required for the tie beams/strap beams (for the above plan).

This is the full article on the Bar Bending Schedule for Tie Beam or Strap Beam | Steel Quantity for Tie Beam | Estimation of Reinforcement in Strap Beam | BBS of Tie Beam | Steel Estimation for Tie Beam | Footing Estimate Calculation. Thank you for reading this article in full on "The Civil Engineering" platform. If you find this post helpful, help others by sharing it on social media, and if you have any question regarding the article please tell me in the comments.
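As an aside on the arithmetic used above, the weight tables can be sketched in a few lines of code. The bar lengths below are made-up placeholders (the real cut lengths come from the drawing, which is not reproduced here), and the unit weight uses the common approximation d^2/162 kg per metre for a bar of diameter d in millimetres; the printed number therefore illustrates the method for a single beam, not the article's 512.81 kg total.

```c
#include <stdio.h>

/* Approximate unit weight of a steel bar: d^2 / 162 kg per metre (d in mm). */
static double unit_weight_kg_per_m(double dia_mm)
{
    return dia_mm * dia_mm / 162.0;
}

/* Total weight of one group of bars of equal length and diameter. */
static double group_weight_kg(double dia_mm, double length_m, int count)
{
    return unit_weight_kg_per_m(dia_mm) * length_m * count;
}

int main(void)
{
    /* Hypothetical tie beam: main-bar length 4.5 m, stirrup cut length 1.1 m (placeholders). */
    double main_len    = 4.5;
    double stirrup_len = 1.1;
    int    stirrups    = (int)(main_len / 0.1) + 1;   /* spacing 0.1 m as assumed above */

    double total = 0.0;
    total += group_weight_kg(10.0, main_len, 4);            /* 4 top bars, 10 mm    */
    total += group_weight_kg(10.0, main_len, 4);            /* 4 bottom bars, 10 mm */
    total += group_weight_kg( 8.0, main_len, 2);            /* 2 side bars, 8 mm    */
    total += group_weight_kg( 6.0, stirrup_len, stirrups);  /* 6 mm stirrups        */

    printf("Steel for this tie beam ~ %.1f kg\n", total);
    return 0;
}
```

Repeating the same three steps (unit weight, group length, sum over groups) for every tie beam on both axes is exactly what the article's tables do.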
{"url":"https://thecivilengineerings.com/bar-bending-schedule-for-tie-beams-strap-beams-bbs-of-tie-beam/","timestamp":"2024-11-09T06:37:37Z","content_type":"text/html","content_length":"161882","record_id":"<urn:uuid:da0b6551-9f4c-4818-a52e-e0cb625e0420>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00829.warc.gz"}
Which of the Following is a Scalar Quantity? - Hard Geek When studying physics, it is essential to understand the difference between scalar and vector quantities. Scalar quantities are those that have only magnitude, while vector quantities have both magnitude and direction. In this article, we will explore various examples of scalar quantities and explain why they fit into this category. By the end, you will have a clear understanding of what makes a quantity scalar and be able to identify them in different contexts. Scalar vs. Vector Quantities Before delving into specific examples, let’s first establish the distinction between scalar and vector quantities. Scalar quantities are described solely by their magnitude, which refers to the size or amount of the quantity. Examples of scalar quantities include time, temperature, mass, speed, and energy. On the other hand, vector quantities have both magnitude and direction. They require both a numerical value and a specific direction to be fully described. Examples of vector quantities include displacement, velocity, acceleration, force, and momentum. Examples of Scalar Quantities Now that we understand the difference between scalar and vector quantities, let’s explore some specific examples of scalar quantities: 1. Time Time is a fundamental scalar quantity that measures the duration between two events. It is often represented in units such as seconds, minutes, hours, or years. Time does not have a direction associated with it, making it a scalar quantity. 2. Temperature Temperature is another scalar quantity that measures the hotness or coldness of an object or environment. It is measured using various scales, such as Celsius, Fahrenheit, or Kelvin. Temperature does not have a direction, making it a scalar quantity. 3. Mass Mass is a scalar quantity that measures the amount of matter in an object. It is often measured in units such as kilograms or pounds. Mass does not have a direction associated with it, making it a scalar quantity. 4. Speed Speed is a scalar quantity that measures how fast an object is moving. It is defined as the distance traveled per unit of time. Speed does not have a direction associated with it, making it a scalar quantity. For example, if a car is traveling at 60 miles per hour, the speed is a scalar quantity. 5. Energy Energy is a scalar quantity that represents the ability of a system to do work. It exists in various forms, such as kinetic energy, potential energy, and thermal energy. Energy does not have a direction associated with it, making it a scalar quantity. Q1: Is velocity a scalar or vector quantity? A1: Velocity is a vector quantity because it has both magnitude and direction. It describes the rate at which an object changes its position. Q2: Is distance a scalar or vector quantity? A2: Distance is a scalar quantity because it only represents the magnitude of the displacement between two points. It does not consider the direction of the movement. Q3: Is force a scalar or vector quantity? A3: Force is a vector quantity because it has both magnitude and direction. It describes the interaction between objects that can cause a change in their motion. Q4: Is power a scalar or vector quantity? A4: Power is a scalar quantity because it represents the rate at which work is done or energy is transferred. It does not have a direction associated with it. Q5: Is acceleration a scalar or vector quantity? A5: Acceleration is a vector quantity because it has both magnitude and direction. 
It represents the rate at which an object's velocity changes over time. In summary, scalar quantities are those that have only magnitude and do not have a direction associated with them. Examples of scalar quantities include time, temperature, mass, speed, and energy. These quantities are essential in physics and other scientific disciplines as they provide valuable information about various phenomena. Understanding the distinction between scalar and vector quantities is crucial for accurately describing and analyzing physical phenomena. By familiarizing yourself with scalar quantities, you will be better equipped to solve problems and make predictions in the field of physics. Remember that scalar quantities are all about magnitude, while vector quantities involve both magnitude and direction. So, the next time you encounter a physical quantity, ask yourself whether it is scalar or vector, and you will be on your way to a deeper understanding of the world around us.
{"url":"https://hardgeek.org/which-of-the-following-is-a-scalar-quantity/","timestamp":"2024-11-02T02:42:25Z","content_type":"text/html","content_length":"63677","record_id":"<urn:uuid:91dbcf29-884e-4981-8601-2649810bf3e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00246.warc.gz"}
This is a generalisation of pcfcross to arbitrary collections of points. The algorithm measures the distance from each data point in subset I to each data point in subset J, excluding identical pairs of points. The distances are kernel-smoothed and renormalised to form a pair correlation function. • If divisor="r" (the default), then the multitype counterpart of the standard kernel estimator (Stoyan and Stoyan, 1994, pages 284--285) is used. By default, the recommendations of Stoyan and Stoyan (1994) are followed exactly. • If divisor="d" then a modified estimator is used: the contribution from an interpoint distance \(d_{ij}\) to the estimate of \(g(r)\) is divided by \(d_{ij}\) instead of dividing by \(r\). This usually improves the bias of the estimator when \(r\) is close to zero. There is also a choice of spatial edge corrections (which are needed to avoid bias due to edge effects associated with the boundary of the spatial window): correction="translate" is the Ohser-Stoyan translation correction, and correction="isotropic" or "Ripley" is Ripley's isotropic correction. The arguments I and J specify two subsets of the point pattern X. They may be any type of subset indices, for example, logical vectors of length equal to npoints(X), or integer vectors with entries in the range 1 to npoints(X), or negative integer vectors. Alternatively, I and J may be functions that will be applied to the point pattern X to obtain index vectors. If I is a function, then evaluating I(X) should yield a valid subset index. This option is useful when generating simulation envelopes using envelope. The choice of smoothing kernel is controlled by the argument kernel which is passed to density. The default is the Epanechnikov kernel. The bandwidth of the smoothing kernel can be controlled by the argument bw. Its precise interpretation is explained in the documentation for density.default. For the Epanechnikov kernel with support \([-h,h]\), the argument bw is equivalent to \(h/\sqrt{5}\). If bw is not specified, the default bandwidth is determined by Stoyan's rule of thumb (Stoyan and Stoyan, 1994, page 285) applied to the points of type j. That is, \(h = c/\sqrt{\lambda}\), where \(\ lambda\) is the (estimated) intensity of the point process of type j, and \(c\) is a constant in the range from 0.1 to 0.2. The argument stoyan determines the value of \(c\).
{"url":"https://www.rdocumentation.org/packages/spatstat.explore/versions/3.0-6/topics/pcfmulti","timestamp":"2024-11-03T22:56:16Z","content_type":"text/html","content_length":"80641","record_id":"<urn:uuid:9f5d4f47-e7f7-4b85-89b1-690e12818d63>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00146.warc.gz"}
Good hash for pointers
Malcolm McLean 2024-05-23 11:11:19 UTC
What is a good hash function for pointers to use in portable ANSI C? The pointers are nodes of a tree, which are read only, and I want to associate read/write data with them. So potentially a large number of pointers, and they might be consecutively ordered if they are taken from an array, or they might be returned from repeated calls to malloc() with small allocations. Obviously I have no control over pointer size or internal representation. Check out Basic Algorithms and my other books:
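The replies in the thread are not preserved here. Purely as an illustration of one technique commonly suggested for this kind of problem (and not an answer taken from the thread), a multiplicative "Fibonacci" hash of the pointer value spreads both the low bits (nodes that sit consecutively in an array) and the higher bits (separate malloc blocks) across the table. Note that it relies on uintptr_t, which is C99 rather than strict ANSI C89, so it is a sketch of the idea rather than a fully portable answer.

```c
#include <stddef.h>
#include <stdint.h>   /* uintptr_t is C99, not C89 */

/* Multiplicative hash of a pointer into 2^bits buckets (1 <= bits <= pointer width).
 * 0x9E3779B97F4A7C15 is the 64-bit golden-ratio constant often used for Fibonacci
 * hashing; taking the top bits of the product keeps the best-mixed part.
 * On a 32-bit platform a 32-bit constant such as 0x9E3779B9 would be used instead. */
static size_t hash_ptr(const void *p, unsigned bits)
{
    uintptr_t v = (uintptr_t)p;
    v ^= v >> 4;                                   /* fold in zeros caused by alignment */
    v *= (uintptr_t)0x9E3779B97F4A7C15u;
    return (size_t)(v >> (sizeof(uintptr_t) * 8u - bits));
}
```

For example, hash_ptr(node, 16) indexes a table of 65536 buckets.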
{"url":"https://comp.lang.c.narkive.com/EZSxQZYM/good-hash-for-pointers","timestamp":"2024-11-03T02:23:10Z","content_type":"text/html","content_length":"405893","record_id":"<urn:uuid:85b99b84-748d-4640-9f9f-a62d6ab374eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00825.warc.gz"}
Op Amp Differentiator

Op-amps can be used to perform a wide variety of mathematical operations such as summation, subtraction, multiplication, differentiation and integration. Having looked at op amp integrators in the previous article, it makes sense to round out the picture by covering differentiator circuits: differentiation is the mathematical opposite of integration, detecting the instantaneous slope of a function. In this tutorial we will learn the working and implementation of an operational amplifier as a differentiator (differentiator amplifier). For a comprehensive circuit library, see TI's Analog Engineer's Circuit Cookbooks.

An op-amp differentiator is an electronic circuit that produces an output proportional to the differentiation of the applied input. The output voltage is directly proportional to the rate of change of the input voltage with respect to time, so a quick change of the input signal produces a large output voltage in response. By contrast, an op-amp integrating circuit produces an output voltage proportional to the area (amplitude multiplied by time) contained under the waveform. The basic differentiator amplifier circuit is the exact opposite of the integrator circuit: the positions of the capacitor and resistor are reversed, so the capacitor C1 is connected in series with the input and the resistor Rf forms the feedback path. An op amp differentiator is therefore basically an inverting amplifier with a capacitor of suitable value at its input terminal.

Circuit and operation

The input Vi is applied through capacitor C1 to the inverting terminal. The non-inverting input terminal of the op-amp is connected to ground through a resistor Rcomp, which provides input bias compensation, and the inverting input terminal is connected to the output through the feedback resistor Rf. Since the op-amp is ideal and negative feedback is present, the inverting input sits at a virtual ground. The differentiator works in an inverting amplifier configuration, which makes the output 180 degrees out of phase with the input. When the input is a positive-going voltage, a current I flows into the capacitor C1; because no current flows into the op-amp input, the same current flows through Rf, and the output voltage of the differentiating amplifier circuit is given as

Vo = -(C1 Rf) dVin/dt

That is, the output is C1.Rf times the differentiation of the input voltage, inverted in sign. For simplicity, the product (C1.Rf) is often assumed to be unity. At f = 0 the capacitor C1 remains uncharged and behaves like an open circuit, so a constant input produces no output; for a step input the output appears as a spike at time t = 0.

For a sine-wave input the output of a differentiator is also a sine wave, out of phase by 180 degrees with respect to the input (a cosine wave). If the input is changed to a square wave, the output is a waveform consisting of positive and negative spikes, corresponding to the charging and discharging of the capacitor; for a triangular input the output voltage is a square waveform. Differentiating circuits are usually designed to respond to triangular and rectangular input waveforms.

Frequency response of the ideal differentiator

The gain of the circuit (Rf/Xc1) rises with frequency at a rate of 20 dB/decade, so for an ideal differentiator the gain keeps increasing as frequency increases. This makes the circuit unstable and prone to oscillation, and very sensitive to high-frequency noise; while operating on sine-wave inputs, differentiating circuits therefore have frequency limitations.

Frequency response of the practical differentiator

Figure 2 depicts an improved (practical) inverting op-amp differentiator circuit: a resistor R1 is added in series with the input capacitor and a capacitor Cf is added across the feedback resistor. The addition of resistor R1 and capacitor Cf stabilizes the circuit at higher frequencies and also reduces the effect of noise on the circuit. The frequency f1 is the frequency for which the gain of the differentiator becomes unity; beyond a certain input frequency the gain of the practical differentiator decreases at a rate of 20 dB per decade instead of rising indefinitely. The frequency response curve of the practical differentiator is shown in the corresponding figure. To experiment with it, wire up the practical op-amp differentiator shown in Figure 2 using your op-amp of choice (e.g. a 741 or 356).

Other notes

Active differentiators have higher output voltage and much lower output resistance than simple RC differentiators, and the main advantage of such an active differentiating amplifier circuit is the small time constant required for differentiation. A differentiator can also be built from an inductor L, a resistor R and an op-amp, as shown in figure 3; in that case the voltage across the inductor is VL = L di/dt. Differentiators find application as wave-shaping circuits, in detecting high-frequency components of the input signal, and in frequency modulators.
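As a numerical illustration of the relation Vo = -(C1 Rf) dVin/dt, the following C sketch feeds a 1 kHz triangular wave through an ideal differentiator model and prints the resulting output. The component values and the sample rate are arbitrary choices made for this sketch, not values from the article.

```c
#include <stdio.h>
#include <math.h>

/* 1 V peak triangular wave of frequency f evaluated at time t. */
static double tri(double t, double f)
{
    double x = t * f;                       /* phase in cycles */
    return 4.0 * fabs(x - floor(x + 0.5)) - 1.0;
}

int main(void)
{
    const double Rf = 100e3;    /* feedback resistor, ohms (placeholder) */
    const double C1 = 0.01e-6;  /* input capacitor, farads (placeholder) */
    const double fs = 100e3;    /* simulation sample rate, Hz            */
    const double f  = 1e3;      /* input frequency, Hz                   */

    double prev = tri(0.0, f);
    for (int n = 1; n <= 200; ++n) {
        double t   = n / fs;
        double vin = tri(t, f);
        /* ideal differentiator: vo = -Rf*C1 * dVin/dt, derivative by finite difference */
        double vo  = -Rf * C1 * (vin - prev) * fs;
        prev = vin;
        if (n % 25 == 0)
            printf("t=%8.6f s  vin=%+6.3f V  vo=%+6.3f V\n", t, vin, vo);
    }
    return 0;
}
```

With these values the triangular input produces a rectangular output of roughly plus or minus 4 V, matching the waveform behaviour described above.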
{"url":"http://assurancebiometricsinc.com/1yyt2xri/67184e-op-amp-differentiator","timestamp":"2024-11-14T21:32:44Z","content_type":"text/html","content_length":"74618","record_id":"<urn:uuid:435de14f-afa3-465e-bf8c-f52ba66eb6a5>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00305.warc.gz"}
Likelihood Arguments for Design These are some notes about design arguments for the existence of God. They are based on my readings of Benjamin Jantzen’s excellent book An Introduction to Design Arguments, which was published by Cambridge University Press back in 2014. 1. Likelihood Versions of the Design Argument Design arguments for the existence of God are popular and persistent. They all share a common form. They start with evidence drawn from the real world — the remarkable way in which a stick insect resembles a stick; the echolocation of bats; the fact that the planet earth exists in the habitable zone; the fine tuning of the physical constants for the production of life in the universe; or the collection of all such examples — and then argue that this evidence points to the existence of a designer, i.e. God. This basic common form has been developed in numerous ways over the course of human history. Most recently, it has been common to present design arguments using the formal trappings of probability theory and, quite often, this involves the use of comparisons. ‘Likelihood’ here must be understood in its formal sense. In every day language, the term ‘likely’ is synonymous with ‘probable’. In its formal sense, its meaning is subtly different: it is a measure of how probable some piece of evidence is the truth of some particular theory. Let’s use an example. Suppose you have a jar filled with one hundred beans. You are told that one of three hypotheses about that jar of beans is true, but not which one. The three hypotheses are: H[1]: The jar only contains black beans. H[2]: The jar contains 50 black beans and 50 green beans. H[3]: The jar contains 25 black beans and 75 green beans. Suppose you draw a bean from the jar. It is green. This is now some evidence (E) that you can use to rank the likelihood of the different hypotheses. How likely is it that you would draw a green bean if H were true? Answer: zero. H says that all the beans are black. If you draw a green bean, you immediately disconfirm H . What about H and H ? There, the situation is slightly different. Both of those hypotheses allow for the existence of green beans. Nevertheless, E is more expected on H than it is on H . That is to say, E is more likely on H than it is on H . In formal notation, the picture looks like this: Pr (E|H[2]) = 0.50 Pr (E|H[3]) = 0.75 Therefore - Pr (E|H[2]) < Pr (E|H[3]) Notice that this doesn’t tell us anything about the probability of the respective hypotheses. Likelihood is a measure of the probability of E|H and not a measure of the probability of H|E (the so-called ‘posterior probability’ of a hypothesis). This is pretty important because there are cases in which the posterior probability of a hypothesis and the likelihood it confers on the evidence are radically divergent. Based on the above example, we conclude that H is the more likely theory: it confers the greatest probability on the observed evidence. But suppose we were also told that 90 percent of all jars contain a 50-50 mix of black and green beans, whereas only 5 percent contained the 25-75 mix. If that were true, H would be the more probable hypothesis, even if we did draw a green bean from the jar. (You can do the formal calculation using Bayes Theorem if you like). The only case in which likelihood arguments tell us anything about the posterior probability of a theory are cases in which all the available hypotheses are equally probable prior to observing the evidence (i.e. 
when the ‘principle of indifference’ can be applied to the hypotheses). This hasn’t deterred some theists from defending likelihood versions of the design argument. The reason for this is that they think that when it comes to comparing certain hypotheses we are in a situation in which the principle of indifference can be applied. More particularly, they think that when it comes to explaining evidence of design in the world, the leading available theories (theism and naturalism) both have equal prior probabilities and hence the fact that the evidence of design is more likely on theism than it is on naturalism gives some succour to the theist. In other words, they think the following argument holds: • Notation: E = Remarkable adaptiveness of life in the universe; T = hypothesis of theistic design; and N = hypothesis of naturalistic causation. • (1) Prior probabilities of T and N are equal. • (2) Pr (E|T) >> Pr (E|N) [probability of E given theism is much higher than the probability of E given naturalism] • (3) Therefore, Pr(T}E) >> Pr (N|E) [theism has more posterior probability than naturalism] Is this argument any good? 2. The Reverse Gambler’s Fallacy There are many things we could challenge about the likelihood argument. An obvious one is its underspecification of the relevant explanatory hypotheses. Consider N. How exactly does naturalistic causation explain the adaptiveness of life? One answer is simply to say that it explains it through chance. The naturalistic view is that the universe churns through different arrangements of matter and energy, and through sheer luck it occasionally stumbles on arrangements of matter and energy that take on the adaptive properties of life. If your understanding of N is that it only explains E in terms of pure chance, then the likelihood argument may well be effective (though see the objection discussed in the next section). But no one thinks that naturalism explains adaptiveness in terms of pure chance: the universe doesn’t constantly rearrange itself in completely random ways. Even before the time of Darwin, there were versions of naturalism that went beyond pure chance as an explanation. David Hume, in his famous Dialogues Concerning Natural Religion argued that design could be explained in Epicurean terms. The idea here is that although the universe does churns through different arrangements of matter and energy, some of those arrangements are more dynamically stable than others. They tend to persist, replicate and adapt. Those are the arrangements to which we attribute the properties of life and adaptiveness. Jantzen fleshes out this Humean/Epicurean hypotheses in the following manner ( 2014, 180 • N[1]: The traits of organisms (and the universe as a whole) are the product of a process involving chance, the laws with which atoms blindly interact with one another, and a great deal of time — after a very long time, the universe eventually stumbled across a configuration that is dynamically stable. If this is your understanding of naturalism, then the likelihood argument is cast into more doubt. It is at least plausible that the probability of E|N is much closer to the probability of E|T (particularly if the universe has been around for long enough). Elliott Sober disputes this Humean argument. He says that proponents of it overstate the likelihood of E because they commit something called the Inverse Gambler’s Fallacy. 
The regular Gambler’s Fallacy arises from the tendency to assume that if a particular random outcome occurs several times in row it is less likely to happen in the future. Thus, if you flip a coin ten times and get heads on each occasion, you would commit the Gambler’s Fallacy if you assumed that you were more likely to get tails on the next flip. Although the numbers of heads and tails tend to be roughly equal over the very long term, the probability of the next coin flip being tails is the same as it is for every other coin flip, i.e. 0.5. Thus, the regular Gambler’s Fallacy is the tendency to overstate the likelihood of an event (a tails) given a previous set of evidence. The Inverse Gambler’s Fallacy is, as you might expect, the reverse. It’s the tendency to overstate the likelihood of a particular event given a limited set of evidence. Jantzen’s explains the concept with a simple example. Imagine you have just wandered into a casino and you see somebody roll a double-six on a pair of dice. That’s your evidence (call it E ). There are two hypotheses that could explain that observation: • H[4]: This is the first roll of the evening. • H[5]: There have been many rolls of the dice that evening. Although the probability of any particular roll of the dice being a double-six is 1/36, if there were lots of rolls in the course of one evening you would expect to see a double-six at some stage (indeed, given enough rolls the probability of eventually seeing a double six would start to approach 1). Thus, you could argue that: And hence that H is the more likely explanation. But this, according to Sober, is a fallacy. You have overstated the likelihood of the observation you made. The reason for this is that E is ambiguously stated. It could mean ‘a double six was rolled at some point in the evening’ or it could mean ‘a double six was rolled on this particular occasion’. If it means the former, then H is indeed more likely than H . But if it means the latter, then the likelihood of H and H is equal. For any particular throw, they each confer an equal likelihood on E , i.e. 1/36. How does this apply to the Humean argument? The answer, according to Sober, is that the Humean explanation is like H . The Humean idea is that given enough time and enough rolls of the galactic dice, we will eventually see arrangements of matter and energy that have the properties of life and adaptiveness. This could well be true, but for any particular arrangement of matter and energy — e.g. the functional adaptation of the eye for receiving and processing light signals — the Humean explanation does not confer that much likelihood on the outcome. Hence, the person who assigns a high likelihood to the Pr (E|N ) is committing the Inverse Gambler’s Fallacy. There are, however, three problems with this criticism. The first is that the evidence of design that is relevant to the likelihood argument is general, not specific. Theists are appealing to the general presence of adaptiveness in the universe over the course of history, not just individual specific ones. The Humean explanation takes this into account. So the Humean argument does not really involve anything analogous to the Inverse Gambler’s Fallacy. Second, if the focus were on specific instances of adaptiveness, the theistic explanation would be just as much trouble as the Humean one. After all, the generic hypothesis of theism doesn’t explain why God would have chosen to design particular functions and adaptations into animals. 
You need a much more specific hypothesis for that, and providing one runs into all sorts of trouble (more on this below). Third, the Humean explanation obviously does not exhaust all the possible naturalistic explanations of adaptiveness. The most scientifically credible explanation — Darwinian natural selection — confers a much higher likelihood on adaptiveness than the simple Humean explanation. If we were to compare the likelihood of E given Darwinian natural selection to the likelihood of E given theism, the comparative likelihoods would be much harder to disentangle, and would arguably lean in favour of naturalism. 3. The Problem of Auxiliary Hypotheses There are other problems with likelihood arguments. Sober’s favourite criticism of them focuses on the role of auxiliary hypotheses in their computation. His point is subtle and its significance is often missed. The idea is that whenever we make a claim concerning the likelihood of one hypothesis relative to another, we usually leave a great deal unsaid (implicit) that helps us in making that comparison. When I gave the example of the dice being rolled in the previous section, I assumed a number of things to be true: I assumed that dice rolls are statistically independent; I assumed that there are usually many dice rolls in any given evening of play; I assumed that the dice in question were fair. It was only because of these assumptions that I was able to say, with reasonable confidence, that the probability of any particular roll resulting in a double six was 1/36 or that the probability of observing a double-six at some point in the evening was reasonably high. All of these assumptions are auxiliary hypotheses and they are needed if we are going to make sensible likelihood comparisons. In everyday scenarios, the presence of auxiliary hypotheses in a likelihood calculation is not a major cause for concern. We share common experience of the world and so rightfully take a lot for granted. Things are rather different when it comes to explaining the origins of adaptiveness in the universe as a whole. When we reach this level of explanatory generality, there is less and less that we can assume uncontroversially. This means that it is very difficult to compute sensible likelihoods for general explanations of adaptiveness. This is a particular problem with theism. In order for the general hypothesis of theism to confer plausible likelihoods on the presence of adaptiveness, we would need to add a number of auxiliary hypotheses concerning the intentions and goals of the designer. For example, when looking at the human eye (or any collection of examples of adaptiveness), we would have to be able to say that God has goals X, Y and Z and these explain why the eye (or the collection) has the features it does. Some theists might be willing to speculate about the intentions and goals of God, but doing so gets them into trouble, especially when it comes to explaining away instances of natural evil. They would have to state the intentions and goals that justify God in creating parasites that incubate in and destroy the functionality of the eye (to give but one example). In light of the problem of evil, many theists are unwilling to speculate in too much detail about divine intentions. They resile themselves to view that God’s intentions are unknowable or beyond our ken. But in doing this, they undercut the likelihood argument. Note, however, that the problem with auxiliary hypotheses is not just a problem for the theist. 
It is also a problem for the naturalist. In order for the naturalist to compute plausible likelihoods, they have to add more detail to explain why the adaptiveness we see have the features it has. There are various ways of doing this, e.g. by making assumptions about natural laws, historical conditions on earth, and so on. They would all have to get added into the mix to make a reasonable likelihood comparison. The problem then, as Jantzen puts it, is that ‘Sober’s objection is not really about picking auxiliary assumptions but rather identifying allowable hypotheses. But [the likelihood principle] tells us nothing about what counts as an acceptable hypothesis. Nor does the principle of Indifference. So it seems we have to either entertain them all or risk begging the question in favour of one or another conclusion” ( Jantzen 2014, 184 The net result is that it is very difficult to come up with a plausible likelihood argument for design.
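As a footnote to the bean-jar example in section 1, the Bayes calculation the author leaves to the reader can be spelled out with the priors given there (90 per cent of jars hold the 50-50 mix and 5 per cent hold the 25-75 mix; whatever prior is left over for the all-black jar contributes nothing once a green bean is drawn, since Pr (E|H1) = 0):

Pr (H2|E) = [Pr (E|H2) x Pr (H2)] / [Pr (E|H2) x Pr (H2) + Pr (E|H3) x Pr (H3)]
= (0.50 x 0.90) / (0.50 x 0.90 + 0.75 x 0.05)
= 0.45 / 0.4875 ≈ 0.92

and correspondingly Pr (H3|E) ≈ 0.08. H3 confers the higher likelihood on the draw, yet H2 remains far more probable. This is exactly the sense in which a likelihood comparison on its own licenses no conclusion about posterior probabilities unless the prior probabilities happen to be equal.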
{"url":"https://philosophicaldisquisitions.blogspot.com/2017/07/likelihood-arguments-for-design.html","timestamp":"2024-11-07T13:36:44Z","content_type":"text/html","content_length":"142068","record_id":"<urn:uuid:7b01f1fb-c0db-476a-9ff8-790d2a1c656d>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00363.warc.gz"}
cuSOLVER API Reference The API reference guide for cuSOLVER, a GPU accelerated library for decompositions and linear system solutions for both dense and sparse matrices. 1. Introduction The cuSolver library is a high-level package based on the cuBLAS and cuSPARSE libraries. It consists of two modules corresponding to two sets of API: 1. The cuSolver API on a single GPU 2. The cuSolverMG API on a single node multiGPU Each of these can be used independently or in concert with other toolkit libraries. To simplify the notation, cuSolver denotes single GPU API and cuSolverMg denotes multiGPU API. The intent of cuSolver is to provide useful LAPACK-like features, such as common matrix factorization and triangular solve routines for dense matrices, a sparse least-squares solver and an eigenvalue solver. In addition cuSolver provides a new refactorization library useful for solving sequences of matrices with a shared sparsity pattern. cuSolver combines three separate components under a single umbrella. The first part of cuSolver is called cuSolverDN, and deals with dense matrix factorization and solve routines such as LU, QR, SVD and LDLT, as well as useful utilities such as matrix and vector permutations. Next, cuSolverSP provides a new set of sparse routines based on a sparse QR factorization. Not all matrices have a good sparsity pattern for parallelism in factorization, so the cuSolverSP library also provides a CPU path to handle those sequential-like matrices. For those matrices with abundant parallelism, the GPU path will deliver higher performance. The library is designed to be called from C and C++. The final part is cuSolverRF, a sparse re-factorization package that can provide very good performance when solving a sequence of matrices where only the coefficients are changed but the sparsity pattern remains the same. The GPU path of the cuSolver library assumes data is already in the device memory. It is the responsibility of the developer to allocate memory and to copy data between GPU memory and CPU memory using standard CUDA runtime API routines, such as cudaMalloc(), cudaFree(), cudaMemcpy(), and cudaMemcpyAsync(). cuSolverMg is GPU-accelerated ScaLAPACK. By now, cuSolverMg supports 1-D column block cyclic layout and provides symmetric eigenvalue solver. The cuSolver library requires hardware with a CUDA Compute Capability (CC) of 5.0 or higher. Please see the CUDA C++ Programming Guide for a list of the Compute Capabilities corresponding to all NVIDIA GPUs. 1.1. cuSolverDN: Dense LAPACK The cuSolverDN library was designed to solve dense linear systems of the form where the coefficient matrix \(A\in R^{nxn}\) , right-hand-side vector \(b\in R^{n}\) and solution vector \(x\in R^{n}\) The cuSolverDN library provides QR factorization and LU with partial pivoting to handle a general matrix A, which may be non-symmetric. Cholesky factorization is also provided for symmetric/Hermitian matrices. For symmetric indefinite matrices, we provide Bunch-Kaufman (LDL) factorization. The cuSolverDN library also provides a helpful bidiagonalization routine and singular value decomposition (SVD). The cuSolverDN library targets computationally-intensive and popular routines in LAPACK, and provides an API compatible with LAPACK. The user can accelerate these time-consuming routines with cuSolverDN and keep others in LAPACK without a major change to existing code. 1.2. 
cuSolverSP: Sparse LAPACK The cuSolverSP library was mainly designed to a solve sparse linear system and the least-squares problem \(x = {argmin}{||}A*z - b{||}\) where sparse matrix \(A\in R^{mxn}\) , right-hand-side vector \(b\in R^{m}\) and solution vector \(x\in R^{n}\) . For a linear system, we require m=n. The core algorithm is based on sparse QR factorization. The matrix A is accepted in CSR format. If matrix A is symmetric/Hermitian, the user has to provide a full matrix, ie fill missing lower or upper part. If matrix A is symmetric positive definite and the user only needs to solve \(Ax = b\) , Cholesky factorization can work and the user only needs to provide the lower triangular part of A. On top of the linear and least-squares solvers, the cuSolverSP library provides a simple eigenvalue solver based on shift-inverse power method, and a function to count the number of eigenvalues contained in a box in the complex plane. 1.3. cuSolverRF: Refactorization The cuSolverRF library was designed to accelerate solution of sets of linear systems by fast re-factorization when given new coefficients in the same sparsity pattern where a sequence of coefficient matrices \(A_{i}\in R^{nxn}\) , right-hand-sides \(f_{i}\in R^{n}\) and solutions \(x_{i}\in R^{n}\) are given for i=1,...,k. The cuSolverRF library is applicable when the sparsity pattern of the coefficient matrices \(A_{i}\) as well as the reordering to minimize fill-in and the pivoting used during the LU factorization remain the same across these linear systems. In that case, the first linear system (i=1) requires a full LU factorization, while the subsequent linear systems (i=2,...,k) require only the LU re-factorization. The later can be performed using the cuSolverRF library. Notice that because the sparsity pattern of the coefficient matrices, the reordering and pivoting remain the same, the sparsity pattern of the resulting triangular factors \(L_{i}\) and \(U_{i}\) also remains the same. Therefore, the real difference between the full LU factorization and LU re-factorization is that the required memory is known ahead of time. 1.4. Naming Conventions The cuSolverDN library provides two different APIs; legacy and generic. The functions in the legacy API are available for data types float, double, cuComplex, and cuDoubleComplex. The naming convention for the legacy API is as follows: where <t> can be S, D, C, Z, or X, corresponding to the data types float, double, cuComplex, cuDoubleComplex, and the generic type, respectively. <operation> can be Cholesky factorization (potrf), LU with partial pivoting (getrf), QR factorization (geqrf) and Bunch-Kaufman factorization (sytrf). The functions in the generic API provide a single entry point for each routine and support for 64-bit integers to define matrix and vector dimensions. The naming convention for the generic API is data-agnostic and is as follows: where <operation> can be Cholesky factorization (potrf), LU with partial pivoting (getrf) and QR factorization (geqrf). The cuSolverSP library functions are available for data types float, double, cuComplex, and cuDoubleComplex. The naming convention is as follows: cusolverSp[Host]<t>[<matrix data format>]<operation>[<output matrix data format>]<based on> where cuSolverSp is the GPU path and cusolverSpHost is the corresponding CPU path. <t> can be S, D, C, Z, or X, corresponding to the data types float, double, cuComplex, cuDoubleComplex, and the generic type, respectively. 
The <matrix data format> is csr, compressed sparse row format. The <operation> can be ls, lsq, eig, eigs, corresponding to linear solver, least-square solver, eigenvalue solver and number of eigenvalues in a box, respectively. The <output matrix data format> can be v or m, corresponding to a vector or a matrix. <based on> describes which algorithm is used. For example, qr (sparse QR factorization) is used in linear solver and least-square solver. All of the functions have the return type cusolverStatus_t and are explained in more detail in the chapters that follow. Routine Data format Operation Output format Based on csrlsvlu csr linear solver (ls) vector (v) LU (lu) with partial pivoting csrlsvqr csr linear solver (ls) vector (v) QR factorization (qr) csrlsvchol csr linear solver (ls) vector (v) Cholesky factorization (chol) csrlsqvqr csr least-square solver (lsq) vector (v) QR factorization (qr) csreigvsi csr eigenvalue solver (eig) vector (v) shift-inverse csreigs csr number of eigenvalues in a box (eigs) csrsymrcm csr Symmetric Reverse Cuthill-McKee (symrcm) The cuSolverRF library routines are available for data type double. Most of the routines follow the naming convention: where the trailing optional Host qualifier indicates the data is accessed on the host versus on the device, which is the default. The <operation> can be Setup, Analyze, Refactor, Solve, ResetValues, AccessBundledFactors and ExtractSplitFactors. Finally, the return type of the cuSolverRF library routines is cusolverStatus_t. 1.5. Asynchronous Execution The cuSolver library functions prefer to keep asynchronous execution as much as possible. Developers can always use the cudaDeviceSynchronize() function to ensure that the execution of a particular cuSolver library routine has completed. A developer can also use the cudaMemcpy() routine to copy data from the device to the host and vice versa, using the cudaMemcpyDeviceToHost and cudaMemcpyHostToDevice parameters, respectively. In this case there is no need to add a call to cudaDeviceSynchronize() because the call to cudaMemcpy() with the above parameters is blocking and completes only when the results are ready on the host. 1.6. Library Property The libraryPropertyType data type is an enumeration of library property types. (ie. CUDA version X.Y.Z would yield MAJOR_VERSION=X, MINOR_VERSION=Y, PATCH_LEVEL=Z) typedef enum libraryPropertyType_t{ MAJOR_VERSION, MINOR_VERSION, PATCH_LEVEL} libraryPropertyType; The following code can show the version of cusolver library. int major=-1,minor=-1,patch=-1;cusolverGetProperty(MAJOR_VERSION, &major);cusolverGetProperty(MINOR_VERSION, &minor);cusolverGetProperty(PATCH_LEVEL, &patch);printf("CUSOLVER Version (Major,Minor,PatchLevel): %d.%d.%d\n", major,minor,patch); 1.7. High Precision Package The cusolver library uses high precision for iterative refinement when necessary. 2. Using the CUSOLVER API 2.1. General Description This chapter describes how to use the cuSolver library API. It is not a reference for the cuSolver API data types and functions; that is provided in subsequent chapters. 2.1.1. Thread Safety The library is thread-safe, and its functions can be called from multiple host threads. 2.1.2. Scalar Parameters In the cuSolver API, the scalar parameters can be passed by reference on the host. 2.1.3. Parallelism with Streams If the application performs several small independent computations, or if it makes data transfers in parallel with the computation, then CUDA streams can be used to overlap these tasks. 
2.1.4. How to Link cusolver Library

The cuSolver library provides the dynamic library libcusolver.so and the static library libcusolver_static.a. If the user links the application with libcusolver.so, then libcublas.so, libcublasLt.so and libcusparse.so are also required. If the user links the application with libcusolver_static.a, the following libraries are also needed: libcudart_static.a, libculibos.a, libcusolver_lapack_static.a, libcusolver_metis_static.a, libcublas_static.a and libcusparse_static.a.

2.1.5. Link Third-party LAPACK Library

Starting with CUDA 10.1 update 2, the NVIDIA LAPACK library libcusolver_lapack_static.a is a subset of LAPACK and only contains the GPU accelerated stedc and bdsqr. The user has to link libcusolver_static.a with libcusolver_lapack_static.a in order to build the application successfully. Prior to CUDA 10.1 update 2, the user could replace libcusolver_lapack_static.a with a third-party LAPACK library, for example, MKL. In CUDA 10.1 update 2, the third-party LAPACK library no longer affects the behavior of the cusolver library, neither functionality nor performance. Furthermore, the user cannot use libcusolver_lapack_static.a as a standalone LAPACK library because it is only a subset of LAPACK.

• If you use libcusolver_static.a, then you must link with libcusolver_lapack_static.a explicitly, otherwise the linker will report missing symbols. There are no symbol conflicts between libcusolver_lapack_static.a and other third-party LAPACK libraries, which allows linking the same application to both libcusolver_lapack_static.a and another third-party LAPACK library.
• The libcusolver_lapack_static.a is built inside libcusolver.so. Hence, if you use libcusolver.so, then you don't need to specify a LAPACK library. The libcusolver.so will not pick up any routines from the third-party LAPACK library even if you link the application with it.

2.1.6. Convention of info

Each LAPACK routine returns an info value which indicates the position of an invalid parameter: if info = -i, then the i-th parameter is invalid. To be consistent with base-1 indexing in LAPACK, cusolver does not report an invalid handle through info. Instead, cusolver returns CUSOLVER_STATUS_NOT_INITIALIZED for an invalid handle.

2.1.7. Usage of _bufferSize

There is no cudaMalloc inside the cuSolver library; the user must allocate the device workspace explicitly. The routine xyz_bufferSize queries the size of the workspace needed by the routine xyz, for example xyz = potrf. To keep the API simple, xyz_bufferSize follows almost the same signature as xyz even though it only depends on some of the parameters; for example, the device pointer is not used to decide the size of the workspace. In most cases, xyz_bufferSize is called at the beginning, before the actual device data (pointed to by a device pointer) is prepared or before the device pointer is allocated.
In such case, the user can pass null pointer to xyz_bufferSize without breaking the functionality. 2.1.8. cuSOLVERDn Logging cuSOLVERDn logging mechanism can be enabled by setting the following environment variables before launching the target application: • CUSOLVERDN_LOG_LEVEL=<level> - where <level> is one of the following levels: ☆ 0 - Off - logging is disabled (default) ☆ 1 - Error - only errors will be logged ☆ 2 - Trace - API calls that launch CUDA kernels will log their parameters and important information ☆ 3 - Hints - hints that can potentially improve the application’s performance ☆ 4 - Info - provides general information about the library execution, may contain details about heuristic status ☆ 5 - API Trace - API calls will log their parameter and important information • CUSOLVERDN_LOG_MASK=<mask> - where mask is a combination of the following masks: ☆ 0 - Off ☆ 1 - Error ☆ 2 - Trace ☆ 4 - Hints ☆ 8 - Info ☆ 16 - API Trace • CUSOLVERDN_LOG_FILE=<file_name> - where file name is a path to a log file. File name may contain %i, that will be replaced with the process id, e.g. <file_name>_%i.log. If CUSOLVERDN_LOG_FILE is not defined, the log messages are printed to stdout. Another option is to use the experimental cusolverDn logging API. See: cusolverDnLoggerSetCallback(), cusolverDnLoggerSetFile(), cusolverDnLoggerOpenFile(), cusolverDnLoggerSetLevel(), cusolverDnLoggerSetMask(), cusolverDnLoggerForceDisable(). 2.1.9. Deterministic Results Throughout this documentation, a function is declared as deterministic if it computes the exact same bitwise results for every execution with the same input parameters, hard- and software environment. Conversely, a non-deterministic function might compute bitwise different results due to a varying order of floating point operations, e.g., a sum s of four values a, b, c, d can be computed in different orders: 1. s = (a + b) + (c + d) 2. s = (a + (b + c)) + d 3. s = a + (b + (c + d)) 4. … Due to the non-associativity of floating point arithmetic, all results might be bitwise different. By default, cuSolverDN computes deterministic results. For improved performance of some functions, it is possible to allow non-deterministic results with cusolverDnSetDeterministicMode(). 2.2. cuSolver Types Reference 2.2.1. cuSolverDN Types The float, double, cuComplex, and cuDoubleComplex data types are supported. The first two are standard C data types, while the last two are exported from cuComplex.h. In addition, cuSolverDN uses some familiar types from cuBLAS. 2.2.1.1. cusolverDnHandle_t This is a pointer type to an opaque cuSolverDN context, which the user must initialize by calling cusolverDnCreate() prior to calling any other library function. An un-initialized Handle object will lead to unexpected behavior, including crashes of cuSolverDN. The handle created and returned by cusolverDnCreate() must be passed to every cuSolverDN function. 2.2.1.2. cublasFillMode_t The type indicates which part (lower or upper) of the dense matrix was filled and consequently should be used by the function. Value Meaning CUBLAS_FILL_MODE_LOWER The lower part of the matrix is filled. CUBLAS_FILL_MODE_UPPER The upper part of the matrix is filled. CUBLAS_FILL_MODE_FULL The full matrix is filled. Notice that BLAS implementations often use Fortran characters ‘L’ or ‘l’ (lower) and ‘U’ or ‘u’ (upper) to describe which part of the matrix is filled. 2.2.1.3. cublasOperation_t The cublasOperation_t type indicates which operation needs to be performed with the dense matrix. 
Value Meaning
CUBLAS_OP_N: The non-transpose operation is selected.
CUBLAS_OP_T: The transpose operation is selected.
CUBLAS_OP_C: The conjugate transpose operation is selected.

Notice that BLAS implementations often use the Fortran characters 'N' or 'n' (non-transpose), 'T' or 't' (transpose) and 'C' or 'c' (conjugate transpose) to describe which operation needs to be performed with the dense matrix.

2.2.1.4. cusolverEigType_t

The cusolverEigType_t type indicates which type of eigenvalue problem is solved.

Value Meaning
CUSOLVER_EIG_TYPE_1: A*x = lambda*B*x
CUSOLVER_EIG_TYPE_2: A*B*x = lambda*x
CUSOLVER_EIG_TYPE_3: B*A*x = lambda*x

Notice that LAPACK implementations often use the Fortran integers 1 (A*x = lambda*B*x), 2 (A*B*x = lambda*x), 3 (B*A*x = lambda*x) to indicate which type of eigenvalue problem is solved.

2.2.1.5. cusolverEigMode_t

The cusolverEigMode_t type indicates whether or not eigenvectors are computed.

Value Meaning
CUSOLVER_EIG_MODE_NOVECTOR: Only eigenvalues are computed.
CUSOLVER_EIG_MODE_VECTOR: Both eigenvalues and eigenvectors are computed.

Notice that LAPACK implementations often use the Fortran character 'N' (only eigenvalues are computed) or 'V' (both eigenvalues and eigenvectors are computed) to indicate whether or not eigenvectors are computed.
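As a usage illustration of these enums, the sketch below computes the eigenvalues and eigenvectors of a dense symmetric matrix with the legacy routine cusolverDnDsyevd. The names handle, n, lda, d_A (the matrix, overwritten by the eigenvectors), d_W (the eigenvalues) and d_info are placeholders assumed to be set up by the caller; error checking is omitted.

    cusolverEigMode_t jobz = CUSOLVER_EIG_MODE_VECTOR;  /* also compute eigenvectors */
    cublasFillMode_t  uplo = CUBLAS_FILL_MODE_LOWER;    /* lower triangle of A is stored */
    int lwork = 0;
    double *d_work = NULL;

    /* query the workspace size, then allocate it (see Section 2.1.7) */
    cusolverDnDsyevd_bufferSize(handle, jobz, uplo, n, d_A, lda, d_W, &lwork);
    cudaMalloc((void **)&d_work, sizeof(double) * lwork);

    /* on exit d_W holds the eigenvalues in ascending order and
       d_A holds the corresponding orthonormal eigenvectors */
    cusolverDnDsyevd(handle, jobz, uplo, n, d_A, lda, d_W, d_work, lwork, d_info);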
2.2.1.6. cusolverIRSRefinement_t

The cusolverIRSRefinement_t type indicates which refinement solver type would be used for the specific cusolver function. Most of our experimentation shows that CUSOLVER_IRS_REFINE_GMRES is the best option.

More details about the refinement process can be found in Azzam Haidar, Stanimire Tomov, Jack Dongarra, and Nicholas J. Higham. 2018. Harnessing GPU tensor cores for fast FP16 arithmetic to speed up mixed-precision iterative refinement solvers. In Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC '18). IEEE Press, Piscataway, NJ, USA, Article 47, 11 pages.

Value Meaning

CUSOLVER_IRS_REFINE_NOT_SET: Solver is not set; this value is what is set when creating the params structure. The IRS solver will return an error.

CUSOLVER_IRS_REFINE_NONE: No refinement solver; the IRS solver performs a factorization followed by a solve without any refinement. For example, if the IRS solver was cusolverDnIRSXgesv(), this is equivalent to a Xgesv routine without refinement, where the factorization is carried out in the lowest precision. If, for example, the main precision was CUSOLVER_R_64F and the lowest was CUSOLVER_R_64F as well, then this is equivalent to a call to cusolverDnDgesv().

CUSOLVER_IRS_REFINE_CLASSICAL: Classical iterative refinement solver. Similar to the one used in LAPACK routines.

CUSOLVER_IRS_REFINE_GMRES: GMRES (Generalized Minimal Residual) based iterative refinement solver. In recent studies, the GMRES method has drawn the scientific community's attention for its ability to be used as a refinement solver that outperforms the classical iterative refinement method. Based on our experimentation, we recommend this setting.

CUSOLVER_IRS_REFINE_CLASSICAL_GMRES: Classical iterative refinement solver that uses GMRES (Generalized Minimal Residual) internally to solve the correction equation at each iteration. We call the classical refinement iteration the outer iteration, while the GMRES is called the inner iteration. Note that if the tolerance of the inner GMRES is set very low, say to machine precision, then the outer classical refinement iteration will perform only one iteration and thus this option will behave like CUSOLVER_IRS_REFINE_GMRES.

CUSOLVER_IRS_REFINE_GMRES_GMRES: Similar to CUSOLVER_IRS_REFINE_CLASSICAL_GMRES, which consists of a classical refinement process that uses GMRES to solve the inner correction system; here it is a GMRES (Generalized Minimal Residual) based iterative refinement solver that uses another GMRES internally to solve the preconditioned system.

2.2.1.7. cusolverDnIRSParams_t

This is a pointer type to an opaque cusolverDnIRSParams_t structure, which holds parameters for the iterative refinement linear solvers such as cusolverDnXgesv(). Use the corresponding helper functions described below to either Create/Destroy this structure or Set/Get solver parameters.

2.2.1.8. cusolverDnIRSInfos_t

This is a pointer type to an opaque cusolverDnIRSInfos_t structure, which holds information about the performed call to an iterative refinement linear solver (e.g., cusolverDnXgesv()). Use the corresponding helper functions described below to either Create/Destroy this structure or retrieve solve information.

2.2.1.9. cusolverDnFunction_t

The cusolverDnFunction_t type indicates which routine needs to be configured by cusolverDnSetAdvOptions(). The value CUSOLVERDN_GETRF corresponds to the routine Getrf.

Value Meaning
CUSOLVERDN_GETRF: Corresponds to Getrf.

2.2.1.10. cusolverAlgMode_t

The cusolverAlgMode_t type indicates which algorithm is selected by cusolverDnSetAdvOptions(). The set of algorithms supported for each routine is described in detail along with the routine's documentation. The default algorithm is CUSOLVER_ALG_0. The user can also provide NULL to use the default algorithm.

2.2.1.11. cusolverStatus_t

This is the same as cusolverStatus_t in the sparse LAPACK section.

2.2.1.12. cusolverDnLoggerCallback_t

cusolverDnLoggerCallback_t is a callback function pointer type.

Parameter Memory In/out Description
logLevel: output. See cuSOLVERDn Logging.
functionName: output. The name of the API that logged this message.
message: output. The log message.

Use the following function to set the callback function: cusolverDnLoggerSetCallback().
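The logging facility described in Section 2.1.8 can also be driven programmatically through the logger helper functions documented later in this reference. The sketch below routes API traces of cuSolverDN calls to a file; a custom callback installed with cusolverDnLoggerSetCallback() can be used instead when the messages (logLevel, functionName, message, as listed above) should be handled by the application itself. The file name is an arbitrary example.

    /* log cuSolverDN activity to a file instead of stdout */
    cusolverDnLoggerOpenFile("cusolverdn_trace.log");
    cusolverDnLoggerSetLevel(5);          /* 5 = API Trace, see Section 2.1.8 */
    cusolverDnLoggerSetMask(1 | 2 | 16);  /* errors + trace + API trace */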
2.2.1.13. cusolverDeterministicMode_t

The cusolverDeterministicMode_t type indicates whether multiple cuSolver function executions with the same input have the same bitwise equal result (deterministic) or might have bitwise different results (non-deterministic). In comparison to cublasAtomicsMode_t, which only covers the usage of atomic functions, cusolverDeterministicMode_t covers all non-deterministic programming patterns. The deterministic mode can be set and queried using the cusolverDnSetDeterministicMode() and cusolverDnGetDeterministicMode() routines, respectively.

Value Meaning
CUSOLVER_DETERMINISTIC_RESULTS: Compute deterministic results.
CUSOLVER_ALLOW_NON_DETERMINISTIC_RESULTS: Allow non-deterministic results.

2.2.1.14. cusolverStorevMode_t

Specifies how the vectors which define the elementary reflectors are stored.

Value Meaning
CUBLAS_STOREV_COLUMNWISE: Columnwise.
CUBLAS_STOREV_ROWWISE: Rowwise.

2.2.1.15. cusolverDirectMode_t

Specifies the order in which the elementary reflectors are multiplied to form the block reflector.

Value Meaning
CUBLAS_DIRECT_FORWARD: Forward.
CUBLAS_DIRECT_BACKWARD: Backward.

2.2.2. cuSolverSP Types

The float, double, cuComplex, and cuDoubleComplex data types are supported. The first two are standard C data types, while the last two are exported from cuComplex.h.

2.2.2.1. cusolverSpHandle_t

This is a pointer type to an opaque cuSolverSP context, which the user must initialize by calling cusolverSpCreate() prior to calling any other library function. An un-initialized handle object will lead to unexpected behavior, including crashes of cuSolverSP. The handle created and returned by cusolverSpCreate() must be passed to every cuSolverSP function.

2.2.2.2. cusparseMatDescr_t

We have chosen to keep the same structure as exists in cuSPARSE to describe the shape and properties of a matrix. This enables calls to either cuSPARSE or cuSOLVER using the same matrix description.

    typedef struct {
        cusparseMatrixType_t MatrixType;
        cusparseFillMode_t FillMode;
        cusparseDiagType_t DiagType;
        cusparseIndexBase_t IndexBase;
    } cusparseMatDescr_t;

Please read the documentation of the cuSPARSE library to understand each field of cusparseMatDescr_t.

2.2.2.3. cusolverStatus_t

This is a status type returned by the library functions and it can have the following values.

CUSOLVER_STATUS_SUCCESS: The operation completed successfully.

CUSOLVER_STATUS_NOT_INITIALIZED: The cuSolver library was not initialized. This is usually caused by the lack of a prior call, an error in the CUDA Runtime API called by the cuSolver routine, or an error in the hardware setup. To correct: call cusolverDnCreate() prior to the function call; and check that the hardware, an appropriate version of the driver, and the cuSolver library are correctly installed.

CUSOLVER_STATUS_ALLOC_FAILED: Resource allocation failed inside the cuSolver library. This is usually caused by a cudaMalloc() failure. To correct: prior to the function call, deallocate previously allocated memory as much as possible.

CUSOLVER_STATUS_INVALID_VALUE: An unsupported value or parameter was passed to the function (a negative vector size, for example). To correct: ensure that all the parameters being passed have valid values.

CUSOLVER_STATUS_ARCH_MISMATCH: The function requires a feature absent from the device architecture; usually caused by the lack of support for atomic operations or double precision. To correct: compile and run the application on a device with compute capability 5.0 or above.

CUSOLVER_STATUS_EXECUTION_FAILED: The GPU program failed to execute. This is often caused by a launch failure of the kernel on the GPU, which can have multiple causes. To correct: check that the hardware, an appropriate version of the driver, and the cuSolver library are correctly installed.

CUSOLVER_STATUS_INTERNAL_ERROR: An internal cuSolver operation failed. This error is usually caused by a cudaMemcpyAsync() failure. To correct: check that the hardware, an appropriate version of the driver, and the cuSolver library are correctly installed. Also, check that the memory passed as a parameter to the routine is not being deallocated prior to the routine's completion.

CUSOLVER_STATUS_MATRIX_TYPE_NOT_SUPPORTED: The matrix type is not supported by this function. This is usually caused by passing an invalid matrix descriptor to the function. To correct: check that the fields in descrA were set correctly.

CUSOLVER_STATUS_NOT_SUPPORTED: The parameter combination is not supported, e.g. the batched version is not supported or M < N is not supported. To correct: consult the documentation, and use a supported configuration.

2.2.3. cuSolverRF Types

cuSolverRF only supports double.

2.2.3.1. cusolverRfHandle_t

The cusolverRfHandle_t is a pointer to an opaque data structure that contains the cuSolverRF library handle. The user must initialize the handle by calling cusolverRfCreate() prior to any other cuSolverRF library calls. The handle is passed to all other cuSolverRF library calls.

2.2.3.2.
cusolverRfMatrixFormat_t The cusolverRfMatrixFormat_t is an enum that indicates the input/output matrix format assumed by the cusolverRfSetupDevice(), cusolverRfSetupHost(), cusolverRfResetValues(), cusolveRfExtractBundledFactorsHost() and cusolverRfExtractSplitFactorsHost() routines. Value Meaning CUSOLVER_MATRIX_FORMAT_CSR Matrix format CSR is assumed. (default) CUSOLVER_MATRIX_FORMAT_CSC Matrix format CSC is assumed. 2.2.3.3. cusolverRfNumericBoostReport_t The cusolverRfNumericBoostReport_t is an enum that indicates whether numeric boosting (of the pivot) was used during the cusolverRfRefactor() and cusolverRfSolve() routines. The numeric boosting is disabled by default. Value Meaning CUSOLVER_NUMERIC_BOOST_NOT_USED Numeric boosting not used. (default) CUSOLVER_NUMERIC_BOOST_USED Numeric boosting used. 2.2.3.4. cusolverRfResetValuesFastMode_t The cusolverRfResetValuesFastMode_t is an enum that indicates the mode used for the cusolverRfResetValues() routine. The fast mode requires extra memory and is recommended only if very fast calls to cusolverRfResetValues() are needed. Value Meaning CUSOLVER_RESET_VALUES_FAST_MODE_OFF Fast mode disabled. (default) CUSOLVER_RESET_VALUES_FAST_MODE_ON Fast mode enabled. 2.2.3.5. cusolverRfFactorization_t The cusolverRfFactorization_t is an enum that indicates which (internal) algorithm is used for refactorization in the cusolverRfRefactor() routine. Value Meaning CUSOLVER_FACTORIZATION_ALG0 Algorithm 0. (default) CUSOLVER_FACTORIZATION_ALG1 Algorithm 1. CUSOLVER_FACTORIZATION_ALG2 Algorithm 2. Domino-based scheme. 2.2.3.6. cusolverRfTriangularSolve_t The cusolverRfTriangularSolve_t is an enum that indicates which (internal) algorithm is used for triangular solve in the cusolverRfSolve() routine. Value Meaning CUSOLVER_TRIANGULAR_SOLVE_ALG1 Algorithm 1. (default) CUSOLVER_TRIANGULAR_SOLVE_ALG2 Algorithm 2. Domino-based scheme. CUSOLVER_TRIANGULAR_SOLVE_ALG3 Algorithm 3. Domino-based scheme. 2.2.3.7. cusolverRfUnitDiagonal_t The cusolverRfUnitDiagonal_t is an enum that indicates whether and where the unit diagonal is stored in the input/output triangular factors in the cusolverRfSetupDevice(), cusolverRfSetupHost() and cusolverRfExtractSplitFactorsHost() routines. Value Meaning CUSOLVER_UNIT_DIAGONAL_STORED_L Unit diagonal is stored in lower triangular factor. (default) CUSOLVER_UNIT_DIAGONAL_STORED_U Unit diagonal is stored in upper triangular factor. CUSOLVER_UNIT_DIAGONAL_ASSUMED_L Unit diagonal is assumed in lower triangular factor. CUSOLVER_UNIT_DIAGONAL_ASSUMED_U Unit diagonal is assumed in upper triangular factor. 2.2.3.8. cusolverStatus_t The cusolverStatus_t is an enum that indicates success or failure of the cuSolverRF library call. It is returned by all the cuSolver library routines, and it uses the same enumerated values as the sparse and dense Lapack routines. 2.3. cuSolver Formats Reference 2.3.1. Index Base Format Both one-based and zero-based indexing are supported in cuSolver. 2.3.2. Vector (Dense) Format The vectors are assumed to be stored linearly in memory. For example, the vector \(x = \begin{pmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \\ \end{pmatrix}\) is represented as \(\begin{pmatrix} x_{1} & x_{2} & \ldots & x_{n} \\ \end{pmatrix}\) 2.3.3. Matrix (Dense) Format The dense matrices are assumed to be stored in column-major order in memory. The sub-matrix can be accessed using the leading dimension of the original matrix. 
For example, the m*n (sub-)matrix \(\begin{pmatrix} a_{1,1} & \ldots & a_{1,n} \\ a_{2,1} & \ldots & a_{2,n} \\ \vdots & & \\ a_{m,1} & \ldots & a_{m,n} \\ \end{pmatrix}\) is represented as \(\begin{pmatrix} a_{1,1} & \ldots & a_{1,n} \\ a_{2,1} & \ldots & a_{2,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \ldots & a_{m,n} \\ \vdots & \ddots & \vdots \\ a_{{lda},1} & \ldots & a_{{lda},n} \\ \end{pmatrix}\) with its elements arranged linearly in memory as \(\begin{pmatrix} a_{1,1} & a_{2,1} & \ldots & a_{m,1} & \ldots & a_{{lda},1} & \ldots & a_{1,n} & a_{2,n} & \ldots & a_{m,n} & \ldots & a_{{lda},n} \\ \end{pmatrix}\) where lda ≥ m is the leading dimension of A. 2.3.4. Matrix (CSR) Format In CSR format the matrix is represented by the following parameters: Parameter Type Size Meaning n (int) The number of rows (and columns) in the matrix. nnz (int) The number of non-zero elements in the matrix. csrRowPtr (int *) n+1 The array of offsets corresponding to the start of each row in the arrays csrColInd and csrVal. This array has also an extra entry at the end that stores the number of non-zero elements in the matrix. csrColInd (int *) nnz The array of column indices corresponding to the non-zero elements in the matrix. It is assumed that this array is sorted by row and by column within each row. csrVal (S|D|C|Z) nnz The array of values corresponding to the non-zero elements in the matrix. It is assumed that this array is sorted by row and by column within each row. Note that in our CSR format, sparse matrices are assumed to be stored in row-major order, in other words, the index arrays are first sorted by row indices and then within each row by column indices. Also it is assumed that each pair of row and column indices appears only once. For example, the 4x4 matrix \(A = \begin{pmatrix} {1.0} & {3.0} & {0.0} & {0.0} \\ {0.0} & {4.0} & {6.0} & {0.0} \\ {2.0} & {5.0} & {7.0} & {8.0} \\ {0.0} & {0.0} & {0.0} & {9.0} \\ \end{pmatrix}\) is represented as \({csrRowPtr} = \begin{pmatrix} 0 & 2 & 4 & 8 & 9 \\ \end{pmatrix}\) \({csrColInd} = \begin{pmatrix} 0 & 1 & 1 & 2 & 0 & 1 & 2 & 3 & 3 \\ \end{pmatrix}\) \({csrVal} = \begin{pmatrix} 1.0 & 3.0 & 4.0 & 6.0 & 2.0 & 5.0 & 7.0 & 8.0 & 9.0 \\ \end{pmatrix}\) 2.3.5. Matrix (CSC) Format In CSC format the matrix is represented by the following parameters: Parameter Type Size Meaning n (int) The number of rows (and columns) in the matrix. nnz (int) The number of non-zero elements in the matrix. cscColPtr (int *) n+1 The array of offsets corresponding to the start of each column in the arrays cscRowInd and cscVal. This array has also an extra entry at the end that stores the number of non-zero elements in the matrix. cscRowInd (int *) nnz The array of row indices corresponding to the non-zero elements in the matrix. It is assumed that this array is sorted by column and by row within each column. cscVal (S|D|C|Z) nnz The array of values corresponding to the non-zero elements in the matrix. It is assumed that this array is sorted by column and by row within each column. Note that in our CSC format, sparse matrices are assumed to be stored in column-major order, in other words, the index arrays are first sorted by column indices and then within each column by row indices. Also it is assumed that each pair of row and column indices appears only once. 
For example, the 4x4 matrix \(A = \begin{pmatrix} {1.0} & {3.0} & {0.0} & {0.0} \\ {0.0} & {4.0} & {6.0} & {0.0} \\ {2.0} & {5.0} & {7.0} & {8.0} \\ {0.0} & {0.0} & {0.0} & {9.0} \\ \end{pmatrix}\) is represented as \({cscColPtr} = \begin{pmatrix} 0 & 2 & 5 & 7 & 9 \\ \end{pmatrix}\) \({cscRowInd} = \begin{pmatrix} 0 & 2 & 0 & 1 & 2 & 1 & 2 & 2 & 3 \\ \end{pmatrix}\) \({cscVal} = \begin{pmatrix} 1.0 & 2.0 & 3.0 & 4.0 & 5.0 & 6.0 & 7.0 & 8.0 & 9.0 \\ \end{pmatrix}\) 2.4. cuSolverDN: dense LAPACK Function Reference This section describes the API of cuSolverDN, which provides a subset of dense LAPACK functions. 2.4.1. cuSolverDN Helper Function Reference The cuSolverDN helper functions are described in this section. 2.4.1.1. cusolverDnCreate() cusolverStatus_tcusolverDnCreate(cusolverDnHandle_t *handle); This function initializes the cuSolverDN library and creates a handle on the cuSolverDN context. It must be called before any other cuSolverDN API function is invoked. It allocates hardware resources necessary for accessing the GPU. This function allocates 4 MiB or 32 MiB of memory (for GPUs with Compute Capability of 9.0 and higher), which will be used as the cuBLAS workspace for the first user-defined stream on which cusolverDnSetStream() is called. For the default stream and in all the other cases, cuBLAS will manage its own workspace. Parameter Memory In/out Meaning handle host output The pointer to the handle to the cuSolverDN context. Status Returned CUSOLVER_STATUS_SUCCESS The initialization succeeded. CUSOLVER_STATUS_NOT_INITIALIZED The CUDA Runtime initialization failed. CUSOLVER_STATUS_ALLOC_FAILED The resources could not be allocated. CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. 2.4.1.2. cusolverDnDestroy() cusolverStatus_tcusolverDnDestroy(cusolverDnHandle_t handle); This function releases CPU-side resources used by the cuSolverDN library. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. Status Returned CUSOLVER_STATUS_SUCCESS The shutdown succeeded. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. 2.4.1.3. cusolverDnSetStream() cusolverStatus_tcusolverDnSetStream(cusolverDnHandle_t handle, cudaStream_t streamId) This function sets the stream to be used by the cuSolverDN library to execute its routines. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. streamId host input The stream to be used by the library. Status Returned CUSOLVER_STATUS_SUCCESS The stream was set successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. 2.4.1.4. cusolverDnGetStream() cusolverStatus_tcusolverDnGetStream(cusolverDnHandle_t handle, cudaStream_t *streamId) This function queries the stream to be used by the cuSolverDN library to execute its routines. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. streamId host output The stream which is used by handle. Status Returned CUSOLVER_STATUS_SUCCESS The stream was set successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. 2.4.1.5. cusolverDnLoggerSetCallback() cusolverStatus_t cusolverDnLoggerSetCallback(cusolverDnLoggerCallback_t callback); This function sets the logging callback function. Status Returned CUSOLVER_STATUS_SUCCESS If the callback function was successfully set. See cusolverStatus_t for a complete list of valid return codes. 2.4.1.6. 
cusolverDnLoggerSetFile() cusolverStatus_t cusolverDnLoggerSetFile(FILE* file); This function sets the logging output file. Note: once registered using this function call, the provided file handle must not be closed unless the function is called again to switch to a different file handle. Parameter Memory In/out Meaning file input Pointer to an open file. File should have write permission. Status Returned CUSOLVER_STATUS_SUCCESS If logging file was successfully set. See cusolverStatus_t for a complete list of valid return codes. 2.4.1.7. cusolverDnLoggerOpenFile() cusolverStatus_t cusolverDnLoggerOpenFile(const char* logFile); This function opens a logging output file in the given path. Parameter Memory In/out Meaning logFile input Path of the logging output file. Status Returned CUSOLVER_STATUS_SUCCESS If the logging file was successfully opened. See cusolverStatus_t for a complete list of valid return codes. 2.4.1.8. cusolverDnLoggerSetLevel() cusolverStatus_t cusolverDnLoggerSetLevel(int level); This function sets the value of the logging level. Parameter Memory In/out Meaning level input Value of the logging level. See cuSOLVERDn Logging. Status Returned CUSOLVER_STATUS_INVALID_VALUE If the value was not a valid logging level. See cuSOLVERDn Logging. CUSOLVER_STATUS_SUCCESS If the logging level was successfully set. See cusolverStatus_t for a complete list of valid return codes. 2.4.1.9. cusolverDnLoggerSetMask() cusolverStatus_t cusolverDnLoggerSetMask(int mask); This function sets the value of the logging mask. Parameter Memory In/out Meaning mask input Value of the logging mask. See cuSOLVERDn Logging. Status Returned CUSOLVER_STATUS_SUCCESS If the logging mask was successfully set. See cusolverStatus_t for a complete list of valid return codes. 2.4.1.10. cusolverDnLoggerForceDisable() cusolverStatus_t cusolverDnLoggerForceDisable(); This function disables logging for the entire run. Status Returned CUSOLVER_STATUS_SUCCESS If logging was successfully disabled. See cusolverStatus_t for a complete list of valid return codes. 2.4.1.11. cusolverDnSetDeterministicMode() cusolverStatus_tcusolverDnSetDeterministicMode(cusolverDnHandle_t handle, cusolverDeterministicMode_t mode) This function sets the deterministic mode of all cuSolverDN functions for handle. For improved performance, non-deterministic results can be allowed. Affected functions are cusolverDn<t>geqrf(), cusolverDn<t>syevd(), cusolverDn<t>syevdx(), cusolverDn<t>gesvd() (if m > n), cusolverDn<t>gesvdj(), cusolverDnXgeqrf(), cusolverDnXsyevd(), cusolverDnXsyevdx(), cusolverDnXgesvd() (if m > n), cusolverDnXgesvdr() and cusolverDnXgesvdp(). Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. mode host input The deterministic mode to be used with handle. Status Returned CUSOLVER_STATUS_SUCCESS The mode was set successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INTERNAL_ERROR An internal error occurred. 2.4.1.12. cusolverDnGetDeterministicMode() cusolverStatus_tcusolverDnGetDeterministicMode(cusolverDnHandle_t handle, cusolverDeterministicMode_t* mode) This function queries the deterministic mode which is set for handle. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. mode host output The deterministic mode of handle. Status Returned CUSOLVER_STATUS_SUCCESS The mode was set successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE mode is a NULL pointer. 
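The helper functions above are typically combined with the _bufferSize convention of Section 2.1.7 as in the following sketch, which factorizes a dense symmetric positive definite matrix already resident on the device. The names d_A, n, lda and stream are placeholders assumed to be prepared by the caller; error checking is omitted for brevity.

    cusolverDnHandle_t handle = NULL;
    int lwork = 0, info = 0;
    int *d_info = NULL;
    double *d_work = NULL;

    cusolverDnCreate(&handle);
    cusolverDnSetStream(handle, stream);          /* optional, see Section 2.1.3 */

    /* query the workspace size first (Section 2.1.7), then allocate it */
    cusolverDnDpotrf_bufferSize(handle, CUBLAS_FILL_MODE_LOWER, n, d_A, lda, &lwork);
    cudaMalloc((void **)&d_work, sizeof(double) * lwork);
    cudaMalloc((void **)&d_info, sizeof(int));

    /* Cholesky factorization; d_A is overwritten by the lower triangular factor */
    cusolverDnDpotrf(handle, CUBLAS_FILL_MODE_LOWER, n, d_A, lda, d_work, lwork, d_info);

    /* info follows the LAPACK convention of Section 2.1.6 (0 means success) */
    cudaMemcpy(&info, d_info, sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(d_work);
    cudaFree(d_info);
    cusolverDnDestroy(handle);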
2.4.1.13. cusolverDnCreateSyevjInfo() cusolverStatus_tcusolverDnCreateSyevjInfo( syevjInfo_t *info); This function creates and initializes the structure of syevj, syevjBatched and sygvj to default values. Parameter Memory In/out Meaning info host output The pointer to the structure of syevj. Status Returned CUSOLVER_STATUS_SUCCESS The structure was initialized successfully. CUSOLVER_STATUS_ALLOC_FAILED The resources could not be allocated. 2.4.1.14. cusolverDnDestroySyevjInfo() cusolverStatus_tcusolverDnDestroySyevjInfo( syevjInfo_t info); This function destroys and releases any memory required by the structure. Parameter Memory In/out Meaning info host input The structure of syevj. Status Returned CUSOLVER_STATUS_SUCCESS The resources were released successfully. 2.4.1.15. cusolverDnXsyevjSetTolerance() cusolverStatus_tcusolverDnXsyevjSetTolerance( syevjInfo_t info, double tolerance) This function configures tolerance of syevj. Parameter Memory In/out Meaning info host in/out The pointer to the structure of syevj. tolerance host input Accuracy of numerical eigenvalues. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. 2.4.1.16. cusolverDnXsyevjSetMaxSweeps() cusolverStatus_tcusolverDnXsyevjSetMaxSweeps( syevjInfo_t info, int max_sweeps) This function configures maximum number of sweeps in syevj. The default value is 100. Parameter Memory In/out Meaning info host in/out The pointer to the structure of syevj. max_sweeps host input Maximum number of sweeps. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. 2.4.1.17. cusolverDnXsyevjSetSortEig() cusolverStatus_tcusolverDnXsyevjSetSortEig( syevjInfo_t info, int sort_eig) If sort_eig is zero, the eigenvalues are not sorted. This function only works for syevjBatched. syevj and sygvj always sort eigenvalues in ascending order. By default, eigenvalues are always sorted in ascending order. Parameter Memory In/out Meaning info host in/out The pointer to the structure of syevj. sort_eig host input If sort_eig is zero, the eigenvalues are not sorted. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. 2.4.1.18. cusolverDnXsyevjGetResidual() cusolverStatus_tcusolverDnXsyevjGetResidual( cusolverDnHandle_t handle, syevjInfo_t info, double *residual) This function reports residual of syevj or sygvj. It does not support syevjBatched. If the user calls this function after syevjBatched, the error CUSOLVER_STATUS_NOT_SUPPORTED is returned. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. info host input The pointer to the structure of syevj. residual host output Residual of syevj. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_SUPPORTED Does not support batched version. 2.4.1.19. cusolverDnXsyevjGetSweeps() cusolverStatus_tcusolverDnXsyevjGetSweeps( cusolverDnHandle_t handle, syevjInfo_t info, int *executed_sweeps) This function reports number of executed sweeps of syevj or sygvj. It does not support syevjBatched. If the user calls this function after syevjBatched, the error CUSOLVER_STATUS_NOT_SUPPORTED is Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. info host input The pointer to the structure of syevj. executed_sweeps host output Number of executed sweeps. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_SUPPORTED Does not support batched version. 2.4.1.20. 
cusolverDnCreateGesvdjInfo() cusolverStatus_tcusolverDnCreateGesvdjInfo( gesvdjInfo_t *info); This function creates and initializes the structure of gesvdj and gesvdjBatched to default values. Parameter Memory In/out Meaning info host output The pointer to the structure of gesvdj. Status Returned CUSOLVER_STATUS_SUCCESS The structure was initialized successfully. CUSOLVER_STATUS_ALLOC_FAILED The resources could not be allocated. 2.4.1.21. cusolverDnDestroyGesvdjInfo() cusolverStatus_tcusolverDnDestroyGesvdjInfo( gesvdjInfo_t info); This function destroys and releases any memory required by the structure. Parameter Memory In/out Meaning info host input The structure of gesvdj. Status Returned CUSOLVER_STATUS_SUCCESS The resources were released successfully. 2.4.1.22. cusolverDnXgesvdjSetTolerance() cusolverStatus_tcusolverDnXgesvdjSetTolerance( gesvdjInfo_t info, double tolerance) This function configures tolerance of gesvdj. Parameter Memory In/out Meaning info host in/out The pointer to the structure of gesvdj. tolerance host input Accuracy of numerical singular values. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. 2.4.1.23. cusolverDnXgesvdjSetMaxSweeps() cusolverStatus_tcusolverDnXgesvdjSetMaxSweeps( gesvdjInfo_t info, int max_sweeps) This function configures the maximum number of sweeps in gesvdj. The default value is 100. Parameter Memory In/out Meaning info host in/out The pointer to the structure of gesvdj. max_sweeps host input Maximum number of sweeps. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. 2.4.1.24. cusolverDnXgesvdjSetSortEig() cusolverStatus_tcusolverDnXgesvdjSetSortEig( gesvdjInfo_t info, int sort_svd) If sort_svd is zero, the singular values are not sorted. This function only works for gesvdjBatched. gesvdj always sorts singular values in descending order. By default, singular values are always sorted in descending order. Parameter Memory In/out Meaning info host in/out The pointer to the structure of gesvdj. sort_svd host input If sort_svd is zero, the singular values are not sorted. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. 2.4.1.25. cusolverDnXgesvdjGetResidual() cusolverStatus_tcusolverDnXgesvdjGetResidual( cusolverDnHandle_t handle, gesvdjInfo_t info, double *residual) This function reports residual of gesvdj. It does not support gesvdjBatched. If the user calls this function after gesvdjBatched, the error CUSOLVER_STATUS_NOT_SUPPORTED is returned. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. info host input The pointer to the structure of gesvdj. residual host output Residual of gesvdj. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_SUPPORTED Does not support batched version 2.4.1.26. cusolverDnXgesvdjGetSweeps() cusolverStatus_tcusolverDnXgesvdjGetSweeps( cusolverDnHandle_t handle, gesvdjInfo_t info, int *executed_sweeps) This function reports number of executed sweeps of gesvdj. It does not support gesvdjBatched. If the user calls this function after gesvdjBatched, the error CUSOLVER_STATUS_NOT_SUPPORTED is returned. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. info host input The pointer to the structure of gesvdj. executed_sweeps host output Number of executed sweeps. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_SUPPORTED Does not support batched version 2.4.1.27. 
cusolverDnIRSParamsCreate() cusolverStatus_tcusolverDnIRSParamsCreate(cusolverDnIRSParams_t *params); This function creates and initializes the structure of parameters for an IRS solver such as the cusolverDnIRSXgesv() or the cusolverDnIRSXgels() functions to default values. The params structure created by this function can be used by one or more call to the same or to a different IRS solver. Note that in CUDA 10.2, the behavior was different and a new params structure was needed to be created per each call to an IRS solver. Also note that the user can also change configurations of the params and then call a new IRS instance, but be careful that the previous call was done because any change to the configuration before the previous call was done could affect it. Parameter Memory In/out Meaning params host output Pointer to the cusolverDnIRSParams_t Params structure Status Returned CUSOLVER_STATUS_SUCCESS The structure was created and initialized successfully. CUSOLVER_STATUS_ALLOC_FAILED The resources could not be allocated. 2.4.1.28. cusolverDnIRSParamsDestroy() cusolverStatus_tcusolverDnIRSParamsDestroy(cusolverDnIRSParams_t params); This function destroys and releases any memory required by the Params structure. Parameter Memory In/out Meaning params host input The cusolverDnIRSParams_t Params structure. Status Returned CUSOLVER_STATUS_SUCCESS The resources were released successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. CUSOLVER_STATUS_IRS_INFOS_NOT_DESTROYED Not all the Infos structure associated with this Params structure have been destroyed yet. 2.4.1.29. cusolverDnIRSParamsSetSolverPrecisions() cusolverStatus_t cusolverDnIRSParamsSetSolverPrecisions( cusolverDnIRSParams_t params, cusolverPrecType_t solver_main_precision, cusolverPrecType_t solver_lowest_precision ); This function sets both the main and the lowest precision for the Iterative Refinement Solver (IRS). By main precision, we mean the precision of the Input and Output datatype. By lowest precision, we mean the solver is allowed to use as lowest computational precision during the LU factorization process. Note that the user has to set both the main and lowest precision before the first call to the IRS solver because they are NOT set by default with the params structure creation, as it depends on the Input Output data type and user request. It is a wrapper to both cusolverDnIRSParamsSetSolverMainPrecision() and cusolverDnIRSParamsSetSolverLowestPrecision(). All possible combinations of main/lowest precision are described in the table below. Usually the lowest precision defines the speedup that can be achieved. The ratio of the performance of the lowest precision over the main precision (e.g., Inputs/Outputs datatype) define the upper bound of the speedup that could be obtained. More precisely, it depends on many factors, but for large matrices sizes, it is the ratio of the matrix-matrix rank-k product (e.g., GEMM where K is 256 and M=N=size of the matrix) that define the possible speedup. For instance, if the inout precision is real double precision CUSOLVER_R_64F and the lowest precision is CUSOLVER_R_32F, then we can expect a speedup of at most 2X for large problem sizes. If the lowest precision was CUSOLVER_R_16F, then we can expect 3X-4X. A reasonable strategy should take the number of right-hand sides, the size of the matrix as well as the convergence rate into account. Parameter Memory In/out Meaning params host in/out The cusolverDnIRSParams_t Params structure. 
solver_main_precision host input Allowed Inputs/Outputs datatype (for example CUSOLVER_R_FP64 for a real double precision data). See the table below for the supported precisions. solver_lowest_precision host input Allowed lowest compute type (for example CUSOLVER_R_16F for half precision computation). See the table below for the supported precisions. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. Inputs/Outputs Data Type (e.g., main precision) Supported values for the lowest precision CUSOLVER_C_64F CUSOLVER_C_64F, CUSOLVER_C_32F, CUSOLVER_C_16F, CUSOLVER_C_16BF, CUSOLVER_C_TF32 CUSOLVER_C_32F CUSOLVER_C_32F, CUSOLVER_C_16F, CUSOLVER_C_16BF, CUSOLVER_C_TF32 CUSOLVER_R_64F CUSOLVER_R_64F, CUSOLVER_R_32F, CUSOLVER_R_16F, CUSOLVER_R_16BF, CUSOLVER_R_TF32 CUSOLVER_R_32F CUSOLVER_R_32F, CUSOLVER_R_16F, CUSOLVER_R_16BF, CUSOLVER_R_TF32 2.4.1.30. cusolverDnIRSParamsSetSolverMainPrecision() cusolverStatus_tcusolverDnIRSParamsSetSolverMainPrecision( cusolverDnIRSParams_t params, cusolverPrecType_t solver_main_precision); This function sets the main precision for the Iterative Refinement Solver (IRS). By main precision, we mean, the type of the Input and Output data. Note that the user has to set both the main and lowest precision before a first call to the IRS solver because they are NOT set by default with the params structure creation, as it depends on the Input Output data type and user request. user can set it by either calling this function or by calling cusolverDnIRSParamsSetSolverPrecisions() which set both the main and the lowest precision together. All possible combinations of main/lowest precision are described in the table in the cusolverDnIRSParamsSetSolverPrecisions() section above. Parameter Memory In/ Meaning params host in/ The cusolverDnIRSParams_t Params structure. solver_main_precision host input Allowed Inputs/Outputs datatype (for example CUSOLVER_R_FP64 for a real double precision data). See the table in the cusolverDnIRSParamsSetSolverPrecisions() section above for the supported precisions. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. 2.4.1.31. cusolverDnIRSParamsSetSolverLowestPrecision() cusolverStatus_tcusolverDnIRSParamsSetSolverLowestPrecision( cusolverDnIRSParams_t params, cusolverPrecType_t lowest_precision_type); This function sets the lowest precision that will be used by Iterative Refinement Solver. By lowest precision, we mean the solver is allowed to use as lowest computational precision during the LU factorization process. Note that the user has to set both the main and lowest precision before a first call to the IRS solver because they are NOT set by default with the params structure creation, as it depends on the Input Output data type and user request. Usually the lowest precision defines the speedup that can be achieved. The ratio of the performance of the lowest precision over the main precision (e.g., Inputs/Outputs datatype) define somehow the upper bound of the speedup that could be obtained. More precisely, it depends on many factors, but for large matrices sizes, it is the ratio of the matrix-matrix rank-k product (e.g., GEMM where K is 256 and M=N=size of the matrix) that define the possible speedup. 
For instance, if the Inputs/Outputs precision is real double precision CUSOLVER_R_64F and the lowest precision is CUSOLVER_R_32F, then we can expect a speedup of at most 2X for large problem sizes. If the lowest precision was CUSOLVER_R_16F, then we can expect 3X-4X. A reasonable strategy should take the number of right-hand sides, the size of the matrix, as well as the convergence rate into account.

Parameter Memory In/out Meaning
params: host, in/out. The cusolverDnIRSParams_t Params structure.
lowest_precision_type: host, input. Allowed lowest compute type (for example CUSOLVER_R_16F for half precision computation). See the table in the cusolverDnIRSParamsSetSolverPrecisions() section above for the supported precisions.

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED: The Params structure was not created.

2.4.1.32. cusolverDnIRSParamsSetRefinementSolver()

cusolverStatus_t cusolverDnIRSParamsSetRefinementSolver( cusolverDnIRSParams_t params, cusolverIRSRefinement_t solver);

This function sets the refinement solver to be used in the Iterative Refinement Solver functions such as the cusolverDnIRSXgesv() or the cusolverDnIRSXgels() functions. Note that the user has to set the refinement algorithm before a first call to the IRS solver because it is NOT set by default when the params structure is created. Details about the values that can be set and their meanings are described in the table below.

Parameter Memory In/out Meaning
params: host, in/out. The cusolverDnIRSParams_t Params structure.
solver: host, input. Type of the refinement solver to be used by the IRS solver such as cusolverDnIRSXgesv() or cusolverDnIRSXgels().

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED: The Params structure was not created.

Value Meaning

CUSOLVER_IRS_REFINE_NOT_SET: Solver is not set; this value is what is set when creating the params structure. The IRS solver will return an error.

CUSOLVER_IRS_REFINE_NONE: No refinement solver; the IRS solver performs a factorization followed by a solve without any refinement. For example, if the IRS solver was cusolverDnIRSXgesv(), this is equivalent to a Xgesv routine without refinement, where the factorization is carried out in the lowest precision. If, for example, the main precision was CUSOLVER_R_64F and the lowest was CUSOLVER_R_64F as well, then this is equivalent to a call to cusolverDnDgesv().

CUSOLVER_IRS_REFINE_CLASSICAL: Classical iterative refinement solver. Similar to the one used in LAPACK routines.

CUSOLVER_IRS_REFINE_GMRES: GMRES (Generalized Minimal Residual) based iterative refinement solver. In recent studies, the GMRES method has drawn the scientific community's attention for its ability to be used as a refinement solver that outperforms the classical iterative refinement method. Based on our experimentation, we recommend this setting.

CUSOLVER_IRS_REFINE_CLASSICAL_GMRES: Classical iterative refinement solver that uses GMRES (Generalized Minimal Residual) internally to solve the correction equation at each iteration. We call the classical refinement iteration the outer iteration, while the GMRES is called the inner iteration. Note that if the tolerance of the inner GMRES is set very low, say to machine precision, then the outer classical refinement iteration will perform only one iteration and thus this option will behave like CUSOLVER_IRS_REFINE_GMRES.

CUSOLVER_IRS_REFINE_GMRES_GMRES: Similar to CUSOLVER_IRS_REFINE_CLASSICAL_GMRES, which consists of a classical refinement process that uses GMRES to solve the inner correction system; here it is a GMRES (Generalized Minimal Residual) based iterative refinement solver that uses another GMRES internally to solve the preconditioned system.
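Putting the params helper functions together, the sketch below configures a params structure for a double precision system that is factorized in half precision with GMRES-based refinement. The tolerance and iteration limit shown are arbitrary illustration values, and error checking is omitted.

    cusolverDnIRSParams_t params;
    cusolverDnIRSParamsCreate(&params);

    /* main precision = FP64 inputs/outputs, lowest precision = FP16 factorization */
    cusolverDnIRSParamsSetSolverPrecisions(params, CUSOLVER_R_64F, CUSOLVER_R_16F);

    /* the refinement algorithm must be chosen explicitly, see the table above */
    cusolverDnIRSParamsSetRefinementSolver(params, CUSOLVER_IRS_REFINE_GMRES);

    /* optional: tighten the stopping tolerance and cap the iteration count */
    cusolverDnIRSParamsSetTol(params, 1.0e-12);
    cusolverDnIRSParamsSetMaxIters(params, 100);

    /* ... use params with an IRS solver such as cusolverDnIRSXgesv(), then ... */
    cusolverDnIRSParamsDestroy(params);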
2.4.1.33. cusolverDnIRSParamsSetTol()

cusolverStatus_t cusolverDnIRSParamsSetTol( cusolverDnIRSParams_t params, double val );

This function sets the tolerance for the refinement solver. By default it is such that all the RHS satisfy:

    RNRM < SQRT(N)*XNRM*ANRM*EPS*BWDMAX

where
• RNRM is the infinity-norm of the residual
• XNRM is the infinity-norm of the solution
• ANRM is the infinity-operator-norm of the matrix A
• EPS is the machine epsilon for the Inputs/Outputs datatype that matches LAPACK <X>LAMCH('Epsilon')
• BWDMAX is fixed to 1.0

The user can use this function to change the tolerance to a lower or higher value. Our goal is to give the user more control, so that every detail of the IRS solver can be investigated and controlled. Note that the tolerance value is always in real double precision whatever the Inputs/Outputs datatype is.

Parameter Memory In/out Meaning
params: host, in/out. The cusolverDnIRSParams_t Params structure.
val: host, input. Double precision real value to which the refinement tolerance will be set.

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED: The Params structure was not created.

2.4.1.34. cusolverDnIRSParamsSetTolInner()

cusolverStatus_t cusolverDnIRSParamsSetTolInner( cusolverDnIRSParams_t params, double val );

This function sets the tolerance for the inner refinement solver when the refinement solver consists of a two-levels solver (e.g., the CUSOLVER_IRS_REFINE_CLASSICAL_GMRES or CUSOLVER_IRS_REFINE_GMRES_GMRES cases). It is not referenced in the case of a one-level refinement solver such as CUSOLVER_IRS_REFINE_CLASSICAL or CUSOLVER_IRS_REFINE_GMRES. It is set to 1e-4 by default. For example, if the refinement solver was set to CUSOLVER_IRS_REFINE_CLASSICAL_GMRES, setting this tolerance means that the inner GMRES solver will converge to that tolerance at each outer iteration of the classical refinement solver. Our goal is to give the user more control, so that every detail of the IRS solver can be investigated and controlled. Note that the tolerance value is always in real double precision whatever the Inputs/Outputs datatype is.

Parameter Memory In/out Meaning
params: host, in/out. The cusolverDnIRSParams_t Params structure.
val: host, input. Double precision real value to which the tolerance of the inner refinement solver will be set.

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED: The Params structure was not created.

2.4.1.35. cusolverDnIRSParamsSetMaxIters()

cusolverStatus_t cusolverDnIRSParamsSetMaxIters( cusolverDnIRSParams_t params, int max_iters);

This function sets the total number of allowed refinement iterations after which the solver will stop. Total means all iterations, that is, the sum of the outer and the inner iterations (inner iterations are meaningful when a two-levels refinement solver is set).
Default value is set to 50. Our goal is to give the user more control such a way he can investigate and control every detail of the IRS solver. Parameter Memory In/out Meaning params host in/out The cusolverDnIRSParams_t Params structure. max_iters host input Maximum total number of iterations allowed for the refinement solver. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. 2.4.1.36. cusolverDnIRSParamsSetMaxItersInner() cusolverStatus_t cusolverDnIRSParamsSetMaxItersInner( cusolverDnIRSParams_t params, cusolver_int_t maxiters_inner ); This function sets the maximal number of iterations allowed for the inner refinement solver. It is not referenced in case of one level refinement solver such as CUSOLVER_IRS_REFINE_CLASSICAL or CUSOLVER_IRS_REFINE_GMRES. The inner refinement solver will stop after reaching either the inner tolerance or the MaxItersInner value. By default, it is set to 50. Note that this value could not be larger than the MaxIters since MaxIters is the total number of allowed iterations. Note that if the user calls cusolverDnIRSParamsSetMaxIters after calling this function, SetMaxIters has priority and will overwrite MaxItersInner to the minimum value of (MaxIters, MaxItersInner). Parameter Memory In/ Meaning params host in/ The cusolverDnIRSParams_t Params structure maxiters_inner host input Maximum number of allowed inner iterations for the inner refinement solver. Meaningful when the refinement solver is a two-levels solver such as CUSOLVER_IRS_REFINE_CLASSICAL_GMRES or CUSOLVER_IRS_REFINE_GMRES_GMRES. Value should be less or equal to MaxIters. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. CUSOLVER_STATUS_IRS_PARAMS_INVALID If the value was larger than MaxIters. 2.4.1.37. cusolverDnIRSParamsEnableFallback() cusolverStatus_t cusolverDnIRSParamsEnableFallback( cusolverDnIRSParams_t params ); This function enable the fallback to the main precision in case the Iterative Refinement Solver (IRS) failed to converge. In other term, if the IRS solver failed to converge, the solver will return a no convergence code (e.g., niter < 0), but can either return the non-convergent solution as it is (e.g., disable fallback) or can fallback (e.g., enable fallback) to the main precision (which is the precision of the Inputs/Outputs data) and solve the problem from scratch returning the good solution. This is the behavior by default, and it will guarantee that the IRS solver always provide the good solution. This function is provided because we provided cusolverDnIRSParamsDisableFallback which allows the user to disable the fallback and thus this function allow the user to re-enable it. Parameter Memory In/out Meaning params host in/out The cusolverDnIRSParams_t Params structure Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. 2.4.1.38. cusolverDnIRSParamsDisableFallback() cusolverStatus_t cusolverDnIRSParamsDisableFallback( cusolverDnIRSParams_t params ); This function disables the fallback to the main precision in case the Iterative Refinement Solver (IRS) failed to converge. 
In other term, if the IRS solver failed to converge, the solver will return a no convergence code (e.g., niter < 0), but can either return the non-convergent solution as it is (e.g., disable fallback) or can fallback (e.g., enable fallback) to the main precision (which is the precision of the Inputs/Outputs data) and solve the problem from scratch returning the good solution. This function disables the fallback and the returned solution is whatever the refinement solver was able to reach before it returns. Disabling fallback does not guarantee that the solution is the good one. However, if users want to keep getting the solution of the lower precision in case the IRS did not converge after certain number of iterations, they need to disable the fallback. The user can re-enable it by calling cusolverDnIRSParamsEnableFallback. Parameter Memory In/out Meaning params host in/out The cusolverDnIRSParams_t Params structure Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. 2.4.1.39. cusolverDnIRSParamsGetMaxIters() cusolverStatus_t cusolverDnIRSParamsGetMaxIters( cusolverDnIRSParams_t params, cusolver_int_t *maxiters ); This function returns the current setting in the params structure for the maximal allowed number of iterations (e.g., either the default MaxIters, or the one set by the user in case he set it using cusolverDnIRSParamsSetMaxIters). Note that this function returns the current setting in the params configuration and not to be confused with the cusolverDnIRSInfosGetMaxIters which return the maximal allowed number of iterations for a particular call to an IRS solver. To be clearer, the params structure can be used for many calls to an IRS solver. A user can change the allowed MaxIters between calls while the Infos structure in cusolverDnIRSInfosGetMaxIters contains information about a particular call and cannot be reused for different calls, and thus, cusolverDnIRSInfosGetMaxIters returns the allowed MaxIters for that call. Parameter Memory In/out Meaning params host in The cusolverDnIRSParams_t Params structure. maxiters host output The maximal number of iterations that is currently set. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The Params structure was not created. 2.4.1.40. cusolverDnIRSInfosCreate() cusolverStatus_tcusolverDnIRSInfosCreate( cusolverDnIRSInfos_t* infos ) This function creates and initializes the Infos structure that will hold the refinement information of an Iterative Refinement Solver (IRS) call. Such information includes the total number of iterations that was needed to converge (Niters), the outer number of iterations (meaningful when two-levels preconditioner such as CUSOLVER_IRS_REFINE_CLASSICAL_GMRES is used ), the maximal number of iterations that was allowed for that call, and a pointer to the matrix of the convergence history residual norms. The Infos structure needs to be created before a call to an IRS solver. The Infos structure is valid for only one call to an IRS solver, since it holds info about that solve and thus each solve will requires its own Infos structure. Parameter Memory In/out Meaning info host output Pointer to the cusolverDnIRSInfos_t Infos structure. Status Returned CUSOLVER_STATUS_SUCCESS The structure was initialized successfully. CUSOLVER_STATUS_ALLOC_FAILED The resources could not be allocated. 2.4.1.41. 
cusolverDnIRSInfosDestroy() cusolverStatus_tcusolverDnIRSInfosDestroy( cusolverDnIRSInfos_t infos ); This function destroys and releases any memory required by the Infos structure. This function destroys all the information (e.g., Niters performed, OuterNiters performed, residual history etc.) about a solver call; thus, this function should only be called after the user is finished with the information. Parameter Memory In/out Meaning info host in/out The cusolverDnIRSInfos_t Infos structure. Status Returned CUSOLVER_STATUS_SUCCESS The resources were released successfully. CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The Infos structure was not created. 2.4.1.42. cusolverDnIRSInfosGetMaxIters() cusolverStatus_t cusolverDnIRSInfosGetMaxIters( cusolverDnIRSInfos_t infos, cusolver_int_t *maxiters ); This function returns the maximal allowed number of iterations that was set for the corresponding call to the IRS solver. Note that this function returns the setting that was set when that call happened and is not to be confused with the cusolverDnIRSParamsGetMaxIters which returns the current setting in the params configuration structure. To be clearer, the params structure can be used for many calls to an IRS solver. A user can change the allowed MaxIters between calls while the Infos structure in cusolverDnIRSInfosGetMaxIters contains information about a particular call and cannot be reused for different calls, thus cusolverDnIRSInfosGetMaxIters returns the allowed MaxIters for that call. Parameter Memory In/out Meaning infos host in The cusolverDnIRSInfos_t Infos structure. maxiters host output The maximal number of iterations that is currently set. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The Infos structure was not created. 2.4.1.43. cusolverDnIRSInfosGetNiters() cusolverStatus_t cusolverDnIRSInfosGetNiters( cusolverDnIRSInfos_t infos, cusolver_int_t *niters ); This function returns the total number of iterations performed by the IRS solver. If it was negative, it means that the IRS solver did not converge and if the user did not disable the fallback to full precision, then the fallback to a full precision solution happened and solution is good. Please refer to the description of negative niters values in the corresponding IRS linear solver functions such as cusolverDnXgesv() or cusolverDnXgels(). Parameter Memory In/out Meaning infos host in The cusolverDnIRSInfos_t Infos structure. niters host output The total number of iterations performed by the IRS solver. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The Infos structure was not created. 2.4.1.44. cusolverDnIRSInfosGetOuterNiters() cusolverStatus_t cusolverDnIRSInfosGetOuterNiters( cusolverDnIRSInfos_t infos, cusolver_int_t *outer_niters ); This function returns the number of iterations performed by the outer refinement loop of the IRS solver. When the refinement solver consists of a one-level solver such as CUSOLVER_IRS_REFINE_CLASSICAL or CUSOLVER_IRS_REFINE_GMRES, it is the same as Niters. When the refinement solver consists of a two-levels solver such as CUSOLVER_IRS_REFINE_CLASSICAL_GMRES or CUSOLVER_IRS_REFINE_GMRES_GMRES, it is the number of iterations of the outer loop. Refer to the description of the cusolverIRSRefinement_t for more details. Parameter Memory In/out Meaning infos host in The cusolverDnIRSInfos_t Infos structure. 
outer_niters host output The number of iterations of the outer refinement loop of the IRS solver. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The Infos structure was not created. 2.4.1.45. cusolverDnIRSInfosRequestResidual() cusolverStatus_t cusolverDnIRSInfosRequestResidual( cusolverDnIRSInfos_t infos ); This function tells the IRS solver to store the convergence history (residual norms) of the refinement phase in a matrix that can be accessed via a pointer returned by the cusolverDnIRSInfosGetResidualHistory() function. Parameter Memory In/out Meaning infos host in The cusolverDnIRSInfos_t Infos structure Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The Infos structure was not created. 2.4.1.46. cusolverDnIRSInfosGetResidualHistory() cusolverStatus_tcusolverDnIRSInfosGetResidualHistory( cusolverDnIRSInfos_t infos, void **residual_history ); If the user called cusolverDnIRSInfosRequestResidual() before the call to the IRS function, then the IRS solver will store the convergence history (residual norms) of the refinement phase in a matrix that can be accessed via a pointer returned by this function. The datatype of the residual norms depends on the input and output data type. If the Inputs/Outputs datatype is double precision real or complex (CUSOLVER_R_FP64 or CUSOLVER_C_FP64), this residual will be of type real double precision (FP64) double, otherwise if the Inputs/Outputs datatype is single precision real or complex (CUSOLVER_R_FP32 or CUSOLVER_C_FP32), this residual will be real single precision FP32 float. The residual history matrix consists of two columns (even for the multiple right-hand side case NRHS) of MaxIters+1 row, thus a matrix of size (MaxIters+1,2). Only the first OuterNiters+1 rows contains the residual norms the other (e.g., OuterNiters+2:Maxiters+1) are garbage. On the first column, each row “i” specify the total number of iterations happened till this outer iteration “i” and on the second columns the residual norm corresponding to this outer iteration “i”. Thus, the first row (e.g., outer iteration “0”) consists of the initial residual (e.g., the residual before the refinement loop start) then the consecutive rows are the residual obtained at each outer iteration of the refinement loop. Note, it only consists of the history of the outer loop. If the refinement solver was CUSOLVER_IRS_REFINE_CLASSICAL or CUSOLVER_IRS_REFINE_GMRES, then OuterNiters=Niters (Niters is the total number of iterations performed) and there is Niters+1 rows of norms that correspond to the Niters outer iterations. If the refinement solver was CUSOLVER_IRS_REFINE_CLASSICAL_GMRES or CUSOLVER_IRS_REFINE_GMRES_GMRES, then OuterNiters <= Niters corresponds to the outer iterations performed by the outer refinement loop. Thus, there is OuterNiters+1 residual norms where row “i” correspond to the outer iteration “i” and the first column specify the total number of iterations (outer and inner) that were performed till this step the second columns correspond to the residual norm at this step. For example, let’s say the user specifies CUSOLVER_IRS_REFINE_CLASSICAL_GMRES as a refinement solver and say it needed 3 outer iterations to converge and 4,3,3 inner iterations at each outer, respectively. This consists of 10 total iterations. Row 0 corresponds to the first residual before the refinement start, so it has 0 in its first column. 
On row 1 which corresponds to the outer iteration 1, it will be 4 (4 is the total number of iterations that were performed till now), on row 2 it will be 7, and on row 3 it will be 10. In summary, let’s define ldh=Maxiters+1, the leading dimension of the residual matrix. then residual_history[i] shows the total number of iterations performed at the outer iteration “i” and residual_history[i+ldh] corresponds to the norm of the residual at this outer iteration. Parameter Memory In/out Meaning infos host in The cusolverDnIRSInfos_t Infos structure. residual_history host output Returns a void pointer to the matrix of the convergence history residual norms. See the description above for the relation between the residual norm datatype and the inout datatype. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The Infos structure was not created. CUSOLVER_STATUS_INVALID_VALUE This function was called without calling cusolverDnIRSInfosRequestResidual() in advance. 2.4.1.47. cusolverDnCreateParams() cusolverStatus_tcusolverDnCreateParams( cusolverDnParams_t *params); This function creates and initializes the structure of 64-bit API to default values. Parameter Memory In/out Meaning params host output The pointer to the structure of 64-bit API. Status Returned CUSOLVER_STATUS_SUCCESS The structure was initialized successfully. CUSOLVER_STATUS_ALLOC_FAILED The resources could not be allocated. 2.4.1.48. cusolverDnDestroyParams() cusolverStatus_tcusolverDnDestroyParams( cusolverDnParams_t params); This function destroys and releases any memory required by the structure. Parameter Memory In/out Meaning params host input The structure of 64-bit API. Status Returned CUSOLVER_STATUS_SUCCESS The resources were released successfully. 2.4.1.49. cusolverDnSetAdvOptions() cusolverStatus_tcusolverDnSetAdvOptions ( cusolverDnParams_t params, cusolverDnFunction_t function, cusolverAlgMode_t algo ); This function configures algorithm algo of function, a 64-bit API routine. Parameter Memory In/out Meaning params host in/out The pointer to the structure of 64-bit API. function host input The routine to be configured. algo host input The algorithm to be configured. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_INVALID_VALUE Wrong combination of function and algo. 2.4.2. Dense Linear Solver Reference (legacy) This section describes linear solver API of cuSolverDN, including Cholesky factorization, LU with partial pivoting, QR factorization and Bunch-Kaufman (LDLT) factorization. These helper functions calculate the necessary size of work buffers. cusolverStatus_tcusolverDnSpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, int *Lwork );cusolverStatus_tcusolverDnDpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, int *Lwork );cusolverStatus_tcusolverDnCpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, int *Lwork );cusolverStatus_tcusolverDnZpotrf_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, int *Lwork); The S and D data types are real valued single and double precision, respectively. 
cusolverStatus_tcusolverDnSpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, float *Workspace, int Lwork, int *devInfo );cusolverStatus_tcusolverDnDpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, double *Workspace, int Lwork, int *devInfo ); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, cuComplex *Workspace, int Lwork, int *devInfo );cusolverStatus_tcusolverDnZpotrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *Workspace, int Lwork, int *devInfo ); This function computes the Cholesky factorization of a Hermitian positive-definite matrix. A is an n×n Hermitian matrix, only the lower or upper part is meaningful. The input parameter uplo indicates which part of the matrix is used. The function would leave other parts untouched. If input parameter uplo is CUBLAS_FILL_MODE_LOWER, only the lower triangular part of A is processed, and replaced by the lower triangular Cholesky factor L. If input parameter uplo is CUBLAS_FILL_MODE_UPPER, only upper triangular part of A is processed, and replaced by upper triangular Cholesky factor U. The user has to provide working space which is pointed by input parameter Workspace. The input parameter Lwork is size of the working space, and it is returned by potrf_bufferSize(). If Cholesky factorization failed, i.e. some leading minor of A is not positive definite, or equivalently some diagonal elements of L or U is not a real number. The output parameter devInfo would indicate smallest leading minor of A which is not positive definite. If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). API of potrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. uplo host input Indicates if matrix A lower or upper part is stored; the other part is not referenced. n host input Number of rows and columns of matrix A. A device in/out <type> array of dimension lda * n with lda is not less than max(1,n). lda host input Leading dimension of two-dimensional array used to store matrix A. Workspace device in/out Working space, <type> array of size Lwork. Lwork host input Size of Workspace, returned by potrf_bufferSize. devInfo device output If devInfo = 0, the Cholesky factorization is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). if devInfo = i, the leading minor of order i is not positive definite. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0 or lda<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 2.4.2.2. cusolverDnPotrf()[DEPRECATED] [[DEPRECATED]] use cusolverDnXpotrf() instead. The routine will be removed in the next major release. The helper functions below can calculate the sizes needed for pre-allocated buffer. 
cusolverStatus_tcusolverDnPotrf_bufferSize( cusolverDnHandle_t handle, cusolverDnParams_t params, cublasFillMode_t uplo, int64_t n, cudaDataType dataTypeA, const void *A, int64_t lda, cudaDataType computeType, size_t *workspaceInBytes ) The routine below cusolverStatus_tcusolverDnPotrf( cusolverDnHandle_t handle, cusolverDnParams_t params, cublasFillMode_t uplo, int64_t n, cudaDataType dataTypeA, void *A, int64_t lda, cudaDataType computeType, void *pBuffer, size_t workspaceInBytes, int *info ) Computes the Cholesky factorization of a Hermitian positive-definite matrix using the generic API interface. A is an n×n Hermitian matrix, only lower or upper part is meaningful. The input parameter uplo indicates which part of the matrix is used. The function would leave other part untouched. If input parameter uplo is CUBLAS_FILL_MODE_LOWER, only lower triangular part of A is processed, and replaced by lower triangular Cholesky factor L. If input parameter uplo is CUBLAS_FILL_MODE_UPPER, only upper triangular part of A is processed, and replaced by upper triangular Cholesky factor U. The user has to provide working space which is pointed by input parameter pBuffer. The input parameter workspaceInBytes is size in bytes of the working space, and it is returned by If Cholesky factorization failed, i.e. some leading minor of A is not positive definite, or equivalently some diagonal elements of L or U is not a real number. The output parameter info would indicate smallest leading minor of A which is not positive definite. If output parameter info = -i (less than zero), the i-th parameter is wrong (not counting handle). Currently, cusolverDnPotrf supports only the default algorithm. Table of algorithms supported by cusolverDnPotrf CUSOLVER_ALG_0 or NULL Default algorithm. List of input arguments for cusolverDnPotrf_bufferSize and cusolverDnPotrf: API of potrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. params host input Structure with information collected by cusolverDnSetAdvOptions. uplo host input Indicates if matrix A lower or upper part is stored, the other part is not referenced. n host input Number of rows and columns of matrix A. dataTypeA host in Data type of array A. A device in/out Array of dimension lda * n with lda is not less than max(1,n). lda host input Leading dimension of two-dimensional array used to store matrix A. computeType host in Data type of computation. pBuffer device in/out Working space. Array of type void of size workspaceInBytes bytes. workspaceInBytes host input Size in bytes of pBuffer, returned by cusolverDnPotrf_bufferSize. info device output If info = 0, the Cholesky factorization is successful. if info = -i, the i-th parameter is wrong (not counting handle). if info = i, the leading minor of order i is not positive definite. The generic API has two different types, dataTypeA is data type of the matrix A, computeType is compute type of the operation. cusolverDnPotrf only supports the following four combinations. Valid combination of data type and compute type DataTypeA ComputeType Meaning CUDA_R_32F CUDA_R_32F SPOTRF CUDA_R_64F CUDA_R_64F DPOTRF CUDA_C_32F CUDA_C_32F CPOTRF CUDA_C_64F CUDA_C_64F ZPOTRF Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0 or lda<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. 
CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. cusolverStatus_tcusolverDnSpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const float *A, int lda, float *B, int ldb, int *devInfo);cusolverStatus_tcusolverDnDpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const double *A, int lda, double *B, int ldb, int *devInfo);cusolverStatus_tcusolverDnCpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const cuComplex *A, int lda, cuComplex *B, int ldb, int *devInfo);cusolverStatus_tcusolverDnZpotrs(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, const cuDoubleComplex *A, int lda, cuDoubleComplex *B, int ldb, int *devInfo); This function solves a system of linear equations where A is an n×n Hermitian matrix, only lower or upper part is meaningful. The input parameter uplo indicates which part of the matrix is used. The function would leave other part untouched. The user has to call potrf first to factorize matrix A. If input parameter uplo is CUBLAS_FILL_MODE_LOWER, A is lower triangular Cholesky factor L corresponding to \(A = L*L^{H}\) . If input parameter uplo is CUBLAS_FILL_MODE_UPPER, A is upper triangular Cholesky factor U corresponding to \(A = U^{H}*U\) . The operation is in-place, i.e. matrix X overwrites matrix B with the same leading dimension ldb. If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). API of potrs Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. uplo host input Indicates if matrix A lower or upper part is stored, the other part is not referenced. n host input Number of rows and columns of matrix A. nrhs host input Number of columns of matrix X and B. A device input <type> array of dimension lda * n with lda is not less than max(1,n). A is either lower Cholesky factor L or upper Cholesky factor U. lda host input Leading dimension of two-dimensional array used to store matrix A. B device in/out <type> array of dimension ldb * nrhs. ldb is not less than max(1,n). As an input, B is right hand side matrix. As an output, B is the solution matrix. devInfo device output If devInfo = 0, the Cholesky factorization is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0, nrhs<0, lda<max(1,n) or ldb<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 2.4.2.4. cusolverDnPotrs()[DEPRECATED] [[DEPRECATED]] use cusolverDnXpotrs() instead. The routine will be removed in the next major release. cusolverStatus_tcusolverDnPotrs( cusolverDnHandle_t handle, cusolverDnParams_t params, cublasFillMode_t uplo, int64_t n, int64_t nrhs, cudaDataType dataTypeA, const void *A, int64_t lda, cudaDataType dataTypeB, void *B, int64_t ldb, int *info) This function solves a system of linear equations where A is a n×n Hermitian matrix, only lower or upper part is meaningful using the generic API interface. The input parameter uplo indicates which part of the matrix is used. The function would leave other part untouched. The user has to call cusolverDnPotrf first to factorize matrix A. 
If input parameter uplo is CUBLAS_FILL_MODE_LOWER, A is lower triangular Cholesky factor L corresponding to \(A = L*L^{H}\) . If input parameter uplo is CUBLAS_FILL_MODE_UPPER, A is upper triangular Cholesky factor U corresponding to \(A = U^{H}*U\) . The operation is in-place, i.e. matrix X overwrites matrix B with the same leading dimension ldb. If output parameter info = -i (less than zero), the i-th parameter is wrong (not counting handle). Currently, cusolverDnPotrs supports only the default algorithm. Table of algorithms supported by cusolverDnPotrs CUSOLVER_ALG_0 or NULL Default algorithm. List of input arguments for cusolverDnPotrs: API of potrs Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. params host input Structure with information collected by cusolverDnSetAdvOptions. uplo host input Indicates if matrix A lower or upper part is stored, the other part is not referenced. n host input Number of rows and columns of matrix A. nrhs host input Number of columns of matrix X and B. dataTypeA host in Data type of array A. A device input Array of dimension lda * n with lda is not less than max(1,n). A is either lower Cholesky factor L or upper Cholesky factor U. lda host input Leading dimension of two-dimensional array used to store matrix A. dataTypeB host in Data type of array B. B device in/out Array of dimension ldb * nrhs. ldb is not less than max(1,n). As an input, B is right hand side matrix. As an output, B is the solution matrix. info device output If info = 0, the Cholesky factorization is successful. if info = -i, the i-th parameter is wrong (not counting handle). The generic API has two different types, dataTypeA is data type of the matrix A, dataTypeB is data type of the matrix B. cusolverDnPotrs only supports the following four combinations. Valid combination of data type and compute type dataTypeA dataTypeB Meaning CUDA_R_32F CUDA_R_32F SPOTRS CUDA_R_64F CUDA_R_64F DPOTRS CUDA_C_32F CUDA_C_32F CPOTRS CUDA_C_64F CUDA_C_64F ZPOTRS Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0, nrhs<0, lda<max(1,n) or ldb<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. These helper functions calculate the necessary size of work buffers. cusolverStatus_tcusolverDnSpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, int *Lwork );cusolverStatus_tcusolverDnDpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, int *Lwork );cusolverStatus_tcusolverDnCpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, int *Lwork );cusolverStatus_tcusolverDnZpotri_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, int *Lwork); The S and D data types are real valued single and double precision, respectively. cusolverStatus_tcusolverDnSpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, float *Workspace, int Lwork, int *devInfo );cusolverStatus_tcusolverDnDpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, double *Workspace, int Lwork, int *devInfo ); The C and Z data types are complex valued single and double precision, respectively. 
cusolverStatus_tcusolverDnCpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, cuComplex *Workspace, int Lwork, int *devInfo );cusolverStatus_tcusolverDnZpotri(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *Workspace, int Lwork, int *devInfo ); This function computes the inverse of a positive-definite matrix A using the Cholesky factorization \(A = L*L^{H} = U^{H}*U\) computed by potrf(). A is a n×n matrix containing the triangular factor L or U computed by the Cholesky factorization. Only lower or upper part is meaningful and the input parameter uplo indicates which part of the matrix is used. The function would leave the other part untouched. If the input parameter uplo is CUBLAS_FILL_MODE_LOWER, only lower triangular part of A is processed, and replaced the by lower triangular part of the inverse of A. If the input parameter uplo is CUBLAS_FILL_MODE_UPPER, only upper triangular part of A is processed, and replaced by the upper triangular part of the inverse of A. The user has to provide the working space which is pointed to by input parameter Workspace. The input parameter Lwork is the size of the working space, returned by potri_bufferSize(). If the computation of the inverse fails, i.e. some leading minor of L or U, is null, the output parameter devInfo would indicate the smallest leading minor of L or U which is not positive definite. If the output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting the handle). API of potri Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. uplo host input Indicates if matrix A lower or upper part is stored, the other part is not referenced. n host input Number of rows and columns of matrix A. A device in/out <type> array of dimension lda * n where lda is not less than max(1,n). lda host input Leading dimension of two-dimensional array used to store matrix A. Workspace device in/out Working space, <type> array of size Lwork. Lwork host input Size of Workspace, returned by potri_bufferSize. devInfo device output If devInfo = 0, the computation of the inverse is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). if devInfo = i, the leading minor of order i is zero. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0 or lda<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. These helper functions calculate the size of work buffers needed. Please visit cuSOLVER Library Samples - getrf for a code example. cusolverStatus_tcusolverDnSgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, float *A, int lda, int *Lwork );cusolverStatus_tcusolverDnDgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, double *A, int lda, int *Lwork );cusolverStatus_tcusolverDnCgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuComplex *A, int lda, int *Lwork );cusolverStatus_tcusolverDnZgetrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex *A, int lda, int *Lwork ); The S and D data types are real single and double precision, respectively. 
cusolverStatus_tcusolverDnSgetrf(cusolverDnHandle_t handle, int m, int n, float *A, int lda, float *Workspace, int *devIpiv, int *devInfo );cusolverStatus_tcusolverDnDgetrf(cusolverDnHandle_t handle, int m, int n, double *A, int lda, double *Workspace, int *devIpiv, int *devInfo ); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCgetrf(cusolverDnHandle_t handle, int m, int n, cuComplex *A, int lda, cuComplex *Workspace, int *devIpiv, int *devInfo );cusolverStatus_tcusolverDnZgetrf(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *Workspace, int *devIpiv, int *devInfo ); This function computes the LU factorization of a m×n matrix where A is a m×n matrix, P is a permutation matrix, L is a lower triangular matrix with unit diagonal, and U is an upper triangular matrix. The user has to provide working space which is pointed by input parameter Workspace. The input parameter Lwork is size of the working space, and it is returned by getrf_bufferSize(). If LU factorization failed, i.e. matrix A (U) is singular, The output parameter devInfo=i indicates U(i,i) = 0. If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). If devIpiv is null, no pivoting is performed. The factorization is A=L*U, which is not numerically stable. No matter LU factorization failed or not, the output parameter devIpiv contains pivoting sequence, row i is interchanged with row devIpiv(i). The user can combine getrf and getrs to complete a linear solver. Remark: getrf uses fastest implementation with large workspace of size m*n. The user can choose the legacy implementation with minimal workspace by Getrf and cusolverDnSetAdvOptions(params, CUSOLVERDN_GETRF, CUSOLVER_ALG_1). API of getrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. m host input Number of rows of matrix A. n host input Number of columns of matrix A. A device in/out <type> array of dimension lda * n with lda is not less than max(1,m). lda host input Leading dimension of two-dimensional array used to store matrix A. Workspace device in/out Working space, <type> array of size Lwork. devIpiv device output Array of size at least min(m,n), containing pivot indices. devInfo device output If devInfo = 0, the LU factorization is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). if devInfo = i, the U(i,i) = 0. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (m,n<0 or lda<max(1,m)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 2.4.2.7. cusolverDnGetrf()[DEPRECATED] [[DEPRECATED]] use cusolverDnXgetrf() instead. The routine will be removed in the next major release. The helper function below can calculate the sizes needed for pre-allocated buffer. 
cusolverStatus_tcusolverDnGetrf_bufferSize( cusolverDnHandle_t handle, cusolverDnParams_t params, int64_t m, int64_t n, cudaDataType dataTypeA, const void *A, int64_t lda, cudaDataType computeType, size_t *workspaceInBytes ) The following function: cusolverStatus_tcusolverDnGetrf( cusolverDnHandle_t handle, cusolverDnParams_t params, int64_t m, int64_t n, cudaDataType dataTypeA, void *A, int64_t lda, int64_t *ipiv, cudaDataType computeType, void *pBuffer, size_t workspaceInBytes, int *info ) computes the LU factorization of a m×n matrix where A is an m×n matrix, P is a permutation matrix, L is a lower triangular matrix with unit diagonal, and U is an upper triangular matrix using the generic API interface. If LU factorization failed, i.e. matrix A (U) is singular, The output parameter info=i indicates U(i,i) = 0. If output parameter info = -i (less than zero), the i-th parameter is wrong (not counting handle). If ipiv is null, no pivoting is performed. The factorization is A=L*U, which is not numerically stable. No matter LU factorization failed or not, the output parameter ipiv contains pivoting sequence, row i is interchanged with row ipiv(i). The user has to provide working space which is pointed by input parameter pBuffer. The input parameter workspaceInBytes is size in bytes of the working space, and it is returned by The user can combine cusolverDnGetrf and cusolverDnGetrs to complete a linear solver. Currently, cusolverDnGetrf supports two algorithms. To select legacy implementation, the user has to call cusolverDnSetAdvOptions. Table of algorithms supported by cusolverDnGetrf CUSOLVER_ALG_0 or NULL Default algorithm. The fastest, requires a large workspace of m*n elements. CUSOLVER_ALG_1 Legacy implementation List of input arguments for cusolverDnGetrf_bufferSize and cusolverDnGetrf: API of cusolverDnGetrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. params host input Structure with information collected by cusolverDnSetAdvOptions. m host input Number of rows of matrix A. n host input Number of columns of matrix A. dataTypeA host in Data type of array A. A device in/out <type> array of dimension lda * n with lda is not less than max(1,m). lda host input Leading dimension of two-dimensional array used to store matrix A. ipiv device output Array of size at least min(m,n), containing pivot indices. computeType host in Data type of computation. pBuffer device in/out Working space. Array of type void of size workspaceInBytes bytes. workspaceInBytes host input Size in bytes of pBuffer, returned by cusolverDnGetrf_bufferSize. info device output If info = 0, the LU factorization is successful. if info = -i, the i-th parameter is wrong (not counting handle). if info = i, the U(i,i) = 0. The generic API has two different types, dataTypeA is data type of the matrix A, computeType is compute type of the operation. cusolverDnGetrf only supports the following four combinations. valid combination of data type and compute type DataTypeA ComputeType Meaning CUDA_R_32F CUDA_R_32F SGETRF CUDA_R_64F CUDA_R_64F DGETRF CUDA_C_32F CUDA_C_32F CGETRF CUDA_C_64F CUDA_C_64F ZGETRF Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (m,n<0 or lda<max(1,m)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 
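As a rough, self-contained illustration of the legacy workflow described earlier in this section (query the workspace size, factorize, then check devInfo), the following host-side sketch factorizes a small double-precision matrix with cusolverDnDgetrf. The matrix values and the minimal error handling are illustrative assumptions, not an excerpt from the library samples.

/* Minimal sketch: LU-factorize a 3x3 double matrix with the legacy cusolverDnDgetrf API. */
#include <cuda_runtime.h>
#include <cusolverDn.h>
#include <stdio.h>

int main(void)
{
    const int n = 3, lda = 3;
    double hA[9] = {4, 1, 2,  1, 5, 1,  2, 1, 6};   /* column-major example matrix */

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    double *dA, *dWork;
    int *dIpiv, *dInfo, lwork = 0, info = 0;
    cudaMalloc((void**)&dA, sizeof(hA));
    cudaMalloc((void**)&dIpiv, n * sizeof(int));
    cudaMalloc((void**)&dInfo, sizeof(int));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);

    /* Query the workspace size, then allocate the device workspace. */
    cusolverDnDgetrf_bufferSize(handle, n, n, dA, lda, &lwork);
    cudaMalloc((void**)&dWork, (size_t)lwork * sizeof(double));

    /* LU factorization with partial pivoting: P*A = L*U. */
    cusolverDnDgetrf(handle, n, n, dA, lda, dWork, dIpiv, dInfo);

    cudaMemcpy(&info, dInfo, sizeof(int), cudaMemcpyDeviceToHost);
    printf("devInfo = %d (0 means success)\n", info);

    cudaFree(dA); cudaFree(dWork); cudaFree(dIpiv); cudaFree(dInfo);
    cusolverDnDestroy(handle);
    return 0;
}

Combining this factorization with the corresponding getrs call, described next, completes the linear solver.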
Please visit cuSOLVER Library Samples - getrf for a code example. cusolverStatus_tcusolverDnSgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const float *A, int lda, const int *devIpiv, float *B, int ldb, int *devInfo );cusolverStatus_tcusolverDnDgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const double *A, int lda, const int *devIpiv, double *B, int ldb, int *devInfo );cusolverStatus_tcusolverDnCgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const cuComplex *A, int lda, const int *devIpiv, cuComplex *B, int ldb, int *devInfo );cusolverStatus_tcusolverDnZgetrs(cusolverDnHandle_t handle, cublasOperation_t trans, int n, int nrhs, const cuDoubleComplex *A, int lda, const int *devIpiv, cuDoubleComplex *B, int ldb, int *devInfo ); This function solves a linear system of multiple right-hand sides where A is an n×n matrix, and was LU-factored by getrf, that is, lower triangular part of A is L, and upper triangular part (including diagonal elements) of A is U. B is a n×nrhs right-hand side The input parameter trans is defined by \(\text{op}(A) = \left\{ \begin{matrix} A & {\text{if~}\textsf{trans\ ==\ CUBLAS\_OP\_N}} \\ A^{T} & {\text{if~}\textsf{trans\ ==\ CUBLAS\_OP\_T}} \\ A^{H} & {\text{if~}\textsf{trans\ ==\ CUBLAS\_OP\ _C}} \\ \end{matrix} \right.\) The input parameter devIpiv is an output of getrf. It contains pivot indices, which are used to permutate right-hand sides. If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). The user can combine getrf and getrs to complete a linear solver. Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. trans host input Operation op(A) that is non- or (conj.) transpose. n host input Number of rows and columns of matrix A. nrhs host input Number of right-hand sides. A device input <type> array of dimension lda * n with lda is not less than max(1,n). lda host input Leading dimension of two-dimensional array used to store matrix A. devIpiv device input Array of size at least n, containing pivot indices. B device output <type> array of dimension ldb * nrhs with ldb is not less than max(1,n). ldb host input Leading dimension of two-dimensional array used to store matrix B. devInfo device output If devInfo = 0, the operation is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0 or lda<max(1,n) or ldb<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 2.4.2.9. cusolverDnGetrs()[DEPRECATED] [[DEPRECATED]] use cusolverDnXgetrs() instead. The routine will be removed in the next major release. cusolverStatus_tcusolverDnGetrs( cusolverDnHandle_t handle, cusolverDnParams_t params, cublasOperation_t trans, int64_t n, int64_t nrhs, cudaDataType dataTypeA, const void *A, int64_t lda, const int64_t *ipiv, cudaDataType dataTypeB, void *B, int64_t ldb, int *info ) This function solves a linear system of multiple right-hand sides where A is a n×n matrix, and was LU-factored by cusolverDnGetrf, that is, lower triangular part of A is L, and upper triangular part (including diagonal elements) of A is U. 
B is an n×nrhs right-hand-side matrix, using the generic API interface.

The input parameter trans is defined by

\(\text{op}(A) = \left\{ \begin{matrix} A & {\text{if~}\textsf{trans\ ==\ CUBLAS\_OP\_N}} \\ A^{T} & {\text{if~}\textsf{trans\ ==\ CUBLAS\_OP\_T}} \\ A^{H} & {\text{if~}\textsf{trans\ ==\ CUBLAS\_OP\_C}} \\ \end{matrix} \right.\)

The input parameter ipiv is an output of cusolverDnGetrf. It contains the pivot indices, which are used to permute the right-hand sides. If the output parameter info = -i (less than zero), the i-th parameter is wrong (not counting handle). The user can combine cusolverDnGetrf and cusolverDnGetrs to build a complete linear solver. Currently, cusolverDnGetrs supports only the default algorithm.

Table of algorithms supported by cusolverDnGetrs
CUSOLVER_ALG_0 or NULL | Default algorithm.

List of input arguments for cusolverDnGetrs:

Parameter | Memory | In/out | Meaning
handle | host | input | Handle to the cuSolverDN library context.
params | host | input | Structure with information collected by cusolverDnSetAdvOptions.
trans | host | input | Operation op(A) that is non- or (conj.) transpose.
n | host | input | Number of rows and columns of matrix A.
nrhs | host | input | Number of right-hand sides.
dataTypeA | host | input | Data type of array A.
A | device | input | Array of dimension lda * n with lda not less than max(1,n).
lda | host | input | Leading dimension of the two-dimensional array used to store matrix A.
ipiv | device | input | Array of size at least n, containing pivot indices.
dataTypeB | host | input | Data type of array B.
B | device | output | <type> array of dimension ldb * nrhs with ldb not less than max(1,n).
ldb | host | input | Leading dimension of the two-dimensional array used to store matrix B.
info | device | output | If info = 0, the operation is successful. If info = -i, the i-th parameter is wrong (not counting handle).

The generic API has two different types: dataTypeA is the data type of matrix A and dataTypeB is the data type of matrix B. cusolverDnGetrs only supports the following four combinations.

Valid combination of data type and compute type
DataTypeA | dataTypeB | Meaning
CUDA_R_32F | CUDA_R_32F | SGETRS
CUDA_R_64F | CUDA_R_64F | DGETRS
CUDA_C_32F | CUDA_C_32F | CGETRS
CUDA_C_64F | CUDA_C_64F | ZGETRS

Status Returned
CUSOLVER_STATUS_SUCCESS | The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED | The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE | Invalid parameters were passed (n<0 or lda<max(1,n) or ldb<max(1,n)).
CUSOLVER_STATUS_ARCH_MISMATCH | The device only supports compute capability 5.0 and above.
CUSOLVER_STATUS_INTERNAL_ERROR | An internal operation failed.

These functions are modelled after the functions DSGESV and ZCGESV from LAPACK. They compute the solution of a system of linear equations \(A \times X = B\) with one or multiple right-hand sides using mixed-precision iterative refinement techniques based on the LU factorization, where A is an n-by-n matrix and X and B are n-by-nrhs matrices. These functions are similar in functionality to the full-precision LU solvers (Xgesv, where X denotes Z, C, D, or S), but they use lower precision internally in order to provide a faster time to solution; hence the name mixed precision. Mixed-precision iterative refinement means that the solver computes an LU factorization in lower precision and then iteratively refines the solution to reach the accuracy of the Inputs/Outputs datatype precision. The <t1> corresponds to the Inputs/Outputs datatype precision, while <t2> denotes the internal lower precision at which the factorization is carried out.
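Before the API details, the following minimal, self-contained sketch illustrates the refinement idea itself, independently of cuSOLVER: the lower-precision factorization and solve are emulated with single-precision Gaussian elimination, while residuals and corrections are accumulated in the main (double) precision. The matrix, tolerance, and iteration limit are arbitrary illustrative choices rather than the library's internal settings, and unlike the real solver this toy loop re-solves from scratch instead of reusing the low-precision LU factors.

/* Conceptual sketch of mixed-precision iterative refinement (not the cuSOLVER implementation). */
#include <stdio.h>
#include <math.h>

#define N 3

/* Solve A*x = b entirely in single precision (Gaussian elimination, no pivoting;
   acceptable here because the example matrix is diagonally dominant). */
static void solve_float(const double A[N][N], const double b[N], double x[N])
{
    float a[N][N], v[N];
    for (int i = 0; i < N; ++i) {
        v[i] = (float)b[i];
        for (int j = 0; j < N; ++j) a[i][j] = (float)A[i][j];
    }
    for (int k = 0; k < N; ++k)                 /* forward elimination */
        for (int i = k + 1; i < N; ++i) {
            float m = a[i][k] / a[k][k];
            for (int j = k; j < N; ++j) a[i][j] -= m * a[k][j];
            v[i] -= m * v[k];
        }
    for (int i = N - 1; i >= 0; --i) {          /* back substitution */
        float s = v[i];
        for (int j = i + 1; j < N; ++j) s -= a[i][j] * (float)x[j];
        x[i] = s / a[i][i];
    }
}

int main(void)
{
    const double A[N][N] = {{4, 1, 2}, {1, 5, 1}, {2, 1, 6}};
    const double b[N]    = {7, 7, 9};           /* exact solution is (1, 1, 1) */
    double x[N], r[N], d[N];
    int it;

    solve_float(A, b, x);                       /* initial low-precision solve */
    for (it = 0; it < 50; ++it) {               /* refinement in the main precision */
        double rnrm = 0.0;
        for (int i = 0; i < N; ++i) {           /* residual r = b - A*x in double */
            r[i] = b[i];
            for (int j = 0; j < N; ++j) r[i] -= A[i][j] * x[j];
            if (fabs(r[i]) > rnrm) rnrm = fabs(r[i]);
        }
        if (rnrm < 1e-14) break;                /* simplified stopping test */
        solve_float(A, r, d);                   /* low-precision correction step */
        for (int i = 0; i < N; ++i) x[i] += d[i];
    }
    printf("iterations = %d, x = %.15f %.15f %.15f\n", it, x[0], x[1], x[2]);
    return 0;
}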
The functions' API is designed to be as close as possible to the LAPACK API so that they can serve as a quick and easy drop-in replacement. Parameters and behavior are mostly the same as their LAPACK counterparts. A description of these functions and their differences from LAPACK is given below.

The <t1><t2>gesv() functions are designated by two floating-point precisions: <t1> corresponds to the main precision (the Inputs/Outputs datatype precision) and <t2> is the internal lower precision at which the factorization is carried out. cusolver<t1><t2>gesv() first attempts to factorize the matrix in the lower precision and uses this factorization within an iterative refinement procedure to obtain a solution with the same normwise backward error as the main precision <t1>. If the approach fails to converge, the method falls back to a factorization and solve in the main precision (Xgesv), so that there is always a good solution at the output of these functions. If <t2> is equal to <t1>, the process is not mixed precision but rather a factorization, solve, and refinement entirely within the same main precision.

The iterative refinement process is stopped if ITER > ITERMAX or if, for all right-hand sides,

RNRM < SQRT(N) * XNRM * ANRM * EPS * BWDMAX

where
• ITER is the number of the current iteration in the iterative refinement process
• RNRM is the infinity-norm of the residual
• XNRM is the infinity-norm of the solution
• ANRM is the infinity-operator-norm of the matrix A
• EPS is the machine epsilon matching LAPACK <t1>LAMCH('Epsilon')

The values ITERMAX and BWDMAX are fixed to 50 and 1.0, respectively.

The returned value describes the result of the solving process: CUSOLVER_STATUS_SUCCESS indicates that the function finished successfully; otherwise the return value indicates that one of the API arguments is incorrect or that the function did not finish successfully. More details about the error are reported in the niters and dinfo API parameters; see their descriptions below.

The user should provide the required workspace allocated in device memory. The number of bytes required can be queried by calling the respective function <t1><t2>gesv_bufferSize().

Note that in addition to the two mixed-precision functions available in LAPACK (dsgesv and zcgesv), a large set of mixed-precision functions is provided that includes half, bfloat16, and TensorFloat as the lower precision, as well as same-precision functions (where the main and lowest precisions are equal, i.e., <t2> is equal to <t1>). The following table specifies which precisions are used by which interface function.

Tensor Float (TF32), introduced with NVIDIA Ampere Architecture GPUs, is the most robust tensor-core-accelerated compute mode for the iterative refinement solver. It is able to solve the widest range of problems in HPC arising from different applications and provides up to 4X and 5X speedup for real and complex systems, respectively. On Volta and Turing architecture GPUs, half-precision tensor core acceleration is recommended. In cases where the iterative refinement solver fails to converge to the desired accuracy (main precision, i.e., the Inputs/Outputs data precision), it is recommended to use the main precision as the internal lowest precision (i.e., cusolverDn[DD,ZZ]gesv for the FP64 case).
Interface function Main precision (matrix, rhs and solution datatype) Lowest precision allowed to be used internally cusolverDnZZgesv cuDoubleComplex double complex cusolverDnZCgesv *has LAPACK counterparts cuDoubleComplex single complex cusolverDnZKgesv cuDoubleComplex half complex cusolverDnZEgesv cuDoubleComplex bfloat complex cusolverDnZYgesv cuDoubleComplex tensorfloat complex cusolverDnCCgesv cuComplex single complex cusolverDnCKgesv cuComplex half complex cusolverDnCEgesv cuComplex bfloat complex cusolverDnCYgesv cuComplex tensorfloat complex cusolverDnDDgesv double double cusolverDnDSgesv *has LAPACK counterparts double single cusolverDnDHgesv double half cusolverDnDBgesv double bfloat cusolverDnDXgesv double tensorfloat cusolverDnSSgesv float single cusolverDnSHgesv float half cusolverDnSBgesv float bfloat cusolverDnSXgesv float tensorfloat cusolverDn<t1><t2>gesv_bufferSize() functions will return workspace buffer size in bytes required for the corresponding cusolverDn<t1><t2>gesv() function. cusolverStatus_tcusolverDnZZgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZCgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZKgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZEgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZYgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCCgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCKgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCEgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCYgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDDgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDSgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDHgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, double * dA, int ldda, 
int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDBgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDXgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSSgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSHgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSBgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSXgesv_bufferSize( cusolverHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes); Parameters of cusolverDn<T1><T2>gesv_bufferSize() functions Parameter Memory In/out Meaning handle host input Handle to the cusolverDN library context. n host input Number of rows and columns of square matrix A. Should be non-negative. nrhs host input Number of right hand sides to solve. Should be non-negative. dA device None Matrix A with size n-by-n. Can be NULL. ldda host input Leading dimension of two-dimensional array used to store matrix A. lda >= n. dipiv device None Pivoting sequence. Not used and can be NULL. dB device None Set of right hand sides B of size n-by-nrhs. Can be NULL. lddb host input Leading dimension of two-dimensional array used to store matrix of right hand sides B. ldb >= n. dX device None Set of solution vectors X of size n-by-nrhs. Can be NULL. lddx host input Leading dimension of two-dimensional array used to store matrix of solution vectors X. ldx >= n. dwork device none Pointer to device workspace. Not used and can be NULL. lwork_bytes host output Pointer to a variable where required size of temporary workspace in bytes will be stored. Can’t be NULL. 
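Putting the workspace query together with the solver entry points listed next, a minimal host-side sketch of the FP64-data / FP32-factorization path (cusolverDnDSgesv) might look as follows; the matrix values are arbitrary examples and error checking is omitted for brevity.

/* Minimal sketch of the mixed-precision workflow with cusolverDnDSgesv
   (FP64 data, FP32 internal factorization). */
#include <cuda_runtime.h>
#include <cusolverDn.h>
#include <stdio.h>

int main(void)
{
    const int n = 3, nrhs = 1, lda = 3, ldb = 3, ldx = 3;
    double hA[9] = {4, 1, 2,  1, 5, 1,  2, 1, 6};   /* column-major example matrix */
    double hB[3] = {7, 7, 9};
    double hX[3] = {0, 0, 0};

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    double *dA, *dB, *dX;
    int *dIpiv, *dInfo;
    void *dWork = NULL;
    size_t lworkBytes = 0;
    int niter = 0, info = 0;

    cudaMalloc((void**)&dA, sizeof(hA));
    cudaMalloc((void**)&dB, sizeof(hB));
    cudaMalloc((void**)&dX, sizeof(hX));
    cudaMalloc((void**)&dIpiv, n * sizeof(int));
    cudaMalloc((void**)&dInfo, sizeof(int));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    /* 1. Query and allocate the device workspace. */
    cusolverDnDSgesv_bufferSize(handle, n, nrhs, dA, lda, dIpiv,
                                dB, ldb, dX, ldx, dWork, &lworkBytes);
    cudaMalloc(&dWork, lworkBytes);

    /* 2. Factorize in FP32, refine to FP64 accuracy (with fallback if needed). */
    cusolverDnDSgesv(handle, n, nrhs, dA, lda, dIpiv, dB, ldb, dX, ldx,
                     dWork, lworkBytes, &niter, dInfo);

    cudaMemcpy(&info, dInfo, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(hX, dX, sizeof(hX), cudaMemcpyDeviceToHost);
    printf("niter = %d, dinfo = %d, x = %g %g %g\n", niter, info, hX[0], hX[1], hX[2]);

    cudaFree(dA); cudaFree(dB); cudaFree(dX);
    cudaFree(dIpiv); cudaFree(dInfo); cudaFree(dWork);
    cusolverDnDestroy(handle);
    return 0;
}

A negative niter on return indicates that the refinement did not converge and, with the default settings, that the solver fell back to a full FP64 factorization and solve.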
cusolverStatus_t cusolverDnZZgesv( cusolverDnHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZCgesv( cusolverDnHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZKgesv( cusolverDnHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZEgesv( cusolverDnHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZYgesv( cusolverDnHandle_t handle, int n, int nrhs, cuDoubleComplex * dA, int ldda, int * dipiv, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCCgesv( cusolverDnHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCKgesv( cusolverDnHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCEgesv( cusolverDnHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCYgesv( cusolverDnHandle_t handle, int n, int nrhs, cuComplex * dA, int ldda, int * dipiv, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDDgesv( cusolverDnHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDSgesv( cusolverDnHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDHgesv( cusolverDnHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDBgesv( cusolverDnHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDXgesv( cusolverDnHandle_t handle, int n, int nrhs, double * dA, int ldda, int * dipiv, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnSSgesv( cusolverDnHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * 
dinfo);
cusolverStatus_t cusolverDnSHgesv( cusolverDnHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);
cusolverStatus_t cusolverDnSBgesv( cusolverDnHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);
cusolverStatus_t cusolverDnSXgesv( cusolverDnHandle_t handle, int n, int nrhs, float * dA, int ldda, int * dipiv, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);

Parameters of cusolverDn<T1><T2>gesv() functions

Parameter | Memory | In/out | Meaning
handle | host | input | Handle to the cusolverDN library context.
n | host | input | Number of rows and columns of the square matrix A. Should be non-negative.
nrhs | host | input | Number of right-hand sides to solve. Should be non-negative.
dA | device | in/out | Matrix A of size n-by-n. Cannot be NULL. On return: unchanged if the iterative refinement process converged; otherwise it contains the factorization of the matrix A in the main precision <T1> (A = P * L * U, where P is the permutation matrix defined by the vector ipiv, and L and U are the lower and upper triangular factors).
ldda | host | input | Leading dimension of the two-dimensional array used to store matrix A. ldda >= n.
dipiv | device | output | Vector that defines the permutation of the factorization: row i was interchanged with row ipiv[i].
dB | device | input | Set of right-hand sides B of size n-by-nrhs. Cannot be NULL.
lddb | host | input | Leading dimension of the two-dimensional array used to store the matrix of right-hand sides B. lddb >= n.
dX | device | output | Set of solution vectors X of size n-by-nrhs. Cannot be NULL.
lddx | host | input | Leading dimension of the two-dimensional array used to store the matrix of solution vectors X. lddx >= n.
dWorkspace | device | input | Pointer to an allocated workspace in device memory of size lwork_bytes.
lwork_bytes | host | input | Size of the allocated device workspace. Should be at least the value returned by the cusolverDn<T1><T2>gesv_bufferSize() function.
niters | host | output | Number of iterations. If niters is:
  < 0 : iterative refinement has failed; a main-precision (Inputs/Outputs precision) factorization has been performed
  -1 : taking into account machine parameters, n, and nrhs, it is a priori not worth working in lower precision
  -2 : overflow of an entry when moving from main to lower precision
  -3 : failure during the factorization
  -5 : overflow occurred during computation
  -50 : the solver stopped the iterative refinement after reaching the maximum allowed number of iterations
  > 0 : niters is the number of iterations the solver performed to reach the convergence criteria
dinfo | device | output | Status of the IRS solver on return. If 0, the solve was successful. If dinfo = -i, the i-th argument is not valid. If dinfo = i, then U(i,i) computed in main precision is exactly zero; the factorization has been completed, but the factor U is exactly singular, so the solution could not be computed.

Status Returned
CUSOLVER_STATUS_SUCCESS | The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED | The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE | Invalid parameters were passed, for example: n<0, lda<max(1,n), ldb<max(1,n), or ldx<max(1,n).
CUSOLVER_STATUS_ARCH_MISMATCH | The IRS solver supports compute capability 7.0 and above. The lowest precision options CUSOLVER_[CR]_16BF and CUSOLVER_[CR]_TF32 are only available on compute capability 8.0 and above.
CUSOLVER_STATUS_INVALID_WORKSPACE lwork_bytes is smaller than the required workspace.
CUSOLVER_STATUS_IRS_OUT_OF_RANGE Numerical error related to niters < 0; see the niters description for more details.
CUSOLVER_STATUS_INTERNAL_ERROR An internal error occurred; check the dinfo and niters arguments for more details.
2.4.2.11. cusolverDnIRSXgesv()
This function provides the same functionality as the cusolverDn<T1><T2>gesv() functions, but wrapped in a more generic, expert interface that gives the user more control over the solver parameters and provides more information on output. cusolverDnIRSXgesv() allows additional control of the solver, such as setting:
• the main precision (Inputs/Outputs precision) of the solver
• the lowest precision to be used internally by the solver
• the refinement solver type
• the maximum allowed number of iterations in the refinement phase
• the tolerance of the refinement solver
• the fallback to main precision
• and more
through the configuration parameters structure gesv_irs_params and its helper functions. For details about which configurations can be set and their meaning, please refer to the functions in the cuSolverDN Helper Function Section that start with cusolverDnIRSParamsxxxx(). Moreover, cusolverDnIRSXgesv() provides additional information on output, such as the convergence history (e.g., the residual norms) at each iteration and the number of iterations needed to converge. For details about which information can be retrieved and its meaning, please refer to the functions in the cuSolverDN Helper Function Section that start with cusolverDnIRSInfosxxxx(). The function's return value describes the result of the solving process: CUSOLVER_STATUS_SUCCESS indicates that the function finished successfully; otherwise, the return value indicates whether one of the API arguments is incorrect, whether the configuration of the params/infos structures is incorrect, or whether the function did not finish successfully. More details about an error can be found by checking the niters and dinfo API parameters; see their descriptions below. The user should provide the required workspace, allocated on the device, for the cusolverDnIRSXgesv() function. The number of bytes required can be queried by calling the corresponding cusolverDnIRSXgesv_bufferSize() function. Note that if a particular configuration is to be set via the params structure, it should be set before the call to cusolverDnIRSXgesv_bufferSize() so that the returned workspace size accounts for it. Tensor Float (TF32), introduced with NVIDIA Ampere Architecture GPUs, is the most robust tensor core accelerated compute mode for the iterative refinement solver. It is able to solve the widest range of HPC problems arising from different applications and provides up to 4X and 5X speedup for real and complex systems, respectively. On Volta and Turing architecture GPUs, half precision tensor core acceleration is recommended. In cases where the iterative refinement solver fails to converge to the desired accuracy (main precision, i.e., the Inputs/Outputs data precision), it is recommended to use the main precision as the internal lowest precision. The following table lists the supported values of the lowest precision for each Inputs/Outputs data type. Note that if the lowest precision matches the Inputs/Outputs data type, the main precision factorization will be used.
Inputs/Outputs Data Type (e.g., main precision) Supported values for the lowest precision CUSOLVER_C_64F CUSOLVER_C_64F, CUSOLVER_C_32F, CUSOLVER_C_16F, CUSOLVER_C_16BF, CUSOLVER_C_TF32 CUSOLVER_C_32F CUSOLVER_C_32F, CUSOLVER_C_16F, CUSOLVER_C_16BF, CUSOLVER_C_TF32 CUSOLVER_R_64F CUSOLVER_R_64F, CUSOLVER_R_32F, CUSOLVER_R_16F, CUSOLVER_R_16BF, CUSOLVER_R_TF32 CUSOLVER_R_32F CUSOLVER_R_32F, CUSOLVER_R_16F, CUSOLVER_R_16BF, CUSOLVER_R_TF32 The cusolverDnIRSXgesv_bufferSize() function returns the required workspace buffer size in bytes for the corresponding cusolverDnXgesv() call with the given gesv_irs_params configuration. cusolverStatus_tcusolverDnIRSXgesv_bufferSize( cusolverDnHandle_t handle, cusolverDnIRSParams_t gesv_irs_params, cusolver_int_t n, cusolver_int_t nrhs, size_t * lwork_bytes); Parameter Memory In/ Meaning handle host input Handle to the cusolverDn library context. params host input Xgesv configuration parameters n host input Number of rows and columns of the square matrix A. Should be non-negative. nrhs host input Number of right hand sides to solve. Should be non-negative. Note that nrhs is limited to 1 if the selected IRS refinement solver is CUSOLVER_IRS_REFINE_GMRES, CUSOLVER_IRS_REFINE_GMRES_GMRES, CUSOLVER_IRS_REFINE_CLASSICAL_GMRES. lwork_bytes host out Pointer to a variable, where the required size in bytes, of the workspace will be stored after a call to cusolverDnIRSXgesv_bufferSize. Can’t be NULL. cusolverStatus_t cusolverDnIRSXgesv( cusolverDnHandle_t handle, cusolverDnIRSParams_t gesv_irs_params, cusolverDnIRSInfos_t gesv_irs_infos, int n, int nrhs, void * dA, int ldda, void * dB, int lddb, void * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * dinfo); Parameter Memory In/out Meaning handle host input Handle to the cusolverDn library context. gesv_irs_params host input Configuration parameters structure, can serve one or more calls to any IRS solver gesv_irs_infos host in/out Info structure, where information about a particular solve will be stored. The gesv_irs_infos structure correspond to a particular call. Thus different calls requires different gesv_irs_infos structure otherwise, it will be overwritten. n host input Number of rows and columns of square matrix A. Should be non-negative. nrhs host input Number of right hand sides to solve. Should be non-negative. Note that, nrhs is limited to 1 if the selected IRS refinement solver is CUSOLVER_IRS_REFINE_GMRES, CUSOLVER_IRS_REFINE_GMRES_GMRES, CUSOLVER_IRS_REFINE_CLASSICAL_GMRES. Matrix A with size n-by-n. Can’t be NULL. On return - will contain the factorization of the matrix A in the main precision (A = P * L * U, where P - permutation matrix dA device in/out defined by vector ipiv, L and U - lower and upper triangular matrices) if the iterative refinement solver was set to CUSOLVER_IRS_REFINE_NONE and the lowest precision is equal to the main precision (Inputs/Outputs datatype), or if the iterative refinement solver did not converge and the fallback to main precision was enabled (fallback enabled is the default setting); unchanged otherwise. ldda host input Leading dimension of two-dimensional array used to store matrix A. lda >= n. dB device input Set of right hand sides B of size n-by-nrhs. Can’t be NULL. lddb host input Leading dimension of two-dimensional array used to store matrix of right hand sides B. ldb >= n. dX device output Set of solution vectors X of size n-by-nrhs. Can’t be NULL. 
lddx host input Leading dimension of two-dimensional array used to store matrix of solution vectors X. ldx >= n. dWorkspace device input Pointer to an allocated workspace in device memory of size lwork_bytes. lwork_bytes host input Size of the allocated device workspace. Should be at least what was returned by cusolverDnIRSXgesv_bufferSize() function If iter is • <0 : iterative refinement has failed, main precision (Inputs/Outputs precision) factorization has been performed if fallback is enabled. • -1 : taking into account machine parameters, n, nrhs, it is a priori not worth working in lower precision • -2 : overflow of an entry when moving from main to lower precision niters host output • -3 : failure during the factorization • -5 : overflow occurred during computation • -maxiter: solver stopped the iterative refinement after reaching maximum allowed iterations. • >0 : iter is a number of iterations solver performed to reach convergence criteria dinfo device output Status of the IRS solver on the return. If 0 - solve was successful. If dinfo = -i then i-th argument is not valid. If dinfo = i, then U(i,i) computed in main precision is exactly zero. The factorization has been completed, but the factor U is exactly singular, so the solution could not be computed. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. Invalid parameters were passed, for example: • n<0 CUSOLVER_STATUS_INVALID_VALUE • lda<max(1,n) • ldb<max(1,n) • ldx<max(1,n) CUSOLVER_STATUS_ARCH_MISMATCH The IRS solver supports compute capability 7.0 and above. The lowest precision options CUSOLVER_[CR]_16BF and CUSOLVER_[CR]_TF32 are only available on compute capability 8.0 and above. CUSOLVER_STATUS_INVALID_WORKSPACE lwork_bytes is smaller than the required workspace. Could happen if the users called cusolverDnIRSXgesv_bufferSize() function, then changed some of the configurations setting such as the lowest precision. CUSOLVER_STATUS_IRS_OUT_OF_RANGE Numerical error related to niters <0, see niters description for more details. CUSOLVER_STATUS_INTERNAL_ERROR An internal error occurred, check the dinfo and the niters arguments for more details. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The configuration parameter gesv_irs_params structure was not created. CUSOLVER_STATUS_IRS_PARAMS_INVALID One of the configuration parameter in the gesv_irs_params structure is not valid. CUSOLVER_STATUS_IRS_PARAMS_INVALID_PREC The main and/or the lowest precision configuration parameter in the gesv_irs_params structure is not valid, check the table above for the supported CUSOLVER_STATUS_IRS_PARAMS_INVALID_MAXITER The maxiter configuration parameter in the gesv_irs_params structure is not valid. CUSOLVER_STATUS_IRS_PARAMS_INVALID_REFINE The refinement solver configuration parameter in the gesv_irs_params structure is not valid. CUSOLVER_STATUS_IRS_NOT_SUPPORTED One of the configuration parameter in the gesv_irs_params structure is not supported. For example if nrhs >1, and refinement solver was set to CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The information structure gesv_irs_infos was not created. CUSOLVER_STATUS_ALLOC_FAILED CPU memory allocation failed, most likely during the allocation of the residual array that store the residual norms. These helper functions calculate the size of work buffers needed. 
cusolverStatus_tcusolverDnSgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, float *A, int lda, int *Lwork );cusolverStatus_tcusolverDnDgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, double *A, int lda, int *Lwork );cusolverStatus_tcusolverDnCgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuComplex *A, int lda, int *Lwork );cusolverStatus_tcusolverDnZgeqrf_bufferSize(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex *A, int lda, int *Lwork ); The S and D data types are real valued single and double precision, respectively. cusolverStatus_tcusolverDnSgeqrf(cusolverDnHandle_t handle, int m, int n, float *A, int lda, float *TAU, float *Workspace, int Lwork, int *devInfo );cusolverStatus_tcusolverDnDgeqrf(cusolverDnHandle_t handle, int m, int n, double *A, int lda, double *TAU, double *Workspace, int Lwork, int *devInfo ); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCgeqrf(cusolverDnHandle_t handle, int m, int n, cuComplex *A, int lda, cuComplex *TAU, cuComplex *Workspace, int Lwork, int *devInfo );cusolverStatus_tcusolverDnZgeqrf(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex *A, int lda, cuDoubleComplex *TAU, cuDoubleComplex *Workspace, int Lwork, int *devInfo ); This function computes the QR factorization of a m×n matrix where A is an m×n matrix, Q is an m×n matrix, and R is a n×n upper triangular matrix. The user has to provide working space which is pointed by input parameter Workspace. The input parameter Lwork is size of the working space, and it is returned by geqrf_bufferSize(). The matrix R is overwritten in upper triangular part of A, including diagonal elements. The matrix Q is not formed explicitly, instead, a sequence of householder vectors are stored in lower triangular part of A. The leading nonzero element of householder vector is assumed to be 1 such that output parameter TAU contains the scaling factor τ. If v is original householder vector, q is the new householder vector corresponding to τ, satisfying the following relation \(I - 2*v*v^{H} = I - \tau*q*q^{H}\) If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). API of geqrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. m host input Number of rows of matrix A. n host input Number of columns of matrix A. A device in/out <type> array of dimension lda * n with lda is not less than max(1,m). lda host input Leading dimension of two-dimensional array used to store matrix A. TAU device output <type> array of dimension at least min(m,n). Workspace device in/out Working space, <type> array of size Lwork. Lwork host input Size of working array Workspace. devInfo device output If devInfo = 0, the LU factorization is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (m,n<0 or lda<max(1,m)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 2.4.2.13. cusolverDnGeqrf()[DEPRECATED] [[DEPRECATED]] use cusolverDnXgeqrf() instead. The routine will be removed in the next major release. The helper functions below can calculate the sizes needed for pre-allocated buffer. 
cusolverStatus_tcusolverDnGeqrf_bufferSize( cusolverDnHandle_t handle, cusolverDnParams_t params, int64_t m, int64_t n, cudaDataType dataTypeA, const void *A, int64_t lda, cudaDataType dataTypeTau, const void *tau, cudaDataType computeType, size_t *workspaceInBytes ) The following routine: cusolverStatus_tcusolverDnGeqrf( cusolverDnHandle_t handle, cusolverDnParams_t params, int64_t m, int64_t n, cudaDataType dataTypeA, void *A, int64_t lda, cudaDataType dataTypeTau, void *tau, cudaDataType computeType, void *pBuffer, size_t workspaceInBytes, int *info ) computes the QR factorization of an m×n matrix where A is a m×n matrix, Q is an m×n matrix, and R is an n×n upper triangular matrix using the generic API interface. The user has to provide working space which is pointed by input parameter pBuffer. The input parameter workspaceInBytes is size in bytes of the working space, and it is returned by The matrix R is overwritten in upper triangular part of A, including diagonal elements. The matrix Q is not formed explicitly, instead, a sequence of householder vectors are stored in lower triangular part of A. The leading nonzero element of householder vector is assumed to be 1 such that output parameter TAU contains the scaling factor τ. If v is original householder vector, q is the new householder vector corresponding to τ, satisfying the following relation \(I - 2*v*v^{H} = I - \tau*q*q^{H}\) If output parameter info = -i (less than zero), the i-th parameter is wrong (not counting handle). Currently, cusolverDnGeqrf supports only the default algorithm. Table of algorithms supported by cusolverDnGeqrf CUSOLVER_ALG_0 or NULL Default algorithm. List of input arguments for cusolverDnGeqrf_bufferSize and cusolverDnGeqrf: API of geqrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. params host input Structure with information collected by cusolverDnSetAdvOptions. m host input Number of rows of matrix A. n host input Number of columns of matrix A. dataTypeA host in Data type of array A. A device in/out Array of dimension lda * n with lda is not less than max(1,m). lda host input Leading dimension of two-dimensional array used to store matrix A. TAU device output Array of dimension at least min(m,n). computeType host in Data type of computation. pBuffer device in/out Working space. Array of type void of size workspaceInBytes bytes. workspaceInBytes host input Size in bytes of working array pBuffer. info device output If info = 0, the LU factorization is successful. if info = -i, the i-th parameter is wrong (not counting handle). The generic API has two different types, dataTypeA is data type of the matrix A and array tau and computeType is compute type of the operation. cusolverDnGeqrf only supports the following four Valid combination of data type and compute type DataTypeA ComputeType Meaning CUDA_R_32F CUDA_R_32F SGEQRF CUDA_R_64F CUDA_R_64F DGEQRF CUDA_C_32F CUDA_C_32F CGEQRF CUDA_C_64F CUDA_C_64F ZGEQRF Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (m,n<0 or lda<max(1,m)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. 
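As an illustration of the QR factorization API described above, the following is a minimal sketch of the usual geqrf call sequence with the legacy interface: query the workspace size, allocate it, run the factorization and inspect devInfo. It assumes the standard cusolverDnCreate()/cusolverDnDestroy() handle management and cudaMalloc()/cudaMemcpy() from the CUDA runtime, and the 3×3 column-major matrix is purely illustrative; error checking is reduced to a single devInfo read for brevity.

#include <cuda_runtime.h>
#include <cusolverDn.h>
#include <stdio.h>

/* Minimal sketch: QR-factor a 3x3 double matrix with the legacy geqrf API. */
int main(void)
{
    const int m = 3, n = 3, lda = m;
    /* Hypothetical column-major input matrix. */
    double hA[9] = {12, 6, -4,  -51, 167, 24,  4, -68, -41};

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    double *dA, *dTau, *dWork;
    int *devInfo, lwork = 0, info = 0;
    cudaMalloc((void**)&dA, sizeof(double) * lda * n);
    cudaMalloc((void**)&dTau, sizeof(double) * n);
    cudaMalloc((void**)&devInfo, sizeof(int));
    cudaMemcpy(dA, hA, sizeof(double) * lda * n, cudaMemcpyHostToDevice);

    /* Step 1: query the required workspace size (in elements of double). */
    cusolverDnDgeqrf_bufferSize(handle, m, n, dA, lda, &lwork);
    cudaMalloc((void**)&dWork, sizeof(double) * lwork);

    /* Step 2: compute A = Q*R. R overwrites the upper triangle of dA,
       the Householder vectors are stored below the diagonal, and the
       scaling factors are stored in dTau. */
    cusolverDnDgeqrf(handle, m, n, dA, lda, dTau, dWork, lwork, devInfo);

    cudaMemcpy(&info, devInfo, sizeof(int), cudaMemcpyDeviceToHost);
    printf("geqrf devInfo = %d (0 means success)\n", info);

    cudaFree(dA); cudaFree(dTau); cudaFree(dWork); cudaFree(devInfo);
    cusolverDnDestroy(handle);
    return 0;
}

From here, the factor Q can be applied to another matrix or formed explicitly with ormqr/orgqr as described later in this section.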
These functions compute the solution of a system of linear equations with one or multiple right hand sides using mixed precision iterative refinement techniques based on the QR factorization (Xgels). They are similar in functionality to the full precision LAPACK QR (least squares) solver (Xgels, where X denotes Z, C, D or S), but use lower precision internally in order to provide a faster time to solution; hence the name mixed precision. Mixed precision iterative refinement means that the solver computes a QR factorization in lower precision and then iteratively refines the solution to achieve the accuracy of the Inputs/Outputs datatype precision. The <t1> corresponds to the Inputs/Outputs datatype precision, while <t2> represents the internal lower precision at which the factorization is carried out. The functions solve the overdetermined system \(A \times X = B\) in the least-squares sense, where A is an m-by-n matrix, X is an n-by-nrhs matrix and B is an m-by-nrhs matrix. The function APIs are designed to be as close as possible to the LAPACK API so that they can serve as a quick and easy drop-in replacement. A description of these functions is given below.
The <t1><t2>gels() functions are designated by two floating point precisions: <t1> corresponds to the main precision (e.g., the Inputs/Outputs datatype precision) and <t2> represents the internal lower precision at which the factorization is carried out. cusolver<t1><t2>gels() first attempts to factorize the matrix in lower precision and uses this factorization within an iterative refinement procedure to obtain a solution with the same normwise backward error as the main precision <t1>. If this approach fails to converge, the method falls back to a main precision factorization and solve (Xgels), so that there is always a good solution at the output of these functions. If <t2> is equal to <t1>, the process is not mixed precision but rather a full factorization, solve and refinement within the same main precision. The iterative refinement process is stopped if
ITER > ITERMAX
or if, for all right hand sides,
RNRM < SQRT(N)*XNRM*ANRM*EPS*BWDMAX
where
• ITER is the number of the current iteration in the iterative refinement process
• RNRM is the infinity-norm of the residual
• XNRM is the infinity-norm of the solution
• ANRM is the infinity-operator-norm of the matrix A
• EPS is the machine epsilon that matches LAPACK <t1>LAMCH('Epsilon')
The values ITERMAX and BWDMAX are fixed to 50 and 1.0, respectively. The function's return value describes the result of the solving process: CUSOLVER_STATUS_SUCCESS indicates that the function finished successfully; otherwise, the return value indicates whether one of the API arguments is incorrect or whether the function did not finish successfully. More details about an error are reported in the niters and dinfo API parameters; see their descriptions below. The user should provide the required workspace allocated in device memory. The number of bytes required can be queried by calling the respective <t1><t2>gels_bufferSize() function. A large set of mixed precision functions is provided, with half, bfloat16 and TensorFloat-32 as the lower precision, as well as same-precision functions (i.e., the main and lowest precisions are equal, <t2> is equal to <t1>). The following table specifies which precisions are used for each interface function. Tensor Float (TF32), introduced with NVIDIA Ampere Architecture GPUs, is the most robust tensor core accelerated compute mode for the iterative refinement solver.
It is able to solve the widest range of problems in HPC arising from different applications and provides up to 4X and 5X speedup for real and complex systems, respectively. On Volta and Turing architecture GPUs, half precision tensor core acceleration is recommended. In cases where the iterative refinement solver fails to converge to the desired accuracy (main precision, INOUT data precision), it is recommended to use main precision as internal lowest precision (i.e., cusolverDn[DD,ZZ]gels for the FP64 case). Interface function Main precision (matrix, rhs and solution datatype) Lowest precision allowed to be used internally cusolverDnZZgels cuDoubleComplex double complex cusolverDnZCgels cuDoubleComplex single complex cusolverDnZKgels cuDoubleComplex half complex cusolverDnZEgels cuDoubleComplex bfloat complex cusolverDnZYgels cuDoubleComplex tensorfloat complex cusolverDnCCgels cuComplex single complex cusolverDnCKgels cuComplex half complex cusolverDnCEgels cuComplex bfloat complex cusolverDnCYgels cuComplex tensorfloat complex cusolverDnDDgels double double cusolverDnDSgels double single cusolverDnDHgels double half cusolverDnDBgels double bfloat cusolverDnDXgels double tensorfloat cusolverDnSSgels float single cusolverDnSHgels float half cusolverDnSBgels float bfloat cusolverDnSXgels float tensorfloat cusolverDn<t1><t2>gels_bufferSize() functions will return workspace buffer size in bytes required for the corresponding cusolverDn<t1><t2>gels() function. cusolverStatus_tcusolverDnZZgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZCgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZKgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZEgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnZYgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCCgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCKgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCEgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnCYgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDDgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, double * dA, int 
ldda, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDSgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDHgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDBgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnDXgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSSgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSHgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSBgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes);cusolverStatus_tcusolverDnSXgels_bufferSize( cusolverHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dwork, size_t * lwork_bytes); Parameter Memory In/out Meaning handle host input Handle to the cusolverDN library context. m host input Number of rows of the matrix A. Should be non-negative and n<=m n host input Number of columns of the matrix A. Should be non-negative and n<=m. nrhs host input Number of right hand sides to solve. Should be non-negative. dA device None Matrix A with size m-by-n. Can be NULL. ldda host input Leading dimension of two-dimensional array used to store matrix A. ldda >= m. dB device None Set of right hand sides B of size m-by-nrhs. Can be NULL. lddb host input Leading dimension of two-dimensional array used to store matrix of right hand sides B. lddb >= max(1,m). dX device None Set of solution vectors X of size n-by-nrhs. Can be NULL. lddx host input Leading dimension of two-dimensional array used to store matrix of solution vectors X. lddx >= max(1,n). dwork device none Pointer to device workspace. Not used and can be NULL. lwork_bytes host output Pointer to a variable where required size of temporary workspace in bytes will be stored. Can’t be NULL. 
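To make the workspace-query-then-solve pattern concrete, the following is a minimal sketch of the typical call sequence for cusolverDnDSgels_bufferSize() followed by cusolverDnDSgels(), whose prototypes are shown next. It assumes the standard cusolverDnCreate()/cusolverDnDestroy() handle management and cudaMalloc()/cudaMemcpy() calls from the CUDA runtime; the 4×3 least-squares system is purely illustrative, and error checking is omitted for brevity.

#include <cuda_runtime.h>
#include <cusolverDn.h>
#include <stdio.h>

/* Minimal sketch: solve an overdetermined least-squares problem A*X = B
   (FP64 data, FP32 internal factorization) with cusolverDnDSgels(). */
int main(void)
{
    const int m = 4, n = 3, nrhs = 1;
    const int ldda = m, lddb = m, lddx = n;
    double hA[12] = {1, 1, 1, 1,   1, 2, 3, 4,   1, 4, 9, 16}; /* column-major */
    double hB[4]  = {2, 4, 8, 16};
    double hX[3];

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    double *dA, *dB, *dX;
    int *dinfo, niter = 0, info = 0;
    void *dWorkspace = NULL;
    size_t lwork_bytes = 0;
    cudaMalloc((void**)&dA, sizeof(double) * ldda * n);
    cudaMalloc((void**)&dB, sizeof(double) * lddb * nrhs);
    cudaMalloc((void**)&dX, sizeof(double) * lddx * nrhs);
    cudaMalloc((void**)&dinfo, sizeof(int));
    cudaMemcpy(dA, hA, sizeof(double) * ldda * n, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(double) * lddb * nrhs, cudaMemcpyHostToDevice);

    /* Query the workspace size in bytes, then allocate it.
       The dwork argument of the bufferSize call is unused and may be NULL. */
    cusolverDnDSgels_bufferSize(handle, m, n, nrhs, dA, ldda, dB, lddb,
                                dX, lddx, NULL, &lwork_bytes);
    cudaMalloc(&dWorkspace, lwork_bytes);

    /* Factorize in FP32 and iteratively refine to FP64 accuracy.
       niter < 0 indicates the refinement failed and the main precision
       factorization was performed instead. */
    cusolverDnDSgels(handle, m, n, nrhs, dA, ldda, dB, lddb, dX, lddx,
                     dWorkspace, lwork_bytes, &niter, dinfo);

    cudaMemcpy(&info, dinfo, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(hX, dX, sizeof(double) * n, cudaMemcpyDeviceToHost);
    printf("dinfo = %d, niter = %d, x = [%g %g %g]\n",
           info, niter, hX[0], hX[1], hX[2]);

    cudaFree(dA); cudaFree(dB); cudaFree(dX); cudaFree(dinfo);
    cudaFree(dWorkspace);
    cusolverDnDestroy(handle);
    return 0;
}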
cusolverStatus_t cusolverDnZZgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZCgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZKgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZEgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnZYgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuDoubleComplex * dA, int ldda, cuDoubleComplex * dB, int lddb, cuDoubleComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCCgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCKgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCEgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnCYgels( cusolverDnHandle_t handle, int m, int n, int nrhs, cuComplex * dA, int ldda, cuComplex * dB, int lddb, cuComplex * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDDgels( cusolverDnHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDSgels( cusolverDnHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDHgels( cusolverDnHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDBgels( cusolverDnHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnDXgels( cusolverDnHandle_t handle, int m, int n, int nrhs, double * dA, int ldda, double * dB, int lddb, double * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnSSgels( cusolverDnHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnSHgels( cusolverDnHandle_t handle, int m, int n, int nrhs, 
float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnSBgels( cusolverDnHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo);cusolverStatus_t cusolverDnSXgels( cusolverDnHandle_t handle, int m, int n, int nrhs, float * dA, int ldda, float * dB, int lddb, float * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * niter, int * dinfo); Parameter Memory In/out Meaning handle host input Handle to the cusolverDN library context. m host input Number of rows of the matrix A. Should be non-negative and n<=m n host input Number of columns of the matrix A. Should be non-negative and n<=m. nrhs host input Number of right hand sides to solve. Should be non-negative. dA device in/out Matrix A with size m-by-n. Can’t be NULL. On return - unchanged if the lowest precision is not equal to the main precision and the iterative refinement solver converged, - garbage otherwise. ldda host input Leading dimension of two-dimensional array used to store matrix A. ldda >= m. dB device input Set of right hand sides B of size m-by-nrhs. Can’t be NULL. lddb host input Leading dimension of two-dimensional array used to store matrix of right hand sides B. lddb >= max(1,m). dX device output Set of solution vectors X of size n-by-nrhs. Can’t be NULL. lddx host input Leading dimension of two-dimensional array used to store matrix of solution vectors X. lddx >= max(1,n). dWorkspace device input Pointer to an allocated workspace in device memory of size lwork_bytes. lwork_bytes host input Size of the allocated device workspace. Should be at least what was returned by cusolverDn<T1><T2>gels_bufferSize() function If iter is • <0 : iterative refinement has failed, main precision (Inputs/Outputs precision) factorization has been performed. • -1 : taking into account machine parameters, n, nrhs, it is a priori not worth working in lower precision • -2 : overflow of an entry when moving from main to lower precision niters host output • -3 : failure during the factorization • -5 : overflow occurred during computation • -50: solver stopped the iterative refinement after reaching maximum allowed iterations. • >0 : iter is a number of iterations solver performed to reach convergence criteria dinfo device output Status of the IRS solver on the return. If 0 - solve was successful. If dinfo = -i then i-th argument is not valid. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. Invalid parameters were passed, for example: • n<0 CUSOLVER_STATUS_INVALID_VALUE • ldda<max(1,m) • lddb<max(1,m) • lddx<max(1,n) CUSOLVER_STATUS_ARCH_MISMATCH The IRS solver supports compute capability 7.0 and above. The lowest precision options CUSOLVER_[CR]_16BF and CUSOLVER_[CR]_TF32 are only available on compute capability 8.0 and above. CUSOLVER_STATUS_INVALID_WORKSPACE lwork_bytes is smaller than the required workspace. CUSOLVER_STATUS_IRS_OUT_OF_RANGE Numerical error related to niters <0, see niters description for more details. CUSOLVER_STATUS_INTERNAL_ERROR An internal error occurred; check the dinfo and the niters arguments for more details. 2.4.2.15. 
cusolverDnIRSXgels()
This function provides the same functionality as the cusolverDn<T1><T2>gels() functions, but wrapped in a more generic, expert interface that gives the user more control over the solver parameters and provides more information on output. cusolverDnIRSXgels() allows additional control of the solver, such as setting:
• the main precision (Inputs/Outputs precision) of the solver
• the lowest precision to be used internally by the solver
• the refinement solver type
• the maximum allowed number of iterations in the refinement phase
• the tolerance of the refinement solver
• the fallback to main precision
• and others
through the configuration parameters structure gels_irs_params and its helper functions. For details about which configurations can be set and their meaning, please refer to the functions in the cuSolverDN Helper Function Section that start with cusolverDnIRSParamsxxxx(). Moreover, cusolverDnIRSXgels() provides additional information on output, such as the convergence history (e.g., the residual norms) at each iteration and the number of iterations needed to converge. For details about which information can be retrieved and its meaning, please refer to the functions in the cuSolverDN Helper Function Section that start with cusolverDnIRSInfosxxxx(). The function's return value describes the result of the solving process: CUSOLVER_STATUS_SUCCESS indicates that the function finished successfully; otherwise, the return value indicates whether one of the API arguments is incorrect, whether the configuration of the params/infos structures is incorrect, or whether the function did not finish successfully. More details about an error can be found by checking the niters and dinfo API parameters; see their descriptions below. The user should provide the required workspace, allocated on the device, for the cusolverDnIRSXgels() function. The number of bytes required can be queried by calling the corresponding cusolverDnIRSXgels_bufferSize() function. Note that if a particular configuration is to be set via the params structure, it should be set before the call to cusolverDnIRSXgels_bufferSize() so that the returned workspace size accounts for it. The following table lists the supported values of the lowest precision for each Inputs/Outputs data type. Note that if the lowest precision matches the Inputs/Outputs data type, the main precision factorization will be used. Tensor Float (TF32), introduced with NVIDIA Ampere Architecture GPUs, is the most robust tensor core accelerated compute mode for the iterative refinement solver. It is able to solve the widest range of HPC problems arising from different applications and provides up to 4X and 5X speedup for real and complex systems, respectively. On Volta and Turing architecture GPUs, half precision tensor core acceleration is recommended. In cases where the iterative refinement solver fails to converge to the desired accuracy (main precision, i.e., the INOUT data precision), it is recommended to use the main precision as the internal lowest precision.
Inputs/Outputs Data Type (e.g., main precision) Supported values for the lowest precision CUSOLVER_C_64F CUSOLVER_C_64F, CUSOLVER_C_32F, CUSOLVER_C_16F, CUSOLVER_C_16BF, CUSOLVER_C_TF32 CUSOLVER_C_32F CUSOLVER_C_32F, CUSOLVER_C_16F, CUSOLVER_C_16BF, CUSOLVER_C_TF32 CUSOLVER_R_64F CUSOLVER_R_64F, CUSOLVER_R_32F, CUSOLVER_R_16F, CUSOLVER_R_16BF, CUSOLVER_R_TF32 CUSOLVER_R_32F CUSOLVER_R_32F, CUSOLVER_R_16F, CUSOLVER_R_16BF, CUSOLVER_R_TF32 The cusolverDnIRSXgels_bufferSize() function return the required workspace buffer size in bytes for the corresponding cusolverDnXgels() call with given gels_irs_params configuration. cusolverStatus_tcusolverDnIRSXgels_bufferSize( cusolverDnHandle_t handle, cusolverDnIRSParams_t gels_irs_params, cusolver_int_t m, cusolver_int_t n, cusolver_int_t nrhs, size_t * lwork_bytes); Parameters of cusolverDnIRSXgels_bufferSize() functions Parameter Memory In/ Meaning handle host input Handle to the cusolverDn library context. params host input Xgels configuration parameters m host input Number of rows of the matrix A. Should be non-negative and n<=m n host input Number of columns of the matrix A. Should be non-negative and n<=m. nrhs host input Number of right hand sides to solve. Should be non-negative. Note that, nrhs is limited to 1 if the selected IRS refinement solver is CUSOLVER_IRS_REFINE_GMRES, CUSOLVER_IRS_REFINE_GMRES_GMRES, CUSOLVER_IRS_REFINE_CLASSICAL_GMRES. lwork_bytes host out Pointer to a variable, where the required size in bytes, of the workspace will be stored after a call to cusolverDnIRSXgels_bufferSize. Can’t be NULL. cusolverStatus_t cusolverDnIRSXgels( cusolverDnHandle_t handle, cusolverDnIRSParams_t gels_irs_params, cusolverDnIRSInfos_t gels_irs_infos, int m, int n, int nrhs, void * dA, int ldda, void * dB, int lddb, void * dX, int lddx, void * dWorkspace, size_t lwork_bytes, int * dinfo); Parameter Memory In/out Meaning handle host input Handle to the cusolverDn library context. gels_irs_params host input Configuration parameters structure, can serve one or more calls to any IRS solver gels_irs_infos host in/out Info structure, where information about a particular solve will be stored. The gels_irs_infos structure correspond to a particular call. Thus different calls requires different gels_irs_infos structure otherwise, it will be overwritten. m host input Number of rows of the matrix A. Should be non-negative and n<=m n host input Number of columns of the matrix A. Should be non-negative and n<=m. nrhs host input Number of right hand sides to solve. Should be non-negative. Note that, nrhs is limited to 1 if the selected IRS refinement solver is CUSOLVER_IRS_REFINE_GMRES, CUSOLVER_IRS_REFINE_GMRES_GMRES, CUSOLVER_IRS_REFINE_CLASSICAL_GMRES. dA device in/out Matrix A with size m-by-n. Can’t be NULL. On return - unchanged if the lowest precision is not equal to the main precision and the iterative refinement solver converged, - garbage otherwise. ldda host input Leading dimension of two-dimensional array used to store matrix A. ldda >= m. dB device input Set of right hand sides B of size m-by-nrhs. Can’t be NULL. lddb host input Leading dimension of two-dimensional array used to store matrix of right hand sides B. lddb >= max(1,m). dX device output Set of solution vectors X of size n-by-nrhs. Can’t be NULL. lddx host input Leading dimension of two-dimensional array used to store matrix of solution vectors X. lddx >= max(1,n). dWorkspace device input Pointer to an allocated workspace in device memory of size lwork_bytes. 
lwork_bytes host input Size of the allocated device workspace. Should be at least what was returned by cusolverDnIRSXgels_bufferSize() function. If iter is • <0 : iterative refinement has failed, main precision (Inputs/Outputs precision) factorization has been performed if fallback is enabled • -1 : taking into account machine parameters, n, nrhs, it is a priori not worth working in lower precision • -2 : overflow of an entry when moving from main to lower precision niters host output • -3 : failure during the factorization • -5 : overflow occurred during computation • -maxiter: solver stopped the iterative refinement after reaching maximum allowed iterations • >0 : iter is a number of iterations solver performed to reach convergence criteria dinfo device output Status of the IRS solver on the return. If 0 - solve was successful. If dinfo = -i then i-th argument is not valid. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. Invalid parameters were passed, for example: • n<0 CUSOLVER_STATUS_INVALID_VALUE • ldda<max(1,m) • lddb<max(1,m) • lddx<max(1,n) CUSOLVER_STATUS_ARCH_MISMATCH The IRS solver supports compute capability 7.0 and above. The lowest precision options CUSOLVER_[CR]_16BF and CUSOLVER_[CR]_TF32 are only available on compute capability 8.0 and above. CUSOLVER_STATUS_INVALID_WORKSPACE lwork_bytes is smaller than the required workspace. Could happen if the users called cusolverDnIRSXgels_bufferSize() function, then changed some of the configurations setting such as the lowest precision. CUSOLVER_STATUS_IRS_OUT_OF_RANGE Numerical error related to niters <0; see niters description for more details. CUSOLVER_STATUS_INTERNAL_ERROR An internal error occurred, check the dinfo and the niters arguments for more details. CUSOLVER_STATUS_IRS_PARAMS_NOT_INITIALIZED The configuration parameter gels_irs_params structure was not created. CUSOLVER_STATUS_IRS_PARAMS_INVALID One of the configuration parameter in the gels_irs_params structure is not valid. CUSOLVER_STATUS_IRS_PARAMS_INVALID_PREC The main and/or the lowest precision configuration parameter in the gels_irs_params structure is not valid, check the table above for the supported CUSOLVER_STATUS_IRS_PARAMS_INVALID_MAXITER The maxiter configuration parameter in the gels_irs_params structure is not valid. CUSOLVER_STATUS_IRS_PARAMS_INVALID_REFINE The refinement solver configuration parameter in the gels_irs_params structure is not valid. CUSOLVER_STATUS_IRS_NOT_SUPPORTED One of the configuration parameter in the gels_irs_params structure is not supported. For example if nrhs >1, and refinement solver was set to CUSOLVER_STATUS_IRS_INFOS_NOT_INITIALIZED The information structure gels_irs_infos was not created. CUSOLVER_STATUS_ALLOC_FAILED CPU memory allocation failed, most likely during the allocation of the residual array that store the residual norms. These helper functions calculate the size of work buffers needed. Please visit cuSOLVER Library Samples - ormqr for a code example. 
cusolverStatus_tcusolverDnSormqr_bufferSize( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const float *A, int lda, const float *tau, const float *C, int ldc, int *lwork);cusolverStatus_tcusolverDnDormqr_bufferSize( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const double *A, int lda, const double *tau, const double *C, int ldc, int *lwork);cusolverStatus_tcusolverDnCunmqr_bufferSize( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const cuComplex *A, int lda, const cuComplex *tau, const cuComplex *C, int ldc, int *lwork);cusolverStatus_tcusolverDnZunmqr_bufferSize( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, const cuDoubleComplex *C, int ldc, int *lwork); The S and D data types are real valued single and double precision, respectively. cusolverStatus_tcusolverDnSormqr( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const float *A, int lda, const float *tau, float *C, int ldc, float *work, int lwork, int *devInfo);cusolverStatus_tcusolverDnDormqr( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const double *A, int lda, const double *tau, double *C, int ldc, double *work, int lwork, int *devInfo); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCunmqr( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const cuComplex *A, int lda, const cuComplex *tau, cuComplex *C, int ldc, cuComplex *work, int lwork, int *devInfo);cusolverStatus_tcusolverDnZunmqr( cusolverDnHandle_t handle, cublasSideMode_t side, cublasOperation_t trans, int m, int n, int k, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, cuDoubleComplex *C, int ldc, cuDoubleComplex *work, int lwork, int *devInfo); This function overwrites m×n matrix C by \(C = \left\{ \begin{matrix} {\text{op}(Q)*C} & {\text{if~}\textsf{side\ ==\ CUBLAS\_SIDE\_LEFT}} \\ {C*\text{op}(Q)} & {\text{if~}\textsf{side\ ==\ CUBLAS\_SIDE\_RIGHT}} \\ \end{matrix} \right.\) The operation of Q is defined by \(\text{op}(Q) = \left\{ \begin{matrix} Q & {\text{if~}\textsf{transa\ ==\ CUBLAS\_OP\_N}} \\ Q^{T} & {\text{if~}\textsf{transa\ ==\ CUBLAS\_OP\_T}} \\ Q^{H} & {\text{if~}\textsf{transa\ ==\ CUBLAS\ _OP\_C}} \\ \end{matrix} \right.\) Q is a unitary matrix formed by a sequence of elementary reflection vectors from QR factorization (geqrf) of A. Q=H(1)H(2) … H(k) Q is of order m if side = CUBLAS_SIDE_LEFT and of order n if side = CUBLAS_SIDE_RIGHT. The user has to provide working space which is pointed by input parameter work. The input parameter lwork is size of the working space, and it is returned by geqrf_bufferSize() or ormqr_bufferSize(). Please note that the size in bytes of the working space is equal to sizeof(<type>) * lwork. If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). The user can combine geqrf, ormqr and trsm to complete a linear solver or a least-square solver. API of ormqr Parameter Memory In/out Meaning handle host input Handle to the cuSolverDn library context. side host input Indicates if matrix Q is on the left or right of C. trans host input Operation op(Q) that is non- or (conj.) transpose. 
m host input Number of rows of matrix C. n host input Number of columns of matrix C. k host input Number of elementary reflections whose product defines the matrix Q. A device in/out <type> array of dimension lda * k with lda is not less than max(1,m). The matrix A is from geqrf, so i-th column contains elementary reflection vector. lda host input Leading dimension of two-dimensional array used to store matrix A. if side is CUBLAS_SIDE_LEFT, lda >= max(1,m); if side is CUBLAS_SIDE_RIGHT, lda >= max(1,n). tau device input <type> array of dimension at least min(m,n). The vector tau is from geqrf, so tau(i) is the scalar of i-th elementary reflection vector. C device in/out <type> array of size ldc * n. On exit, C is overwritten by op(Q)*C. ldc host input Leading dimension of two-dimensional array of matrix C. ldc >= max(1,m). work device in/out Working space, <type> array of size lwork. lwork host input Size of working array work. devInfo device output If devInfo = 0, the ormqr is successful. If devInfo = -i, the i-th parameter is wrong (not counting handle). Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (m,n<0 or wrong lda or ldc). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. These helper functions calculate the size of work buffers needed. Please visit cuSOLVER Library Samples - orgqr for a code example. cusolverStatus_tcusolverDnSorgqr_bufferSize( cusolverDnHandle_t handle, int m, int n, int k, const float *A, int lda, const float *tau, int *lwork);cusolverStatus_tcusolverDnDorgqr_bufferSize( cusolverDnHandle_t handle, int m, int n, int k, const double *A, int lda, const double *tau, int *lwork);cusolverStatus_tcusolverDnCungqr_bufferSize( cusolverDnHandle_t handle, int m, int n, int k, const cuComplex *A, int lda, const cuComplex *tau, int *lwork);cusolverStatus_tcusolverDnZungqr_bufferSize( cusolverDnHandle_t handle, int m, int n, int k, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, int *lwork); The S and D data types are real valued single and double precision, respectively. cusolverStatus_tcusolverDnSorgqr( cusolverDnHandle_t handle, int m, int n, int k, float *A, int lda, const float *tau, float *work, int lwork, int *devInfo);cusolverStatus_tcusolverDnDorgqr( cusolverDnHandle_t handle, int m, int n, int k, double *A, int lda, const double *tau, double *work, int lwork, int *devInfo); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCungqr( cusolverDnHandle_t handle, int m, int n, int k, cuComplex *A, int lda, const cuComplex *tau, cuComplex *work, int lwork, int *devInfo);cusolverStatus_tcusolverDnZungqr( cusolverDnHandle_t handle, int m, int n, int k, cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, cuDoubleComplex *work, int lwork, int *devInfo); This function overwrites m×n matrix A by \(Q = {H(1)}*{H(2)}*{...}*{H(k)}\) where Q is a unitary matrix formed by a sequence of elementary reflection vectors stored in A. The user has to provide working space which is pointed by input parameter work. The input parameter lwork is size of the working space, and it is returned by orgqr_bufferSize(). Please note that the size in bytes of the working space is equal to sizeof(<type>) * lwork. 
If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). The user can combine geqrf, orgqr to complete orthogonalization. API of orgqr Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. m host input Number of rows of matrix Q. m >= 0; n host input Number of columns of matrix Q. m >= n >= 0; k host input Number of elementary reflections whose product defines the matrix Q. n >= k >= 0; A device in/out <type> array of dimension lda * n with lda is not less than max(1,m). i-th column of A contains elementary reflection vector. lda host input Leading dimension of two-dimensional array used to store matrix A. lda >= max(1,m). tau device input <type> array of dimension k. tau(i) is the scalar of i-th elementary reflection vector. work device in/out Working space, <type> array of size lwork. lwork host input Size of working array work. devInfo device output If info = 0, the orgqr is successful. if info = -i, the i-th parameter is wrong (not counting handle). Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (m,n,k<0, n>m, k>n or lda<m). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. These helper functions calculate the size of the needed buffers. cusolverStatus_tcusolverDnSsytrf_bufferSize(cusolverDnHandle_t handle, int n, float *A, int lda, int *Lwork );cusolverStatus_tcusolverDnDsytrf_bufferSize(cusolverDnHandle_t handle, int n, double *A, int lda, int *Lwork );cusolverStatus_tcusolverDnCsytrf_bufferSize(cusolverDnHandle_t handle, int n, cuComplex *A, int lda, int *Lwork );cusolverStatus_tcusolverDnZsytrf_bufferSize(cusolverDnHandle_t handle, int n, cuDoubleComplex *A, int lda, int *Lwork ); The S and D data types are real valued single and double precision, respectively. cusolverStatus_tcusolverDnSsytrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, int *ipiv, float *work, int lwork, int *devInfo );cusolverStatus_tcusolverDnDsytrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, int *ipiv, double *work, int lwork, int *devInfo ); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCsytrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, int *ipiv, cuComplex *work, int lwork, int *devInfo );cusolverStatus_tcusolverDnZsytrf(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, int *ipiv, cuDoubleComplex *work, int lwork, int *devInfo ); This function computes the Bunch-Kaufman factorization of a n×n symmetric indefinite matrix A is a n×n symmetric matrix, only lower or upper part is meaningful. The input parameter uplo which part of the matrix is used. The function would leave other part untouched. If input parameter uplo is CUBLAS_FILL_MODE_LOWER, only lower triangular part of A is processed, and replaced by lower triangular factor L and block diagonal matrix D. Each block of D is either 1x1 or 2x2 block, depending on pivoting. \(P*A*P^{T} = L*D*L^{T}\) If input parameter uplo is CUBLAS_FILL_MODE_UPPER, only upper triangular part of A is processed, and replaced by upper triangular factor U and block diagonal matrix D. 
\(P*A*P^{T} = U*D*U^{T}\) The user has to provide working space which is pointed by input parameter work. The input parameter lwork is size of the working space, and it is returned by sytrf_bufferSize(). Please note that the size in bytes of the working space is equal to sizeof(<type>) * lwork. If Bunch-Kaufman factorization failed, i.e. A is singular. The output parameter devInfo = i would indicate D(i,i)=0. If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle). The output parameter devIpiv contains pivoting sequence. If devIpiv(i) = k > 0, D(i,i) is 1x1 block, and i-th row/column of A is interchanged with k-th row/column of A. If uplo is CUBLAS_FILL_MODE_UPPER and devIpiv(i-1) = devIpiv(i) = -m < 0, D(i-1:i,i-1:i) is a 2x2 block, and (i-1)-th row/column is interchanged with m-th row/column. If uplo is CUBLAS_FILL_MODE_LOWER and devIpiv(i+1) = devIpiv(i) = -m < 0, D(i:i+1,i:i+1) is a 2x2 block, and (i+1)-th row/column is interchanged with m-th row/column. API of sytrf Parameter Memory In/out Meaning handle host input Handle to the cuSolverDN library context. uplo host input Indicates if matrix A lower or upper part is stored, the other part is not referenced. n host input Number of rows and columns of matrix A. A device in/out <type> array of dimension lda * n with lda is not less than max(1,n). lda host input Leading dimension of two-dimensional array used to store matrix A. ipiv device output Array of size at least n, containing pivot indices. work device in/out Working space, <type> array of size lwork. lwork host input Size of working space work. devInfo device output If devInfo = 0, the LU factorization is successful. if devInfo = -i, the i-th parameter is wrong (not counting handle). if devInfo = i, the D(i,i) = 0. Status Returned CUSOLVER_STATUS_SUCCESS The operation completed successfully. CUSOLVER_STATUS_NOT_INITIALIZED The library was not initialized. CUSOLVER_STATUS_INVALID_VALUE Invalid parameters were passed (n<0 or lda<max(1,n)). CUSOLVER_STATUS_ARCH_MISMATCH The device only supports compute capability 5.0 and above. CUSOLVER_STATUS_INTERNAL_ERROR An internal operation failed. The S and D data types are real valued single and double precision, respectively. Please visit cuSOLVER Library Samples - potrfBatched for a code example. cusolverStatus_tcusolverDnSpotrfBatched( cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *Aarray[], int lda, int *infoArray, int batchSize);cusolverStatus_tcusolverDnDpotrfBatched( cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *Aarray[], int lda, int *infoArray, int batchSize); The C and Z data types are complex valued single and double precision, respectively. cusolverStatus_tcusolverDnCpotrfBatched( cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *Aarray[], int lda, int *infoArray, int batchSize);cusolverStatus_tcusolverDnZpotrfBatched( cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *Aarray[], int lda, int *infoArray, int batchSize); This function computes the Cholesky factorization of a sequence of Hermitian positive-definite matrices. Each Aarray[i] for i=0,1,..., batchSize-1 is a n×n Hermitian matrix, only lower or upper part is meaningful. The input parameter uplo indicates which part of the matrix is used. If input parameter uplo is CUBLAS_FILL_MODE_LOWER, only lower triangular part of A is processed, and replaced by lower triangular Cholesky factor L. 
The S and D data types are real valued single and double precision, respectively. Please visit cuSOLVER Library Samples - potrfBatched for a code example.

cusolverStatus_t cusolverDnSpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *Aarray[], int lda, int *infoArray, int batchSize);
cusolverStatus_t cusolverDnDpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *Aarray[], int lda, int *infoArray, int batchSize);

The C and Z data types are complex valued single and double precision, respectively.

cusolverStatus_t cusolverDnCpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *Aarray[], int lda, int *infoArray, int batchSize);
cusolverStatus_t cusolverDnZpotrfBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *Aarray[], int lda, int *infoArray, int batchSize);

This function computes the Cholesky factorization of a sequence of Hermitian positive-definite matrices. Each Aarray[i] for i = 0, 1, ..., batchSize-1 is an n×n Hermitian matrix; only the lower or upper part is meaningful. The input parameter uplo indicates which part of the matrix is used.

If the input parameter uplo is CUBLAS_FILL_MODE_LOWER, only the lower triangular part of A is processed and replaced by the lower triangular Cholesky factor L. If the input parameter uplo is CUBLAS_FILL_MODE_UPPER, only the upper triangular part of A is processed and replaced by the upper triangular Cholesky factor U.

If the Cholesky factorization fails, i.e. some leading minor of A is not positive definite (or, equivalently, some diagonal element of L or U is not a real number), the output parameter infoArray indicates the smallest leading minor of A which is not positive definite. infoArray is an integer array of size batchSize. If potrfBatched returns CUSOLVER_STATUS_INVALID_VALUE, infoArray[0] = -i (less than zero), meaning that the i-th parameter is wrong (not counting handle). If potrfBatched returns CUSOLVER_STATUS_SUCCESS but infoArray[i] = k is positive, then the i-th matrix is not positive definite and the Cholesky factorization failed at row k.

Remark: the other part of A is used as a workspace. For example, if uplo is CUBLAS_FILL_MODE_UPPER, the upper triangle of A contains the Cholesky factor U and the lower triangle of A is destroyed after the call.

API of potrfBatched
handle (host, input): Handle to the cuSolverDN library context.
uplo (host, input): Indicates if the lower or upper part is stored; the other part is used as a workspace.
n (host, input): Number of rows and columns of matrix A.
Aarray (device, in/out): Array of pointers to <type> arrays of dimension lda * n, with lda not less than max(1,n).
lda (host, input): Leading dimension of the two-dimensional array used to store each matrix Aarray[i].
infoArray (device, output): Array of size batchSize. infoArray[i] contains information about the factorization of Aarray[i]. If potrfBatched returns CUSOLVER_STATUS_INVALID_VALUE, infoArray[0] = -i (less than zero) means the i-th parameter is wrong (not counting handle). If potrfBatched returns CUSOLVER_STATUS_SUCCESS, infoArray[i] = 0 means the Cholesky factorization of the i-th matrix is successful, and infoArray[i] = k means the leading submatrix of order k of the i-th matrix is not positive definite.
batchSize (host, input): Number of pointers in Aarray.

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED: The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE: Invalid parameters were passed (n<0 or lda<max(1,n) or batchSize<1).
CUSOLVER_STATUS_INTERNAL_ERROR: An internal operation failed.

cusolverStatus_t cusolverDnSpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, float *Aarray[], int lda, float *Barray[], int ldb, int *info, int batchSize);
cusolverStatus_t cusolverDnDpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, double *Aarray[], int lda, double *Barray[], int ldb, int *info, int batchSize);
cusolverStatus_t cusolverDnCpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, cuComplex *Aarray[], int lda, cuComplex *Barray[], int ldb, int *info, int batchSize);
cusolverStatus_t cusolverDnZpotrsBatched(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, int nrhs, cuDoubleComplex *Aarray[], int lda, cuDoubleComplex *Barray[], int ldb, int *info, int batchSize);

This function solves a sequence of linear systems

\({A\lbrack i\rbrack}*{X\lbrack i\rbrack} = {B\lbrack i\rbrack}\)

where each Aarray[i] for i = 0, 1, ..., batchSize-1 is an n×n Hermitian matrix; only the lower or upper part is meaningful. The input parameter uplo indicates which part of the matrix is used. The user has to call potrfBatched first to factorize the matrices Aarray[i]. If the input parameter uplo is CUBLAS_FILL_MODE_LOWER, A is the lower triangular Cholesky factor L corresponding to \(A = L*L^{H}\).
If the input parameter uplo is CUBLAS_FILL_MODE_UPPER, A is the upper triangular Cholesky factor U corresponding to \(A = U^{H}*U\).

The operation is in-place, i.e. matrix X overwrites matrix B with the same leading dimension ldb. The output parameter info is a scalar. If info = -i (less than zero), the i-th parameter is wrong (not counting handle).

Remark 1: only nrhs=1 is supported.

Remark 2: infoArray from potrfBatched indicates whether each matrix is positive definite; info from potrsBatched only shows which input parameter is wrong (not counting handle).

Remark 3: the other part of A is used as a workspace. For example, if uplo is CUBLAS_FILL_MODE_UPPER, the upper triangle of A contains the Cholesky factor U and the lower triangle of A is destroyed after the call.

Please visit cuSOLVER Library Samples - potrfBatched for a code example.

API of potrsBatched
handle (host, input): Handle to the cuSolverDN library context.
uplo (host, input): Indicates if matrix A's lower or upper part is stored.
n (host, input): Number of rows and columns of matrix A.
nrhs (host, input): Number of columns of matrices X and B.
Aarray (device, in/out): Array of pointers to <type> arrays of dimension lda * n, with lda not less than max(1,n). Aarray[i] is either the lower Cholesky factor L or the upper Cholesky factor U.
lda (host, input): Leading dimension of the two-dimensional array used to store each matrix Aarray[i].
Barray (device, in/out): Array of pointers to <type> arrays of dimension ldb * nrhs, with ldb not less than max(1,n). As an input, Barray[i] is the right-hand-side matrix. As an output, Barray[i] is the solution matrix.
ldb (host, input): Leading dimension of the two-dimensional array used to store each matrix Barray[i].
info (device, output): If info = 0, all parameters are correct. If info = -i, the i-th parameter is wrong (not counting handle).
batchSize (host, input): Number of pointers in Aarray.

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED: The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE: Invalid parameters were passed (n<0, nrhs<0, lda<max(1,n), ldb<max(1,n) or batchSize<0).
CUSOLVER_STATUS_INTERNAL_ERROR: An internal operation failed.
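A minimal sketch of the factor-then-solve sequence for the batched routines follows (double precision, nrhs = 1 as required, error checking omitted). The helper-array names are illustrative; the key point is that Aarray and Barray are arrays of device pointers that must themselves reside in device memory.

#include <cuda_runtime.h>
#include <cusolverDn.h>

/* Sketch: batched Cholesky factorization followed by the batched solve (nrhs must be 1). */
void cholesky_solve_batched(cusolverDnHandle_t handle, int n, int batchSize,
                            double **hostAptrs,  /* host array of device pointers to each A[i] */
                            double **hostBptrs,  /* host array of device pointers to each b[i] */
                            int lda, int ldb)
{
    double **d_Aarray = NULL, **d_Barray = NULL;
    int *d_infoArray = NULL, *d_info = NULL;

    /* The pointer arrays themselves must live in device memory. */
    cudaMalloc((void **)&d_Aarray, sizeof(double *) * batchSize);
    cudaMalloc((void **)&d_Barray, sizeof(double *) * batchSize);
    cudaMalloc((void **)&d_infoArray, sizeof(int) * batchSize);
    cudaMalloc((void **)&d_info, sizeof(int));
    cudaMemcpy(d_Aarray, hostAptrs, sizeof(double *) * batchSize, cudaMemcpyHostToDevice);
    cudaMemcpy(d_Barray, hostBptrs, sizeof(double *) * batchSize, cudaMemcpyHostToDevice);

    /* Factor every A[i] in place; the lower triangle holds L afterwards. */
    cusolverDnDpotrfBatched(handle, CUBLAS_FILL_MODE_LOWER, n, d_Aarray, lda, d_infoArray, batchSize);
    /* infoArray should be copied back and inspected before trusting the factors. */

    /* Solve A[i]*x[i] = b[i]; the solutions overwrite the right-hand sides. */
    cusolverDnDpotrsBatched(handle, CUBLAS_FILL_MODE_LOWER, n, 1 /* nrhs */,
                            d_Aarray, lda, d_Barray, ldb, d_info, batchSize);

    cudaFree(d_Aarray); cudaFree(d_Barray); cudaFree(d_infoArray); cudaFree(d_info);
}

After the second call, each Barray[i] on the device holds the solution vector x[i].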
2.4.3. Dense Eigenvalue Solver Reference (legacy)

This chapter describes the eigenvalue solver API of cuSolverDN, including bidiagonalization and SVD.

These helper functions calculate the size of work buffers needed.

cusolverStatus_t cusolverDnSgebrd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *Lwork);
cusolverStatus_t cusolverDnDgebrd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *Lwork);
cusolverStatus_t cusolverDnCgebrd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *Lwork);
cusolverStatus_t cusolverDnZgebrd_bufferSize(cusolverDnHandle_t handle, int m, int n, int *Lwork);

The S and D data types are real valued single and double precision, respectively.

cusolverStatus_t cusolverDnSgebrd(cusolverDnHandle_t handle, int m, int n, float *A, int lda, float *D, float *E, float *TAUQ, float *TAUP, float *Work, int Lwork, int *devInfo);
cusolverStatus_t cusolverDnDgebrd(cusolverDnHandle_t handle, int m, int n, double *A, int lda, double *D, double *E, double *TAUQ, double *TAUP, double *Work, int Lwork, int *devInfo);

The C and Z data types are complex valued single and double precision, respectively.

cusolverStatus_t cusolverDnCgebrd(cusolverDnHandle_t handle, int m, int n, cuComplex *A, int lda, float *D, float *E, cuComplex *TAUQ, cuComplex *TAUP, cuComplex *Work, int Lwork, int *devInfo);
cusolverStatus_t cusolverDnZgebrd(cusolverDnHandle_t handle, int m, int n, cuDoubleComplex *A, int lda, double *D, double *E, cuDoubleComplex *TAUQ, cuDoubleComplex *TAUP, cuDoubleComplex *Work, int Lwork, int *devInfo);

This function reduces a general m×n matrix A to a real upper or lower bidiagonal form B by an orthogonal transformation:

\(Q^{H}*A*P = B\)

If m>=n, B is upper bidiagonal; if m<n, B is lower bidiagonal. The matrices Q and P are overwritten into matrix A in the following sense:

• if m>=n, the diagonal and the first superdiagonal are overwritten with the upper bidiagonal matrix B; the elements below the diagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors.

• if m<n, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B; the elements below the first subdiagonal, with the array TAUQ, represent the orthogonal matrix Q as a product of elementary reflectors, and the elements above the diagonal, with the array TAUP, represent the orthogonal matrix P as a product of elementary reflectors.

The user has to provide working space pointed to by the input parameter Work. The input parameter Lwork is the size of the working space, and it is returned by gebrd_bufferSize().

If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle).

Remark: gebrd only supports m>=n.

API of gebrd
handle (host, input): Handle to the cuSolverDN library context.
m (host, input): Number of rows of matrix A.
n (host, input): Number of columns of matrix A.
A (device, in/out): <type> array of dimension lda * n, with lda not less than max(1,m).
lda (host, input): Leading dimension of the two-dimensional array used to store matrix A.
D (device, output): Real array of dimension min(m,n). The diagonal elements of the bidiagonal matrix B: D(i) = A(i,i).
E (device, output): Real array of dimension min(m,n). The off-diagonal elements of the bidiagonal matrix B: if m>=n, E(i) = A(i,i+1) for i = 1,2,...,n-1; if m<n, E(i) = A(i+1,i) for i = 1,2,...,m-1.
TAUQ (device, output): <type> array of dimension min(m,n). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q.
TAUP (device, output): <type> array of dimension min(m,n). The scalar factors of the elementary reflectors which represent the orthogonal matrix P.
Work (device, in/out): Working space, <type> array of size Lwork.
Lwork (host, input): Size of Work, returned by gebrd_bufferSize.
devInfo (device, output): If devInfo = 0, the operation is successful. If devInfo = -i, the i-th parameter is wrong (not counting handle).

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED: The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE: Invalid parameters were passed (m,n<0, or lda<max(1,m)).
CUSOLVER_STATUS_ARCH_MISMATCH: The device only supports compute capability 5.0 and above.
CUSOLVER_STATUS_INTERNAL_ERROR: An internal operation failed.
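The following double-precision sketch shows the typical gebrd call sequence for an m-by-n matrix with m >= n (the only case gebrd supports). It is illustrative only: error checking is omitted, and the handle and the device matrix d_A are assumed to exist already.

#include <cuda_runtime.h>
#include <cusolverDn.h>

/* Sketch: reduce an m-by-n (m >= n) matrix to bidiagonal form. */
void bidiagonalize(cusolverDnHandle_t handle, int m, int n, double *d_A, int lda)
{
    int lwork = 0;
    int *d_info = NULL;
    double *d_D = NULL, *d_E = NULL, *d_tauq = NULL, *d_taup = NULL, *d_work = NULL;

    cudaMalloc((void **)&d_D, sizeof(double) * n);      /* min(m,n) = n when m >= n */
    cudaMalloc((void **)&d_E, sizeof(double) * n);
    cudaMalloc((void **)&d_tauq, sizeof(double) * n);
    cudaMalloc((void **)&d_taup, sizeof(double) * n);
    cudaMalloc((void **)&d_info, sizeof(int));

    /* Workspace query, then the reduction itself. */
    cusolverDnDgebrd_bufferSize(handle, m, n, &lwork);
    cudaMalloc((void **)&d_work, sizeof(double) * lwork);
    cusolverDnDgebrd(handle, m, n, d_A, lda, d_D, d_E, d_tauq, d_taup, d_work, lwork, d_info);

    /* After the call, D and E hold the bidiagonal matrix; d_A holds the reflectors for Q and P. */
    cudaFree(d_work); cudaFree(d_D); cudaFree(d_E); cudaFree(d_tauq); cudaFree(d_taup); cudaFree(d_info);
}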
These helper functions calculate the size of work buffers needed.

cusolverStatus_t cusolverDnSorgbr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, const float *A, int lda, const float *tau, int *lwork);
cusolverStatus_t cusolverDnDorgbr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, const double *A, int lda, const double *tau, int *lwork);
cusolverStatus_t cusolverDnCungbr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, const cuComplex *A, int lda, const cuComplex *tau, int *lwork);
cusolverStatus_t cusolverDnZungbr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, const cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, int *lwork);

The S and D data types are real valued single and double precision, respectively.

cusolverStatus_t cusolverDnSorgbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, float *A, int lda, const float *tau, float *work, int lwork, int *devInfo);
cusolverStatus_t cusolverDnDorgbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, double *A, int lda, const double *tau, double *work, int lwork, int *devInfo);

The C and Z data types are complex valued single and double precision, respectively.

cusolverStatus_t cusolverDnCungbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, cuComplex *A, int lda, const cuComplex *tau, cuComplex *work, int lwork, int *devInfo);
cusolverStatus_t cusolverDnZungbr(cusolverDnHandle_t handle, cublasSideMode_t side, int m, int n, int k, cuDoubleComplex *A, int lda, const cuDoubleComplex *tau, cuDoubleComplex *work, int lwork, int *devInfo);

This function generates one of the unitary matrices Q or P**H determined by gebrd when reducing a matrix A to bidiagonal form:

\(Q^{H}*A*P = B\)

Q and P**H are defined as products of elementary reflectors H(i) or G(i) respectively.

The user has to provide working space which is pointed by input parameter work. The input parameter lwork is the size of the working space, and it is returned by orgbr_bufferSize(). Please note that the size in bytes of the working space is equal to sizeof(<type>) * lwork.

If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle).

API of orgbr
handle (host, input): Handle to the cuSolverDN library context.
side (host, input): If side = CUBLAS_SIDE_LEFT, generate Q. If side = CUBLAS_SIDE_RIGHT, generate P**T.
m (host, input): Number of rows of matrix Q or P**T.
n (host, input): If side = CUBLAS_SIDE_LEFT, m >= n >= min(m,k). If side = CUBLAS_SIDE_RIGHT, n >= m >= min(n,k).
k (host, input): If side = CUBLAS_SIDE_LEFT, the number of columns in the original m-by-k matrix reduced by gebrd. If side = CUBLAS_SIDE_RIGHT, the number of rows in the original k-by-n matrix reduced by gebrd.
A (device, in/out): <type> array of dimension lda * n. On entry, the vectors which define the elementary reflectors, as returned by gebrd. On exit, the m-by-n matrix Q or P**T.
lda (host, input): Leading dimension of the two-dimensional array used to store matrix A. lda >= max(1,m).
tau (device, input): <type> array of dimension min(m,k) if side is CUBLAS_SIDE_LEFT, or of dimension min(n,k) if side is CUBLAS_SIDE_RIGHT. tau(i) must contain the scalar factor of the elementary reflector H(i) or G(i), which determines Q or P**T, as returned by gebrd in its array argument TAUQ or TAUP.
work (device, in/out): Working space, <type> array of size lwork.
lwork (host, input): Size of working array work.
devInfo (device, output): If devInfo = 0, orgbr is successful. If devInfo = -i, the i-th parameter is wrong (not counting handle).

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED: The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE: Invalid parameters were passed (m,n<0 or wrong lda).
CUSOLVER_STATUS_ARCH_MISMATCH: The device only supports compute capability 5.0 and above.
CUSOLVER_STATUS_INTERNAL_ERROR: An internal operation failed.

These helper functions calculate the size of work buffers needed.

cusolverStatus_t cusolverDnSsytrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const float *A, int lda, const float *d, const float *e, const float *tau, int *lwork);
cusolverStatus_t cusolverDnDsytrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const double *A, int lda, const double *d, const double *e, const double *tau, int *lwork);
cusolverStatus_t cusolverDnChetrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const cuComplex *A, int lda, const float *d, const float *e, const cuComplex *tau, int *lwork);
cusolverStatus_t cusolverDnZhetrd_bufferSize(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, const cuDoubleComplex *A, int lda, const double *d, const double *e, const cuDoubleComplex *tau, int *lwork);

The S and D data types are real valued single and double precision, respectively.

cusolverStatus_t cusolverDnSsytrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, float *A, int lda, float *d, float *e, float *tau, float *work, int lwork, int *devInfo);
cusolverStatus_t cusolverDnDsytrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, double *A, int lda, double *d, double *e, double *tau, double *work, int lwork, int *devInfo);

The C and Z data types are complex valued single and double precision, respectively.

cusolverStatus_t cusolverDnChetrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuComplex *A, int lda, float *d, float *e, cuComplex *tau, cuComplex *work, int lwork, int *devInfo);
cusolverStatus_t CUDENSEAPI cusolverDnZhetrd(cusolverDnHandle_t handle, cublasFillMode_t uplo, int n, cuDoubleComplex *A, int lda, double *d, double *e, cuDoubleComplex *tau, cuDoubleComplex *work, int lwork, int *devInfo);

This function reduces a general symmetric (Hermitian) n×n matrix A to real symmetric tridiagonal form T by an orthogonal transformation:

\(Q^{H}*A*Q = T\)

As an output, A contains T and the Householder reflection vectors. If uplo = CUBLAS_FILL_MODE_UPPER, the diagonal and first superdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal, with the array tau, represent the orthogonal matrix Q as a product of elementary reflectors; if uplo = CUBLAS_FILL_MODE_LOWER, the diagonal and first subdiagonal of A are overwritten by the corresponding elements of the tridiagonal matrix T, and the elements below the first subdiagonal, with the array tau, represent the orthogonal matrix Q as a product of elementary reflectors.

The user has to provide working space which is pointed by input parameter work. The input parameter lwork is the size of the working space, and it is returned by sytrd_bufferSize(). Please note that the size in bytes of the working space is equal to sizeof(<type>) * lwork.

If output parameter devInfo = -i (less than zero), the i-th parameter is wrong (not counting handle).
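A minimal double-precision sketch of the sytrd workflow follows, under the same assumptions as the earlier examples (existing handle, n-by-n device matrix with the lower triangle stored, no error checking):

#include <cuda_runtime.h>
#include <cusolverDn.h>

/* Sketch: reduce a symmetric matrix (lower part stored) to tridiagonal form T. */
void tridiagonalize(cusolverDnHandle_t handle, int n, double *d_A, int lda)
{
    int lwork = 0;
    int *d_info = NULL;
    double *d_diag = NULL, *d_offdiag = NULL, *d_tau = NULL, *d_work = NULL;

    cudaMalloc((void **)&d_diag, sizeof(double) * n);
    cudaMalloc((void **)&d_offdiag, sizeof(double) * (n - 1));
    cudaMalloc((void **)&d_tau, sizeof(double) * (n - 1));
    cudaMalloc((void **)&d_info, sizeof(int));

    cusolverDnDsytrd_bufferSize(handle, CUBLAS_FILL_MODE_LOWER, n, d_A, lda,
                                d_diag, d_offdiag, d_tau, &lwork);
    cudaMalloc((void **)&d_work, sizeof(double) * lwork);

    /* d_diag and d_offdiag receive T; d_A and d_tau encode Q as elementary reflectors. */
    cusolverDnDsytrd(handle, CUBLAS_FILL_MODE_LOWER, n, d_A, lda,
                     d_diag, d_offdiag, d_tau, d_work, lwork, d_info);

    cudaFree(d_work); cudaFree(d_diag); cudaFree(d_offdiag); cudaFree(d_tau); cudaFree(d_info);
}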
API of sytrd
handle (host, input): Handle to the cuSolverDN library context.
uplo (host, input): Specifies which part of A is stored. uplo = CUBLAS_FILL_MODE_LOWER: the lower triangle of A is stored. uplo = CUBLAS_FILL_MODE_UPPER: the upper triangle of A is stored.
n (host, input): Number of rows (columns) of matrix A.
A (device, in/out): <type> array of dimension lda * n, with lda not less than max(1,n). If uplo = CUBLAS_FILL_MODE_UPPER, the leading n-by-n upper triangular part of A contains the upper triangular part of the matrix A, and the strictly lower triangular part of A is not referenced. If uplo = CUBLAS_FILL_MODE_LOWER, the leading n-by-n lower triangular part of A contains the lower triangular part of the matrix A, and the strictly upper triangular part of A is not referenced. On exit, A is overwritten by T and the Householder reflection vectors.
lda (host, input): Leading dimension of the two-dimensional array used to store matrix A. lda >= max(1,n).
D (device, output): Real array of dimension n. The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i).
E (device, output): Real array of dimension (n-1). The off-diagonal elements of the tridiagonal matrix T: if uplo = CUBLAS_FILL_MODE_UPPER, E(i) = A(i,i+1); if uplo = CUBLAS_FILL_MODE_LOWER, E(i) = A(i+1,i).
tau (device, output): <type> array of dimension (n-1). The scalar factors of the elementary reflectors which represent the orthogonal matrix Q.
work (device, in/out): Working space, <type> array of size lwork.
lwork (host, input): Size of work, returned by sytrd_bufferSize.
devInfo (device, output): If devInfo = 0, the operation is successful. If devInfo = -i, the i-th parameter is wrong (not counting handle).

Status Returned
CUSOLVER_STATUS_SUCCESS: The operation completed successfully.
CUSOLVER_STATUS_NOT_INITIALIZED: The library was not initialized.
CUSOLVER_STATUS_INVALID_VALUE: Invalid parameters were passed (n<0, or lda<max(1,n), or uplo is not CUBLAS_FILL_MODE_LOWER or CUBLAS_FILL_MODE_UPPER).
CUSOLVER_STATUS_ARCH_MISMATCH: The device only supports compute capability 5.0 and above.
CUSOLVER_STATUS_INTERNAL_ERROR: An internal operation failed.

These helper functions calculate the size of work buffers needed.

cusolverStatus_t cusolverDnSormtr_bufferSize(cusolverDnHandle_t handle, cublasSideMode_t side, cublasFillMode_t uplo, cublasOperation_t trans, int m, int n, const float *A, int lda, const float *tau, const float *C, int ldc, int
Sound Doppler Shift Calculator

The Doppler Effect calculator for sound waves calculates the observed frequency and wavelength of a sound wave given the source frequency, the speed of the source, and the speed of sound in the air. The user can input the source frequency in Hertz (Hz), the speed of the source in meters per second (m/s), and the speed of sound in air in meters per second (m/s). The calculator uses the formula for the Doppler effect (given below the calculator), which relates the observed frequency to the source frequency, the speed of the source, and the speed of sound in air. The formula accounts for the change in frequency due to the relative motion of the source and the observer, resulting in either a higher or lower pitch. The sign of the source speed indicates whether the source is approaching or receding (see the sign convention below the formula).

The Doppler Effect

Source: http://en.wikipedia.org/wiki/Doppler_effect

The Doppler effect is the change in frequency of a wave for an observer moving relative to its source. It is commonly heard when a vehicle sounding a siren or horn approaches, passes and moves away from an observer. The received frequency is higher (compared to the emitted frequency) during the approach, it is identical at the instant of passing by, and it is lower during the moving away.

The relative changes in frequency can be explained as follows. When the source of the waves is moving toward the observer, each successive wave crest is emitted from a position closer to the observer than the previous wave. Therefore each wave takes slightly less time to reach the observer than the previous wave. Therefore the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are traveling, the distance between successive wavefronts is reduced; so the waves "bunch together". Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wavefronts is increased, so the waves "spread out".

The Doppler effect for sound (moving source, stationary observer) can be expressed as follows:

Frequency change: f = f0 · v / (v + v′)
Wavelength change: λ = λ0 · (v + v′) / v

Here f0 and λ0 are the emitted frequency and wavelength, f and λ are the observed frequency and wavelength, v is the speed of sound in air, and v′ is the speed of the source. For an approaching source, the speed v′ should be negative; for a receding source, the speed v′ should be positive. By default, v is equal to the speed of sound in dry air at 20 degrees Centigrade; see Sound Speed in Gases.
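To make the formula concrete, the small C program below performs the same computation as the calculator for a moving source and a stationary observer. The variable names and sample values are illustrative only; the sign convention follows the note above (a negative source speed means the source is approaching).

#include <stdio.h>

/* Observed frequency for a moving source and stationary observer.
   f0: emitted frequency (Hz), v: speed of sound (m/s),
   vs: source speed (m/s), negative when the source approaches the observer. */
double doppler_frequency(double f0, double v, double vs)
{
    return f0 * v / (v + vs);
}

int main(void)
{
    double v = 343.0;   /* dry air at 20 degrees Centigrade, approximately */
    double f0 = 440.0;  /* source frequency in Hz */
    printf("approaching at 30 m/s: %.1f Hz\n", doppler_frequency(f0, v, -30.0));
    printf("receding at 30 m/s:    %.1f Hz\n", doppler_frequency(f0, v, 30.0));
    return 0;
}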
What Is The Weakest Shape? Understanding The Science Behind Structural Integrity - Selebriti.cloud What is the Weakest Shape? Understanding the Science Behind Structural Integrity Let’s talk about shapes for a minute. There’s something about them that just draws us in and captivates us. Maybe it’s the symmetry, maybe it’s the curves, or maybe it’s the precision. But have you ever stopped to think about which shape is the weakest? That’s right, today we’re going to explore just that – what is the weakest shape out there? It might surprise you to learn that there actually is a shape that is universally recognized as the weakest. It’s called a “meander,” and it’s essentially a shape that looks like a bunch of squiggly lines put together. You may have seen it in decorative tile patterns or ancient Greek pottery. So why is it the weakest shape? Well, that’s where things get interesting. Despite its ubiquity in art and design, the meander is actually quite poor at bearing weight or withstanding pressure. In fact, the shape is so weak that it’s been used in psychological experiments to demonstrate how little force it takes to break it apart. But why is this the case? What is it about the meander that makes it so feeble? Join me as we explore the science behind this intriguing Definition of Shape Shape can be defined as the external form or appearance of an object or entity, created by the lines and curves that bound it. It can be 2D (two-dimensional) or 3D (three-dimensional), always possessing a measurable length, width, and height. Shapes can be regular or irregular, simple or complex, and are often used as a fundamental element in the visual arts, design, and mathematics. Criteria for Evaluating Strength of Shapes When it comes to evaluating the strength of shapes, there are a few criteria that experts look at. These include: • Material properties • Geometric properties • Boundary conditions Each of these criteria plays an important role in determining the strength of a shape. Let’s take a closer look at what each one means: Material Properties The strength of a shape is highly dependent on the material it is made of. For example, steel is much stronger than aluminum, which is much stronger than plastic. When evaluating the strength of a shape, experts look at the material’s yield strength, ultimate strength, and modulus of elasticity. Geometric Properties The geometric properties of a shape also have a significant impact on its strength. Experts look at factors such as the shape’s cross-sectional area, moment of inertia, and centroid location. For example, a shape with a larger cross-sectional area will generally be stronger than one with a smaller area. Boundary Conditions The boundary conditions of a shape refer to how it is supported and loaded. For example, a beam that is supported at both ends and loaded in the middle will experience different stresses and strains than one that is loaded at both ends and supported in the middle. Experts consider factors such as the type of support, the magnitude and direction of the load, and the location of the load when evaluating the strength of a shape. Strength of Common Shapes To put these criteria into practice, we can look at the strength of some common shapes. 
The following table shows the strengths of five shapes made from steel: Shape Material Yield Strength (MPa) Ultimate Strength (MPa) Rectangle Steel 250 400 I-Beam Steel 350 550 Circle Steel 200 350 Triangle Steel 150 250 Hollow Cylinder Steel 300 500 As we can see from this table, the I-beam shape is the strongest of these five shapes, with the highest yield strength and ultimate strength. The triangle shape is the weakest, with the lowest yield and ultimate strength. By understanding these criteria for evaluating the strength of shapes, we can better design and engineer structures that will withstand the stresses and strains that they will encounter in use. Concept of Weak Shapes Shapes play a significant role in engineering and architecture. The strength and stability of a structure heavily rely on its shape. A weak shape, on the other hand, has a high potential for failure or collapse under certain circumstances. Therefore, understanding the concept of weak shapes is crucial, especially for those involved in designing structures. • Definition of Weak Shapes: Weak shapes describe geometric figures that have a higher tendency to buckle, deform, or collapse when subjected to external forces. These shapes are vulnerable to compression or bending, and they often lack the resistance needed to maintain their structural integrity. • Examples of Weak Shapes: Common examples of weak shapes are thin and slender columns, plates, and beams. These shapes have a high length-to-thickness ratio, making them prone to buckling or collapse. Cylindrical shapes with tall heights and small diameters are also weak shapes. These shapes easily deform or buckle when subjected to external forces, making them unsuitable for certain • Impact of Weak Shapes on Structure: Weak shapes can compromise the safety and stability of a structure. When a weak shape fails, it can cause significant damage to the structure’s integrity and function. It can lead to complete collapse, posing a severe threat to the people and property around it. Preventing Failure Due to Weak Shapes Preventing failure due to weak shapes requires proper design and selection of shapes that have sufficient strength and stability to withstand external forces. Engineers must consider factors such as the material’s properties, loads, and environmental factors when designing structures to ensure a safe and stable end product. Analysis of Structures with Weak Shapes Structures that contain weak shapes require careful analysis and evaluation to ensure their safety and stability. Engineers use a variety of techniques such as finite element analysis, computer modeling, and simulation to evaluate and optimize these structures. By identifying and addressing potential weak points, engineers can create a robust and stable structure that meets the necessary Weak Shape Strengths Weaknesses Column with Circular Cross-section Easy to manufacture, high resistance to bending Vulnerable to buckling under compression Plate with High Length-to-Width Ratio Large surface area for load distribution Prone to buckling and deformation under load Beam with High Length-to-Width Ratio Can span long distances without support, high resistance to bending Prone to buckling and deformation under load As seen in the table above, each shape has its strengths and weaknesses. 
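The role of the geometric properties mentioned earlier (cross-sectional area and moment of inertia) can be illustrated with a short calculation. The sketch below uses the standard formula I = b·h³/12 for a rectangular section and purely illustrative dimensions; it shows that a hollow section retains much of the bending stiffness of a solid one while using far less material, which is one reason tube- and I-shaped sections are so efficient for their weight.

#include <stdio.h>

/* Second moment of area (about the horizontal centroidal axis) of a solid rectangle: I = b*h^3/12. */
double I_rect(double b, double h)
{
    return b * h * h * h / 12.0;
}

/* Hollow rectangular section: outer rectangle minus the inner opening. */
double I_hollow(double b, double h, double bi, double hi)
{
    return I_rect(b, h) - I_rect(bi, hi);
}

int main(void)
{
    double b = 0.10, h = 0.20;    /* outer width and depth in metres (illustrative) */
    double bi = 0.08, hi = 0.18;  /* inner opening, leaving a 10 mm wall */
    printf("solid section:  I = %.3e m^4\n", I_rect(b, h));
    printf("hollow section: I = %.3e m^4 (with far less material)\n", I_hollow(b, h, bi, hi));
    return 0;
}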
Understanding these weaknesses and implementing design strategies to prevent them is essential to creating safe and stable Factors contributing to weakness in shapes Understanding the factors that contribute to weakness in shapes can help in choosing the right shape for a particular structure or application. The following are the common factors that contribute to the weakness of shapes: • Material: The strength of a shape is greatly influenced by the material it is made of. A weak material can make even the strongest shape vulnerable to failure. • Cross-sectional area: A shape with a small cross-sectional area can be more susceptible to failure than a shape with a larger cross-sectional area. This is because the smaller cross-sectional area offers less resistance to forces applied to it. • Orientation: The orientation of a shape relative to the direction of applied forces can also affect its strength. Certain shapes may be stronger when oriented in a specific way compared to Shape versus material strength While the material choice of a structure is important, it is the shape of the object that drastically affects how much force it can withstand. Two examples of this can be seen in the difference between an I-beam and a solid metal box beam. A solid metal box beam can be stronger with homogeneous material utilizing the same amount of materials due to its optimal shape. The structure of the box beam provides the ability to prevent bending and buckling. Meanwhile, an I-beam is better at evenly distributing weight thanks to its unique cross-section shape since it effectively bears loads in multiple planes. Therefore, it’s important to consider both the material and shape of the structure to determine its overall strength. Different geometries and their strength Each shape has its own unique characteristics, designed to manage specific loads in different environments. For instance, a simple hollow tube might not be the first shape that comes to mind when you think ‘strong,’ but depending on the loading direction, hollow tubes can offer great weight-to-strength ratios. Additional examples can be seen in the table below, comparing various geometries and their strength: Shape Explanation Strengths Weaknesses Circular tube Tubing with a round cross-section Good weight-to-strength ratio; great at managing torsional Not very good at managing bending forces and flexure Rectangular Tubing with a box-like shape Strong when managing mixed loading types, improved torsional Less efficient than circular tubes, not very good at managing torsional Tube stiffness loading Box Beam A closed shape made by connecting four Great for managing bending forces Tough to develop with only homogeneous mass By better understanding the strengths and limitations of different shapes, one can then select the best shape to meet their specific needs. It’s great to have variety as each shape solves a problem in its own unique way. Importance of Strong Shapes in Architecture and Engineering Architects and engineers are constantly striving to create structures that are not only aesthetically pleasing but also structurally sound and durable. This is where the importance of strong shapes in architecture and engineering comes into play. Strong shapes are essential as they offer the necessary support and provide stability to structures that need to withstand external forces such as wind, snow, and earthquakes. 
• Triangles: Triangles are considered to be one of the strongest shapes in engineering and architecture due to their structural integrity. They distribute weight evenly with the load-bearing capacity being supported at the corners. The pyramids of Giza are an example of the use of triangles in monumental architecture. • Squares and Rectangles: Squares and rectangles are known for their stability and ability to provide equal support on all sides. They are used extensively in the construction of buildings and • Circles: Circles offer strength and stability due to their symmetrical design. They are often used in the construction of arches and domes, and their ability to distribute weight evenly makes them ideal for structures that need to withstand external forces. While these shapes are considered to be the strongest, it is important to note that they are not always the most practical or aesthetically pleasing. The use of weak shapes, such as the trapezoid or parallelogram, can sometimes be necessary to accommodate design elements or meet certain requirements. However, even with weak shapes, architects and engineers must ensure that the structures they design are still structurally sound and can withstand external forces. This can be achieved through the clever use of materials and reinforcements, placing loads in strategic locations, and incorporating strong shapes wherever possible. Shape Strengths Weaknesses Triangle Structurally sound, evenly distributes weight, ideal for creating arches and domes Not always practical or aesthetically pleasing Square/Rectangle Stable, ability to provide equal support on all sides, extensively used in construction of buildings and bridges Limited design flexibility Circle Offers strength and stability due to symmetrical design, ideal for creating arches and domes Not always practical or aesthetically pleasing In conclusion, the importance of strong shapes in architecture and engineering cannot be overemphasized. Architects and engineers must consider the balance between strength, practicality, and aesthetics when designing structures. It is essential to utilize the strongest shapes whenever possible to ensure the safety and longevity of the structure, while still maintaining a visually appealing design. Circular shapes as strong shapes Although we are discussing the weakest shape, it’s important to acknowledge the strength of circular shapes. A circle is a strong shape due to its symmetrical nature, which allows for even distribution of force. This means that when a force is applied to a circular object, the force is distributed equally around the object instead of being concentrated in one area. • Circular shapes are used in many applications that require strength and durability, such as wheels and bearings. • The strength of a circular shape can also be seen in its ability to resist deformation or bending. This makes it useful in the construction of buildings and bridges, where resistance to external forces is crucial. • Circular shapes are also useful in manufacturing processes where even heating or cooling is required, such as in the production of glass. However, even with their strength, circular shapes do have limitations. The weakness of a circular shape lies in its lack of flat surfaces, which can make it difficult to attach to other objects or to create a stable base. Avoiding circular shapes altogether is not the answer to creating a strong structure or product, as their strength is undeniable. 
Instead, designers and engineers must carefully consider the application of circular shapes and use them in combination with other shapes to create the desired result. Advantages Disadvantages Even distribution of force Lack of flat surfaces Resistance to deformation Difficult to attach to other objects Useful in manufacturing processes Can create stability problems Overall, circular shapes have significant strengths that make them useful in many applications. While they may not be the strongest shape for all situations, they are an important component in creating structures and products that are both strong and durable. Triangular shapes as strong shapes When it comes to the strength of a shape, the triangular shape is often considered one of the strongest. This is because of its unique properties that make it resistant to deformation and able to distribute weight evenly. Triangles are made up of three sides and three angles, which allows each side to support and balance the other two. This means that when weight or force is applied to a triangular structure, it is evenly distributed across all three sides, making it much less likely to bend or break. Furthermore, triangles have a natural tendency to maintain their shape even when under stress, due to their rigid structure. This ability to maintain shape is what makes them ideal for use in bridges, towers, and other structures that need to be able to withstand heavy loads. Advantages of triangular-shaped structures • Greater stability: Triangular shapes are incredibly stable since the weight distribution is evenly split between the three sides. This stability is often enhanced with bracing and further reinforced by placing heavy weights towards the center. • Minimal deformation: In engineering and construction, the goal is to ensure structures hold their shape, especially when exposed to stress, weight, or movement. A triangular shape helps minimize deformation because it’s harder to distort due to the way that the weight is distributed. • Reduced material usage: Triangular shapes require less material to achieve adequate stability. For instance, a square-shaped structure would require additional support to ensure it doesn’t collapse or bend under weight. Applications of triangular shapes in real-life scenarios Triangular shapes are frequently utilized in various ways to improve structures’ strength, stability, and resilience. Here are some of the common applications of triangular shapes in real-life scenarios: • Bridges: Triangular trusses or triangular-shaped bridge towers are common in bridge construction because of their exceptional strength and ability to distribute weight effectively. • Roof trusses: The triangular shape is often used in building roof trusses. It helps create a self-supporting structure rich in rigidity and stability. • Tents: The triangular shape is often used in tent design, with the tent poles forming a triangular structure that provides excellent stability and ability to withstand strong winds, snow loads and heavy rain. Comparison of triangular shapes to other shapes So, what makes the triangular shape stronger than other shapes? To answer this question, let’s compare it to a square and a circle. 
Shape Strengths Weakness Even weight distribution Triangular Natural rigidity Not ideal for free-form shapes Less material usage required Easy to join and create Additional support required to prevent deformation Square Good for corners Not ideal for creating arched or curved shapes Tiltable at specific angles Even weight distribution Difficult to join and create Circle Natural curve More material usage required No corners and less vulnerability to cracking Not ideal for hierarchical structures Easily scalable Although each shape has its advantages and disadvantages, the triangular shape stands out as the strongest of them all. Rectangular shapes as strong shapes When it comes to considering the strength of a shape, it’s important to keep in mind that not all shapes are created equal. In fact, some shapes are inherently stronger and more structurally sound than others. One such shape that is widely regarded as being strong and reliable is the rectangular shape. • Consistency of angles: Rectangular shapes are characterized by four angles that are all equal to 90 degrees. This consistency of angles allows for more predictable and stable construction. • Even distribution of weight: The flat planes of a rectangular shape allow for a more even distribution of weight across the shape, which helps to prevent weaknesses or points of failure. • Versatility: Rectangular shapes can be found in a wide variety of structures, from simple box shapes to complex architectural designs. This versatility speaks to the strength and reliability of the rectangular shape. While it’s important to note that no shape is completely immune to structural weaknesses or failure, the rectangular shape has proven time and time again to be one of the strongest and most reliable shapes available. For a more detailed comparison of the strength of different shapes, take a look at the following table: Shape Strength Rectangular Strong Triangular Moderate Circular Weak As you can see, the rectangular shape ranks at the top in terms of strength and reliability. Its consistent angles, even weight distribution, and versatility make it an excellent choice for a wide variety of construction and design needs. Irregular shapes as weak shapes Irregular shapes are typically viewed as weaker than regular shapes due to their lack of symmetry and uniformity in their distribution of weight and stress. Among the irregular shapes, the number 9 shape is often perceived as the weakest. • The number 9 shape has a significant amount of weight concentrated at the top of the curve and is unsupported at the bottom. • This uneven distribution of weight makes the number 9 shape vulnerable to bending or breaking under stress. • Additionally, the number 9 shape has no straight lines or corners to provide support or stability. These weaknesses make the number 9 shape unsuitable for structures or designs requiring strength and stability. However, the number 9 shape can still be used effectively in decorative designs or artistic creations where the focus is on aesthetics rather than functionality. To further illustrate the weakness of irregular shapes, let’s compare the strength of a rectangular beam to a randomly shaped stick of the same dimensions. The rectangular beam can support a greater amount of weight due to its uniform distribution of weight and symmetrical shape, while the randomly shaped stick may bend or break under the same amount of stress due to its irregular, uneven distribution of weight and lack of symmetry. 
Regular Shapes Irregular Shapes Circle Freeform Shape Square Abstract Shape Triangle Organic Shape While irregular shapes may be aesthetically pleasing and can add visual interest to designs, their lack of symmetry and weight distribution make them weaker than regular shapes in terms of strength and stability. The Role of Reinforcement in Strengthening Weak Shapes When it comes to the discussion of the weakest shape, there are many factors to consider, and one of them is the role of reinforcement in strengthening weak shapes. Reinforcement can come in many forms such as using stronger materials, adding braces or supports, or making modifications to an existing shape. In this article, we will delve into the importance of reinforcement and how it can improve the strength of weak shapes. • The Use of Stronger Materials: One way to reinforce weak shapes is to use stronger materials. For example, when building a structure, steel is used instead of wood as it has a higher tensile strength, which allows it to withstand more force and stress. • Adding Braces or Supports: Braces and supports can also reinforce weak shapes by providing additional strength and stability. These are commonly used in structures like bridges and buildings to prevent deformation and collapse. • Modifying Shape: Modifying a shape to provide additional support is another way to reinforce it. For instance, adding arches to a structure distributes the load more evenly, reducing the stress on the structure’s weak points. However, simply adding reinforcement is not a panacea for weak shapes. It should be done with care and precise calculations to ensure that the reinforcement is correctly applied and does not affect the performance of the structure negatively. In engineering, there is a concept known as the “factor of safety,” which refers to the ratio of the maximum load that a structure can bear to the actual load it is experiencing. Engineers use this factor to prevent catastrophes from happening and ensure that structures can withstand unforeseen loads or stresses. Reinforcement should aim to improve this factor of safety, but adding too much reinforcement can also lead to stability issues, especially in larger structures with more complex shapes. Shape Factor of Safety without Reinforcement Factor of Safety with Reinforcement Square Tower 1.5 2.5 Circular Tower 1.2 2.0 Triangle 1.1 1.6 The table above shows the difference in the factor of safety between shapes without and with reinforcement. In conclusion, reinforcement plays a crucial role in strengthening weak shapes. Whether through the use of stronger materials, adding braces or supports, or modifying the shape, additional reinforcement can enhance the structure’s resilience and prevent collapse or deformation. However, it should be done with care and precise calculations to ensure that it improves the structure’s factor of safety without compromising its stability. What is the Weakest Shape? Q: What is the meaning of “weak” in the context of shapes? A: When we talk about the weakest shape, we refer to a shape that is most likely to collapse or deform under pressure. Q: What factors determine the strength of a shape? A: The strength of a shape depends on various factors such as its geometry, material properties, and the type of stress it is subjected to. Q: Which shape is considered to be the weakest? A: The weakest shape is generally considered to be a thin-shell structure such as an eggshell. Q: Why is an eggshell considered the weakest shape? 
A: Eggshells are thin, curved structures that can support their own weight but are easily deformed under external pressure. Q: Are there any other weak shapes? A: Yes, there are many other weak shapes such as thin-walled tubes, hollow spheres, and some irregular shapes. Q: Can weak shapes be made stronger? A: Yes, weak shapes can be made stronger by changing their geometry, using stronger materials, or by adding extra support structures. Q: Why is it important to know the weakest shape? A: Understanding the weakest shape can help us design better structures and avoid catastrophic failures. Closing Thoughts Now that you know what the weakest shape is and why it matters, you can design better structures and avoid potential disasters. Remember to always consider the strength of a shape when designing anything that requires stability and support. Thank you for reading and visit us again for more informative and engaging articles.
Descriptive Statistics Calculator • Enter your data (comma-separated). • Click "Calculate" to see descriptive statistics. • Click "Clear" to reset the input and results. • Click "Copy Results" to copy the results to the clipboard. What is Descriptive Statistics? Descriptive statistics is a branch of statistics that focuses on summarizing and presenting data in a meaningful and informative way. It involves using various statistical measures, graphical representations, and techniques to describe and organize data sets, providing a clear understanding of their key characteristics. Descriptive statistics is primarily concerned with the analysis of data at hand rather than making inferences or predictions about a larger population, which is the domain of inferential statistics. All Formulae Related to Descriptive Statistics 1. Mean (Average): □ Formula: Mean (μ or x̄) = Σx / n □ Where: ☆ Σx: Sum of all data values. ☆ n: Number of data values. 2. Median (Middle Value): □ Formula (for an odd number of data values): Median = Middle Value □ Formula (for an even number of data values): Median = (Value at n/2 + Value at (n/2 + 1)) / 2 □ Where: ☆ n: Number of data values. 3. Mode (Most Frequently Occurring Value): □ Formula: Mode = Value(s) with the highest frequency in the data set. 4. Variance: □ Formula: Variance (σ² or s²) = Σ((x – μ)²) / (n – 1) □ Where: ☆ x: Each data value. ☆ μ: Mean of the data set. ☆ n: Number of data values. 5. Standard Deviation: □ Formula: Standard Deviation (σ or s) = √Variance □ Where: ☆ Variance is calculated as mentioned above. 6. Range: □ Formula: Range = Maximum Value – Minimum Value 7. Interquartile Range (IQR): □ Formula: IQR = Q3 – Q1 □ Where: ☆ Q1: First Quartile (25th percentile). ☆ Q3: Third Quartile (75th percentile). 8. Coefficient of Variation (CV): □ Formula: CV = (Standard Deviation / Mean) * 100 9. Skewness: □ Formula: Skewness = (3 * (Mean – Median)) / Standard Deviation 10. Kurtosis: □ Formula: Kurtosis = [(Σ((x – μ)⁴) / n) / (Standard Deviation⁴)] – 3 □ Where: ☆ x: Each data value. ☆ μ: Mean of the data set. ☆ n: Number of data values. 11. Percentile (P): □ Formula: Pth Percentile = (P/100) * (n + 1) □ Where: ☆ P: Desired percentile (e.g., 25th, 50th, 75th). ☆ n: Number of data values. Applications of Descriptive Statistics Calculator in Various Fields Here are some common areas where a Descriptive Statistics Calculator is used: 1. Business and Economics: □ Analyzing financial data, including revenue, expenses, and profit margins. □ Assessing market trends and consumer behavior through survey data. □ Evaluating economic indicators such as GDP, inflation rates, and unemployment statistics. 2. Social Sciences: □ Conducting surveys and experiments to gather data for research in psychology, sociology, and political science. □ Examining demographic data to study population trends and patterns. 3. Education: □ Assessing student performance and learning outcomes. □ Analyzing standardized test scores to evaluate educational programs. □ Identifying areas for improvement in educational institutions. 4. Healthcare and Medicine: □ Analyzing patient data to assess treatment effectiveness. □ Studying epidemiological data to track disease outbreaks and patterns. □ Conducting clinical trials to evaluate the safety and efficacy of medical treatments. 5. Environmental Science: □ Monitoring environmental data, including air and water quality. □ Analyzing climate and weather data to study climate change and weather patterns. 6. 
Engineering and Manufacturing: □ Quality control and process improvement in manufacturing. □ Analyzing performance data for machinery and equipment. □ Monitoring and maintaining product and process specifications.

Benefits of Using the Descriptive Statistics Calculator

Here are some of the key advantages:

1. Efficiency: Descriptive Statistics Calculators quickly compute summary statistics, saving time compared to manual calculations, especially for large data sets.
2. Accuracy: They provide accurate and consistent results, reducing the risk of errors that can occur during manual data analysis.
3. Ease of Use: Calculators are user-friendly and accessible to individuals with varying levels of statistical expertise, making data analysis more accessible.
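As a small worked illustration of the most common formulas listed above (mean, sample variance, standard deviation, and range), the following self-contained C program computes them for an arbitrary example data set.

#include <math.h>
#include <stdio.h>

/* Compute basic descriptive statistics for a small data set. */
int main(void)
{
    double x[] = {4.0, 8.0, 6.0, 5.0, 3.0, 7.0};
    int n = sizeof(x) / sizeof(x[0]);
    double sum = 0.0, min = x[0], max = x[0];

    for (int i = 0; i < n; i++) {
        sum += x[i];
        if (x[i] < min) min = x[i];
        if (x[i] > max) max = x[i];
    }
    double mean = sum / n;

    /* Sample variance uses the n - 1 denominator, as in the formula above. */
    double ss = 0.0;
    for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
    double variance = ss / (n - 1);
    double stddev = sqrt(variance);

    printf("mean = %.3f, variance = %.3f, std dev = %.3f, range = %.3f\n",
           mean, variance, stddev, max - min);
    return 0;
}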
New Reviews | Titles | Authors | Subjects Max Tegmark My Quest for the Ultimate Nature of Reality Book review by Anthony Campbell. The review is licensed under a Creative Commons Licence. It has been a commonplace of physicists and cosmologists since the time of Galileo that the world is described by mathematics, although some have puzzled over why this should be so. Tegmark, who is a cosmologist and a professor at MIT, takes the idea further than most by saying that the universe is not just described by mathematics, it is a mathematical object. In this book he sets out to explain his idea to a non-specialist audience, pretty much without the use of equations. Since his idea is essentially mathematical that may seem like an impossible task, but Tegmark writes clearly and informally and he does a good job of putting complex information into an accessible form. As he explains at the beginning, he has written a very personal book. In Tegmark's words, this is "a scientific autobiography of sorts: although it's more about physics than it's about me, it's certainly not your standard popular science book … Rather, it's my personal quest for the ultimate nature of reality, which I hope you'll enjoy seeing through my eyes." Although the theory he wants to argue for is the main focus of the book, he needs to lead up to it by spending some time describing concepts and facts that not all his readers may be familiar with. This he does in the first few chapters, which present the assumptions of modern cosmologists concerning space and time and our place in the universe. He suggests that if you are already familiar with these ideas from your reading of popular science books you may wish to skip most of this introductory material and simply read the summaries he helpfully provides at the end of each chapter. If you are in this "hard-core reader of popular science" category and are eager to reach his own ideas you may choose to follow his advice, but it will probably be worth your while to return later, because Tegmark has an unusual way of explaining the basic facts which is often illuminating and may cast things that you thought you understood quite well in a new light. Once this groundwork has been covered we come to the main subject of the book— Tegmark's scheme of four "nested" levels of parallel universes. We start from the implications of Alan Guth's theory of inflation, which is widely believed to have operated after the Big Bang to give rise to the universe as we see it today. Inflation leads logically to what Tegmark describes as the Level I multiverse. This, he emphasises, is not a theory but a prediction of inflation; if you accept inflation you must also accept that there are uncountably many universes. These are "universe-sized parts of our space that are so far away that light from them hasn't had time to reach us". This is the multiverse. If it is infinite, as it may well be, infinitely many copies and near-copies of you and everything else will exist in these "pocket universes". Eternal inflation also predicts the existence of universes with different physical laws. This gives us the Level II multiverse. "Inflation converts potentiality into reality: if the mathematical equations governing uniform space have multiple solutions, then eternal inflation will create infinite regions of space instantiating each of those solutions— this is the Level II multiverse." 
Tegmark illustrates this idea rather neatly by saying that "students in Level I parallel universes learn the same things in physics class but different things in history class, while students in Level II parallel universes could learn different things in physics class as well." To understand the Level III multiverse we have to accept the many-worlds interpretation of quantum mechanics put forward by Hugh Everett in 1957. When Everett published his hypothesis it effectively put an end to his career in physics— and Tegmark includes an email he himself received from the editor of a physics journal who warned him that the same fate might befall him! In fact, views are changing today. A number of prominent physicists now advocate Everett's idea; Tegmark regards him as one of the most important scientists of the twentieth century. The Level III multiverse is a consequence of the many-worlds hypothesis. It is described in Chapter 8 and is probably the most counter-intuitive idea in the book. "This mathematically simplest quantum theory … predicts the existence of parallel universes where you live countless variations of your life." It also predicts that you will not experience the weirdness that this entails, because of a censorship effect within your brain called decoherence. Tegmark finds it impossible to explain all this without introducing mathematical terms such as the Schrödinger wave equation and "the infinite-dimensional place called Hilbert space where it lives". I think that here we are probably reaching the limit of what can be explained non-mathematically. Finally we come to Tegmark's Level IV multiverse, the Mathematical Universe Hypothesis (MUH). The key idea here is that of a "mathematical structure", which is a "Set of abstract entities with relations between them [which] can be described in a baggage-independent way". By "baggage" Tegmark means "Concepts and words that are invented by us humans for convenience, which aren't necessary for describing the external physical reality". So ideally we should not need concepts such as protons, neutrons, quarks and the rest in order to describe reality. Tegmark's recipe for understanding reality is to eliminate the "baggage". When he has done this he is left just with the mathematical structure, and this, he believes, is all we need. Mathematical structures are completely abstract, purified from the verbal and conceptual supports that most of us rely on to help us understand difficult concepts. "Mathematical structures are eternal and unchanging: they don't exist in space and time— rather, space and time exist in (some of) them." This is something that Plato would have appreciated. Among the implications of the MUH are the following. • The flow of time is an illusion, and so is change. • Creation and destruction are illusions, since they imply change. • You are a self-aware substructure that is part of the mathematical structure. • There is no fundamental randomness. • Most of the complexity we observe is an illusion. • We probably don't live in a simulated universe. • The MUH is in principle testable and falsifiable, therefore scientific (unlike the idea that external physical reality is perfectly described by a mathematical structure while still not being one, which makes no predictions). Some of the features of the MUH as described by Tegmark reminded me of Julian Barbour's hypothesis in The End of Time. 
Barbour also takes a "Platonic" view of reality, so I was interested to see that he is one of the "superheroes" whom Tegmark thanks for commenting on an early draft of the entire book. Another similarity between the two is that both have written very personal books; if you enjoyed reading one of these you will probably also enjoy the other. I should say that both score exceptionally highly in terms of making you think. Both books have a narrow focus in a sense, although the questions they address are anything but narrow. For a wider view of the multiverse idea see The Hidden Reality by Brian Greene; this is one of the books that Tegmark recommends for further reading, although Greene himself is not enthusiastic about the MUH.

Max Tegmark, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Allen Lane, London, 2014. ISBN 978-1-846-14476-9. 421pp. Half-tone illustrations and diagrams. Subject: cosmology.
Invited Talks Mahmoud Abdel-Aty Sohag University, Egypt Nonlinear Dynamics of Quantum Entanglement In this work, we will examine in a proof-of-concept experiment a new type of quantum-inspired protocol based on the idea of nonlinear dynamics of quantum entanglement. We discuss various measures of bipartite and tripartite entanglement in the context of two and three level atoms. The quantum entanglement is discussed for different systems. For the three-level systems various measures of tripartite entanglement are explored. The significant result is that a sudden death and sudden birth of entanglement can be controlled through the system parameters. Mark Edelman Yeshiva University, USA Stability of discrete fractional systems and lifespan of living species Numerical simulations demonstrate that discrete fractional systems are robust with respect to random perturbations. The distribution of times of the stable evolution before the break of unstable fractional (with power-law memory) systems under random perturbations coincides with the observed lifespan distribution of living species. Dumitru Baleanu Cankaya University, Turkey Institute of Space Sciences, Magurele-Bucharest, Romania Modern Fractional Calculus:Theory and Applications The standard mathematical models of integer-order derivatives, including nonlinear models, do not work adequately in many cases where power law is clearly observed. In my talk I will discuss about the modern fractional models and their validations. Some illustrative examples from several fields of physics and engineering will be presented. Amin Jajarmi University of Bojnord, Iran A new approach on the modelling and control of complex biological systems Recently, the new aspects of fractional calculus have been widely employed to investigate different features of many complex biological systems. In this direction, the fractional models help us to understand how the memory of the certain components of a system affects the progress of diseases as a whole, and therefore, it enables us to implement the memory effects into the evolution of considered system together with its environment. This kind of analysis is also important in order to improve the current medications and to explore new ways of quick, effective and low-cost In this talk, we explore a recent development in the mathematical modelling of biological systems. The complex dynamics of an epidemic are investigated within the use of both classical and a new fractional framework. The obtained results are analyzed by the help of some simulations in a comparative way for both the integer- and fractional-order models. Finally, an efficient control scheme is designed for the purpose of intervention in an appropriate, effective way. Devendra Kumar JECRC University, India An efficient numerical scheme of a fractional order nonlinear discontinued problems Devendra Kumar^1 and Dumitru Baleanu^2,3 ^1Department of Mathematics, University of Rajasthan, Jaipur-302004, Rajasthan, India ^2Department of Mathematics, Faculty of Arts and Sciences, Cankaya University, Etimesgut, Turkey ^3Institute of Space Sciences, Magurele-Bucharest, Romania In this work, we suggest an efficient and user friendly numerical technique for solving a nonlinear fractional discontinued problem occurring in arising in nanotechnology. We combine of an analytical method and a new integral transform to construct the numerical scheme for solving the fractional order problem with different kind of memory effects. 
This inventive coupling makes the calculation very simple. The results derived with the aid of the proposed scheme reveals that the scheme is very efficient, accurate, flexible, easy to use and computationally very attractive for such kind of fractional order nonlinear models. Stefano Lenci Polytechnic University of Marche (UNIVPM), Italy A detailed analysis of the 1:1 internal resonance in generic 2 dof systems Internal resonance is the mechanics by which an energy exchange can occur in the nonlinear regime between modes that are uncoupled in the linear framework. This coupling can be dangerous, if not properly considered, or useful, if adequately detected and even exploited. Although internal resonances, and in particular the 1:1, have been largely studied in the past for various mechanical systems, a systematic, general and comprehensive investigation is missing. It is the goal of this work, which is aimed at investigating the general case with all quadratic and cubic nonlinearities. A cornucopia of different possibilities is observed by varying the nonlinear stiffnesses, and a detailed analysis is performed. Both modes can be hardening, softening, one hardening and the other softening, modes can be in-phase or out-of-phase. Furthermore, the existence of an extra path of periodic solutions, not ensuing from any natural (linear) frequency is observed, seemingly for the first time, and its existence has been confirmed The proposed finding can be conveniently exploited to design a coupled oscillator with desired (nonlinear) properties. Xavier Leoncini CPT, Aix-Marseille Université, France From particle dynamics in magnetic field to building stationary solution of the Maxwell-Vlasov equation In this talk, we will built stationary solution of Maxwell-Vlasov system with cylindrical symmetry in an externally imposed uniform magnetic field using a maximizing entropy technique. Different solutions will be discussed and what appears as a bifurcation leading to improved particle confinement will be discussed. Finally preliminary study of the stability of the problem with respect to moving to a toroidal geometry and the consequences on confinement of charged particles will be presented. Edson Denis Leonel UNESP-Universidade Estadual Estadual Paulista, Brazil An investigation of chaotic diffusion In this talk I discuss how chaotic orbits diffuse in the phase space for both dissipative and non dissipative 2-D mappings. For conservative cases the phase space is mixed and chaos is present in the system leading to a finite diffusion in one of the dynamical variables. For the dissipative case chaotic attractor is present in the phase space and the diffusion is limited. Indeed the diffusion is investigated by the analytical solution of the diffusion equation under certain boundary conditions as well as specific initial conditions. The analytical solution is then compared to the numerical simulations showing a remarkable agreement between the two procedures. Changpin Li Shanghai University, China Stability and decay of the solution to Hadamard-type fractional differential equation Hadamard-type fractional differential equation, i.e., fractional differential equation with Hadamard derivative, is one kind of important fractional differential equations, which may have potential applications in mechanics and engineering, e.g., the fracture analysis, or both planar and three-dimensional elasticities. 
In this talk, we mainly present the stability, asymptotic stability of the static solution (i.e., equilibrium) to the Hadamard-type fractional differential equation. In the case of asymptotic stability, the decay rate of the solution is also determined. Jose Tenreiro Machado ISEP-Institute of Engineering of Porto, Portugal Fractional calculus mission: To explore strange new worlds Fractional Calculus (FC) started with the standard differential calculus but remained an obscure topic during almost three centuries. The present-day popularity of FC in the scientific arena, with a growing number of researchers and published papers, makes one forget that 20 years ago the topic was considered “exotic” and that a typical question was “FC, what is it useful for?” We recall two remarkable foreseeing quotes about FC: “It will lead to a paradox, from which one day useful consequences will be drawn” (G. Leibniz, 1695) and “The fractional calculus is the calculus of the XXI century” (K. Nishimoto, 1991). Indeed, new advanced and emerging areas of application of the future of FC. Present day popular directions of progress are the formulation of new operators, the “fractionalization” of integer models, the further development of known topics and the pursuit for new areas of application. The first two, namely the proposal for new definitions of fractional-order operators, or the fractionalization of some mathematical models, may represent critical adventures with possible misleading or even erroneous formulations. The third, that is, the in-depth study of some mathematical and computational fields, constitute a solid option, but its lack of ambition narrows considerably the scope of FC. The fourth option leads to exploring new applications, both with mathematical and computational tools, and represents a challenging strategy for the progress of FC. Possible new directions of progress in FC may emerge in the fringe of classical science, or in the borders between two or more distinct areas. The lecture presents some uncommon ideas and topics, namely the application of computational and visualization methods for the analysis of data series and the characterization of complex phenomena. Vladimir Nekorkin Institute of Applied Physics, RAS, Russia Dynamics of oscillatory adaptive networks Many real-world complex systems can be described as networks, i.e., sets of nodes connected by links. Usually, the pattern of links (structure or topology) is considered as a result of a process independent of nodes’ intrinsic states. However, for a large variety of cases, the nodes as well as the links exhibit dynamics that can shape the structure, i.e., one deals with an adaptive network. We report the results of studying the dynamics of a network of phase oscillators (Kuramoto-type system) and a network of coupled Stuart-Landau oscillators. The nodes of the networks are coupled by adaptive links (coupling strength depends on nodes’ states) while the topology of the networks is either one-layer or two-layer. We show that in such networks there exist phenomena of self-organized emergence of hierarchical structures and hierarchical transitions. We consider chimera states and phenomena of their synchronization. And we talk about new, the third type of chaos – the so-called mixed dynamics, which is characterized by the fundamental inseparability of dissipative and conservative behavior; and some other phenomena. 
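The adaptive-network abstract above can be made concrete with a toy simulation. The sketch below is a generic Kuramoto-type model with a simple Hebbian-style adaptation rule for the coupling strengths; the rule and all parameter values are illustrative assumptions and not the specific system studied in the talk.

```python
import numpy as np

# A toy Kuramoto-type network with adaptive links (Euler integration).
# Generic textbook form for illustration only; the adaptation rule and
# the parameter values are assumptions, not the model from the talk.
rng = np.random.default_rng(0)
N, dt, steps, eps = 50, 0.01, 20_000, 0.01
omega = rng.normal(0.0, 0.5, N)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)     # oscillator phases
k = np.ones((N, N))                        # adaptive coupling strengths (the "links")

for _ in range(steps):
    diff = theta[None, :] - theta[:, None]              # diff[i, j] = theta_j - theta_i
    theta += dt * (omega + (k * np.sin(diff)).mean(axis=1))
    k += dt * eps * (np.cos(diff) - k)                  # Hebbian-like adaptation of links

R = np.abs(np.exp(1j * theta).mean())                   # Kuramoto order parameter
print(f"phase coherence R = {R:.3f}")
```

Tracking the order parameter R while the slow adaptation runs is one simple way to see how the evolving link structure reorganizes the network's synchrony.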
Raoul Nigmatullin Kazan National Research Technical University (KNRTU-KAI), Russia What kind of “hidden” information can be extracted from usual “noise”? In this abstract the author wants to prove that a trendless sequence (TLS) (usually determined as a noise”) can be used as an additional source of information. This additional information can be extracted from random noise with the help of 3D-DGIs (discrete geometrical invariants) method that allows to reduce 3N random data points to 13 parameters composed from the combination of integer moments and their intercorrelations up to the fourth order inclusive. Actually, they form a “universal” 13-feature space for comparison of one random sequence with another one. Comparison of these parameters associated with different noise tracks allows to use this set of parameters for calibration and other purposes associated with “standard”/reference equipment. It is similar to ides used by the Hibbs, when the partition function transforms the 3N degrees of freedom associated with the movement of micro-particles and are described of the initial Hamiltonian to a finite set of thermodynamic parameters. A similar operation can be realized with the help of the proposed 3D-DGIs method. Actually, a general platform for comparison of different random sequences/functions is created. Thirteen parameters enable to transform a triangle matrix (N × M) (N-number of rows, M-number of columns) representing initial measurements to the reduced matrix of the form M×Pr. The parameter Pr =13 includes in itself the following combinations: (<x>,<y>,<z> -three centers of gravity, R[1,2,3] – compact combination of the correlations of the third order, A[11], A[22], A[33], A [12], A[13], A[23] -six correlations of the second order and I[4] – invariant that includes the compact combination of the 4-th order correlations). One can show that the following reduction of the matrices M×Pr is possible also. These new possibilities give a researcher a new “universal” and very sensitive tool for reduction, comparison and further analysis of different TLS(s) and random functions with each other. Carla M.A. Pinto University of Porto, Portugal Epidemiological models: usefulness and predictions in the era of COVID-19 In the era of COVID-19, people turn more and more to mathematical models, to gain insight and predict the course of epidemics. Politics turn to mathematicians, epidemiologists, for guidance on what should be the best control practices to avoid a disaster in terms of humans’ lives. The World is facing an unprecedent pandemic, with severe consequences in terms of loss of lives and economically. With the utmost ideas in mind, in this talk, we will focus on the applicability of mathematical models of infectious diseases. We go from the usual and simple Susceptible-Infectious (SI) model to more realistic ones, which include variable transmission rate, treatment, intervention policies, non-integer order derivatives, amongst others. Some examples will be given for each model. Lev A. Ostrovsky University of Colorado Boulder, Boulder, Colorado, USA Modeling of complex two-dimensional patterns of interacting solitons L.A. Ostrovsky^1 and Y.A. 
Stepanyants^2 ^1University of Colorado Boulder, Boulder, Colorado, USA ^2School of Sciences, University of Southern Queensland, Toowoomba, Australia The two-dimensional, oblique interaction of solitary waves can form various complex patterns that were observed, in particular, for the internal and surface waves in the ocean and in the laboratory installations. This phenomenon is still waiting for a comprehensive theoretical analysis. We suggest a relatively simple kinematic approach to the description of interaction between plane solitons forming steadily moving structures. This approach is applicable to both integrable and non-integrable two-dimensional models possessing a soliton solution. It allows obtaining some important characteristics of the interaction between solitary waves propagating at an angle to each other. With the help of this approach, one can determine the speed and direction of motion of two-soliton patterns (including resonant soliton triads) and find the reference frames where the patterns are stationary. The suggested approach is validated by comparison with the exact two-soliton solutions of the integrable Kadomtsev–Petviashvili (KP2) equation. Expanding the analysis with using an asymptotic theory allows determining the spatial shift of soliton fronts due to the oblique interaction. In the KP2 case, the phase shift derived from the asymptotic method completely coincides with what follows from the exact solution. The developed theory is applied to the available results of observations of the internal and surface waves in the ocean. Minvydas Ragulskis Kaunas University of Technology, Lithuania Clocking convergence of discrete nonlinear maps (including fractional maps) Discrete nonlinear maps have been extensively studied for more than five decades since the introduction of the logistic map as one of the first examples of a deterministic system exhibiting chaotic behavior. Algorithms for clocking the asymptotic and non-asymptotic convergence of non-invertible and completely invertible maps will be discussed in this talk. A computational technique based on the visualization of the algebraic complexity of transient processes will be employed for that purpose. Temporary stabilization of unstable orbits in non-invertible maps will be demonstrated and discussed. We will show that the dynamics of the ractional difference logistic map is similar to the behavior of the extended invertible logistic map in the neighborhood of unstable orbits. This counter-intuitive result will provide a new insight into the transient processes of fractional nonlinear maps. Hansong Tang The City College of New York, USA Coupling of partial differential equations to simulate flow problems In recent years, computer simulation of fluid flows sees a quick transition from solving a single partial differential equation (PDE) (or a single system of PDEs) into solving multiple PDEs coupled with each other (or coupled systems of PDEs). In the past, such simulation focuses on individual physical phenomena, which tend to be described by a PDE (or a system of PDEs), and now it has become a trend to simulate multiple phenomena depicted by multiple PDEs (or multiple systems of PDEs), presenting a multiscale/multiphysics simulations. 
Along with the development of numerical methods in coupling PDEs in flow simulation, this presentation will go over typical methods and applications as well as a few examples of current research, and it covers flow problems in various backgrounds (e.g., aerospace eng., environmental sciences, and ocean engineering). Vitaly Volpert CNRS, France Nonlocal and delay reaction-diffusion equations in mathematical immunology Conventional models in mathematical immunology consist of ordinary or delay differential equations for the concentrations of different cells participating in the immune response and for the concentration of pathogen. Their spatial distribution in the tissue or cell culture, or their dependence on the genotype is described by reaction-diffusion equations with time delay characterizing clonal expansion of lymphocytes and with nonlocal terms taking into account cross reaction in the immune response. In this presentation we will study some mathematical properties of such models and their biomedical applications. Guo-Cheng Wu Neijiang Normal University, China Recurrent neural networks with short memory: A fractional calculus approach Fractional derivative holds memory effects that have extensively implemented in dynamic systems and modeling. However, it is a challenging work to consider the discrete analogy. We use the fractional calculus on a time scale to define fractional discrete-time systems.Then we propose new recurrent neural network models, and the short memory approach is used to decrease the computational cost. Finally, the performance is shown in comparison with that of the classical recurrent neural network.
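The "short memory" idea mentioned in the last abstract comes from fractional calculus: because the Grünwald–Letnikov weights decay with lag, the full (and expensive) history sum can be truncated to a sliding window with an error controlled by the window length. The sketch below illustrates only that truncation on a toy signal; it is not the authors' recurrent-network construction, and the order, window length, and test signal are assumptions.

```python
import math

def gl_weights(nu, n):
    """Grünwald–Letnikov weights w_k = (-1)**k * C(nu, k), via the standard recursion."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (nu + 1.0) / k))
    return w

def frac_difference(x, nu, memory=None):
    """
    Fractional difference of order nu: (Delta^nu x)[n] = sum_{k=0}^{n} w_k * x[n-k].
    If `memory` is given, the sum keeps only the last `memory` terms -- the
    "short memory" principle that cuts the per-step cost from O(n) to O(memory).
    """
    w = gl_weights(nu, len(x))
    out = []
    for n in range(len(x)):
        kmax = n if memory is None else min(n, memory - 1)
        out.append(sum(w[k] * x[n - k] for k in range(kmax + 1)))
    return out

signal = [math.sin(0.1 * t) for t in range(500)]
full = frac_difference(signal, nu=0.5)
short = frac_difference(signal, nu=0.5, memory=50)
print(max(abs(a - b) for a, b in zip(full, short)))  # worst-case deviation from the 50-sample window
```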
Parallel integer sorting using small operations

We consider the problem of sorting n integers in the range [0, n^c - 1], where c is a constant. It has been shown by Rajasekaran and Sen [14] that this problem can be solved "optimally" in O(log n) steps on an EREW PRAM with O(n) n^ε-bit operations, for any constant ε > 0. Though the number of operations is optimal, each operation is very large. In this paper, we show that n integers in the range [0, n^c - 1] can be sorted in O(log n) time with O(n log n) O(1)-bit operations and O(n) O(log n)-bit operations. The model used is a non-standard variant of an EREW PRAM that permits processors to have word sizes of O(1) bits and Θ(log n) bits. Clearly, the speed of the proposed algorithm is optimal. Considering that the input to the problem consists of O(n log n) bits, the proposed algorithm performs an optimal amount of work, measured at the bit level.
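For readers who want a baseline intuition for why the range [0, n^c - 1] matters, here is a short sequential sketch: least-significant-digit radix sort with base n sorts such integers in c linear passes, i.e., O(c·n) word operations. This is only the standard sequential baseline, not the parallel PRAM algorithm of the paper.

```python
import random

def radix_sort_base_n(a, c):
    """
    Sort integers drawn from [0, n**c) with c passes of a stable counting/bucket
    sort in base n, where n = len(a).  Each pass costs O(n), so the whole sort is
    O(c * n) word operations -- the sequential baseline, not the paper's parallel
    EREW PRAM algorithm.
    """
    n = len(a)
    if n <= 1:
        return list(a)
    for p in range(c):                               # process digit p (base n)
        buckets = [[] for _ in range(n)]
        divisor = n ** p
        for x in a:
            buckets[(x // divisor) % n].append(x)    # stable: keeps earlier order
        a = [x for bucket in buckets for x in bucket]
    return a

n, c = 1000, 2
data = [random.randrange(n ** c) for _ in range(n)]
assert radix_sort_base_n(data, c) == sorted(data)
```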
Sum bottom n values

In this example, the goal is to sum the smallest n values in a set of data, where n is a variable that can be easily changed. At a high level, the solution breaks down into two steps: (1) extract the n smallest values from the data set and (2) sum the extracted values. This problem can be solved with the SMALL function together with the SUMPRODUCT function, as explained below. For convenience only, the range B5:B16 is named "data".

SMALL function

The SMALL function is designed to return the nth smallest value in a range. For example:

=SMALL(range,1) // 1st smallest
=SMALL(range,2) // 2nd smallest
=SMALL(range,3) // 3rd smallest

Normally, SMALL returns just one value. However, if you supply an array constant (e.g. a constant in the form {1,2,3}) to SMALL as the second argument, k, SMALL will return an array of results instead of a single result. For example:

=SMALL(A1:A10,{1,2,3})

will return the 1st, 2nd, and 3rd smallest values in the range A1:A10. In the example shown, the formula in E5 is:

=SUMPRODUCT(SMALL(data,{1,2,3}))

Working from the inside out, the SMALL function is configured to return the 3 smallest values in the range B5:B16:

=SMALL(data,{1,2,3}) // returns {10,15,20}

Because we provide three separate values for k, the result is an array that contains three results: {10,15,20}. This array is returned directly to the SUMPRODUCT function:

SUMPRODUCT({10,15,20}) // returns 45

With just a single array to process, SUMPRODUCT sums the values in the array and returns 45 as a final result.

SUM alternative

It is common to use SUMPRODUCT in problems like this because SUMPRODUCT can handle arrays natively without any special handling in Legacy Excel. However, in a modern version of Excel, you can use the SUM function instead:

=SUM(SMALL(data,{1,2,3})) // returns 45

Note: this is an array formula and must be entered with Control + Shift + Enter in Legacy Excel.

When n becomes large

As n becomes a larger number, it becomes tedious to enter longer array constants like {1,2,3,4,5,6,7,8,9,10}, etc. In this situation, you can use a shortcut to create an array that contains sequential numbers automatically, based on the ROW and INDIRECT functions. For example, to sum the lowest 10 values in a range, you can use a formula like this:

=SUMPRODUCT(SMALL(range,ROW(INDIRECT("1:10"))))

Here, the INDIRECT function converts the text string "1:10" to the range 1:10, which is returned to the ROW function. The ROW function then returns the 10 row numbers that correspond to the range 1:10 in an array like this:

{1;2;3;4;5;6;7;8;9;10}

Note this is actually a vertical array, as indicated by the semicolons (;), but the SMALL function will happily accept a vertical or horizontal array as the k argument. Once INDIRECT and ROW have been evaluated, the formula is in the same form as before:

=SUMPRODUCT(SMALL(range,{1;2;3;4;5;6;7;8;9;10})) // sum 10 smallest

SMALL will return the 10 lowest values, and SUMPRODUCT will return the sum of these values as a final result.

Variable n

To set up a formula where n is a variable in another cell, you can concatenate inside INDIRECT. For example, if A1 contains n, you can use:

=SUMPRODUCT(SMALL(range,ROW(INDIRECT("1:"&A1))))

This allows a user to change the value of n directly on the worksheet and the formula will respond instantly.

With the SEQUENCE function

New in Excel 365, the SEQUENCE function can generate numeric arrays directly in one step, which eliminates the need for the ROW + INDIRECT combination explained above. In fact, with SEQUENCE there is really no need to use an array constant either.
We can simplify the formula as follows:

=SUM(SMALL(range,SEQUENCE(3))) // sum lowest 3 values
=SUM(SMALL(range,SEQUENCE(9))) // sum lowest 9 values

Note: Because SEQUENCE requires the new dynamic array engine in Excel (where array behavior is native), we have also replaced SUMPRODUCT with the SUM function. Read more about SUMPRODUCT and arrays.
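Outside of Excel, the same "sum of the n smallest values" calculation is easy to cross-check; here is a small Python equivalent, with made-up sample data.

```python
import heapq

data = [12, 5, 8, 20, 3, 14, 7, 9, 11, 6, 4, 17]   # made-up sample values
n = 3

# Equivalent of =SUMPRODUCT(SMALL(data,{1,2,3})): add up the n smallest values.
print(sum(heapq.nsmallest(n, data)))   # 3 + 4 + 5 = 12
```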
Tanya Khovanova's Math Blog

In 2007 Alexander Shapovalov suggested a very interesting coin problem. Here is the kindergarten version:

You present 100 identical coins to a judge, who knows that among them are either one or two fake coins. All the real coins weigh the same and all the fake coins weigh the same, but the fake coins are lighter than the real ones. You yourself know that there are exactly two fake coins and you know which ones they are. Can you use a balance scale to convince the judge that there are exactly two fake coins without revealing any information about any particular coin?

To solve this problem, divide the coins into two piles of 50 so that each pile contains exactly one fake coin. Put the piles in the different cups of the scale. The scale will balance, which means that you can't have a total of exactly one fake coin. Moreover, this proves that each group contains exactly one fake coin. But for any particular coin, the judge won't have a clue whether it is real or fake.

The puzzle is solved, and though you do not reveal any information about a particular coin, you still give out some information. I would like to introduce the notion of a revealing coefficient. The revealing coefficient is a portion of information you reveal, in addition to proving that there are exactly two fake coins. Before you weighed them all, any two coins out of 100 could have been the two fakes, so the number of equally probable possibilities was 100 choose 2, which is 4950. After you've weighed them, it became clear that there was one fake in each pile, so the number of possibilities was reduced to 2500. The revealing coefficient shows the portion by which your possibilities have been reduced. In our case, it is (4950-2500)/4950, slightly less than one half.

Now that I've explained the kindergarten version, it's time for you to try the elementary version. This problem is the same as above, except that this time you have 99 coins, instead of 100.

Hopefully you've finished that warm-up problem and we can move on to the original Shapovalov's problem, which was designed for high schoolers.

A judge is presented with 100 coins that look the same, knowing that there are two or three fake coins among them. All the real coins weigh the same and all the fake coins weigh the same, but the fake coins are lighter than the real ones. You yourself know that there are exactly three fake coins and you know which ones they are. Can you use a balance scale to convince the judge that there are exactly three fake coins, without revealing any information about any particular coin?

If you are lazy and do not want to solve this problem, but not too lazy to learn Russian, you can find several solutions to this problem in Russian in an essay by Konstantin Knop. Your challenge is to solve the original Shapovalov puzzle, and for each solution to calculate the revealing coefficient. The best solution will be the one with the smallest revealing coefficient.

16 Comments

1. Mary: Very nice problem! My first attempt at a solution gives a revealing coefficient of about 78%. Put one genuine coin aside, then divide the remaining coins into three piles of 33, with each pile containing exactly one fake. Two pairwise weighings will demonstrate that three piles are equal in weight, which would not be possible if there were only two fakes. Before the weighing demonstration, there were 100 choose 3 or 161,700 possibilities for the fake coins.
Afterwards, the number of possibilities is reduced to 33^3=35,937. So this yields a revealing coefficient of (161,700 – 35,937)/161,700 or a bit less than 78%. I’m sure there must be more clever ways to do this problem, so I will pose it to the Albany Area Math Circle and see what they can come up with. I like the notion of a “revealing coefficient.” Reading about it reminded me of learning about Shannon’s approach to measuring information in graduate school with Professor Kenneth Arrow decades ago. I vividly remember thinking that it was very exciting to have such a beautiful and concrete way to measure information as reduction in uncertainty. But the revealing coefficient is much more intuitive and easy to introduce to younger students than Shannon’s measure. I have never heard the term “revealing coefficient” used before. When I googled on it, I did not come up with any other examples of its use in an information theory context. But then again, I don’t read Russian. Is “revealing coefficient” a standard term in Russian mathematics? It’s a very nice notion! 12 August 2009, 4:06 pm 2. Tanya Khovanova: Your suggestion is not the solution to the problem, because the coin you put aside will be proven to be not fake. Thus, the request not to reveal info about any particular coin doesn’t hold. 12 August 2009, 5:37 pm 3. Mary: Ah, thanks for pointing out that I didn’t read the question carefully enough. I somehow glossed over that point–I had read it too fast and interpreted it as just ruling out specific identification of the fake coins. Back to the drawing board. 12 August 2009, 7:34 pm 4. Mary: Taking the constraint properly into account makes finding even a simple first crack solution somewhat harder and more interesting. My slightly less simple-minded approach (leaving out 4 initially, weighing 3 piles of 32, then swapping the initially set aside 4 and reweighing the new piles of 32 to demonstrate continued equality of weights) yields a revealing coefficient of (161,700 – 32,772)/161,700, which is just under 80%. But since I’m a bear of very little brain, I’m sure there must be a more clever way to do this that I’m overlooking, so I look forward to seeing someone else post it. 12 August 2009, 8:16 pm 5. Tanya Khovanova: If you swap the 4 coins you initially set aside with 4 coins from a particular pile, then those 8 coins that are swapped are real. 12 August 2009, 8:32 pm 6. Mary: Not necessarily. Suppose you initially set aside three fake coins and one real coin. So originally your three piles have all genuine coins. Then you could swap two of the coins (one fake & one real) into pile alpha, one fake into pile beta, and one fake into pile gamma. After the swap, the 3 piles still balance. Of course, it could also be that you had originally set aside four genuine coins, in which case all 8 coins involved in the swap could be genuine. But it doesn’t have to be, as I’ve illustrated. Alternatively, if you set aside four genuine coins, you could swap two into pile alpha, one into pile beta, and one into pile gamma, in exchange for one fake & one real from pile alpha, one fake from pile beta, and one fake from pile gamma. So, the judge has no way to know whether the initial four coins you are swapping in or out are all 4 genuine or 3 fakes plus 1 genuine. Either way, it’s possible to do the swap and maintain 3 even piles. Basically, my solution allows for the possibility of two equally valid approaches: Approach 1: The three fakes are initially each planted in piles alpha, beta, and gamma. 
This leaves 32^3 possibilities in this approach. Approach 2: The three fakes are initially planted among the four coins set aside. This leaves four possibilities (one for each of the possible identities of the genuine coin among the 4 initially set aside in this approach.) Since the judge has no way to know which approach you took, there would be 32^3 + 4 = 32,772 possibilities for the identities of the fake coins. Unless, perhaps, there’s something else I misinterpreted in the problem? 12 August 2009, 9:22 pm 7. Tanya Khovanova: My initial impression was that your were swapping all 4 coins together to the same pile. Now I agree with you. 12 August 2009, 10:11 pm 8. Sue VanHattum: I thought this was way cool! I don’t know how trackbacks work, so I just wanted to tell you I put this in the Math Teachers at Play #14, over at Math Mama Writes 20 August 2009, 10:53 pm 9. JBL: I’m also curious about the answers to one of the questions that Mary asked, namely, did you just invent the notion of “revealing coefficient” on the spot? I agree with her that it seems like an extremely useful idea for introducing some basic ideas of information theory to a not incredibly advanced audience. Mary, I don’t agree with your computation of the revealing coefficient. If I understand you correctly, the process we follow is this: we have initially three piles of size 32 and a pile of size 4. Let’s name the coins in the pile of size 4 {a, a, b, c}. Then we switch {a, a} for two coins {d, d} from the first of the three big piles, switch {b} for a coin {e} from the second big pile and switch {c} for a coin {f} from the third big pile. Then the following sets of coins could be the counterfeit coins: * {a, b, c} for one of the a-s (2 ways) * {d, e, f} for one of the d-s (2 ways) * a triple of coins, one from each of the big piles, that includes none of a, b, c, d, e, f (30*31*31 = 28830 ways). This leaves a slightly larger revealing coefficient than you suggested (82.168…%). An easier-to-explain but less-good solution is to divide the original pile into four piles of size 25 so that there is one counterfeit in three of the piles. Weighing each of those piles against the one “all honest” pile shows that there must be at least 3 counterfeit coins. This gives a revealing coefficient of 1 – 25^3/(100 C 3) = 90.3…%. 25 August 2009, 3:28 pm 10. Tanya Khovanova: Yes, I invented the revealing coefficient for this problem. Also, your solution with 4 piles will not work as the coins in the piles where all of the coins are good would be proven to be all good. Thus, the information about a particular coin is revealed. 25 August 2009, 6:46 pm 11. sandra742: Hi! I was surfing and found your blog post… nice! I love your blog. 🙂 Cheers! Sandra. R. 9 September 2009, 9:30 am 12. Daran: sandra742 fails the Turing test. I suspect it is spam. the number of equally probable possibilities was 100 choose 2, which is 5050 I was initially puzzled by this, as 100 choose 2 is 4950. I then realized that what you meant was 100 choose 1 or 2, which is 5050. For the elementary version of the puzzle, two facts are apparent. 1. Every weighing must set aside an odd number of coins, not one. 2. Every weighing must balance (otherwise the heavy side would be revealed to contain only genuine coins). The task then is to exhibit a sequence of weighings, all of which balance, but which, if there had been only a single fake coin, some would not have balanced. Setting three coins aside, weigh 48 coins against 48 to show that they balance. 
Swap the three coins for two from one of the pans and one from the other, and show that they balance. Before you weighed them all, one or two coins out of 99 could have been fake, so the number of equally probable possibilities was 99 choose 1 or 2, which is 4950. After weighing, one fake could be among the 46 coins never swapped in the first pan, and the other in the 47 never swapped in the other, for a total of 2162 possibilities. Or they could be among the three swapped out, or the three swapped in, for an additional 4 possiblities. The revealing coefficient is then (4950 – 2166) / 4950 = 0.562. This is the best we can do. If, for example, we instead set aside 5 coins, and swap 3 and 2 to the two pans respectively then after weighing the number of possibilities would be 45 * 46 + 3 * 2 + 3 * 2 = 2082 Turning to the original Shapovalov problem, I note that fact 1 above is replaced by a requirement that it is always an even number of coins set aside (including none). Also the requirement that the pans always balance no longer applies. Mary’s solution, which is analogous to the one given above, is probably the best with all weighings balancing. It is not clear to me that solutions with non-redundant non-balancing weighings can’t exist, but despite considerable brain-wracking, I can’t find one. 12 September 2009, 8:10 am 13. Tanya Khovanova: Thank you for pointing out my arithmetic mistake. And sorry that it confused you. I corrected it. I wanted the revealing coefficient to compare what you are supposed to prove with the info you actually reveal. Also, I wonder what do you mean by “sandra failing the Turing test”? My spam catcher allowed her to pass. 12 September 2009, 9:19 am 14. Daran: I wonder what do you mean by “sandra failing the Turing test”? Some blogs are configured so that the first comment by an individual commenter goes into moderation. Once manually approved, subsequent comments are approved automatically. Faced with that configuration, spammers sometimes post innocuous-looking comments without a spammy link in the hope that they will be approved, allowing the real spammy payload to be sent later. My spam catcher allowed her to pass. No spam catcher is perfect, and this kind is more difficult to recognize automatically than the usual. What marks them out to human eyes is that they are completely generic. The comment could just as well have been posted to any other blog. There is nothing in it specific to your blog that would indicate a real person had read it and wanted to comment on it. The other big giveaway is that Sandra’s link doesn’t point to anything. Sandra could prove me wrong by replying to this comment. 12 September 2009, 10:07 am 15. Hemant Agarwal: You say that “Moreover, this proves that each group contains exactly one fake coin”…this is not true..this only shows that the total number of fake coins is even and that the fake coins are equally distributed between the 2 pans..it doesn’t prove that there are exactly 2 fake coins 27 July 2012, 1:08 pm 16. Hemant Agarwal: ohhh…sorry..i didn’t read that the initial statement that “You present 100 identical coins to a judge, who knows that among them are either one or two fake coins” 27 July 2012, 1:11 pm
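As a quick sanity check on the revealing coefficients discussed in the post and comments above, the counts can be recomputed in a few lines of Python; nothing new is added here, these are just the numbers already given.

```python
from math import comb

def revealing(before, after):
    """Fraction of the initially possible fake-coin sets that the weighings rule out."""
    return (before - after) / before

# Kindergarten version: 100 coins, one balanced 50-vs-50 weighing.
print(revealing(comb(100, 2), 50 * 50))                    # ~0.495, slightly less than 1/2

# Mary's first attempt on the 3-fake problem (three piles of 33, one coin aside).
print(revealing(comb(100, 3), 33 ** 3))                    # ~0.778

# JBL's count for Mary's swap solution (comment 9): 2 + 2 + 30*31*31 possibilities remain.
print(revealing(comb(100, 3), 2 + 2 + 30 * 31 * 31))       # ~0.822

# Daran's solution to the 99-coin elementary version (comment 12): 46*47 + 4 remain.
print(revealing(comb(99, 1) + comb(99, 2), 46 * 47 + 4))   # ~0.562
```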
Computational Complexity A true story on Election Day 2000: An Israeli postdoc in the US came to work and said "I have watched the conventions and seen the debates. I have studied the platforms and as much news analysis as I could get hold of. After serious consideration I decided that, if I were allowed to vote, I would vote for Bush." An American computer scientist walked in soon thereafter and said "I woke up this morning and decided to vote for Nader." Draw your own moral. With the US elections on Tuesday and politics on everyone's mind, let's open the comments for anyone who has anything they want to say about the presidential race. Get it off your chest. I only ask you to be civil. And don't forget to vote. In theoretical computer science we traditionally list the co-authors of our papers alphabetically. Done this way for "fairness" it leads to a binary notion of author. Either you are an equal author of a paper or you are off the paper. There is no middle ground. In our publish or perish society, authoring papers helps you succeed, in getting hired, promoted and receiving grants and awards. So choosing who is an author of a paper, particularly important papers, can be an important and sometimes messy decision complicated by the fact that the authors have to do the choosing. An author should have made significant contributions to a paper. But how do we define significant? A person who produces key ideas in the proof of a main result certainly becomes an author. A person who simply writes up the proof should not be. But what about the person who works out the messy but straightforward details in a proof? What about the person who poses the questions but has no role in the proof? Tricky situations that one needs to handle on a case-by-case basis. An advisor should hold him or herself to a higher standard. A good advisor guides the research for a student and should not become a co-author unless the advisor had made the majority of the important ideas in the proofs. Likewise we hold students to a slightly lower standard to get them involved in research and exposition of their work. Computer scientists tend to add co-authors generously. While seemingly nice, this makes it difficult to judge the role authors have played in a paper, and sometimes makes who you know or where you are more important than what you know. Some upcoming deadlines: STOC (11/4), Complexity (11/18), Electronic Commerce (12/7), the new NSF program Theoretical Foundations (1/5), and ICALP (2/13). Feel free to comment if I've missed Since computer science takes its conferences more seriously than the journals and most conferences have hard deadlines, we have become a deadline-driven field. Most authors submit their papers on the deadline day, if not in the last hour or two. Papers get written to meet a deadline which has both good and bad aspects: good in that papers get written and bad in that they get written quickly and often incompletely. Sometimes conference organizers see a lack of submissions (forgeting that most papers come on deadline day) and extend the deadline by a few days or a week. I've often heard people complain about losing their weekends if a deadline moves from Friday to Monday. Why? You could still submit on Friday. People feel their papers are never complete and they need to keep fixing it up until the last possible second even though these last minute changes will not likely affect acceptance. 
One person I knew once turned down an opportunity to attend a workshop because of a grant deadline on the same week. This was six months beforehand. A little planning is all that's needed to submit the grant one week early but some in our field cannot pull this off even months ahead of time. Remember that deadlines are upper bounds, no shame in submitting early. And it's not the end of the world if you miss a deadline; there's always another conference with another deadline right around the corner. In Bill Gasarch's post last week, he discusses what makes a problem natural. You used to hear the argument that a complexity class was natural if it contained an interesting problem not known to be contained in any smaller class. But then would SPP be natural simply because it contains graph isomorphism? On the other hand I find BPP a natural way to define probabilistic computation even though it fails this test. Does a class go from natural to unnatural if a new algorithm for a problem is found? I prefer to use the Martian rule. Suppose we find a Martian civilization at about the same level of scientific progress that we have. If they have a concept the same or similar to one of ours than that concept would be natural, having developed through multiple independent sources. Of course we don't have a Martian civilization to compare ourselves with so we have to use our imagination. I firmly believe the Martians would have developed the P versus NP question (or something very similar, assuming they haven't already solved it) making the question very natural. On the other hand I suspect the Martians might not have developed a class like LWPP. Other classes like UP are less clear, I guess it depends whether the Martians like their solutions unique. Applying the Martian rule to Gasarch's post, WS1S is probably more natural than regular languages without squaring that equal Σ^*. At least my Martians would say that. September Edition Computation Complexity and Computational Learning share many aspects and goals. We both analyze and compare different models of computation either to understand what they can compute or how to find the appropriate model to fit some data. No surprise that many of the tools developed in one area have use in the other. This month's paper exemplifies the connection; it uses tools in complexity to get a nice learning theory result which in turn has very nice implications in complexity theory. Oracles and Queries that are Sufficient for Exact Learning Nader Bshouty, Richard Cleve, Ricard Gavaldà, Sampath Kannan and Christino Tamon The main result shows how to learn circuits probabilistically with equivalence queries and an NP oracle. An equivalence query given a circuit C trying to learn a language L, either says C is correct or gives an input where it fails. The proof uses a clever divide and conquer argument built upon Jerrum, Valiant and Vazirani's technique for random sampling with an NP oracle. Kobler and Watanabe observe that if SAT has small circuits we can answer equivalence queries to SAT with an NP oracle. Applying Bshouty et. al. they show that if SAT has polynomial-size circuits than the polynomial-time hierarchy collapses to ZPP^NP. Sengupta noticed that old results give a consequence of PH in S[2]^p if SAT has small circuits. This strengthens Kobler and Watanabe because of Jin-Yi Cai's result showing that S[2]^p is contained in ZPP^NP. Cai's paper uses many techniques similar to Bshouty et. al. 
Shaltiel and Umans have a recent result giving weaker assumptions for derandomizing S[2]^p by derandomizing random sampling. If SAT does not have small circuits, the Bshouty et al. algorithm produces a small set of inputs that gives a co-NP proof of this fact. Pavan, Sengupta and myself applied this fact to the two queries problem.

Saturday Evening, October 25, 1986: I huddled with about a dozen of my fellow MIT graduate students (and a couple of faculty) watching game six of the baseball World Series in a Toronto hotel room right before the start of FOCS. The Boston Red Sox led by two runs with two out and none on in the bottom of the tenth against the New York Mets. One more out and the Sox would win their first championship since 1918. The Red Sox didn't win the series that year and failed to return until this year. After an amazing comeback against their rivals, the New York Yankees, the Red Sox will host the first game of the World Series on Saturday against the St. Louis Cardinals.

By far baseball is the favorite team sport among American computer scientists (at least of those that care about sports at all). Why? Maybe because it's a discrete game with a small state space. At Fenway Park (Boston's home field) they use lights to give the number of balls, strikes and outs in unary notation. The game has many nice mathematical properties and not just the myriad of statistics. For example, it is a theorem of baseball that at any point in a half inning the number of batters is equal to the sum of the number of outs, the number of runs scored and the number of men on base. Proof by induction.

The real reasons I love baseball are less tangible. Both a team sport and a one-on-one contest between pitcher and batter. A strategic game dealing with balancing probabilities. Suspense on every pitch. And much more. By far the plurality of baseball fans in our field seem to root for the Red Sox. Probably because most of us spent at least part of our academic career in the Boston area and Boston takes its baseball far more seriously than any other city. In full disclosure, my favorite team is the Chicago White Sox but I root for the Red Sox in their absence.

Nothing beats attending a baseball game live, especially in Fenway. Alas I never managed to attend a World Series game though I've come very close. October 14, 1992: The Pittsburgh Pirates won the National League East and the World Series was scheduled to open during FOCS in Pittsburgh. I wrote for and got tickets to the first game if Pittsburgh made the series. In the NLCS Atlanta scored three runs in the bottom of the ninth of game 7, meaning Atlanta and not Pittsburgh would host the series. When Cabrera hit the single scoring those final two runs, I sat staring at the TV and cried.

A guest post from William Gasarch

Why is it hard for us to explain to the layperson what we do? The following true story is telling. I will label the characters MOM and BILL. MOM: What kind of thing do you work on? BILL: (thinking: got to give an easy example) Let's say you have n, er 1000 numbers. You want to find the — (cut off) MOM: Where did they come from? BILL: Oh. Let's say you have 50 numbers, the populations of all of the states of America, and you want to find — (cut off) MOM: Did you include the District of Columbia? BILL: No. MOM: Why not? BILL: Because it's not a state. But for the example it doesn't matter because — (cut off) MOM: But they should be a state.
They have been oppressed too long and if they had their own state then — (cut off)
BILL: But none of that is relevant to the problem of finding the Max of a set of numbers.
MOM: But the problem of Statehood for DC is a more important problem.
BILL: Okay, let's say you have 51 numbers, the populations of the 50 states and the District of Columbia.
MOM: What about Guam?
BILL: I should have majored in Government and Politics…
To the person on the street the very definition of a problem is … problematic. Abstraction that we do without blinking an eye requires a conceptual leap that is hard, or at least unfamiliar, to most. Even people IN computer science may have a hard time understanding what we are talking about. Here is another real life story between two characters who I will call BILL and DARLING. DARLING has a Master's Degree in Computer Science with an emphasis on Software Engineering.
DARLING: Bill, can you give me an example of a set that is provably NOT in P?
BILL: Well, I could say HALT but you want a computable set, right?
DARLING: Right.
BILL: And I could say that I could construct such sets by diagonalization, but you want a natural set, right?
DARLING: Right.
BILL: And I could say the set of true statements in the language WS1S, but you want a natural set.
DARLING: What is WS1S?
BILL: Weak monadic second-order logic with one successor, but I think you agree that if you don't know what it is then it's not natural.
DARLING: Right. So, is there some set that is natural and decidable that is provably not in P?
BILL: AH, yes, the set of regular expressions with squaring that equal Σ^* is EXPSPACE-complete and hence provably not in P.
DARLING: Why is that problem natural?
BILL: Good Question! A problem is natural if it was being worked on before someone classified it.
DARLING: Okay. What is the first paper this problem appeared in?
BILL: It was in THE EQUIVALENCE PROBLEM FOR REGULAR EXPRESSIONS WITH SQUARING REQUIRES EXPONENTIAL SPACE by Meyer and Stockmeyer, from FOCS 1972. Oh. I guess that proves that it's NOT natural.
This story raises the question—what is natural? Do we need that someone worked on a problem beforehand to make it natural? Is it good enough that they should have worked on it? Or that they could have? Logic has the same situation with the Paris-Harrington result, a result from Ramsey Theory that is not provable in Peano Arithmetic, but the first time it was proven was in the same paper that proved it was not provable in PA. Incidentally, there are more natural problems that are not in P. Some games on n by n boards are EXPSPACE- or EXPTIME-complete and hence not in P. That would have been a better answer, though it would not have made as good a story.

Fair and balanced coverage from Adam Klivans. To answer one of Lance's previous posts, the Internet is definitely harming conferences: most everyone who stayed up until 5 AM to watch the Red Sox beat the Yankees in 14 innings on MLB.TV has not made it to James Lee's talk at 8:30 AM on embeddings (in fact I think I'm the only one who did make it here). Krauthgamer, Lee, Mendel, and Naor presented a powerful new method for constructing embeddings of finite metric spaces called measured descent which, among other things, implies optimal volume-respecting embeddings (in the sense of Feige). I checked the registration numbers and indeed only 172 people have officially registered for the conference—that's 100 fewer than the registration at STOC in Chicago. Yesterday I mentioned the winners of the best paper award.
I should also mention the best student paper award winners: Lap Chi Lau's An Approximate Max-Steiner-Tree-Packing Min-Steiner-Cut Theorem shared the prize with Marcin Mucha and Piotr Sankowski's Maximum Matchings via Gaussian Elimination. Lau's paper gives the first constant factor approximation to the problem of finding a large collection of edge-disjoint Steiner trees, each connecting a given set of vertices, in an undirected multigraph. Mucha and Sankowski give a nice method for finding maximum matchings in general graphs in time O(n^ω), where ω is the matrix multiplication exponent. Lovász showed how to test for a perfect matching in a graph using matrix multiplication, and Mucha and Sankowski extend this and actually recover the matching. Valiant's talk on Holographic Algorithms was well attended: he described a new, quantum-inspired method for constructing polynomial-time algorithms for certain counting problems. The algorithms are classical (no quantum required) and give the first efficient solutions for problems such as PL-Node-Bipartition: find the cardinality of a smallest subset of vertices V' of a max-degree 3, planar graph such that deletion of V' (and its incident edges) causes the graph to become bipartite. At the end he gave a simple criterion for proving P = P^#P via these techniques!

Adam Klivans reports from Rome. Rome is the host city for this year's FOCS conference. While everyone enjoys visiting one of the world's great capitals, attendance at the sessions can occasionally suffer, and the sessions this year do seem noticeably smaller. Another explanation could be the high cost of traveling to and staying in Rome. On the plus side, I get to see many European theorists whom I had known in name only. For those who did make the trek to the southern tip of the Villa Borghese, the first day featured the presentation of the two results which won best paper: Subhash Khot's Hardness for Approximating the Shortest Vector in Lattices and Applebaum, Ishai, and Kushilevitz's Cryptography in NC^0. Subhash was an author of two other impressive hardness results in the same session: Ruling Out PTAS for Graph Min-Bisection, Densest Subgraph and Bipartite Clique (the title is self-explanatory) and Optimal Inapproximability Results for Max-Cut and Other 2-variable CSPs? (with Kindler, Mossel, and O'Donnell), which gives evidence that the Max-Cut approximation algorithm of Goemans-Williamson is the best possible. The cryptography session featured the above Cryptography in NC^0 paper, which Lance mentioned in an earlier post, as well as an intriguing result due to Salil Vadhan, An Unconditional Study of Computational Zero Knowledge, showing how to establish important properties of computational zero knowledge proofs without assuming the existence of a one-way function. The controversial topic of what to do with the special issue of FOCS continued at last night's business meeting. It appears as though Elsevier will lose another opportunity to publish a special issue of STOC/FOCS, as a vote last night indicated a strong desire to give SICOMP the responsibility instead (a similar thing occurred at STOC this year).

At Dagstuhl Manindra Agrawal presented recent work of his students Neeraj Kayal and Nitin Saxena (the trio that showed a polynomial-time algorithm for primality testing) on rings given by a matrix describing the actions on the base elements. They show a randomized reduction from graph isomorphism to ring isomorphism and from factoring to #RI, counting the number of ring isomorphisms.
They also show a polynomial-time algorithm for determining if there are any non-trivial automorphisms of a ring and that #RI is computable with an oracle for AM∩co-AM. Agrawal conjectured that #RI is computable in polynomial time, a conjecture that would imply factoring and graph isomorphism have efficient algorithms. We also saw a series of presentations by Andris Ambainis, Robert Špalek and Mario Szegedy. Ambainis described his improved method for showing lower bounds for quantum algorithms that provably beats the degree method. Špalek talked about his work with Szegedy showing that Ambainis's techniques, as well as different tools developed by Zhang, Laplante and Magniez, and Barnum, Saks and Szegedy, all gave the same bounds. Szegedy, in his presentation, called this measure the div complexity and showed that the size of a Boolean formula computing a function f is at least the square of the div complexity of f.

Dagstuhl was designed as a place to bring a small group of researchers to an isolated environment where they could give some talks, discuss research and otherwise socialize among themselves free from other distractions. No televisions, though there was a radio bought to hear news during the 1991 Gulf War. We could get two-day-old news from America via the Herald Tribune. While they had computer rooms, in the early days we had no world wide web and email was far less used. Instead we had rooms for coffee, rooms for beer and wine, rooms for billiards and music and rooms just to hang out. Everyone stayed on premises and we had no phones in rooms, just a couple of communal phones to call home. Dagstuhl has since expanded; rooms not only have phones but there is WiFi throughout. We can answer email, read news, write weblog posts (as I am doing now) from the comfort of our own isolated desks. We're watching baseball games and the debate over the internet. But worse than being connected, the rest of the world knows we're connected. I find myself having to take time to fix problem sets for my class and deal with departmental issues, as do many of my other colleagues here. The internet has greatly helped science by bringing us closer together, but it also prevents us from ever being disconnected, losing many of the advantages of these workshops. A sign here proclaims "Are you here for computer networking or human networking?" Something to remember next time you go to a conference.

I'm back in Dagstuhl for the workshop on Algebraic Methods in Computational Complexity. The roof looks great. I have attended Dagstuhl workshops for over twelve years now, since the workshop on Structure and Complexity Theory in 1992. I have seen Dagstuhl expand and evolve over these years and this is the first time I feel that Dagstuhl has achieved its completed state. I love coming here; Dagstuhl has a contained environment in a pretty but boring part of Germany where we complexity theorists give seminars, eat and drink together and talk science and other stuff. Politics and baseball seem to dominate the discussions this week. A group of German software engineering professors share Dagstuhl with us this week. They are meeting to discuss future directions of German software engineering research and to find ways to increase student enrollment. The drop in students desiring a computer science degree is not just an American phenomenon.

A report from Varsha Dani. The Grace Hopper Celebration of Women in Computing was held in Chicago on October 6-9. This was the fifth such conference since its inception in 1994.
This year there were over 800 attendees from all over the country. This conference is a forum for discussion of issues faced by women and a showcase for achievements of women in the fields of computing and technology. There were a number of talks on social issues, some technical presentations by young investigators, and a few invited technical talks. There were also a number of social and networking events hosted by IBM, Microsoft, Google and others. Among the invited talks, there were three I particularly enjoyed. Jessica Hodgins of CMU talked about connections between ideas in robotics and computer graphics and animation, especially simulation of human movements. Cynthia Dwork of Microsoft Research spoke about the problem of publishing (transformed) data from public databases (such as census data) so as to maintain a balance between the utility of the published information and the protection of the privacy of individuals represented by the data. Her approach to privacy is influenced by ideas from cryptography. Daniela Rus of MIT spoke about self-reconfiguring robots. These robots are distributed systems, consisting of a number of identical modules which can dynamically adapt the way that they are connected to each other to best fit the task at hand.

My nine-year-old daughter had a homework problem based on a diagram of a zoo map, with an entrance and several animal exhibits. She had no problem solving questions like: Beginning and ending at the entrance, describe the shortest route you can take that will allow you to see 4 different kinds of animals. "You're doing computer science," I said. "I don't see any computers," she responded. "Computer science is problem solving like finding the shortest path." "Then computer science is pretty easy." "OK, is there a way to visit every animal exactly once?" She found this question much more challenging.

In the US the terms Assistant Professor, Associate Professor and Professor represent different stages in one's career, but they all play a similar role in research and advising students. An assistant professor is nobody's assistant. The names get their meaning from a structure you still see in many other countries (Germany is a good example). There you have research groups, where a lead professor has nearly complete control of hiring and the budget. The equivalents of assistant and associate reflect the temporary and permanent faculty members of those groups. How does this affect graduate studies? In Germany a grad student joins a group and works within that group. In the United States a student joins a department, usually without a specific advisor in mind and often not initially committing to a specific subfield of computer science. So, to those who send me and other American computer scientists requests to join our groups: the US system doesn't work that way. Instead go to the departmental web page and follow the appropriate links to find information on how to apply to that department. If you have a specific researcher that you want to work with, use the personal statement to say this and your reasons for it. Trust me, we read the applications carefully and choose Ph.D. candidates as best as we can. It just doesn't help to send personal requests; I just point to our web page and trash the email.

Can one use a comic book and a toy to teach a complicated subfield of mathematics? Why knot?

Ron Fagin asked me to announce two public commemorations of Larry Stockmeyer and his work. The first will be held at the IBM Almaden Research Center on Monday, October 25, 2004.
The second will be held in conjunction with STOC '05 in Baltimore on May 21-22, 2005. Please join the community in honoring the memory of one of the great complexity theorists.

Steve Smale talked about his experiences in the Economics Theory Workshop at Chicago, particularly the aggressive questioning. I didn't attend his talk, but I did go to a few of the econ theory seminars years ago, and it forms an interesting contrast to the CS theory talks, which have a few, usually technical, questions followed by polite applause. The econ theory seminar took place in a medium-size conference room with a long table. Graduate students sat in chairs along the walls. The speaker was at one end of the table and econ professors, usually including multiple Nobel prize winners, around the rest of the table. A copy of the paper was sent with the talk announcement, and almost from the first slide the faculty aggressively attacked the speaker with questions about the validity of the model or the importance of the results. (Remember, this was the theory seminar; imagine the questions at the political economics seminar.) At the end of the seminar time, the talk ended and everyone left the room. No applause. I don't recommend that we follow this model in theoretical computer science. However, we usually go to the other extreme and (outside of crypto) rarely ask negative questions in a seminar. Typically the only negative feedback we get in our field is from anonymous referees and reviewers. If we were forced to defend our research in an interactive setting, we would establish a better understanding of the importance of the models and results of our own research.
{"url":"https://blog.computationalcomplexity.org/2004/10/?m=0","timestamp":"2024-11-09T01:11:41Z","content_type":"application/xhtml+xml","content_length":"271487","record_id":"<urn:uuid:9a85d8da-bf4a-4cf5-9616-fd59845d560b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00105.warc.gz"}
What is Relational Algebra?

Relational algebra is a procedural query language that works on the relational model: its operations accept relations as their input and yield relations as their output, and the result of one operation may be further used as an operand in another operation. The algebra is therefore closed (the result of every expression is a relation), it has a rigorous foundation and simple semantics, and it is used for reasoning about queries and for query optimisation; it gives a step-by-step process for obtaining the result of a query. Relational calculus, in its tuple and domain forms, is the declarative counterpart: it is a non-procedural language in which the user specifies what data should be retrieved rather than how to retrieve it, and any retrieval that can be specified in the basic relational algebra can also be specified in the relational calculus. Relational algebra and SQL express essentially the same queries, though their implementations differ, and the algebra mainly provides the theoretical foundation for relational databases and SQL: once the database is ready, users and applications communicate with it through SQL, a SQL query is first decomposed into smaller query blocks, these blocks are translated into equivalent relational algebra expressions, and optimization then proceeds block by block and finally over the query as a whole, rewriting each expression into equivalent forms as long as an equivalence rule is satisfied. The output is an optimized logical query plan, also expressed in relational algebra.

The operators fall into two classes: unary operators, which take parameters such as a selection condition or a projection list, and binary operators, which work on two relations. The unary operations are SELECT and PROJECT. The select operation, written with the sigma (σ) symbol, fetches the tuples (rows) of a relation that satisfy the selection condition, giving a horizontal partition of the table; it cannot fetch individual attributes (columns). The selection condition is a propositional formula that may use connectives such as and, or and not, and conjunctive selection operations can be written as a sequence (cascade) of individual selections. The projection (π) is used to project the required column data from a relation. The binary operations include the set-theoretic operations union, intersection and set difference (the tuples in one relation but not in the other) as well as JOIN and DIVISION; an equi-join matches tuples on equality of join attributes, and a JOIN can be replaced by a NATURAL JOIN after renaming one of the join attributes to match the other. For example, the SQL statements

Select Ename from Employee, Department Where Employee.Eno = Department.Eno;
Select Ename from Employee Natural Join Department;

express the same join, the second as a natural join. The algebra operations thus produce new relations, which can be further manipulated using operations of the same algebra.

Examples of queries in relational algebra (all examples refer to the database in Figure 3.6). To retrieve the project numbers of projects that involve an employee named Smith, either as a worker or as a manager of the department that controls the project, we take the union of two subexpressions built from σLname='Smith', using Ssn=Mgr_ssn to reach DEPARTMENT and Dnumber=Dnum to reach PROJECT, and project the project numbers (πPno) from each; the same query could be specified in other ways, for example the order of the JOIN and SELECT operations could be reversed. To retrieve the names of employees who have no dependents, first retrieve a relation with all employee Ssns in ALL_EMPS and the Ssns of employees with at least one dependent in EMPS_WITH_DEPS, apply the SET DIFFERENCE operation, EMPS_WITHOUT_DEPS ← (ALL_EMPS − EMPS_WITH_DEPS), and finally join this with EMPLOYEE to retrieve the desired attributes: RESULT ← πLname, Fname(EMPS_WITHOUT_DEPS * EMPLOYEE). For every project located in Stafford, we first select the projects located in Stafford, then join them with their controlling department and its manager, and project the desired attributes: πPnumber, Dnum, Lname, Address, Bdate(PROJ_DEPT_MGRS). To retrieve the names of employees who work on every project controlled by department 5, build DEPT5_PROJS ← ρ(Pno)(πPnumber(σDnum=5(PROJECT))), which contains the project numbers of all projects controlled by department 5, and EMP_PROJ ← ρ(Ssn, Pno)(πEssn, Pno(WORKS_ON)), apply the DIVISION operation, and join the resulting Ssns with EMPLOYEE: RESULT ← πLname, Fname(RESULT_EMP_SSNS * EMPLOYEE).

Sample instances for practice (the sailors-and-reservations data set):
S1(sid, sname, rating, age): (22, dustin, 7, 45.0), (31, lubber, 8, 55.5), (58, rusty, 10, 35.0)
S2(sid, sname, rating, age): (28, yuppy, 9, 35.0), (31, lubber, 8, 55.5), (44, guppy, 5, 35.0), (58, rusty, 10, 35.0)
R1(sid, bid, day): (22, 101, 10/10/96), (58, 103, 11/12/96)

Suggested exercises: write relational-algebra queries of the join–select–project type, without using any aggregate functions, for instance to retrieve the name, address and salary of employees who work for the Research department, or to list the names of all employees with two or more dependents; using the bank example, write relational-algebra queries to find the accounts held by more than two customers; express queries Q1, Q4 and Q6 as single relational algebra expressions; given the schema Product(pid, name, price), Purchase(pid, cid, store), Customer(cid, name, city), draw the logical query plan for SQL queries such as SELECT DISTINCT x.store …; state an information need (for example, information about cars of the 1996 model year in which faults were found in the 1999 inspection) and express it as a query; and discuss the correctness and equivalence of given relational algebra queries. A relational algebra calculator (RelAlg) can help you learn the algebra by executing such expressions.
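To make the select–project–join style above concrete, here is a minimal sketch in Python that models relations as lists of dictionaries. The Employee and Department relation names and the Eno/Ename/Dname attributes echo the SQL join example above; the helper function names and the sample rows are invented purely for illustration and do not come from any particular database system.

# Minimal sketch of relational-algebra operators over in-memory relations,
# modelled as lists of dictionaries (one dictionary per tuple).

def select(relation, predicate):
    """sigma: keep the tuples (rows) that satisfy the selection condition."""
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    """pi: keep only the requested columns, removing duplicates (set semantics)."""
    seen, result = set(), []
    for row in relation:
        key = tuple(row[a] for a in attributes)
        if key not in seen:
            seen.add(key)
            result.append(dict(zip(attributes, key)))
    return result

def natural_join(r, s):
    """Natural join: combine tuples that agree on all shared attribute names."""
    common = set(r[0]) & set(s[0]) if r and s else set()
    return [{**x, **y} for x in r for y in s
            if all(x[a] == y[a] for a in common)]

employee = [{"Eno": 1, "Ename": "Smith"}, {"Eno": 2, "Ename": "Wong"}]
department = [{"Eno": 1, "Dname": "Research"}, {"Eno": 2, "Dname": "Administration"}]

# Roughly pi_Ename( sigma_{Dname='Research'}( Employee natural-join Department ) ),
# the algebraic counterpart of the "Select Ename ... Natural Join" query above.
joined = natural_join(employee, department)
research_only = select(joined, lambda row: row["Dname"] == "Research")
print(project(research_only, ["Ename"]))  # -> [{'Ename': 'Smith'}]

Each operator consumes relations and returns a relation, so expressions compose exactly as in the algebra.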
{"url":"https://liberte-cafe.de/fats-at-qfrgz/relational-algebra-query-examples","timestamp":"2024-11-04T07:34:44Z","content_type":"text/html","content_length":"24377","record_id":"<urn:uuid:e2157ff9-d099-4e50-8e1e-8d2e55185d48>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00816.warc.gz"}
CG in Aerospace and Aeronautics in context of center of gravity calculator
30 Aug 2024
Title: Application of Computer Graphics (CG) in Aerospace and Aeronautics: Development of a Center of Gravity Calculator
Abstract: This article explores the application of computer graphics (CG) in aerospace and aeronautics, with a focus on developing a center of gravity (CG) calculator. The CG calculator is a crucial tool for aircraft designers and engineers to determine the stability and maneuverability of an aircraft. This article presents a novel approach to calculating the CG using CG techniques, which provides accurate results and enhances the design process.
Introduction: Aircraft design involves complex calculations to ensure stability, maneuverability, and overall performance. One critical aspect is determining the center of gravity (CG), which is the point where the weight of the aircraft can be considered to act. The CG calculator is a vital tool for designers to predict the behavior of an aircraft during various flight conditions.
Background: The traditional method of calculating the CG involves manual calculations, which are time-consuming and prone to errors. With the advent of computer graphics (CG), it is possible to develop a calculator that can accurately determine the CG using geometric transformations and spatial reasoning.
Methodology: This study proposes a novel approach to calculating the CG using CG techniques. The methodology involves the following steps: 1. Modeling: Create a 3D model of the aircraft using computer-aided design (CAD) software. 2. Mesh Generation: Generate a mesh of the aircraft model, which is a collection of vertices and edges that define the shape of the aircraft. 3. Weight Distribution: Assign weights to each vertex in the mesh based on the mass distribution of the aircraft. 4. CG Calculation: Calculate the CG using the following formula: CG = (Σ(m * r)) / Σm, where m is the weight at each vertex, r is the position vector of the vertex relative to the origin (0, 0, 0), and Σ denotes the sum over all vertices; the division is carried out componentwise to give the CG coordinates.
Formula: The formula for calculating the CG can be represented in ASCII format as follows: CG = (Σ(m * r)) / Σm, where m is the weight at each vertex, r is the position vector of the vertex, and Σ denotes the sum.
Results: The proposed CG calculator was tested using a sample aircraft model. The results showed that the calculated CG values were accurate and consistent with traditional methods.
Discussion: The development of a CG calculator using CG techniques has several advantages over traditional methods. Firstly, it reduces the time and effort required for calculations, allowing designers to focus on other aspects of the design process. Secondly, it provides accurate results, which is critical in ensuring the stability and maneuverability of an aircraft.
Conclusion: This study demonstrates the application of computer graphics (CG) in aerospace and aeronautics, with a focus on developing a center of gravity calculator. The proposed CG calculator uses geometric transformations and spatial reasoning to accurately determine the CG, providing a valuable tool for designers and engineers.
Future Work: Future studies can explore the application of CG techniques in other areas of aerospace and aeronautics, such as aerodynamics, structural analysis, and flight dynamics.
Note: The formula above is represented in ASCII format using the BODMAS (Brackets, Orders of Operations, Division, Multiplication, Addition, Subtraction) convention.
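As a quick illustration of the formula, here is a minimal Python sketch that evaluates CG = Σ(m * r) / Σm componentwise over a list of weighted vertices. The function name, the (mass, position) input format and the sample "mesh" values are all invented for illustration; in a real workflow the weighted vertices would come from the meshed CAD model described in the Methodology section.

def center_of_gravity(vertices):
    """vertices: iterable of (mass, (x, y, z)) pairs; returns the CG as (x, y, z)."""
    total_mass = 0.0
    moment = [0.0, 0.0, 0.0]  # running sum of m * r, componentwise
    for mass, (x, y, z) in vertices:
        total_mass += mass
        moment[0] += mass * x
        moment[1] += mass * y
        moment[2] += mass * z
    if total_mass == 0:
        raise ValueError("total mass must be positive")
    return tuple(m / total_mass for m in moment)

# Example: three weighted vertices of a toy "aircraft" mesh (illustrative values).
sample_mesh = [
    (120.0, (0.0, 0.0, 0.0)),   # nose section
    (300.0, (4.0, 0.0, 0.5)),   # wing box
    (80.0,  (9.0, 0.0, 1.0)),   # tail section
]
print(center_of_gravity(sample_mesh))  # -> (3.84, 0.0, 0.46)

Because the weighted sum is taken component by component, the same routine applies regardless of how the per-vertex weights were assigned.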
{"url":"https://blog.truegeometry.com/tutorials/education/9a499db286f5bdd90003c3c99a6a5cb4/JSON_TO_ARTCL_CG_in_Aerospace_and_Aeronautics_in_context_of_center_of_gravity_ca.html","timestamp":"2024-11-06T12:35:11Z","content_type":"text/html","content_length":"19292","record_id":"<urn:uuid:5c7127d2-8cf7-4ac5-b522-929ace80e88c>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00286.warc.gz"}
Good Illustrations in Mechanics
1. Lagrange multipliers. Although the geometric proof of the theorem in [Kap, p.160, l.11-l.17] is elegant and picturesque, it fails to point out its main usage. In contrast, the algebraic proof in [Rei, pp.620-622, §A.10] enables us to treat [Rei, p.621, (a.10.5)] as if all differentials dx[k] were mutually independent. Thus it shows that the method of Lagrange multipliers makes the original problem easy to handle. [1]
2. Kepler's problem [Sym, p.131, l.8-p.133, l.5; Lan1, p.35, l.9-p.39, l.−10; Go2, p.104, l.10-l.−12].
3. Rotational and irrotational velocity fields: [Mar, p.181, Fig. 3.4.2 & p.182, Fig. 3.4.3].
4. Elliptic polarization [Born, pp.25-28].
5. Understanding the law of refraction. A. From the viewpoint of wavefront and phase velocity [Hec, p.101, (4.4)]. B. From the viewpoint of the boundary condition [Sad, p.452, (10.98)]. Remark. Jackson's argument to prove [Jack, p.304, (7.34)] is clear [Jack, p.304, l.7-l.9], while Born's argument to prove [Born, p.37, (1)] is not.
6. Motion in a central field. Explaining physics should not be like playing charades. The emphasis must be clear and the explanation must be right to the point. Good illustration: [Lan1, p.32, l.−20-p.33, l.5 & p.33, Fig. 9]. Poor illustrations: [Sym, p.127, Fig. 3.34] & [Go2, pp.90-94, §3-6].
7. Tensors [Haw]. A. Reading a bad science book makes one feel as though one is falling into a bottomless swamp, becoming trapped deeper and deeper and eventually getting buried in mud. Reading a bad science book is similar to building a house on a shaky foundation (Wrong definition: [Sym, p.493, l.−5]; correct definition: [Haw, §6-2 & §6-11]). No matter how far you may proceed, you must start all over. In contrast, a good book provides easy access to subtlety, insight and depth. Besides the definition of tensor, we need to provide efficient tests [Haw, p.99, §6-11] and counterexamples [Haw, p.196, l.−2] for absolute tensors. Abstraction or generalization can easily make these subtleties hard to recognize or obscure their motives. [Lan2, p.17, l.16-l.20] gives a partial reason why e^ilkm is not a tensor, while [Haw, p.196, l.−2] tells the whole story. The insight (the origin of [Haw, p.16, (1-21) & (1-22)]: [Haw, p.15, Fig. 1-12 & p.88, Fig. 5-10]) or key point usually comes from the author's experiences in application. Without understanding the insights, an elegant theory can be reduced to meaningless manipulation of definitions. Depth: [Haw, pp.201-204, §13-7] provides proofs for the equations in [Lan2, p.18, l.−6-l.−1]. Example. Expressing the Maxwell equations [Wangs, p.354, (21-30)-(21-33)] in tensor form so that the equations are covariant with the Lorentz transformations: [Wangs, p.520, (29-134)-…]. Remark 1. [Lud, p.54, l.15; l.-9] justifies the rules of lowering and raising indices, while [Lan2, p.16, l.5-l.6] and [Haw, p.125, 8-3] fail to provide such justifications. Remark 2. [Lud, p.55, (6.8)] motivates us to define the dual of an antisymmetric tensor (see [Lud, p.55, (6.9)]), while Landau fails to provide a motive before he defines the dual [Lan2, p.17, l.30]. Landau's definition of dual emphasizes the concept "mutual", so the dual of a dual is the original. In contrast, Ludvigsen's definition emphasizes the repeated use of the same rule. Therefore, his dual of a dual is the negative of the original [Lud, p.55, (6.11)]. The above two seemingly contradictory results do not conflict. Remark 3.
[Lan2, p.17, (6.8)] defines e^0123 = +1, while [Lud, p.55, l.-7] defines e^0123 = -1. Landau's definition is just a stipulation, while Ludvigsen's definition is based on [Lud, p.36, (4.25) & (4.28)]. I prefer Ludvigsen's definition because it is consistent with the use of the Levi-Civita tensor. Remark 4. The reason why a parameter function λ in [Lud, p.63, l.9] is characterized by the condition v^a∇[a]λ = 1 can be found in [O'N, p.19, Lemma 4.6]. Remark 5. [Ken, chap. 5] emphasizes the origin of tensors. Tensors are developed as a way to allow physical laws to retain their form under general coordinate transformations [Ken, p.46, l.7-l.9]. Incidentally, [Ken, p.51, l.-5] explains why the vector and covector components are the same in rectangular Cartesian coordinates. Remark 6. Both [Pee, p.237, (8.47)] and [Haw, p.162, (11-3)] fail to provide the physical meaning of the covariant derivative, while [Ken, p.58, l.16 & p.59, Fig. 6.2] show that the key to understanding the physical meaning of [Ken, p.60, (6.2)] is to use the concept of parallel transport [Ken, p.59, l.9]. Remark 7. Preserving the tensor notation [Ashc, p.445, l.-8; p.446, (22.83)] helps us trace the source of symmetry more easily (compare [Ashc, p.445, l.8] with [Kit2, p.85, l.4]). Remark 8. The group properties of the tensor component transformations make it possible to formulate a precise and formal definition of a tensor [Rin, pp.155-156, §A7]. Remark 9. [Jack, p.270, l.9-l.15] gives a clearer definition of polar and axial vectors than [Lan2, p.18, l.9-l.12]. [Lan2, p.18, (6.10)] shows how a cross product can be expressed as a traceless antisymmetric second-rank tensor [Jack, p.269, l.-13-l.-12]. Remark 10. [Reic, p.542] decomposes a tensor of rank 2 into three orthogonal components using dyads [Sym, p.43]. Remark 11. (Covariant differentiation of tensors of any order [Lau, pp.100-102, §9.5]) A theory requires a general solution to the problem rather than just a list of a few examples. [Kre, p.221, (74.5)] provides a general solution to the problem that [Kre, pp.220-221, §74] discusses. All the problems of the same type can be solved using this general solution. Furthermore, the method used in the general case is consistent with that used in any special case. Thus, the unification of the methods for special cases is realized. In contrast, [Pee, p.235, l.-9-p.237, l.15] fails to provide such a conclusive general solution. Although the approach in [Lau, pp.100-102, §9.5] is more methodical than that in [Pee, p.235, l.-9-p.237, l.15], it is not as simple and organized as the approach in [Kre, pp.220-221, §74]. Remark 12. (The geometric meaning of the Christoffel symbols) [Lau, p.118, Theorem 11.2.1] provides the geometric meaning of the Christoffel symbols. In contrast, the use of [Pee, p.237, (8.53) & (8.54)] to characterize the Christoffel symbols [Pee, p.237, l.-14-p.238, l.13] fails to provide the symbols' geometric meaning. Remark 13. Spivak's approach: a C^∞ manifold M [Spi, vol. 1, p.38, l.11] → the tangent bundle TM [Spi, vol. 1, p.91, l.4-p.93, l.6; p.101, Theorem 1] → a section of T*M is called a covariant vector field [Spi, vol. 1, p.156, l.10] → a covariant tensor field A of order k [Spi, vol. 1, p.160, l.-3]. Spivak's approach is cumbersome, but it has advantages. First, it specifies the base space, a manifold, and thereby justifies the nomenclature "tensor fields". In contrast, in [Kre, p.102, Definition 30.1], it is unclear whether P refers to a point in a surface or a point in the entire space.
One is not sure to what manifold P belongs. Second, in differential geometry the stipulation given in [Kre, p.102, (30.1)] can be proved as a theorem [Spi, vol. 1, p.158, l.4]. Third, it is nice to point out that the tensor field of type (1,1) defined by δ[i]^j is the evaluation map [Spi, vol. 1, p.171, l.1]. Even though Spivak tries to catch everything by building huge machinery, he still leaves out several important perspectives. There is nothing large enough to include everything. We learn a similar lesson from set theory: there exists no set of which every object is an element [Bou, p.72, l.15-l.16]. Now we discuss the disadvantages of Spivak's approach. First, his allowable transformation group is not as flexible as that in [Kre, p.102, Definition 30.1]. For example, in special relativity we must specify the allowable transformation group as the group of general Lorentz transformations [Rin, p.15, l.-7]. Second, his tensor fields are generated by dx^i and ∂/∂x^j [Spi, vol. 1, p.169, l.1]. Conceptually, this definition of tensor fields is too restrictive because the concept of tensors applies not only to geometrical quantities but also to physical quantities such as electric fields. In this sense, the rules given in [Kre, p.102, (30.1a) & (30.1b)] are more flexible. We must realize that the theory of tensors strides across fields of study and is applicable everywhere. Any attempt to contain it in a single field of study is simply impossible. Remark 14. The inner product on a vector space V can be expressed in the tensor form g[ij] [Spi, vol. 1, p.409, l.-7], while the inner product on V* can be expressed in the tensor form g^ij [Spi, vol. 1, p.416, l.1-l.2]. {1} The scalar product of 4-vectors is invariant under Lorentz transformations [Lan2, p.14, l.16-l.17]. The 4-vectors can be expressed in contravariant or covariant coordinates [Lan2, p.14, l.-3]. {2} Covariant coordinates are associated with reciprocal base vectors [Haw, §1-6; p.14, (1-18)]. Reciprocal base vectors form the dual basis [Haw, p.11, l.-5-l.-3]. When we talk about contravariant tensors, our setting refers to V. When we talk about covariant tensors, our setting refers to V*. Thus, the dual basis plays an important role in the theory of tensors. I wonder why most textbooks in linear algebra fail to provide the geometric meaning of the dual basis [Haw, p.15, Fig. 1-12]. B. Links {1, 2, 3}.
8. Constructing the Lorentz transformation: [Rob, pp.6-9, #2.2 & pp.163-164, #A8].
9. For classical mechanics, all you need is two textbooks: [Lan1] and [Fomi]. If you throw all of the other textbooks into the trash can, you will not lose much. In terms of structure, both [Lan1] and [Fomi] are well organized. [Lan1] emphasizes the physical meaning of mechanics, while [Fomi] emphasizes the mathematical formalism behind mechanics. Each section of [Fomi] is summarized into theorem form. The hypothesis and conclusion of each theorem have been clearly specified so that one can easily apply the theorem to other readings. In [Lan1], the material has been condensed and the main results of various topics have been systematically organized so that A. we may easily see the big picture and B. we may easily move to the front line of research. In contrast, [Go2] is more like a CliffsNotes version of [Lan1] except that it is not documented. The regurgitation in the notes style makes it difficult for the reader to understand what is going on if he is not guided by [Lan1].
Although each topic in [Go2] is discussed in great detail, the links among topics are weak. Furthermore, both [Go2]'s structure and reasoning are loose. For example, the meaning of a definition is not always precise and the assumptions of a theorem are sometimes difficult to trace. Thus extracting desired information from [Go2] is often like searching for an auto part in a junkyard. Let me give some examples to illustrate my points. a. [Fomi, p.58, l.-15] defines the canonical variables in the general sense, while [Go2, p.340, l.17] defines the canonical variables in a very narrow sense. The origin of the quantization of energy can be traced to the boundary conditions of the solution of the Schrödinger equation. Thus, the canonical variables in the general sense play an important role in quantization. b. [Fomi, p.72] gives a geometric meaning of the Legendre transformation and shows that the transformation is involutory, while [Go2] does not. c. [Fomi, p.71, l.5] indicates that ∂Φ/∂x = [Φ,H] is valid on the integral curve of the system [Fomi, p.70, (11)]. In contrast, the failure to indicate where [Go2, p.405, (9-94)] is valid makes the calculations in [Go2, p.405, l.-10] meaningless. d. [Fomi, p.76, l.16] relates the generating function to variational problems, while [Go2, p.382, l.11] does not.
10. 4-vectors.
11. The uncertainty principle [Eis, pp.65-77, §3-3 & §3-4]. [1].
12. The stability of the ground state of the hydrogen atom: [Eis, p.167, l.14-l.21 & p.247, l.-6-l.-5].
13. Symmetric top. A. Precession [Lan1, p.106, l.-8-p.107, l.-8; p.116, l.1-l.15]. Remark. In [Lan1, p.112, Fig. 48], let the Z-axis rotate about the x[3]-axis with angular velocity ω. If we are fixed in the rotating (X,Y,Z)-frame, we will see that the x[3]-axis rotates about the Z-axis with angular velocity -ω. B. Nutation: [Sym, pp.454-460, §11.5]. In [Sym, p.457, l.-15], Symon should have pointed out that we can let the 1-axis coincide with Ox in [Sym, p.455, Fig. 11.5] (see [Lan1, p.111, l.19-l.21]).
14. The Thomas precession [Eis, Appendix O].
15. Proper time [Eis, p.A-8, l.11].
16. Let us see how Landau and Eisberg introduce momentum and energy into the theory of special relativity. A. Landau introduces the action integral first. After he finds the Lagrangian, he turns the crank of formalism to obtain momentum [Lan2, p.25, l.-13] and energy [Lan2, p.26, l.3]. Remark 1. [Lan2, p.24, l.-22] says that the action is invariant under Lorentz transformations. Landau should have said that he considers space-time as a 4-vector and the Lorentz transformation as a metric tensor [Go2, p.288, (7-40) & Lan2, p.16, l.-3]. The Lagrangian is the unique entity that characterizes the equation of motion. Although its expression changes with the coordinates we choose, its value is fixed at the specific location and time. Remark 2. Certainly, formalism lacks motivation. Even Landau's introduction of the Lagrangian lacks concrete motivation because the Lagrangian itself is abstract and the concept of tensor is very complicated. Most likely, one derives [Lan2, p.25, (8.2)] only after special relativity is established. B. In contrast, [Eis, pp.A-13-A-17] pays attention to the impact of special relativity on each individual concept in classical mechanics. Thus Eisberg generates more interfaces of special relativity with classical mechanics [1] and creates the motivation behind new concepts. For example, to preserve the conservation law for momentum, we must allow the mass of a particle to change with its speed [Eis, p.A-14, l.-8].
Furthermore, the direct interface of special relativity with a concept in classical mechanics, rather than the indirect interface with formalism, greatly helps us visualize the new concept in special relativity.
17. Momentum conservation in special relativity. A. Both the example in [Rob, p.55, l.-16-l.-14] and the example in [Eis, p.A-13, Fig. A-7] lead to the same result [Rob, p.56, (5.7) & Eis, p.A-14, (A-18)]. We prefer to use the former example because it is simpler than the latter example. B. [Rob, p.57, l.-2-p.58, l.2] indicates that if the total momentum and mass of a system of particles were conserved in one inertial frame, then they would also be conserved in another inertial frame, while [Eis, pp.A-13-A-15] does not.
18. Liouville's theorem. [Sym, p.395, l.7-p.396, l.8] details important similarities between the movement of the phase "particles" and that of particles in an incompressible fluid. [Rei, pp.626-628] gives an excellent analytic proof of Liouville's theorem. [Go2, p.427, Fig. 9-3] provides a vivid geometric interpretation; [Go2, p.428, l.1-l.5] gives the intuitive physical meaning of Liouville's theorem [1]. It is easier to explain [Rei, p.54, l.18-l.22] by using [Go2, p.428, l.1-l.5] than using [Rei, Appendix A.13]. The way that Landau defines a statistical ensemble in [Lan5, p.9, l.-2-l.-1] immediately shows that [time average] = [ensemble average]. Remark 1. [Ashc, p.771, Appendix H] extends Liouville's theorem to semiclassical motion. Remark 2. [Pat, p.35, (5)] can be explained more intuitively and rigorously using [Kara, p.152]. Pathria's 3-dim proof [Pat, §2.2] of Liouville's theorem is a natural approach to the problem, while Reif's 1-dim proof [Rei, Appendix A.13] is more fundamental in logic.
19. (Phonons) A. Noninteracting = there are no cross-terms in the expression of the Hamiltonian. The normal mode is the device used to reduce the complicated problem of N interacting atoms to the equivalent problem of 3N noninteracting harmonic oscillators [Rei, p.408, l.22-l.24]. B. Phonons are indistinguishable [Rei, p.409, l.-11]. C. Phonons obey Bose-Einstein statistics [Rei, p.409, l.-16] not because there is an infinite number of them (the reason that Reif gives in [Rei, p.338, l.3-l.4] is incorrect) but because a quantum number n[r] is a state index which is allowed to range from 1 to +∞ without any restrictions [Eis, p.401, l.1-l.2]. D. The quantum state of the whole system is specified by the set of 3N quantum numbers {n[1], n[2], …, n[3N]} [Rei, p.408, l.-6]. Each phonon can be in any one of the 3N states with energies ħω[r] (r = 1, …, 3N) [Rei, p.409, l.15]. Therefore, from the viewpoint of quasi-particles, the state of the system = (n[1] phonons in state 1, …, n[3N] phonons in state 3N). For the viewpoint of standing waves, see [Eis, p.389, l.14]. Remark. According to [Coh, p.602, l.7-l.10], a phonon is actually characterized by a wave vector and an angular frequency Ω(k). The discussion about phonons in [Rei, pp.407-411, §10.1] is restricted to the condition [Coh, p.603, (82)]. E. The speed of sound (acoustical waves): [Hoo, p.35, (2.3)] (when atomic spacing << atom displacement << wavelength [Hoo, p.35, l.-12-l.-11]); [Hoo, p.40, (2.13); Coh, p.603, (84)] (chain of identical atoms); [Hoo, p.44, l.-11] (chain of two types of atoms). F. Lattice vibrations of 3-dimensional crystals: [Hoo, p.46, l.-12-l.-8].
20. Aperture and field stops, entrance and exit pupils, and vignetting [Hec, pp.171-173].
21. Phase velocity = the speed of a co-phasal surface [Born, p.18, l.7].
22.
Le Chatelier's principle. Good illustration: [Lan5, pp.65-68, §22]. Poor illustration: [Rei, p.298, l.-8-p.300, l.10].
23. [Associated] Legendre polynomials. A. [Boh, pp.321-326, §14.14 & §14.15]. The quality of a textbook on quantum mechanics can be determined by noting whether or not it includes a complete analysis of the Legendre polynomials. B. Merits in [Col]. i. Spherical harmonics Y[n] come from a solution of Laplace's equation that is a homogeneous polynomial of degree n [Col, chap. IV, §1.1]. ii. Surface harmonics in Cartesian coordinates and spherical coordinates. Their sign regions. [Col, p.232]. iii. Notice the 1-1 correspondence between [Col, p.232, Fig. IV.2, n=2] and [Coh, p.682, (33)]. iv. For the geometric origin of the generating function, the explanation in [Col, p.233, l.-4-p.234, l.-7] is much better than that in [Cou, p.85, l.11-l.18].
24. The Euler angles [Edm, pp.6-8, §1.3; Tin, pp.101-103, §5-3].
25. Quantization. A. Experimental evidence: discrete values for physical quantities [Schi, p.2, l.-3; p.3, Table 1]. B. Theoretical evidence: a. Responding to the above experimental evidence, we study the corresponding eigenvalues of operators. b. Quantization stems from the formalism of classical mechanics. i. [Schi, p.132, (23.2)] and [Schi, p.134, (23.8)] allow us to establish the correspondence [Schi, p.134, (23.9)]. ii. A Poisson bracket does not depend on the canonical variables we choose [Lan1, p.145, (45.9)]. A commutator bracket does not depend on the basis we choose. iii. Quantization rules [Schi, p.135, l.15-l.29]. Example: second quantization [Schi, p.342, l.12; p.349, (46.3); p.350, l.1 & (46.6)]. Remark 1. Sommerfeld's quantization of action implies both Planck's quantization of energy and Bohr's quantization of angular momentum [Eis, p.110, l.-5-p.112, l.-7]. The quantization of action can be interpreted in terms of standing waves [Eis, p.112, l.-6-p.114, l.19]. Schrödinger derives energy quantization based on the fact that accepted solutions of the time-independent Schrödinger equation exist only for certain values of the total energy ([Coh, pp.351-358, Complement M[III]] ∨ [Eis, p.160, l.-15-p.163, l.-15] ∨ [Lan3, p.61, l.1-p.62, l.8] ∨ [Mer2, p.45, l.8-l.20]). Thus, Schrödinger eliminates the axiomatic requirement of integralness, and traces its origin directly to the boundary conditions of an eigenfunction ([Eis, p.163, l.-8-l.-5] & [Lev2, p.69, (4.47); p.70, l.9; p.70, Fig. 4.2]). Remark 2. In [Eis, p.163, l.-6], Schrödinger attributes integralness to the finite and single-valued nature [Lev2, p.109, l.1] of an eigenfunction. According to the theory of differential equations, it is more appropriate to attribute the discreteness of eigenvalues to boundary conditions [Chou, p.136, l.-18-p.137, l.4; Bir, pp.288-292]. However, it is important to point out that the discrete spectrum of the regular S-L system [Bir, p.273, Theorem 5] inspires Schrödinger to interpret the discreteness in the microscopic world using the Schrödinger equation. At the point when readers encounter [Eis, p.163, l.-6], Eisberg has not yet provided enough mathematical background for them to appreciate Schrödinger's statement. However, most textbooks that do build sufficient background forget to mention that this clue led to the important discovery. Thus, many textbooks often fail to accurately reflect history when they explain the theory (see [Bir, chap. 10, §16; Jack, p.77, l.7; Coh, p.663, l.-14]). iv. The Poisson algebra and the commutator algebra are isomorphic Lie algebras. Proof.
The generators of the two algebras produce the same results (Compare [Schi, p.134, (23.10)] with [Coh, p.222, (B-33)]. The rules of Poisson brackets and those of commutator brackets are the same [Schi, p.135, (23.12); Coh, p.168, (10)-(14)]. v. Through the canonical transformation in [Fomi, p.93, l.14], we see that the canonical equations of motion and the Hamilton-Jacobi equation are equivalent [Fomi, pp.88-93, §23]. Therefore, we may express the general equation of motion in the form of the Hamilton-Jacobi equation. Finally, we use quantization rules to establish the time-dependent Schrödinger equation from the Hamilton-Jacobi equation. Indeed, in the quasi-classical case [Lan3, p.20, (6.1)], the time-dependent Schrödinger equation reduces to the Hamilton-Jacobi equation of classical mechanics [Mer2, p.23, (2.39)]. Remark. The ultimate goal of quantization is to derive the Schrödinger equation. Consequently, the correspondence principle should be formulated in its strongest form. The correspondence between classical mechanics and quantum mechanics established by [Mer2, p.326, (14.61) & (14.62)] is a weak form of the correspondence principle because Merzbacher's formulation cannot lead to the derivation of the Schrödinger equation. Indeed, Schrödinger's equation implies [Mer2, p.326, (14.62)], see [Coh, p.241, l.1-l.9]; Hamilton's equations implies [Mer2, p.326, (14.61)], see [Lan1, p.135, l.6-l.11]. c. The beauty of commutator algebra is that we jettison the complicated calculation required with Poisson brackets and preserve its algebraic essence. The reduction greatly simplifies qualitative discussion of physical phenomena. Furthermore, we may apply the commutator algebra and the Schrödinger equation to a microscopic system. C. How to use the concept of wave packets to connect classical and quantum mechanics. a. A wave packet is the wave function of a localized particle [Coh, p.26, l.-11-l.-9]. b. The group velocity of the wave packet = the velocity of the (free) particle [Coh, p.29, (C-28)]. c. The correspondence principle: the classical value of a physical quantity = the expectation value of the corresponding operator [Schi, p.26, (7.9); p.27, (7.10)]. 26. The uncertainty principle. A. Theoretical proof from the wave point of view (See [Coh, pp.286-289, Complement C[III]], where DQ is the variance of a distribution function). Remark. Bohr's complementarity principle [Schi, p.8, l.4] can be considered as the physical idea behind the proof. B. A precise and simultaneous measurement is physically impossible because of the interaction between the apparatus and the measured particle. In this case, Dx represents inaccuracy. C. Examples. a. Localization experiment. i. [Schi, p.9, l.8-p.10, l.11] or [Eis, p.67, l.-16-p.68, l.-11]. When the momentum of the electron is known, the measurement of its position involves inaccuracy [Schi, p.9, (4.1)] and introduces an uncertainty into the momentum [the Compton effect: Schi, p.9, l.-10]. Remark. In order to separate the scattered photons from the incident beam, the direction of the incident beam should be oriented as in [Schi, p.9, Fig. 2], not [Eis, p.67, Fig. 3-6]. ii. Diffraction experiment [Hec2, p.5, Fig. 1-1]. When the momentum of the photon is known, the measurement of its position involves inaccuracy [Hec2, p.5, l.6] and introduces an uncertainty into the momentum [Hec2, p.5, (1) & (4)]. b. Momentum determination experiment [Schi, p.10, l.16-p.11, l.18]. 
When the position of the particle is known, the measurement of its momentum involves inaccuracy [the Doppler effect: Schi, p.11, (4.8)] and introduces an uncertainty into the position [Schi, p.11, (4.7)]. c. Diffraction experiment with photon indicators [Schi, p.12, Fig. 3]. If the interaction between a photon and an indicator were so weak that would not destroy the original diffraction pattern, the uncertainty in p[y] for a particular photon produced by its encounter with an indicator would have to be small, as stipulated in [Schi, p.12, (4.9)]. Because [Schi, p.12, (4.12)] contradicts [Hec2, p.6, (6)], it is impossible to determine through which slit the photons pass without destroying the diffraction pattern [Schi, p.12, l.-12-l.-10]. D. Applications. a. The limits of geometric optics [Lan2, p.144, l.-11-l.-1]. 27. The Zeeman effect [Lev2, pp.154-156, §6.8]. 28. The collective states formed by independent, identical fermions using Pauli's exclusion principle. A. The shell model of many-electron atoms. [Lev2, p.338, Fig. 11.6] summarizes [Coh, complements A[XIV] and B[XIV]]. The central-field approximation [Coh, p.1411, l.-4] explains why we start with electron configurations [Coh, p.1413, (10); p.1414, (11)] and why the interelectronic repulsion can be treated as a perturbation [Coh, p.1412, l.11]. From configuration to terms [Lev2, p.327, Table 11.2]: (a). Equivalent electrons [Lan3, p.254, l.6-l.28; Coh, p.1423, (23)]; (b). Nonequivalent electrons [Lan3, p.254,l.1-l.5]. B. The electron gas [ neglect interactions between electrons]. a. Free electrons enclosed in a box. i. There is a one-to-one correspondence between the lattice points in the k-space and the wave functions of an electron [Coh, p.1434, l.5-l.7]. ii. The ground state of the electron system with the Fermi energy [Kit, p.183, l.13]: [Coh, p.1392, l.-5-l.-3]. The definition of the Fermi energy in [Coh, 1435, (6)] is more precise than that of [Kit, p.183, l.13]. Note that Pauli's exclusion principle applies not only to the electron gas [Coh, p.1434, l.-6-l.-1] but also to the electron system of a solid [Coh, p.1443, l.-23; p.1161, Fig.4]. b. Periodic boundary conditions. i. The motive of periodic boundary conditions is to simplify calculations [Coh, p.1440, l.-12-p.1441, l.8]. ii. When the interatomic spacing decreases, the splitting increases because the coupling increases [Coh, p.1159, Fig. 2]. iii. The stationary states of an individual electron are all delocalized [Coh, p.1159, l.2]. iv. The deeper the band's location, the more narrow it is [by the tunnel effect; Coh, p.1161, Fig. 4]. c. Due to Pauli's exclusion principle, only the electrons with energies close to the Fermi energy are important for the following applications: i. deriving the correct formula of specific heat for the electron gas. ii. deriving the correct formula of magnetic susceptibility for the electron gas. iii. explaining why some solids are good electrical conductors while others are insulators. Remark. This feature of the restricted number of electrons allows us to use much simpler concepts (such as the density of states [Coh, p.1435, l.11] or the location of the Fermi energy [Coh, p.1443, l.-13]) to replace the complicated definition of the ground state of the electron system [Coh, p.1433, l.-12-l.-5] when we engage in practical study of the physical quantities associated with the ground state of the system. 29. The Michelson-Morley experiment [Rob, pp.28-29, #3.5]. 30. Electric fields. A. 
Coulomb's law Û Gauss' law [Cor, pp.50-51, (3.19)-(3.22); Sad, pp.126-127, §4.6.A]. B. Electric multipoles [Wangs, chap. 8]. a. A dipole i. Its potential: [Wangs, p.114, (8-21)]. ii. Its field: [Wangs, p.120, (8-50)]. iii. The interaction energy of a dipole in an external electric field: [Wangs, p.127, (8-73)]. Remark. [Wangs, p.124, l.-13-l.-6] gives a physical reason why we are not interested in studying the energy changes of the external charges. iv. The torque on a dipole in an external electric field: [Wangs, p.127, (8-75)]. b. A quadrupole i. Its potential: [Wangs, p.115, (8-30)]. ii. Its field: [Wangs, p.123, (8-55)]. iii. The interaction energy of a quadrupole (with an axis of rotational symmetry) in an external electric field: [Wangs, p.130, (8-81)]. Remark 1. The lines of force are perpendicular to the equipotential surfaces [Sad, p.144, l.-4-l.-3]. Remark 2. If both the monopole moment and the dipole moment are zero, then the quadrupole moment becomes the dominant feature of a charge distribution [Wangs, p.115, l.-17-l.-15]. Remark 3. [Wangs, p.80, (5-48)] is the common background used to build [Wangs, p.99, (7.6)] and [Wangs, p.125, (8-62)]. C. Conductors [Wangs, pp.83-95, chap. 6] Remark. [Wangs, p.54, (3-13)] can be easily derived using Gauss' law [Jack, p.28, (1.11)]. a. The outward force p per unit area at the surface of the conductor is the product of the surface charge density and the external electric field [Jack, p.43, l.2-l.4]. Proof. The electric field E[sheet] generated by a sheet charge distribution is s/2e[0] above the sheet and is - (s/2e[0]) below the sheet [Wangs, p.54, (3-13)]. In order to let E[total] satisfy [Wangs, p.83, (6-1); p.85, (6-4)], we must add E[external] = s/2e[0] to E[sheet]. The field E[sheet] locally generally by the sheet cannot exert a force on itself, therefore p = s E[external ]= s^2/2e[0]. b. Electrostatic screening [Wangs, p.87, l.-3]. c. Systems of conductors [Wangs, §6-2, §6-3; Chou, p.90, l.9-p.94, l.2]. Remark. The definition of [Wangs, p.89, (8-12)] depends on the particular point chosen on the ith conductor [Wangs, p.88, l.19-l.20]. In order to prove the uniqueness of p[ij]'s and the existence of c[ij]'s, we must use the uniqueness theorem [Chou, p.91, l.-2-l.-1] to establish the one-to-one correspondence between F and Q [Chou, p.91, (2.138)]. Note that [Wangs, (6-12)] has the advantage that it can be easily translated to a computer program if the surface charge density distribution is known. Actually, [Chou, p.92, l.-10-l.-5] shows not only the non-singularity but also the positive-definiteness of the (p[ij]). As for the proof of the fact that (p[ij]) and (C[ij]) are symmetric, I prefer [Wangs, p.89, l.-6-p.90, l.13] to [Chou, p.93, l.5-p.94. l.2]. The former proof is constructive and insightful, while the latter proof uses formalism. D. Electrostatic energy [Wangs, §7-3, §7-4 §10-8, §10-9]. a. A system of charges [Wangs, p.99, (7-10)]. b. A system of conductors [Wangs, pp.100-101, §7-2]. c. An electric field [Wangs, p.102, (7-28)]. d. Electrostatic forces on conductors [Wangs, p.107, l.15-p.108, l.7]. e. The discussion of electrostatic energy is divided into two classes: constant free charge (if the system is isolated) and constant potential difference (if the system is connected to an external energy source) [Wangs, p.105, l.8-p.106, l.-10; p.165, l.11-l.22]. Remark. [Wangs, p.101, l.-7-p.102, l.-19] shows that a and c above are consistent. E. Boundary conditions. a. Static electric fields i. 
dielectric-dielectric [Sad, p.183, Fig. 5.10)]. ii. conductor-dielectric [Sad, p.185, Fig. 5.12]. b. Potential continuity between two media [Wangs, p.139, (9-20)]. c. Potential discontinuity across a dipole layer [Jack, p.34, (1-27) or "§Potential of Uniform Dipole Layer" in <http://web.mit.edu/6.013_book/www/chapter4/4.5.html>]. d. Steady conduction currents: [Wangs, p.136, (9-21) & p.209, (12-26)]. a. (Equilibrium) A perfect conductor cannot contain an electric field below its surface [Sad, 165, Fig. 5.2]. b. (From disturbance to equilibrium) The relaxation time of a conductor (dielectric) after introducing charge at some interior point [Sad, p.181, (5.49)]. 31. Magnetic fields. A. Biot-Savart's law [Sad, pp.263-266, §7.2] Û Ampère's circuital law (Proof. Þ: [Wangs, pp.237-241] or [Jack, pp.178-179, §5.3]. Ü: [Sad, pp.274-275, §7.4.A]. Remark 1. In proving Ampère's circuital law, [Wangs, pp.237-241] uses the line integral [Wangs, p.225, (14-2)] for B, while [Jack, pp.178-179, §5.3] uses the volume integral [Jack, p.178, (5.14)] for B. The steps of reasoning in Wangsness' proof can easily conjure accompanied physical images, while those of Jackson's cannot. Remark 2. In proving Ampère's circuital law Þ Biot-Savart's law, we must assume knowledge of the direction of B due to a current [Wangs, p.242, l.-7] in order to find the magnitude of B. Thus, we have used a bit of information of Biot-Savart's law to derive the entirety of Biot-Savart's law. Therefore, strictly speaking, Biot-Savart's law and Ampère's circuital law are not exactly equivalent. B. Magnetic multipoles [Wangs, chap. 19]. a. A magnetic dipole [Cor, pp.337-340]. i. Its potential: [Wangs, p.302, (19-22)]. ii. Its induction: [Wangs, p.302, (19-24)]. iii. The interaction energy of a magnetic dipole in an external magnetic induction: [Wangs, p.306, (19-36); p.307, (19-40)]. iv. The torque on a magnetic dipole in an external magnetic induction: [Hall, pp.541-543, §30-4; Wangs, p.308, (19-42)]. C. Magnetic energy [Wangs, chap. 18; §20-6; Jack, §5.16]. a. Magnetic forces on circuits: [Wangs, pp.290-295, §18-3]. Remark 1. It is better to use [Cor, p.480, l.5] to explain the second of the three equalities in [Wangs, p.291, (18-38)]. Remark 2. Wangsness uses the sign of a magnetic force alone to determine whether the force is attractive or repulsive [Wangs, p.293, l.22; p.295, l.13]. His strategy is very confusing. However, if we follow Corson's method by considering the dot product of the magnetic force and the infinitesimal displacement [Cor, p.480, (26-37)], then it will become much easier to determine whether the force is attractive or repulsive. Remark 3. The discussion of electrostatic forces on conductors [Wangs, p.104, Fig. 7.1] is divided into two cases: constant charge and constant potential difference. The discussion of magnetic forces on circuits [Wangs, §18-3] is divided into two cases: constant currents and constant flux [Wangs, p.218, Fig. 13-1]. At first glance, the divisions seem to depend on the devices we choose. In fact, if we look deeply into the matter, the choices of divisions are fundamentally determined by the characteristics of fields [Fan, p.75, l.23-l.25]. The divisions cannot be made different regardless of device. b. [Chou, p.286, l.6-l.10] explains why the second integral of [Jack, p.213, (5.146)] vanishes without any extra assumption [Jack, p.213, l.-11]. c. In order to fully understand the concept of magnetic energy, we must perform a series of clarifications and comparisons. i. 
(Total magnetic energy vs. interaction energy) [Wangs, pp.285-286, Example]. ii. (System: a single circuit with constant current I) If the flux change through the circuit is dF, then the work done by the sources (of current) is dW=I dF [Jack, p.212, l.-11]. iii. (System: a steady-state current distribution [Jack, Fig. (5.20)]) The total increment work done against the induced emf [Wangs, p.284, l.-9] by external sources due to a change dA or dB is [Jack, (5.144) or (5.147)]. The total work [magnetic energy] to bring the fields (of the system) up from zero to their final values is [Jack, (5.148)]. Remark. dA or dB refers to the change of the system made by external sources. iv. (System: a permanent magnetic moment) Compare [Jack, (5.150)] with [Jack, (5.72)] [Jack, p.214, l.-4-p.215, l.3]. Remark. Note that before placing [Jack, p.214, l.7] the magnetic moment in the external field, the magnetization M in [Jack, (5.150)] does not exist [Jack, p.215, l.2]. v. The discussion of magnetic energy is divided into two classes: constant currents (if the system is connected to an external energy source) and constant flux (if the system is isolated) [Wangs, p.290, l.-22-p.292, l.5; Jack, p.214, l.-16-l.-5]. Remark. Whenever we speak of a flux, we must specify the current source that produces the flux. Without such a specification, people may wonder whether DF in [Chou, p.282, l.-6] includes the flux from self-inductance. D. Boundary conditions [Sad, p.331, (8.41); p.332, (8.45)]. 32. Electromagnetic wave Propagation A. (General case) In lossy dielectrics [Sad, pp.417-422, §10.3]. B. (Special cases) a. Plane waves in lossless dielectrics [Sad, p.423, §10.4]. b. Plane waves in good conductors [Sad, pp.425-428; §10.6]. 33. Determination of crystal structures by X-ray diffraction A. The Laue condition: Constructive interference occurs if and only if Dk is a reciprocal lattice vector. Proof. Þ: [Kit2, p.35, l.-6]. Ü: [Ashc, p.98, (6.4)-(6.7)]. B. The Ewald construction of diffraction peaks [Ashc, pp.101-104]. a. The Laue method using a range of wave lengths. b. The rotating-crystal method. c. The powder (randomly oriented grains) method. 34. Derive Lagrange's equations using calculus of variations A. From the viewpoint of equilibrium [Go2, pp.16-21, §1-4] The key idea: (D'Alembert's principle) The infinitesimal work [F-(dP/dt)]·dr is zero [Go2, p.17, (1-44)] when the system is nearly in equilibrium. The procedure: a. Eliminate the appearance of the forces of constraints [Go2, p.18, l.8]. b. Transform the constraint coordinates to the generalized coordinates [Go2, p.18, l.10]. B. From the viewpoint of action [Fom, §9 & §21]. The key idea: (Hamilton's principle) The infinitesimal action change is zero near the actual path. In other words, the action integral along the actual path [Go2, p.36, (2-1)] is stationary. Remark. Method A is more difficult than Method B (compare [Fom, p.46, Theorem 2] with [Fom, p.35, Theorem]) because the former involves constraints [Fom, p.48, footnote 9]. The procedure A(b) complicates the problem even more because it fails to use Lagrange multipliers to exploit the symmetry [Rei, p.621, l.16-l.17]. 35. Normal coordinates in a lattice. The construction of normal coordinates in [Kit2, p.639, (5)] is much simpler than that in [Sym, pp.469-471, §12.3] or that in [Lan1, p.68]. 
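A minimal reminder of what "decoupling" means in the simplest case (two identical masses m, each tied to a wall by a spring of constant K and to each other by a third spring of constant K; a textbook warm-up, not any of the cited constructions):

  H = \frac{p_1^2 + p_2^2}{2m} + \frac{K}{2}\left(x_1^2 + x_2^2\right) + \frac{K}{2}(x_1 - x_2)^2, \qquad Q_\pm = \frac{x_1 \pm x_2}{\sqrt{2}},

and in the coordinates Q[+], Q[-] the Hamiltonian splits into two independent oscillators with ω[+]^2 = K/m and ω[-]^2 = 3K/m. Kittel's phonon coordinates do the same thing for N atoms at once.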
I would like to try to explain why Kittel's construction is a natural way to decouple the equation of motion [Kit2, p.641, (19)] and the total energy [Kit2, p.641, (21)] even though my explanation is not precise and complete. A. The reciprocal lattice corresponds to the basis of the Fourier expansion of a periodic function [Kit2, p.32, (5)]. B. There is a one-to-one correspondence between the lattice points and the reciprocal lattice points [Ashc, p.87, l.-18]. C. The energy of a harmonic oscillator is quantized by its frequency [Coh, p.494, (B-34)]. D. The direct enumeration of all the wavelike solutions [Hoo, p.37, (2.8)] can be viewed as a method of decoupling [Hoo, p.38, l.17; p.40, l.11-l.16; Kit2, p.101, Fig. 5] even though decoupling the equation of motion is traditionally considered the first step toward finding the solutions. E. The dispersion relation [Hoo, p.38, (2.9); Kit2, p.640, (15)] is the major link between [Hoo, p.37, (2.8)] and [Kit2, p.639, (5)]. F. The uncertainty principle implies that the Fourier transforms of two strongly-coupled, broad wave packets in position space are two distantly-separated, narrow wave packets in momentum space. Thus, the use of Fourier transforms facilitates the decoupling process of equations of motion in a lattice. Remark 1. [Coh, p.591, (19) & (21)] motivate us to define the phonon coordinates as [Kit2, p.639, (5)]. Remark 2. [Mari, p.501, (12.142)] is an extremely powerful device for decoupling (see [Mari, p.501, l.-1]). Remark 3. In [Rei, p.408, l.22], Reif says that the concept of normal variables reduces the complicated problem of interacting atoms to the equivalent problem of noninteracting harmonic oscillators. However, he only justifies his statement from the viewpoint of energy [Rei, p.408, (10.1.8)]. In fact, whenever we call certain variables normal variables we must routinely test whether they satisfy the following requirements in classical and quantum mechanics. A. The requirements in classical mechanics. a. Decouple the total energy [Coh, p.580, (20)]: Express the total energy in terms of the energies which can be associated with each of the modes. Examples. [Coh, p.596, (47) & (50)] (when x and p are considered as normal variables); [Coh, p.597, (52) & (53)] (when a is considered as a normal variable). b. Decouple the equations of motion [Coh, p.577, (11)]. B. The requirements in quantum mechanics. a. Decouple the total energy [Coh, p.600, (76) & (77)]. b. Decouple the equations of motion [Kit2, p.641, (19)]. c. Redecompose the state space as the tensor product of eigenspaces [Coh, pp.583-584, §c; especially, p.583, l.2-l.3]: The old component eigenspace is not invariant under the coupling operator [Coh, p.598, l.-6-l.-5]. We must redecompose the state space as a tensor product of new eigenspaces ([Coh, p.600, (79)] define the new ground state. The new tensor product can be generated by creation operators [Coh, p.600, l.-11].) d. The uncertainty principle [Coh, p.597, (54)]. e. Any pair of component operators corresponding to different modes commute: Position and momentum operators [Coh, pp.581-582, (26)-(31); p.597, (54)]; annihilation and creation operators [Coh, p.600, (72-a)]; the total Hamiltonian [Coh, p.600, (79)]. 36. Standing-wave ratio <http://hep.ph.liv.ac.uk/~hutchcroft/Phys258/CN8StandingWaveRatio.pdf>. 37. Liénard-Wiechert potentials and fields for a moving charge [Jack, pp.661-665, §14.1]. 38. A potential well of arbitrary shape. A. 
bound states: the energies are bounded [Coh, p.357, l.3] and discrete [Coh, p.354, l.-10]. 39. Energy bands for a periodic potential [Coh, p.372, Fig. 2 & p.379, Fig. 4]. Remark 1. Remark 2. At this general stage, we can only have a qualitative (i.e. geometric) analysis for the big picture. For example, we can discuss the structure of its reciprocal lattice. As the case becomes more specific, more physical meanings can be precisely associated with [Coh, p.379, Fig. 4]. Remark 3. How an energy gap arises (the mathematical (quantum) explanation (energy gaps must exist somewhere, but we cannot pinpoint their locations): matching conditions [Coh, p.369, l.-8; Eis, p.458, l.-15]; the physical explanations (at a zone boundary, the symmetric wavefunction and the antisymmetric wavefunction have different energies): qualitative [Eis, p.459, l.-6-p.460, l.18]; quantitative [Kit2, p.179, (6)]). A. The general theory of an electron in a solid: The main feature of this approach is that the Hamiltonian is not specified [Coh, pp.1161-1168, Complement F[XI], §2]. The Hamiltonian can refer to a free electron or an electron bound to an atom. a. The allowed energy band: [Coh, p.1163, (9) & Fig. 5]. b. Stationary states: Bloch functions [Coh, p.1164, (14), (15), and (16)]. Remark. The delocalization of the electron: [Coh, p.410, (C-20)] → [Coh, p.1159, l.2] → [Coh, p.1164, (17)]. B. Nearly free electron theory [Hoo, pp.100-104, §4.1]. a. We can only focus on one theory at a time. Aiming at too many goals will lead nowhere. [Ashc, chap. 9] and [Kit2, chap. 7] assume that their readers do not have a background in perturbation theory, so they try to develop both perturbation theory and nearly free electron theory at the same time. It turns out that both approaches fail to provide a clear picture of energy bands. For example, n in [Hoo, p.102, (4.4)] is associated with the standing wavefunction sin(nπx/a) [Hoo, p.102, l.14] and the n-th term of the Fourier expansion of the potential. In contrast, the meaning of n in [Kit2, p.187, l.-18] and [Ashc, p.162, l.17] is not as specific as it could be. I like Hook's approach because it focuses on nearly free electron theory and assumes that his readers have a background in perturbation theory. Furthermore, I prefer having a complete understanding of a 1-dim lattice to having a vague picture of a 3-dim lattice [Ashc, pp.152-166]. b. Hook's approach shows insight. Although his approach is not perfect, it is amenable to improvements. i. We may use [Kit2, p.183, l.13] to prove that the only important term in the lattice potential of [Hoo, p.101, (4.2)] is V[1]cos(2πx/a) [Hoo, p.129, l.11]. ii. We may use [Ashc, pp.155-156, Case 2] to prove that ψ has the form ae^(ikx) + be^(i(k-2π/a)x) [Hoo, p.129, l.13; Ashc, p.156, (9.22)]. c. [Kit2, p.179, (6)] is not as good as [Hoo, p.102, (4.4) & (4.5)] because the former only calculates the energy gap of the first band, while the latter calculates the energy gap of the n-th band for every n. C. The tight binding approximation [Iba, pp.137-142, §7.3; Ashc, pp.176-184, §General formulation; §Application to an s-band from a single atomic s-level]. Remark 1. The caption of [Coh, p.1161, Fig.4] gives a more fundamental reason than that given in [Iba, p.141, ii)] to explain why a deep-lying band is narrower than a shallower-lying band. Remark 2. Both [Abr, p.10, (1.21)] and [Iba, p.139, (7.31)] are based on the Ritz method [Iba, p.139, l.13]. Although the latter formula is more intuitive, the former formula has the following advantages: (1).
The inversion theorem [Ru2, p.199, Theorem 9.11]; (2). The complicated computation given in [Abr, p.10, (1.23)] is actually a simple consequence of [Ru2, p.202, (13)], a fact that Abrikosov probably did not recognize. [Kit2, pp.245-248, §Tight binding method for energy bands] fails to fully use these advantages of Fourier analysis. Note that [Abr, p.10, (1.21)] is based on [Ru2, p.192, (4)], but the physical interpretations of the two formulas can be different: The domain of w[n] is cleverly preserved as the position space, while the domain of f^ is often interpreted as the momentum space (The position variable disappears because it becomes the dummy variable of integration). Remark 3. A scholar should not just discuss trivialities and avoid discussing difficult issues by pretending not to see them. Most textbooks in solid state physics fail to explain why g(R) in [Ashc, p.182, l.-9] is the same constant for each of the atom's 12 nearest neighbors. Some of the above books still lack any improvement on this point even after many editions. [Ashc] is one exception. However, Ashcroft gives only a vague hint [Ashc, p.182, l.-12-l.-9]. Ashcroft's argument would be clarified if he were to add that he uses the formula [Ru2, p.186, §8.27, (1)]. a. The H[2]^+ ion-covalent bonding [Hoo, pp.111-115, §4.3.2]. i. The physical meaning of the limits of the first allowed band is given by [Hoo, p.114, Fig. 4.7(a); Coh, p.1159, Fig.2 & p.1179, (48)]. ii. Quantum resonance: [Coh, p.1177, l.-15-l.-9]. iii. The origin and stability of the chemical bond [Coh, p.1179, l.-15-l.-6]. iv. The way to improve the result of the variational method is to enlarge the family of trial kets [Coh, p.1182, l.-9; p.1183, Table I; p.1173, Fig. 2]. b. A 1-dim chain. The physical meaning of the n-th band refers to the n-th principal quantum number for a single atom [Ashc, p.183, Fig. 10.4]. Remark 4. The physical meaning of [Ashc, p.141, (8.50)] is given by [Hoo, p.116, l.-6-p.117, l.6]. Remark 5. Both nearly free electron theory and the tight binding approach have similar dispersion relations [Hoo, p.119, l.-17-p.120, l.-8]. [Iba, p.106, Fig. 6.1] explains why the results derived from the two theories are consistent. Remark 6. A scholar should be brave enough to face a challenge and should not sweep what he does not understand under the rug. [Cra, p.8, l.9-l.17; Ashc, p.140, footnote 17] explain why E(k) is a continuous function of k. In contrast, [Kit2, chap.7 & chap.9] and [Abr, chap. 1] do not even mention such a problem. Unless he or she is extremely careful, an average reader will not be able to detect these authors have left out something important. 40. Faraday's law of induction [Wangs, chap. 17]. A. Faraday's observations [Jack, p.208, l.-16-l.-6]. B. For a static situation, there is no connection between the electric field and the magnetic field. Faraday's law of induction establishes their connection only for a nonstatic situation [Wangs, p.263, l.-27-l.-23]. C. Sometimes we define flux as the product of density and velocity [Wangs, p.393, l.3-l.6]; sometimes we define flux as the dot product of a vector field and an area (e.g, the magnetic flux [Wangs, p.251, (16.6)]). What is the relationship between these two ideas? Answer: [Coh, p.238, l.20-21]. Thus, a generalized concept keeps only a small number of the properties of its original concept. D. Electromotive force [Hall, pp.518-519, §29-1]; Lenz's law [Hall. pp.577-579, §32-3; p.580, l.25-l.30; Wangs, p.264, l.-17-p.265, l.-10]. E. 
Faraday's law written in the form of [Wangs, p.272, (17-30)] is independent of the motion of the medium [Wangs, p.272, l.9]. F. [Jack, p.210, (5.137)] is [Chou, p.251, (6.18)]. Its proof is given in [Chou, §6.3]. d/dt in [Chou, p.251, l.5] refers to a fixed charge (particle) in the moving circuit (fluid). For its physical meaning, see [Lan6, p.3, l.1-l.3]. ¶/¶t refers to a fixed point in space. For its physical meaning, see [Sym, p.313, l.25-l.26; l.-3-l.-1]. The argument given in [Chou, §6.3] follows closely the formalism given in [Chou, Appendix I] which has well established physical interpretations. The proof given in [Wangs, §17-3] is well tailored to this particular problem, and is simple, direct, and clear in the mathematical sense. However, there is a gap in the deduction from [Wangs, p.271, (17-25)] to [Wangs, (17-26)]. The gap can be filled using [Jack, p.209, l.-15-l.-5] ([Wangs, p.266, (17-8)] is invariant under the Galilean transformation when v <<c) or using the argument in [Cor, §23.2 & §23.7] ([Cor, (23-28)] is invariant under the Lorentz transformation [Cor, (23-61)] when v's magnitude is comparable to that of c). 41. Infinitely long ideal solenoid. A. A has only a jˆ component [Wangs, p.260, Fig. 16-6]. Inside the solenoid, A is given by [Wangs, p.259, (16-49)]. Outside the solenoid, A is given by [Wangs, p.259, (16-50)]. Remark. [Cor, p.350, l.5] gives a direct physical reason why A¹0 outside the solenoid. B. By symmetry, B is independent of z and of j [Cor, p.355, l.9]. B[r]= 0 [Cor, p.355, l.14]. Inside the solenoid, B[j] is given by [Cor, p.356, l.15] and B[z] is given by [Wangs, p.260, l.-5]. Outside the solenoid, B[j] is given by [Cor, p.356, l.11] and B[z] is given by [Wangs, p.260, l.-4]. Remark. The direct physical reason why B[z]=0 outside the solenoid can be attributed to the following two facts: 1. [Wangs, p.226, l.23]. 2. The denominator of the integrand in [Wang, p.227, (14-11)] is large. 42. Lightning rods (Background material: <http://www.glenbrook.k12.il.us/gbssci/phys/Class/estatics/u8l4e.html>; basic principles: [Jack, p.78, l.28-p.79, l.6; pp.104-107, §3.4] & <http:// 43. Bloch's theorem. A. Proofs a. First proof [Ashc, pp.133-135; Kit2, pp.179-180]. i. The Hamiltonian is periodic [Ashc, p.134, l.16]. (Proof. Let y=x+R. Then ¶/¶y=¶/¶x.) ii. Considering the lattice symmetry [Hoo, p.328, l.-9-l.-4], we must require that the wave function satisfy [Kit2, p.160, (8)]. [Ashc, p.134, l.26-l.30] shows that this physical requirement is theoretically feasible. If the wave function is degenerate, there will be some difficulty in proving [Ashc, p.134, (8.12)], but this difficulty can be overcome by the method indicated in [Coh, pp.141-142, (ii)]. Thus, the first proof is still valid even without Kittel's extra assumption that y[k] is nondegenerate [Kit2, p.179, l.-3-l.-2]. iii. The proof given in [Tin, p.38, (3-26)] is also based on the idea of diagonalization, but its discussion is limited to a special case [Tin, p.38, l.11-l.14. Here, the group of the Schrödinger equation [Tin, p.33, l.-11] must be cyclic] and uses the language of the group representation theory. b. Second proof [Ashc, pp.137-139; Kit2, pp.183-185]. i. The Fourier series expansion of the wave function: [Ashc, p.137, (8.30)], where {q}=all the values of the wavevector permitted by the Born-Von Karman boundary conditions (see [Ashc, p.136, (8.27)]; [Kit2, p.183, (25)]). Remark. For the physical origin of the Born-Von Karman boundary conditions, see [Iba, p.83, l.-8-p.84, l.9]. 
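In its simplest one-dimensional form (N sites, lattice constant a; a standard statement, not tied to any particular cited equation), the condition and its consequence for a Bloch or plane-wave state ψ[k] read

  \psi_k(x + Na) = \psi_k(x) \;\Rightarrow\; e^{ikNa} = 1 \;\Rightarrow\; k = \frac{2\pi n}{Na}, \quad n = 0, 1, \dots, N-1,

so each band carries exactly N allowed values of k, one per primitive cell.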
The advantage of the Born-Von Karman boundary conditions is that we can base our discussion of a finite crystal on the model of an infinite crystal [Ashc, p.136, (8.27)] rather than restart the discussion from scratch ([Coh, Complement O[III] partially repeats the discussion of [Coh, Complement F[XI]]) ii. Decouple the family of {c[q]} in [Ashc, p.137, (8.30) into subfamilies c[k+K] [Ashc, p.138, l.-11], where k's are defined by [Ashc, p.136, (8.27)]. We label the decoupled wave function y as y[k] [Kit2, p.184, l.7-l.10]. [Kit2, p.235, l.-9-l.-3] and [Hoo, p.330, l.-13-l.-7; p.331, l.-7-p.332, l.-1]-332, §11.4.1] say the same thing, but Kittel's formulation is more concise and precise. c. Third proof [Coh, p.1162,l.1-p.1164, l.-4]. B. The dashed curves in [Ashc, p.133, Fig. 8.1] are derived from [Coh, p.790, (C-4)]. C. The crystal momentum is not the electronic momentum [Ashc, p.139, (8.45); Kit2, p.205, (11)]. D. [Ashc, p.141, (8.50)] is clearly explained by the labeling scheme in A.b.ii. The steps in A.b.ii pinpoint the reason why the second proof allows the y[k ]to be degenerate [Kit2, p.185, l.-1]. E. The origin of the set {e[nk] | where k is fixed and n is any integer} in [Ashc, p.141, (8.50)]. a. From the viewpoint of eigenvalues: the roots of the determinant of [Kit2, p.187, (32)]. b. From the viewpoint of eigenfunctions: [Cra, p.13, l.5-l.7]. F. The essential ideas of the above three proofs are the same (decoupling). The second proof uses Fourier analysis to convert the equation of motion into decoupled linear systems of algebraic equations. The first proof uses linear algebra to find a basis to simultaneously diagonalize the Hamiltonian and translation operators. The third proof specifies the wavefunction [Coh, p.1164, (13)] and is a special case of the first proof [Coh, p.1164, l.-2-p.1165, l.19]. G. Empty lattice approximation [Kit2, p.188, l.-4-p.189, l.-7]: let the potential functions U[n](x) ® 0 uniformly in x as n®+¥; the displacements indicated in the caption of [Kit2, p.236, Fig. 3] are justified by [Kit2, p.237, (2)]. 44. Energy levels near a single Bragg plane [Ashc, pp.152-159]. A. In the case of no near degeneracy, by [Ashc, p.155, (9.13)], the shift in energy from the free electron value is second order in U [Ashc, p.155, l.9]. B. In the nearly degenerate case, by [Ashc, p.156, (9.19)], the shift in energy from the free electron value is linear in U [Ashc, p.155, l.10]. C. Through the careful estimation from [Ashc, p.155, (9.16)], we shift our attention from [Ashc, p.152, (9.2)] to [Ashc, p.156, (9.19)]. Kittel jumps from [Kit2, p.186, (31)] to [Kit2, p.191, l.20-l.21] by observing the superficial similarity between [Kit2, p.177, (5)] and [Kit2, p.191, (49)]. Thus, Kittel's argument is not as rigorous and careful as Ashcroft's. The argument in [Iba, p.135, l.-2-p.136, l.19] is also better than Kittel's. The reason given in [Hoo, p.101, l.-4] why we should give up the method used for the nondegenerate case is inadequate because the fact that the first-order energy correction=0 should not stop us from pursuing the second-order energy correction. In contrast, [Abr, p.14, l.1-l.8] gives a good reason why we should switch to nondegenerate case. Furthermore, [Abr, p.14, l.9] gives more choices than those given in [Hoo, p.101, l.-2-p.102, l.5] D. The caption of [Coh, p.409, Fig. 11] says that the two perturbed levels "repel each other". The meaning of this statement is not clear. 
In contrast, [Ashc, p.155, l.4-l.7] defines the phrase "two energy levels repel each other" clearly and mathematically. 45. Semiconductor crystals. A. The equation of motion in k space of an electron in an energy band. a. in a uniform electric field: [Kit2, p.204, (4)]. b. in a uniform magnetic field: [Kit2, p.204, (7)]. Remark 1. [Kit2, p.204, l.-3-p.205, l.3] illustrates [Ashc, p.229, Fig. 12.6]. Remark 2. The projection of a real space orbit in a plane perpendicular to the field is an orbit of the same shape and rotation direction as the k-space orbit, but rotated 90° around the field direction [Ashc, p.230, l.10-l.13; Hoo, p.375, Fig. 13.7]. B. A hole. a. wavevector: [Kit2, p.206, (17)]. b. energy: [Kit2, p.207, (18)]. c. velocity: [Kit2, p.208, (19)]. d. effective mass: [Kit2, p.208, (20)]. e. equation of motion: [Kit2, p.208, (21)]. Remark. [Kit2, p.209, Fig. 9] is derived from [Kit2, p.204, (4)]. 46. Anharmonic effects. A. Thermal expansion [Iba, pp.91-94, §5.5; Hoo, pp.63-66, §2.7.1]. B. Heat conduction by phonons [Iba, pp.94-99, §5.6; Hoo, pp.67-74]. Remark 1. [Hoo, p.69, (2.73)] is the mathematical proof of the physical formulas [Hoo, p.67, (2.68) & (2.69)]. Remark 2. [Hoo, p.70, (2.75)] and [Iba, p.96, (5.43)] are the same. The former formula is derived from elementary kinetic theory, while the latter formula is derived from the consideration of the canonical distribution [Rei, p.205, l.10] [1]. Remark 3. Normal processes versus umklapp processes [Hoo, p.67, Fig. 2.17(b) versus Fig. 2.17(c); Iba, p.97, Fig. 5.6(a) versus Fig. 5.6(b); Kit2, pp.134-135, Fig. 16a,c versus Fig. 16b,d]. 47. The heat capacity of electrons in a metal. A. The tangents in [Iba, p.114, Fig. 6.6 & p.115, Fig. 6.7] help clarify the procedure for estimating the small fraction of the free electrons that can absorb thermal energy. B. The estimate in [Hoo, p.82, (3.16)] is better than that of [Iba, p.115, (6.36)]. See [Iba, p.117, (6.46)]. 48. General relativity. Remark. The tensor design serves to keep the measurement of physical quantities covariant with coordinate transformations so that physical laws will retain the same form. A. The strong equivalence principle [Ken, p.11, l.-12-l.-10; p.12, l.7]. Remark 1. The strong equivalence principle based on the weak equivalence principle [Ken, p.10, l.-15] is an extension of the first postulate of special relativity [Ken, p.11, l.-6-l.-3]. Remark 2. A frame in free fall can cover the space-time manifold locally but not globally [Ken, p.12, l.3; p.40, l.7-l.-9; Pee, pp.231-233, § The Metric Tensor]. Remark 3. The principle of generalized covariance [Ken, p.63, §6.4] can be considered the tensor version of the strong equivalence principle. B. If we express the physical laws of special relativity in terms of tensors, they will retain the same form in any other accelerated frame. In particular, if a formula involves derivatives, the derivatives in the corresponding formula under a coordinate transformation should be replaced by covariant derivatives [Ken, p.81, l.-15-l.-10]. a. Mass affects the metric of the space-time manifold: The Schwarzschild metric equation [Ken, p.44, (4.10)] reduces to the Minkowski metric equation [Ken, p.44, l.8] in the limit of zero mass. Remark 1. Geodesics in (space → Minkowski space → curved space-time) [Ken, p.41, l.10-l.18]. The length of a geodesic in space is a minimum, while the length of a geodesic in the space-time of special relativity is a maximum [Ber, p.56, l.4-p.57, l.-5]. This is because the metric tensor in space is positive definite, while the metric tensor in space-time is indefinite.
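For reference in 48.B.a, the standard Schwarzschild line element in Schwarzschild coordinates (the sign convention below is one common choice, not necessarily Kenyon's):

  ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2 dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right),

which reduces to the Minkowski line element ds^2 = -c^2 dt^2 + dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\phi^2) as M → 0 (and also as r → ∞).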
Remark 2. The Schwarzschild metric tensor can be derived from [Pee, p.271, (10.84); p.273, (10.92)]. Another proof can be found in [Ken, p.15, l.-3-p.17, l.15; Ber, p.75, l.1-l.8]. The first equality in [Ken, p.16, l.12] can be derived from [Rin, p.40, (17.1)]. The second derivation of the Schwarzschild metric tensor can be made rigorous by using Einstein's field equation [Ken, Appendix D, pp.195-196]. b. Newton's second law [Ken, p.63, (6.11)]. c. The conservation laws of the four-vector momentum [Ken, p.81, (7.13)]. d. In the Newtonian limit, Einstein's equation [Ken, p.83, (7.19)] will reduce to Newton's law of gravitation [Ken, p.85, l.7]. C. The tangential acceleration vs. the normal acceleration [Cou2, vol. 1, p.396, (41) & (42)]. 49. Electromagnetic properties of matter [Fur, §2.4; Wangs, pp.546-568]. A. It is easier to recognize the outline of the electromagnetic properties of matter in [Fur, §2.4] than in [Wangs, pp.546-568]. Furthermore, few books explain [Fur, p.104, Fig. 2.18] as clearly as [Fur]. However, it is better to study [Kit2, pp.380-392] before one reads [Fur, §2.4]. This is because [Kit2, pp.380-392] provides rigorous definitions of the applied electric field, the macroscopic electric field, and the local electric field. The prerequisite to understanding [Kit2, pp.380-392] is [Wangs, chap. 10 and chap. 23]. B. For clarity, [Wangs, p.548, Fig. B-1] should be supplemented with [Hall, p.472, Fig. 26-12]. 50. Hysteresis [Cor, pp.375-377, §21.2 & p.422, Example]. Remark. [Wangs, p.338, l.22-l.32] illustrates the theoretical advantage of using a Rowland ring, while [Cor, p.375, footnote] explains why Rowland rings are no longer used in practice. 51. The Lorentz condition [Wangs, p.365, l.12-l.16]. 52. The basis of the Debye interpolation scheme. A. [Ashc, p.466, l.3] To be consistent with the Dulong and Petit law at high temperatures, the area under the theoretical curve g[D](ω) [Ashc, p.466, l.1] must be the same as that under the experimental curve [Rei, p.410, Fig. 10.1.1]. B. [Ashc, p.466, l.4] To obtain the correct specific heat law at low temperatures, the theoretical curve must agree with the experimental curve in the neighborhood of ω = 0. Thus, the Debye scheme should adopt the simplification given in [Ashc, p.456, l.1-l.10; Fig. 23.1]. 53. Noether's theorem ― conservation laws [Sag, pp.118-123, §A2.16]. A. The energy [Lan1, p.14, l.10], momentum [Lan1, p.17, l.-14-l.-12], and angular momentum [Lan1, p.20, l.-8-p.21, l.9] of a closed system. B. Conservation laws for the energy-momentum tensor of the electromagnetic field: a. Special relativity: [Lan2, p.82, (33.6)]. b. General relativity: [Ken, p.81, l.16]. C. Conservation of crystal momentum [Ashc, p.786, (M.7)]: a. Isolated insulator [Ashc, p.787, (M.18)]. b. Scattering of a neutron by an insulator [Ashc, p.788, (M.22)]. c. Isolated metal [Ashc, p.788, l.-7-l.-5]. d. Scattering of a neutron by a metal [Ashc, p.788, l.-4-p.789, l.2]. Remark 1. In terms of reduced symmetry, the conservation of crystal momentum in Case C.a is similar to the conservation of angular momentum in the examples given in [Lan1, p.21, l.1-l.9]. Remark 2. In order to understand the precise meaning of Noether's theorem, one needs an elaborate analysis like Sagan's. Compare [Sag, p.120, Definition A2.16] with [Fomi, p.80, Definition].
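A minimal statement of the pattern behind 53.A (textbook mechanics, not Sagan's precise formulation): for a Lagrangian L(q, q̇, t),

  \frac{\partial L}{\partial t} = 0 \;\Rightarrow\; E = \sum_i \dot q_i\,\frac{\partial L}{\partial \dot q_i} - L \text{ is conserved}; \qquad \frac{\partial L}{\partial q_i} = 0 \;\Rightarrow\; p_i = \frac{\partial L}{\partial \dot q_i} \text{ is conserved};

invariance under rotations yields the conservation of angular momentum in the same way.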
54. Thermal conductivity. A. The formula for thermal conductivity [Ashc, p.500, (25.30) & (25.31)]. The derivation of the formula can be found in [Rei, p.479, Fig. 12.4.2 (1-dim) → Ashc, p.500, Fig. 25.3 (3-dim)]. The arrow implies that the basic idea in the two cases is the same. B. The reasons why a perfectly harmonic crystal would have an infinite thermal conductivity. a. The phonon states are stationary states [Ashc, p.496, l.-11] ⇔ There are no collisions between different phonons [Kit2, p.133, l.23] (i.e. there is no thermal resistivity). Remark. The scattering of phonons means that the wave functions of phonons evolve with time. b. A nonvanishing mean velocity is given by [Ashc, p.141, (8.51)] (see [Ashc, p.497, footnote 15]). This mean velocity is not driven by a temperature gradient [Kit2, p.134, Fig. 16a, l.5]. c. [Pei, p.40, (2.56); l.10-l.11]. a. At high temperatures (T >> Θ[D]): [Ashc, p.501, (25.32) & (25.33)]. b. At low temperatures (T << Θ[D]): As the temperature decreases, the conductivity will increase [Ashc, p.504, (25.40) & p.500, (25.31)]. The phonon mean free path will increase up to the length limit imposed by lattice imperfections, impurities, or size. Hence, the phonon mean free path will become independent of temperature. Thus, the temperature dependence of the conductivity is determined by the specific heat. Specifically, the conductivity will rise as T^3. The rise will reach a maximum at a temperature where umklapp processes [Ashc, p.502, l.1] become frequent [Ashc, p.504, (25.39)] enough to yield a mean free path shorter [Ashc, p.504, (25.40)] than the temperature-independent one. Beyond this temperature, the conductivity declines as exp(T[0]/T) [Ashc, p.504, (25.40)] up to temperatures well above Θ[D]. After this the exponential decline is quickly replaced by a slow power law [Ashc, p.501]. Remark. When studying thermal conductivity, I had a hard time understanding both [Ashc] and [Kit2]. Ashcroft repeats the same phrase "the mean free path" five times in a single paragraph (see [Ashc, p.504, l.-19-l.-10]). His act of repeating the same words as though they were an incantation and his consideration of the impacts on the mean free path due to an overwhelming number of factors only obscure, rather than clarify, the key point. [Kit2, pp.134-135, Fig. 16] and its illustrations occupy almost half of the space of the entire section [Kit2, pp.133-135, §Thermal resistivity of phonon gas]. However, this figure is only a minor point in understanding thermal conductivity. Thus, Kittel's emphasis is misplaced. 55. The Hartree-Fock approximation. A. The expectation value of the Hamiltonian: [Ashc, p.333, (17.14) or Ost, p.111, (3.2)]. B. The Hartree-Fock equations: [Ashc, p.333, (17.15) or Ost, p.114, (3.14)]. Remark. For the purpose of deriving the above equalities, Ostlund's simplified notations are more appropriate. 56. Rayleigh scattering [Jack, p.466, (10.35)] explains why the sky is blue, why the sunset is red, why it is easy to get a sunburn at midday, and why infrared is good for seeing distant stars through the dust in the Milky Way. 57. The dispersion relation in a plasma [Kit2, p.274, (15)] explains the transparency of alkali metals in the ultraviolet and the reflection of radio waves from the ionosphere [Kit2, p.274].
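A compact version of the dispersion relation in 57 (the standard collisionless free-electron-gas result in SI units; ω[p] is the plasma frequency and n the electron density):

  \epsilon(\omega) = 1 - \frac{\omega_p^2}{\omega^2}, \qquad \omega^2 = \omega_p^2 + c^2 k^2, \qquad \omega_p^2 = \frac{n e^2}{\epsilon_0 m}.

For ω < ω[p] the wave vector is imaginary and the wave is reflected (radio waves off the ionosphere); for ω > ω[p] it propagates (alkali metals become transparent in the ultraviolet).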
58. The Robertson-Walker metric [Ber, p.105, (6.1.3)]. Remark. Peebles' oversimplified introduction to this metric [Pee, p.54, (5.9)] fails to stress its insight: the Gaussian curvature is invariant. The proof of Theorema egregium in [Ber, Appendix B, pp.160-163] is awkward. A better proof can be found in [Lau, p.65, Theorem 5.5.1]. 59. Special relativity. A. How we synchronize clocks [Rin, p.9, l.-13-l.-9]. B. Why the transformation from one inertial frame to another is linear [Rin, p.11, l.17-l.27]. 60. Transport theory. A. Reif's approach goes from the easy to the complicated: using the average velocity v to express the collision frequency [Reif, p.470, (12.2.7)] → using the distribution function f(r,v,t) to formulate the Boltzmann equation [Reif, p.525, (14.3.8)]. In contrast, Huang's approach jumps to the complicated directly [Hua, chap.3]. Thus, Huang leaves out the following two important turning points of the theory's development: Flux: [Reif, p.470, (12.2.6)] → [Reif, p.497, (13.1.3)]. The Boltzmann equation: [Reif, p.509, (13.6.2)] → [Reif, p.525, (14.3.7)]. The equivalence [Reif, p.510, l.1] of the two formulations enables us to jump from a crude approach [Reif, p.504, l.3-l.6] to a more accurate approach. B. Huang fails to prove du = du' [Hua, p.60, l.-13], while Reif gives a rigorous proof [Reif, p.521, l.15]. C. Reif establishes a relationship between σ(v[1],v[2]|v[1]',v[2]') and σ(Ω) [Reif, p.520, (14.2.4)], while Huang does not. Therefore, the statement in [Hua, p.69, l.-8] is not clear. Similarly, [Reif, p.524, (14.3.3)] is clear, while [Hua, p.66, (3.29)] is not. D. The generalization from an inversion [Reif, p.522, l.1-l.2] to a rotation or reflection [Hua, p.63, l.9-l.10] is unnecessary because it does not have any useful application other than the inverse collision. E. [Reif, p.523, l.-18-l.-15] imposes an essential assumption on f(r,v,t) to justify the form of the mathematical expression in [Hua, p.56, (3.2)]. The reasons given in Huang's justification [Hua, p.56, l.8-l.14] are related, but are not essential. F. The equality in [Hua, p.96, l.8] is derived from [Reif, p.529, (14.4.20)] and its corresponding formula for the inverse collision. G. The collision frequency: [Reif, p.470, (12.2.7)] is too crude and [Reif, p.470, (12.2.8)] is too sophisticated. [Reic, p.660, (11.14)] gives an appropriate interpretation of the collision frequency. H. [Hua, p.106, (5.72)] is correct, but Huang's argument for its derivation is incorrect. It would be better to use brute force to calculate each coefficient of L[kl] in the summation on the right-hand side of [Hua, p.106, (5.71)]. I. (Unifications) The conservation theorem [Hua, p.96, (5.14)] unifies the conservation laws of mass, momentum, and energy [Hua, p.98, (5.21)-(5.23)]. Huang derives the conservation theorem from the Boltzmann transport equation [Hua, p.67, (3.36)], which involves the concept of the differential cross section. In contrast, [Reic, pp.534-537, §10.B.1] derives the conservation laws of mass, momentum, and energy without using the concept of the differential cross section. The entropy source [Reic, p.539, (10.26)] helps define the generalized currents and forces [Reic, p.539, l.-11-l.-5]. Thus, the discussion in [Reic, pp.537-541, §10.B.2] is an indispensable step toward recognizing that transport coefficients are the generalized conductivities of a hydrodynamic system [Reic, p.539, l.-4-p.540, l.1; p.541, l.5-l.10; p.543, (10.29)-(10.31)]. Putting transport coefficients and conductivity into the same category is a kind of unification that strides across different fields. 61. Classical statistical mechanics. A. Ensemble [Hua, p.141, l.8]. Remark. If we can prove a statement directly, we should not take a detour. [Hua, p.141, (7.6)] can be directly derived from the definition of a partial derivative.
Huang's detour approach [Hua, p.141, l.-13-l.-12] indicates that he does not understand the definition of a partial derivative very well. B. The density of states [Man, pp.324-335, Appendix B]. Remark. [Man, p.334, (B-38)] can be proved using [Kit2, p.87, Fig. 18]. C. A system in a heat bath [Man, pp.52-64, §2.5]. Remark 1. The proof of [Man, p.57, (2.29)] can be found in [Reif, p.213, (6.5.8)]. Remark 2. We need not repeat the historical approach. The classical method of counting states must be fully justified in terms of quantum mechanics. Compare [Reif, p.51, l.8-l.16] with [Man, p.174, l.-7-l.-1]. D. Fluctuations a. Energy [Reif, p.110, (3.7.14); p.213, (6.5.8) or Man, p.58, (2.31) (the canonical ensemble)]. b. Occupation numbers [Hua, p.82, (4.54) (the ideal gas)]. E. The canonical ensemble evolves from the microcanonical ensemble: a. The drawback of the microcanonical ensemble with respect to calculations [Hua, p.153, l.7-l.12]. b. The new constraint imposed on the canonical ensemble [Hua, p.157, l.2-l.4]. F. The ideal gas. Remark 1. When considering ideal gases, the first thing one has to do is to throw away all the world's documents about ideal gases. One should study only [Reif, §9.1, §9.2, §9.6, §9.7, §9.8, §9.10], except for any discussions about Maxwell-Boltzmann statistics they contain. If one needs the required background on quantum mechanics, one should read only [Coh, §XIV,C.3.d]. This approach relieves one of the burden of studying a tremendous amount of incorrect physics. To stop the practice of torturing physics students, the future authors of physics textbooks should follow my advice. Remark 2. Mandl points out that [Hua, p.146, (7.22)] is based on [Hua, p.152, (7.52)]. However, Mandl's strategy to prove [Man, p.188, (7.70)] does not work. It is better to follow Huang's calculation scheme [Hua, p.152, l.9-l.10]. Remark 3. In classical mechanics, a rigorous definition of physical states [Coh, p.1392, (C-9)] for a system of identical particles does not exist. Therefore, for the partition function of the ideal gas, the classical method of counting states requires a correction when compared with the correct quantum result. The only book in classical mechanics that contains a clear definition of a macrostate is [Zem, p.279, l.11], but the concept is borrowed from quantum mechanics. The goal of counting states in classical mechanics is to lead to a rigorous definition of physical states. A good classical method of counting states should facilitate accomplishment of this goal. For example, the first term of [Man, p.168, (7.9)] corresponds to [Coh, p.1390, (C-11)] and the second term of [Man, p.168, (7.8)] corresponds to [Coh, p.1390, (C-10)] (up to the normalization factor). Even so, the classical derivation of [Man, p.169, (7.10)] is not as rigorous as the quantum mechanical derivation of [Reif, p.361, (0.10.3)]. G. (Paramagnetism) The discrepancy between [Reif, p.208, (6.3.7)] and [Pat, p.81, (14)] is due to different averaging methods. The former averages the magnetic susceptibility over the two spin states [Reif, p.207, (6.3.3)], while the latter averages the magnetic susceptibility over all solid angles [Pat, p.80, (7)]. H. (Microcanonical ensemble) Reichl fails to explain why C[N] = N! h^3N in [Reic, p.348, (7.16)]. N! can be explained using the strategy given in [Reic, p.359, l.8]. h^3N can be explained by [Man, p.174, l.-1]. Similarly, Huang fails to explain the Gibbs correction factor in [Hua, p.195, (9.42)]. 
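A minimal sketch of the counting under discussion (the classical partition function with the Gibbs factor; λ is the thermal de Broglie wavelength, my notation, not a particular cited equation):

  Z_N = \frac{1}{N!\,h^{3N}} \int e^{-\beta H(q,p)}\, d^{3N}q\, d^{3N}p \;\;\longrightarrow\;\; \frac{1}{N!}\left(\frac{V}{\lambda^3}\right)^{N} \text{ (ideal gas)}, \qquad \lambda = \frac{h}{\sqrt{2\pi m k_B T}}.

The 1/N! (indistinguishability) makes the entropy extensive and removes the Gibbs paradox, and the factor h^{3N} fixes the size of a classical cell in phase space; both factors can only be justified by the quantum mechanical counting of states.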
Intrinsically, the microcanonical ensemble is a classical design. Its shortcomings are discussed in [Man, p.182, l.9-l.18]. The problem with [Hua, §9.5] is that Huang throws quantum particles into a classical design without explaining why it is justifiable to do so. A theory should be built with its essential features. Building a theory should not be like making a pizza with every topping on it; it should not involve throwing all the knowledge into one pot. A cumbersome theory that has no application or that is designed to solve every problem is trash. Even just looking at [Hua, p.194, Fig. 9.1] makes one dizzy. The discussion that goes with this figure is even more confusing. One should apply a method in a flexible manner instead of being entrapped in its mathematical structures. Furthermore, a model should be as simple as possible. Consequently, the discussion of an ideal gas in [Reic, p.348, Exercise 7.2] is better than that in [Hua, p.196, l.-3-p.197, l.16]. The same remark applies to [Pat, §6.1], Cliff's notes of Remark. Landau uses the fact that levels broaden into bands [Lan5, p.15, l.12; p.22, l.-12-l.-9] to explain why the microcanonical ensemble, a classical design, still applies to quantum systems. I. The grand partition function. a. The justification of the definition given in [Hua, p.190, (9.27)]: [Reif, p.347, l.1-l.18]. b. Its simplified relationship with the partition function [Reif, p.347, (9.6.6)]. c. Equalities [Hua, p.198, (9.61) & l.-3-l.-1]. Remark. Reichl derives [Reic, p.382, (7.121) & (7.123)] using terminology that is less intuitive but conveys the same idea (i.e., step a and step c). However, the advantage of the Lagrange multipliers, the trace, and the number representation allows Reichl's argument to go directly from [Reic, p.378, (7.109)] to [Reic, p.382, (7.121) & (7.123)]. It is unnecessary to pass through the middle stage given in [Hua, p.190, (9.27)], and then change the basis (see step c) to obtain the desired result. J. Links {1}. 62. Generalizations of field equations. A. Einstein's gravitational field equation [Pee, p.268, (10.65)] → Gravitational field equations for nonrelativistic material [Pee, p.269, (10.69); Rin, p.103, l.12-l.14]. B. Electromagnetic field equations [Rin, p.103, (38.3); p.104, (38.5) & (38.7)] ⇔ the Maxwell equations [Rin, p.107, (38.20), (38.21) & (38.23)]. 63. Polarizability. A. The Lorentz relation [Chou, p.76, (2.92)]. B. The Clausius-Mossotti relation [Chou, p.76, (2.95)]. C. Electronic polarization (induced dipoles) [Chou, p.77, (a); p.78, (2.96)]. D. Orientation polarization (permanent dipoles) [Chou, p.77, (b); p.79, (2.101)]. Remark. For details of this topic, consult [Wangs, pp.546-554, Appendix B-1]. 64. Boundary-value problems in electrostatics. A. Formal solutions of the Poisson equation [Chou, p.31, l.-2-p.32, l.9]. B. The existence of solutions of the Poisson equation with Dirichlet or Neumann boundary conditions [Chou, §3.2]. C. The uniqueness of solutions of the Poisson equation with Dirichlet or Neumann boundary conditions [Chou, §3.2]. Remark 1. The assumptions of this problem are carefully written in [Chou, §3.1]. The argument in [Jack, §1.9] cannot be considered rigorous because Jackson fails to state these assumptions. Remark 2. All of the following statements are justified by the uniqueness theorem. a. [Chou, p.115, l.-10-l.-8]. b. [Chou, p.117, l.2-l.5]. c. [Chou, p.120, l.1-l.2].
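As a concrete illustration of how the uniqueness theorem underwrites the image construction in the next entry (the grounded infinite plane with a point charge q at height d above it is the standard case; the notation here is generic):

  \Phi(\mathbf r) = \frac{1}{4\pi\epsilon_0}\left(\frac{q}{|\mathbf r - d\hat{\mathbf z}|} - \frac{q}{|\mathbf r + d\hat{\mathbf z}|}\right) \quad (z > 0)

satisfies the Poisson equation for the single charge q in z > 0 and vanishes on the plane z = 0, so by the uniqueness theorem it is the solution in that region; the force on q then follows at once as F = -q^2/(16\pi\epsilon_0 d^2)\,\hat{\mathbf z} (attractive).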
D. The method of images [Chou, §3.4; Sad, §6.6; Jack, §2.1-§2.5]. a. The image charge must be external to the region of interest [Jack, p.57, l.-9-l.-8]. b. The solution of the Poisson equation is provided by the sum of the potentials of the charges inside the region of interest [Jack, p.57, l.-6-l.-5]. Remark 1. [Wangs, p.175, l.18-l.6] gives three methods to find the force on q. By the uniqueness theorem, all we need is the method given in [Chou, p.117, l.2-l.5]. Other methods are unnecessarily complicated. Even trying to find a method other than the method of images is meaningless in the first place, because anyone who understands the uniqueness theorem thoroughly should not raise such a question. [Jack, p.60, l.-20-l.-13], produced by a Berkeley professor, is also of no value. Remark 2. [Sad, p.240, l.9-p.241, l.6] provides a piece of general guidance that helps to solve boundary-value problems using the method of images. This guidance, based on experience, can only be described by guidelines rather than specific details. The success of applying these guidelines relies on one's skill and experience. Although guidelines are valuable advice, they do not guarantee the success of problem solving. Driving is an example. Driver A, with ten years of experience, is now accident-free. Driver B has just received his driver's license for the first time. Even if Driver A gives Driver B good guidelines about safe driving, the latter still has to go through many accidents during the first year to learn to become a safe driver. Similarly, to become a skilful problem solver, one has to practice constantly and compile new guidelines from one's own experiences. [Lau, p.67, l.12-l.16] provides another interesting example. E. Complex-variable methods [Chou, §3.5]. F. Conformal representation [Chou, §3.6]. G. Solutions for spherical boundary conditions: [Jack, §2.6]. H. Boundary-value problems with azimuthal symmetry. a. Dielectric sphere in a previously uniform electric field: [Cor, pp.231-233, Example], [Wangs, pp.194-197, Example] and [Chou, pp.148-150, (ii)] all give the formula for the electrostatic potential. Of the three proofs of this formula, Corson's proof is the best. His concept is clear and his analysis is rigorous. In [Wangs, p.197, Fig. 11-13], Wangsness says the lines belong to the E field. In fact, they belong to the D field (see [Cor, p.232, Fig. 12-2]). b. A useful device: [Jack, p.101, l.-2-p.104, l.4]. Remark 1. Although Jackson elegantly uses the uniqueness theorem to prove [Jack, p.102, (3.38)], the proof of [Wangs, p.112, (8-12)] is a natural approach which links Legendre polynomials more closely to their generating function [Guo, §5.3]. Remark 2. Although Jackson applies the device only to boundary-value problems, the key idea of this device is actually based on [Ru2, p.226, Corollary]. I. Mixed boundary conditions (e.g., a conducting plane with a circular hole) [Jack, §3.13]. Remark. The equations of [Jack, p.132, (3.179)] can be solved using [Guo, p.406, (8) & (9)]. J. Boundary-value problems with dielectrics [Jack, §4.4]. Remark. For a plane, we use the method of images [Jack, p.154, l.-2-p.157, l.9]; for a sphere or spherical cavity, we use separation of variables in spherical coordinates and expand the solution in a series using the basis of the Legendre polynomials. These methods are essentially the same as those for finding the Green functions [1]. 65. Electric properties of dielectrics. A.
The Ewald-Oseen extinction theorem [Born, p.101, l.4-l.7] The dipole field is the sum of two terms [Born, p.102, (21)]: one cancels out the incident wave [Born, p.102, (23)], whereas the other satisfies the wave equation with velocity c/n [Born, p.100, (10)]. Remark 1. (Internal references) The validity of the statement in [Born, p.101, l.4-l.7] is not well documented, so one may not catch its meanings immediately until one finishes reading [Born, §2.4.2]. However, if Born had pointed out where the readers can find the proof for each phrase of the statement in this long section [Born, §2.4.2] as I did above, the readers would catch the meaning at the first reading and would have a clearer picture for understanding the rest of material in [Born, §2.4.2]. The mathematics textbooks written by N. Bourbaki are famous for their internal references: The validity of almost every statement is well documented whether the proof is given before the statement or after. Remark 2. For various electric fields in dielectrics only [Kit2, pp.380-392] provides clear definitions. Therefore, it is important to identify the effective field E' in [Born, p.85, l.23] with the local field E[local] in [Kit2, p.386, (14)] or the polarizing field E[p] (producing the displacement of charges) in [Wangs, p.547, l.-14] and identify the mean field E in [Born, p.85, l.24] with the total macroscopic electric field E in [Kit2, p.384, (7)]. Remark 3. The assumption [Born, p.104, (32)] is not used in [Born, p.104, l.8-p.107, l.12]. [Born, p.104, (32)] is proved by [Born, p.107, (49); p.105, (41)] with the assumption [Born, p.104, (33)]. The purpose of presenting [Born, p.104, (32)] before its proof is to help create [Born, p.104, Fig. 2.4] so that we know what is going on. Remark 4. The extinction theorem provides the insightful relationship between the incident field and the dipole field. This relationship based on the microscopic viewpoint (dipoles) is so powerful that it can be used to derive both the law of refraction [Born, p.107, (52)] and the Fresnel formulae [Born, p.107, (55a) & (55b)]. B. Molecular polarizability and electric susceptibility [Jack, §4.5] a. The Clausius-Mossotti equation [Jack, p.162, (4.70)]. b. The Lorentz-Lorenz equation [Jack, p.162, l.-1]. Remark. [Born, p.87, (17)] serves to link the microscopic quantity a [Born, p.92, (30)] to the macroscopic phenomena (e and n). The Lorentz-Lorenz equation implies that the refractive index depends on frequency [Born, p.92, (31)]. The proof given in [Born, §2.3.3] is valid only for the first approximation [Born, p.85, l.12]. In contrast, the proof given in [Born, §2.4.2] is rigorously derived from an integral equation concerning polarization [Born, p.100, (4)]. C. Electrostatic Energy in dielectric media [Jack, §4.7]. Remark. The material in [Jack, p.166, l.-2-p.167, l.17] is treated by Jackson as part of the content of his advanced textbook. However, in [Wangs, p.164, l.1-l.4], Wangsness treats the same material as an exercise of his elementary textbook. It is too difficult for the reader of an elementary textbook to encounter an exercise that is accorded extensive coverage in an advanced textbook. There is difference between writing a paper and doing an exercise. Similarly, it is not proper to put an exercise from an elementary textbook into the content of an advanced textbook. Considering the intended reader, it is clear that one of the above two authors must be seriously wrong. Remark. 
The concept of various fields in the first paragraph of [Jack, p.160] is clear. In comparison with [Jack, (4.71) & (4.95)], the signs of forces are carefully explained in [Wangs, (7-36), (10-99) and (B-7)]. In contrast with the abstract theory given in [Jack, §4.5, §4.7], [Wangs, p.166, Fig. 10-18] gives a concrete example. To reap benefits from both books, it is important to note the differences in terminology they employ. Let us attach a subscript J to a notation if the notation is used in [Jack] and a subscript W to a notation if the notation is used in [Wangs]. By comparing [Jack, p.161, (4.67)] with [Wangs, p.548, (B-9)], we see E[p;W] = E[;J] + E[i;J] and the sum in [Wangs, p.163, (4.73)] = a in [Wangs, p.548, (B-9)]. The following identities show that there are no inconsistencies in concepts between [Jack, §4.5] and [Wangs, Appendix §B-1] even though the same notation or terminology in the two books may mean different things: E[I;W] = E[near;J] and E[;J] = E[p;W] - E[i;J] = (E[;W] + E[O;W] + E[I;W]) - (E[near;J] - E[P;J]) = E[;W] + E[O;W] + E[P;J]. The above comparison is only a temporary remedy. In the future, we must unify the terminology in this area so that physicists can speak the same language.
66. Force on a localized current distribution in an external magnetic induction [Jack, p.188, l.-12-p.189, l.-7; Wangs, pp.531-538, §A-2]
The proof of [Jack, p.189, (5.69)] is incorrect. The notation m×∇ in [Jack, p.189, (5.67)] is problematic. This notation is not defined in any math textbook because multiplication and differentiation are not commutative: m(∂f/∂x) ≠ ∂(mf)/∂x. Furthermore, [Jack, p.189, (5.68)] is incorrect. For a correct formula of ∇(m·B), see [Wangs, p.34, (1-112)]. Thus, [Jack, p.189, (5.67)] should have been F[i] = Σ ε[ijk][m×∇B[k](0)][j]; [Jack, p.189, (5.68)] should have been F = m×(∇×B) + (m·∇)B = ∇(m·B).
Remark 1. Jackson's serious mistakes reveal the urgent need to strengthen the teaching staff in today's institutions of American higher education.
Remark 2. Using [Chou, p.214, (5.62)], Choudhury gives an elegant proof of [Chou, p.214, (5.63)].
Remark 3. There is no inconsistency between [Jack, p.189, (5.69)] and [Wangs, p.537, (A-35)]. We can use [Wangs, p.533, (A-20)] to explain why the two formulae look different.
67. The Magnetic Hyperfine Hamiltonian
A. A classical treatment [Jack, p.190, (5.73)].
B. A quantum mechanical treatment [Coh, pp.1247-1256, Complement A_XII].
Remark 1. [Coh, p.1251, Fig. 2; p.1252, l.1-l.19; p.1253, l.-17-l.-4] provide a better explanation of the second term of [Jack, p.188, (5.64)] and of the contact term of [Jack, p.190, (5.73)].
Remark 2. [Jack, p.145, l.-7-p.146, l.-5] gives a rigorous proof of the statement in [Coh, p.1066, l.6].
Remark 3. The formula given in [Coh, p.1249, l.-2] and the formula given in [Jack, p.190, l.-7] are the same. The latter formula is derived from [Jack, p.176, (5.5)] by replacing the x in [Jack, p.175, Fig. 5.1] with -x (where x is the position of the electron).
Remark 4. [Jack, p.190, l.-13-l.-11] tells us what the hyperfine interaction is, while [Coh, p.1248, (5)] traces the origin of the hyperfine interaction.
68. Magnetization
A. A substance with permanent magnetization: B and H [Jack, §5.10].
a. Without an external field [Jack, p.198, (5.105) & (5.106)].
b. In an external field [Jack, p.200, (5.112)].
Remark. The difficulty of the method given in [Jack, p.199, l.1-l.8] lies in calculating ∫[0,a]: ∫[0,a] = ∫[0,r] + ∫[r,a].
The difficulty of the method using the vector potential [Jack, p.199, l.-12-p.200, l.6] lies in calculating the curl. The method given in [Jack, p.198, l.6-l.-1] does not have these difficulties, so it is the simplest.
B. A paramagnetic or diamagnetic substance: the magnetization is the result of the application of an external field [Jack, p.200, (5.115)].
C. A ferromagnetic substance: the phenomenon of hysteresis allows the creation of permanent magnets [Jack, p.201, Fig. 5.12].
Remark. For the basics of hysteresis, see [Sad, p.328, l.5-p.329, l.-1].
69. Magnetic shielding
A. Analogies between a conductor and a body with high permeability
a. The field lines outside and near to the surface [Jack, p.201, l.-17-l.-15].
b. Cavities [Jack, p.201, l.-15-l.-13].
B. The dipole moment and the inner field [Jack, p.202, (5.121); p.203, Fig. 5.14].
70. The Daniell cell: <http://www.mpoweruk.com/chemistries.htm>.
71. Surface tension: <http://www3.interscience.wiley.com:8100/legacy/college/cutnell/0471151831/ste/ste.pdf> & <http://en.wikipedia.org/wiki/Surface_tension>.
Remark. The discussion of surface tension in [Zem, §2-9 & §3-8] is ambiguous and raises many questions. The above two web sites will help answer your questions.
72. Basics of thermodynamics
A. Thermodynamic equilibrium [Zem, §1-5 & §2-1]; equations of state [Zem, §2-5; §2-8-§2-12]; macroscopic states and thermodynamic variables [Zem, p.26].
B. Quasi-static transformations [Hua1, p.4, (f)]. In order to warrant the use of an equation of state, we must perform a quasi-static process. Methods of performing a quasi-static process [Zem, p.51, l.-9-l.-8; p.56, l.14-l.16; p.57, l.6-l.7; p.85, l.-14-l.-4].
Remark. Slow free expansion is quasi-static [Hua1, p.4, l.11-l.12]; fast free expansion is not quasi-static [Zem, p.113, l.11].
C. Reversible transformations
a. Adiabatic reversibility [Zem, §7-1-§7-7].
b. Reversibility involving heat transfer: reversibility in this case refers to the universe [Zem, p.85, l.-13-l.-4; §8-5-§8-6].
Remark. [Zem, §7-7] proves that the solutions of the differential equation [Zem, p.165, (7-1)] are reversible adiabatic hypersurfaces. The illustration builds a solid foundation for the following concepts: Carnot cycles [Zem, p.173, Fig. 7-8], the Kelvin temperature scale [Zem, §7-10] and entropy (compare [Zem, p.179, l.-3] with [Zem, p.174, l.14, the first equality]; [Zem, p.180, (8-3)]). Because they lack this indispensable proof, the statements about the above concepts given in both [Hua1] and [Kit] are unclear. For this reason, their foundations of thermal physics are seriously flawed.
D. Ideal-gas temperature = Kelvin temperature [Zem, §7-8-§7-11].
73. Speed of a longitudinal wave [Zem, §5-7].
74. The second law of thermodynamics
The following three statements are equivalent:
A. No process is possible whose sole result is the absorption of heat from a reservoir and the conversion of this heat into work.
B. No process is possible whose sole result is the transfer of heat from a cooler to a hotter body.
C. Whenever an irreversible process takes place, the entropy of the universe increases.
Proof. A ⇔ B [Zem, §6-7]. A ⇒ C [Zem, §8-5-§8-8]. C ⇒ B [Man, p.115, l.-9-l.-1].
Remark 1. If you compare [Zem, p.154, 1] with [Hua1, p.10, l.-18-l.-7], you can easily find that T[1] in [Hua1, p.10, l.-8] should have been T[2]. This error remained undetected through two editions of [Hua1] (1963 & 1987).
This reveals the fact that in these forty years, either no one reads the publications of MIT professors or no one cares about the books published by MIT Remark 2. [Hua1, p,19, l.13-l.-6] discusses some subtle points that we should pay attention to when we apply the second law. 75. The Clausius theorem The most concise and insightful proof is given by [Reic, p.28, l.5-p.31, l.4]. [Reic, p.30, Fig. 2.5] clarifies the confusion contained in other proofs. The proof of [Zem, p.180, (8-3)] is based on [Zem, p.173, (7-13)]. The inexact differential in [Zem, p.173, (7-13)] has a specific form. The inequality in [Hau1, p.14, l.-8] conveys Clausius' subtle point about the inexact differential đQ. However, the proof of the Clausius theorem in [Hua1, pp.14-15], omits too much detail. A detailed proof using the same argument can be found in http://en.wikipedia.org/wiki/Clausius_theorem. 76. The Clapeyron equation [Zem, p.31, l.-4-p.35, l.12] [Zem, p.247, l.-12] mentions the fact that the vapor pressure P(T) is a function of T only, but does not provide the proof. [Hua1, p.33, (2.3)] does give the proof. 77. Chemical equilibrium Let us compare [Reif, §8.2, §8.3 and §8.10] with [Zem, §14-8]. Note that [Zem, §14-7] emphasizes the following subtle points: A. Even though the initial states of the phases are not in chemical equilibrium, it is still possible to describe them in terms of thermodynamic coordinates. This is because these phases are in mechanical and thermal equilibrium [Zem, p.366, l.11-l.17]. B. The functions that express the properties of a phase when it is not in chemical equilibrium must reduce to those for thermodynamic equilibrium when the equilibrium values of the n's are substituted [Zem, p.388, l.-7-l.-4]. In other words, in thermodynamic equilibrium the n's are fixed values, but when the system is not in chemical equilibrium, these n's are variables. Remark 1. [Reif, §8.2 and §8.3] are reduced to twelve lines in [Zem, p.368, l.-6-p.369, l.6]. Remark 2. The proof of [Reif, p.314, (8.7.18)] is excellent, while the proof given in [Zem, p.372, l.-7-p.373, l.6] is very confusing. 78. Degree of reaction [Zem, §14-12]. 79. Equation of reaction equilibrium [Zem, §14-13]. 80. Law of mass action [Zem, §15-1]. 81. Heat of reaction [Zem, §15-3]. 82. Affinity [Zem, §15-5]. 83. The phase rule A. Without chemical reaction [Zem, §16-2 & §16-3]. B. With chemical reaction [Zem, §16-4 & §16-5]. 84. Displacement of equilibrium [Zem, §16-6]. 85. Thermocouples [Zem, §17-6-§17-10]. 86. Black-body radiation A. Why do we use cavity radiation to represent black-body radiation? Because a. Cavity radiation is in thermal equilibrium so that the thermodynamic coordinates can be defined [Man, p.246, l.-15]. b. A small hole in a wall has the same absorbing and emitting power as a black-body. Key idea: [Man, p.246, l.-6-l.-3]. Proof: [Zem, p.91, (4-17)]. B. [Man, Appendix B] proves the formula for the density of states using both particle [Man, §B.2] and wave [Man, §B.3] approaches. [Man, §10.3] proves Planck's law using both the particle and wave approaches. C. In order to prove Wien's displacement law, [Reif, §9.13] obtains the maximum by drawing the graph of the function [Reif, p.375, Fig. 9.13.1], while [Zem, §17-14] obtains the maximum using D. Planck's radiation equation [Zem, p.446, (17-27)] reduces to a. The Rayleigh-Jeans law [Man, p.253, (10.21)] or the equipartition theorem [Man, p.253, l.-6] in the limit of low frequencies. b. Wien's law [Man, p.254, (10.23)] in the limit of high frequencies. Remark. 
Studying the problem from the viewpoint of entropy [Wu, p.33, (1-7)], Planck originally used the method of interpolation to derive his radiation equation from the Rayleigh-Jeans law and Wien's law [Man, p.363, l.-6-p.364, l.3]. E. We may prove the formula for radiation pressure [Zem, p.451, (17-32)] using a. The kinetic theory [Zem, p.451, l.9-l.-1]. b. The partition function [Man, p.255, l.5-p.256, l.11]. Remark 1. How is the concept of standing waves related to cavity radiation? Ans. [Eis, p.7, l.16; p.8, l.20; p.14, l.3]. [Eis, §1-1-§1-4] can serve as both a good introduction and a good summary of black-body radiation because its formulation of the theory is closely and clearly related to basic concepts. However, there is a mistake in [Eis, p.11, l.31-l.32]. N(n)dn (the number of allowed frequencies) ¹ N(r)dr (the number of quantum states [allowed k-vectors]) unless we regard a frequency as a vector. In contrast, [Man, p.328, l.3] adopts a unified and better convention. Remark 2. [Man, Appendix B] gives a comprehensive discussion on the density of states. There are two points worth noting: First, the density of states is independent of boundary conditions [Man, p.330, l.1-l.2]. Second, the discussion of density of states naturally leads us from the wave equation [Man, p.324, (B-1)] to the Schrödinger equation [Man, p.331, l.-6-l.-2; Reif, p.353, l.-3-p.354, l.9]. 87. The homopolar motor and the homopolar generator [Cor, pp.399-404]. Remark. [Wangs, p.276, l.15-p.277, l.8] can be used as a stepping stone to understand the two examples in [Cor, pp.399-404]. 88. Quasi-static electromagnetic fields and the skin effect [Chou, §6.4]. Remark. [Wangs, p.450, l.-2-p.451, l.8] discusses the physical significance of the neglect of the displacement current from the viewpoint of energy loss and the viewpoint of the time needed for propagation of signals, while [Chou, p.255, l.15-p.256, l.2] discusses the physical significance from the latter viewpoint alone. 89. Derivation of the macroscopic Maxwell equations [Chou, §7.2]. Remark 1. Both Jackson and Choudhury call the details of the proof gory [Jack, p.255, l.-10; Chou, p.304, l.-1]. If this trivial proof is considered gory, I wonder what adjective should be used to describe Tycho Brahe's or Kepler's work. Remark 2. A formula should be written in its simplest form. [Chou, p.309, (7.44)] can still be reduced to [Jack, p.256, (6.96)]. Both [Chou, p.309, (7.44)] and [Jack, p.256, (6.96)] are incorrect as they stand. S[n] should have been inserted in front of S[r,s] in [Chou, p.309, l.2]. The expression inside the [ ] in [Jack, p.256, l.7] should have been (Q[n]')[ag](v[n])[b]-(Q[n]')[gb](v[n]) 90. Scalar and vector potentials [Chou, p.315, l.3-l.7] can be directly derived from [Chou, p.585, l.3-l.4]. The argument in [Chou, p.314, l.6-p.315, l.1] basically repeats the argument in [Chou, p.584, l.7-p.585, l.1]. 91. Debye's theory of solids A. In order to decouple the equations of motion [Hoo, p.38, l.17], we transform from the position space [Hoo, p.37, (2.7)] to the momentum space [Hoo, p.38, (2.9); Man, p.324, (B.1)] using [Hoo, p.37, (2.8); Man, p.330, (B.19)]. This method of finding normal coordinates has a physical origin [Hoo, p.39, Fig. 2.5]. [Kit, pp.102-106] fails to point out the main purpose of phonons: decoupling the equations of motion. B. In the one dimensional case, g(w) is given by [Hoo, p.53, (2.33)]; the assumption w = v[S ]k is equivalent [Hoo, p.59, l.20-22] to taking g(w) as given by the broken line on [Hoo, p.54, Fig. C. 
[Wu, p.45, (I-17)] is easier to derive and evaluate than [Man, p.160, (6.27)].
92. Moment of inertia [Sea1, §9-6, §9-7].
Remark. Some earlier editions of this book use summation instead of integration. I never liked this practice. After I read [Sea1, §9-6, §9-7], I realized that the presentation of the 6th edition using integration is much better than [Hall, §12-5] or anything about moment of inertia existing on the web.
93. The energy values of the bound states of the hydrogen atom are discrete. [Coh, chap. VII, §C.3.c] gives a detailed explanation. [Mer, p.266, l.-14 & l.-12] make a few improvements.
94. The Doppler effect for electromagnetic waves
[Rob, p.21] discusses the Doppler effect from the viewpoint of time dilation. [Matv, p.33, l.1-l.36] uses the tensor approach. The tensor approach automatically shows that all the formulas on relativity are covariant with Lorentz transformations, and effectively leads to a quick answer. It also condenses three cases (the source is moving toward the observer, away from the observer, or along a line normal to the line to the observer) into one formula [Matv, p.33, (2.62)]. However, Matveev's approach is not as insightful as Robinson's approach. For example, it is easier to see that [Matv, p.33, (2.65)] is purely a time-dilation effect from the context of [Rob, p.21] than from that of [Matv, p.33, l.14-l.36].
95. Optical properties of metals
[Hec, §4.8; Wangs, §24-3 & §25-6; Matv, p.120, §Color of bodies, Sec. 19 & Sec. 20] discuss the optical properties of metals. All the above books facilitate our understanding in some aspects, but none of their discussions is complete. We must piece together their discussions to see the entire picture.
A. Color of gold: For a chunk of gold, we can only see the reflected light [Wangs, p.423, l.-2-l.-1; Hec, p.131, l.c., l.-14-l.-11]. By [Hec, p.129, r.c., l.-12-l.-8], the gold appears reddish yellow. When the light source is on the other side of a thin foil of gold, we can only see the transmitted light, so the gold appears greenish [Wangs, p.423, l.14-l.20].
B. Some alkali metals are transparent to ultraviolet [Hec, p.129, l.-3-p.130, l.c., l.19].
Remark. For the proof of ω[p] = (Nq[e]^2/ε[0]m[e])^(1/2), see [Wangs, p.401, (24-138)].
C. A metal (σ = +∞) is an extension of a dielectric (σ = 0).
a. The dispersion equation (compare [Hec, p.71, (3.72)] with [Hec, p.129, (4.79)]).
b. For plane waves, insulators and conductors are two extreme limiting cases and have corresponding discussions [Wangs, p.387, l.10-p.388, l.-12].
96. Fiberoptics (for an introduction and the history of fiberoptics, read [Hec, §5.6]; for rigorous definitions and clear relations, read http://www.njit.edu/v2/Directory/Centers/OPSE/OPSE301/Lab14.doc ).
97. Geometrical optics [Fur, chap. 3]
A. Any problem in geometrical optics can be solved either using formulas or using graphs. The latter method not only has the visual advantage, but also can be used as a check for calculations from the former method. Example: [Jen, pp.86-87, Example 2].
a. Virtual objects
i. Illustrated in a figure: [Hec, p.155, Fig. 5.11].
ii. Described in words: An object is virtual when the rays converge toward it [Hec, p.155, r.c., l.-7-l.-6].
iii. Characterized by the object distance: s[o] < 0 [Hec, p.163, Table 5.2].
b. Virtual images
i. Illustrated in figures: [Hec, p.152, Fig. 5.5(c); p.155, Fig. 5.10].
ii. Described in words: An image is virtual when the rays diverge from it [Hec, p.155, r.c., l.-8-l.-7].
iii.
Characterized by the image distance: s[i] < 0 [Hec, p.163, Table 5.2]. C. Only after understanding the meaning of [Jen, p.55, Fig. 3J] may one understand the construction of [Hec, p.151, Fig.5.3(a)]. D. Comparing the proof of [Jen, p.56, (3n)] with the proof of [Hec, p.154, (5.8)]: Although the former proof is simpler, the latter proof is more methodological. E. If m>0, the image will be virtual [Jen, p.54, l.15]. This can be seen by [Morg, p.30, (2.11)(ii); Jen, p.49, Fig. 3E]. F. The focal plane of a lens [Hec, p.160, Fig. 5.17 (where the radius of s is determined by [p.155, (5.10)]), and Fig. 5.18]. G. Geometrical optics uses a lot of modeling. By comparing the arrangement of sections in [Jen, chap. 3] and that in [Jen, chap. 4], we see that the theory of thin lenses is parallel to the theory of refracting surfaces. The theory of spherical mirrors is a special case of the theory of lenses [Morg, p.35, l.18]. The following three theories-thin lenses, thin-lens combinations, and thick lenses- are parallel because they use the same parallel-ray method to form images (compare [Jen, p.69, Fig. 4H] with [Jen, p.80, Fig. 5B(b)]; compare [Jen, p.75, Fig. 4M] with [Jen, p.83, Fig. 5E]). Consequently, the corresponding formulas in these three theories are the same if we properly choose the reference points. For example, [Jen, p.84, (5k)] and [Morg, p.67, (5.24)] can be considered identical. A systematic approach to the problems in geometrical optics entails mastering all the above patterns. H. A system of lenses can be treated as a thick lens [Morg, p.67, l.-7-l.-1]. I. Treating mirrors as lenses a. Reflection is considered as refraction [Matv, p.163, l.4-l.14]. b. (Sign conventions) Identify [Hec, p.184, Table 5.4] with the combination of [Hec, p.154, Table 5.1] and [Hec, p.183, Table 5.2]. c. (Properties) Apply the same graphical constructions used for lenses to mirrors (e.g., identify [Jen, p.101, Fig. 6E] with [Jen, p.63, 4D]; ray 8 in [Jen, p.106, Fig. 6I] is constructed based on [Jen, p.47, Fig. 3C]), apply the same formulas for lenses to mirrors, and identify [Hec, p.185, Table 5.5] with [Hec, p.163, Table 5.3]. d. Thick mirrors [Jen, §6.5] can be considered as thick lenses. Remark. [Jen, p.107, Fig. 6J] is consistent with the convention given in [Matv, p.163, l.4-l.8], while [Jen, p.133, Fig. 8C] is consistent with the convention given in [Hec, p.252, l.c., l.-8]. In my opinion, Jenkins should have adopted the former convention as a standard and stuck to it. J. For a detailed and systematic study of the effects of stops, see [Jen, chap. 7]. K. In [Hec, §6.2], the method of ray tracing applies only to paraxial rays. That is, it is used only for the first-order approximation. Actually, in principle, the graphical method of ray tracing [Jen, chap. 8] and the matrix method [Jen, p.143] can be exact. L. [Matv. Sec. 21-Sec. 23] condense geometrical optics into 21 pages and are ready for practical application using computers. In addition, Matveev proves every statement that he presents in these sections. His rigorous reasoning and ability to organize are impressive. In contrast, [Hec, chap.5 & chap. 6] use 132 pages to discuss geometrical optics and leave many statements unproved (e.g., [Hec, (6.1)-(6.4); (6.34); (6.36)-(6.37)]). In one place, Hecht claims he has proved [Hec, (6.34)]. Actually, he uses unproven [Hec, (6.2)] to prove [Hec, (6.34)]. Thus, all he has done is state the formula [Hec, (6.2)] twice. For a detailed proof, see [Matv, p.167, (23.19)]. 
However, logic is not the only tool to facilitate our understanding. For example, the definitions of principle points in [Matv, p.166, l.-1] is not as good as the definitions given in [Hec, p.243, Fig. 6.1]. The graphical constructions given [Jen, §3.6 and §3.7] should not be deemphasized for they have an visual advantage. Matrix methods are a useful tool only for computer calculations. A tool is used when needed. If we use methods to discuss topics other than computer calculations, the tool will become a burden rather than an advantage. In view of [Fur, chap. 3], the theory of geometrical optics are indeed made more organized and compact by the matrix method. All the necessary information on rays is essentially contained in a single matrix. However, the theory's formulation given in [Fur, §3.1-§3.4] is not as well prepared for application [Fur, §3.5] as that given in [Hec, §5.1-§5.6] for application [Hec, §5.7]. M. [Hec, p.154, (5.8)] is derived from Fermat's principle, while [Fur, p.145, (3.26)] is derived from Snell's law. Although both approaches consider a bundle of rays, the latter approach is more natural and straightforward. 98. The essence of the theory of wave packets can be summarized in three stages: A. Superposition of two plane waves [Born, p.19, Fig. 1.5]. B. Superposition of oscillations with equidistant frequencies [Matv, p.97, Fig. 53]. C. Group velocity: the velocity of the maximum of the wave packet [Coh, p.30, (C-31); Fig. 6]. 99. Maxwell equations A. In vacuum or microscopic fields: [Fur, p.44, l.6-l.9]. B. In matter (macroscopic fields) A. Average over a volume that is macroscopically small but microscopically large: [Fur, p.60, l.1-l.4]. B. In terms of (controllable) free charges and free current densities: [Fur, p.65, l.14-l.17]. C. Suppose r[f] = 0 and J[f] = 0. In terms of the material parameters: [Fur, p.68, l.3-l.6]. 100. Foundations of geometrical optics [Born, chap. III] A. Geometrical light rays [Born, p.114, l.-18-l.-17]. B. I = |<S>| [Wangs, p.357, (21-58); Matv, p.61, (7.12)]. C. Proofs of the eikonal equation: The proof given in [Born, p.112, l.1-l.9] uses the first-order approximation, while the proof given in [Born p.112, l.16-p.113, l.2] uses the second-order approximation. Therefore, the latter proof is more refined. D. Proofs of the law of refraction The proof given in [Hec, §4.4.1] is restricted to plane waves and planar interfaces. The method lacks potential to be applied for generalization. The proof given in [Hec, p.107, l.c., l.1-l.14] uses Fermat's principle which is artificial; The proof given in [Jack, §7.3] uses the basis of a vector space which is also artificial. In addition, the way the boundary conditions are used in [Jack, (7.34)] differs from the way they are used in [Born, p.5, Fig. 1.2], which complicates matters. The proof given in [Born, §3.2.2] is the most natural proof because it is directly derived from the boundary conditions. The method meets the requirement for axiomatization: any theorem in electromagnetism should be able to be derived from Maxwell's equations. This enables us to trace the theorem to its source. In addition, the derivation of [Born, p.125, (17)] is the same as that of [Born, p.6, (23)], which is good for unification. Furthermore, the proof given in [Born, §3.2.2] applies to the general case [Born, p.125, l.-11-l.-7]. It is the only proof that establishes a strong link to electromagnetism. I wonder why other textbooks leave out such an insightful proof. E. 
[Born, p.126, (23)] represents a normal congruence; [Born, p.126, (22)] represents a normal rectilinear congruence. For the proof of the former statement, see [Wea1, vol. I, §105]. [Sne, p.21, Theorem 5] provides a proof of the identity given in [Wea1, vol. I, p.202, l.-12].
Remark. In the early twentieth century, the textbooks of optics discussed the topics in differential geometry and the textbooks of differential geometry discussed the topics in optics. Each subject solidified the other's foundation and stimulated the other's growth. Now in the twenty-first century, optics and differential geometry have become mutually exclusive subjects. The textbooks in optics are devoid of questions about differential geometry and the textbooks in differential geometry are devoid of questions about optics.
101. The integral given in [Jack, p.42, (1.58)] is equal to 4π [Jack, p.42, l.18-l.19].
Proof. ∇f(r,θ,φ) = (∂f/∂r)r̂ + (1/r)(∂f/∂θ)θ̂ + (1/(r sin θ))(∂f/∂φ)φ̂ [Wangs, p.33, (1-102)].
∇(1/|r+n|)·(r/r) = ∇(1/|r+n|)·r̂ = ∂(1/|r+n|)/∂r.
∫[0,+∞] ∂(1/|r+n|)/∂r dr = [1/|r+n|] evaluated from r = 0 to ∞ = -1.
102. Vibration of membranes
Tension per unit width = Constant T; Vertical deflection = w(x,y,t). Consider the displacement of an element of area dxdy at time t.
x-direction: width = dx; left slope = tan θ ≈ θ; right slope = θ + dθ. θ = ∂w/∂x ⇒ dθ = [∂(∂w/∂x)/∂x]dx = (∂²w/∂x²)dx.
Vertical component of the tension in the x-direction: left end: -T(dy) tan θ ≈ -T(dy)θ; right end: T(dy)(θ + dθ).
Net vertical force from x-direction tension is T(dy)dθ = T(∂²w/∂x²)dxdy. Similarly, the net vertical force from y-direction tension is T(∂²w/∂y²)dxdy. Total vertical force on dxdy is T(∂²w/∂x² + ∂²w/∂y²)dxdy.
Let ρ be the mass per unit area. Then (ρdxdy)(∂²w/∂t²) = T(∂²w/∂x² + ∂²w/∂y²)dxdy. Therefore, ρ(∂²w/∂t²) = T(∂²w/∂x² + ∂²w/∂y²).
[The accompanying figure, which shows the element viewed along the positive y-direction, is omitted here.]
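Remark. As a quick numerical cross-check of the membrane equation just derived, ρ(∂²w/∂t²) = T(∂²w/∂x² + ∂²w/∂y²), the following Python sketch (my own illustration, not taken from any of the cited texts; the grid size, time step and material constants are arbitrary) integrates one vibrational mode on the unit square with a simple finite-difference scheme and compares the center deflection with the exact cosine time dependence of that mode.

import numpy as np

# Finite-difference check of  rho * w_tt = T * (w_xx + w_yy)  on the unit square,
# clamped edges, starting from the (1,1) mode with zero initial velocity.
T, rho = 1.0, 1.0                        # arbitrary tension per unit width and areal density
c2 = T / rho
N = 51
h = 1.0 / (N - 1)
dt = 0.4 * h / np.sqrt(2.0 * c2)         # safely below the CFL stability limit

x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(np.pi * X) * np.sin(np.pi * Y)

def laplacian(u):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h**2
    return lap

w_prev = w + 0.5 * dt**2 * c2 * laplacian(w)   # second-order start for zero initial velocity
steps = 200
for _ in range(steps):                          # leapfrog time stepping
    w_next = 2.0 * w - w_prev + dt**2 * c2 * laplacian(w)
    w_next[0, :] = w_next[-1, :] = w_next[:, 0] = w_next[:, -1] = 0.0
    w_prev, w = w, w_next

# Exact solution of this mode: w = cos(omega t) sin(pi x) sin(pi y), omega = pi sqrt(2 c2).
t = steps * dt
print("numerical centre deflection:", w[N // 2, N // 2])
print("exact     centre deflection:", np.cos(np.pi * np.sqrt(2.0 * c2) * t))

The two printed numbers should agree closely; refining the grid and the time step tightens the match.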
{"url":"https://members.tripod.com/li_chungwang0/physics/good-illustrations-me.html","timestamp":"2024-11-07T08:00:44Z","content_type":"text/html","content_length":"173730","record_id":"<urn:uuid:4d6812d5-4a39-4c2c-91be-b0d81a0300fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00061.warc.gz"}
Monday Math: The Fundamental Theorem of Arithmetic In my last math post I casually mentioned that the sum of the reciprocals of the primes diverges. That is \frac{1}{2}+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{11}+\frac{1}{13}+ \dots=\infty That seems like a hard thing to prove. Certainly none of the traditional convergence tests from Calculus II will get the job done. The problem is how to “get at” the primes. Plainly we need to do something clever. As it happens, the proof is a bit tricky. It has a lot of ingredients, too. On the other hand, each one of those ingredients is pretty interesting in its own right. So how about a series of posts building up to the proof of the equation above? First ingredient: The Fundamental Theorem of Arithmetic! The fundamental theorem of arithmetic says that every positive integer greater than one can be written as a product of prime numbers in exactly one way. (We should add the caveat that two factorizations differing only by the order in which we write the factors are considered to be identical). Of course, if the integer is itself prime then we consider it to be trivially the product of primes. That way we do not have to add annoying extra clauses to the statement of the theorem to account for the primes themselves. The theorem thus has two parts: Existence and Uniqueness. The first part is easy, the second part only slightly more challenging. Let's do existence. If the theorem is false then there is a smallest integer $x$ that cannot be written as the product of prime numbers. It follows that $x$ is not prime, and is therefore composite. That means we can factor it into the product of two smaller integers, say x = r \cdot s But since $r$ and $s$ are smaller than $x$, we know they can be factored into primes. But that would imply that $x$ can be so factored as well, and we have reached a contradiction. So every positive integer greater than one can be expressed as the product of prime numbers. What about uniqueness? Well, that requires a little fact: If a prime number divides the product of two other numbers, then one of those two other numbers is a multiple of the prime. To make things more concrete, if, say, seven divides the product of two numbers, then one of the two numbers is already a multiple of seven. That particular statement is just tricky enough to prove that I would rather not bother. Click here if you want to see how the trick is done. But I think things become very clear just by considering what goes wrong when you are not dealing with a prime number. Note, for example, that $$6 \ | \ (8 \times 9),$$ where that vertical line means “divides.” In other words, seventy-two is a multiple of six. But neither eight nor nine is a multiple of six on its own! This is possible because the number six is made by multiplying together a two and a three. The eight brings the two to the party while the nine brings the three. The ingredients for making six can be spread across the two factors, and that is why neither one of them had to be a multiple of six by itself. That is precisely what you cannot do with a prime. There are no “ingredients” for making a prime number. It just is what it is. Now we can establish uniqueness. Suppose we had two different prime factorizations for the same number. Then we would have an equation like this: p_1^{a_1}p_2^{a_2} \dots p_k^{a_k} = q_1^{b_1}q_2^{b_2} \dots q_\ell^{b_\ell} where the $p$'s and $q$'s are primes. But now we see that $p_1$ divides the left-hand side. 
That means it must divide the right-hand side as well. But that implies one of the factors on the right must be a multiple of $p_1$. Since all the $q$'s are themselves prime, we see that $p_1$ must actually be equal to one of the $q$'s. Now we can divide $p_1$ from both sides and repeat the process anew. In this way we will cancel out all the primes and find that the two factorizations could only have differed by the ordering of the factors. Nice!

Let's kick it up a notch. Suppose we consider a set a bit more complicated than the integers. For example, we could define the set

\mathbb{Z}(\sqrt{-5})=\left\{a+b\sqrt{-5} \mid a, b \in \mathbb{Z} \right\}

The fancy Z is the universally accepted symbol for the integers. The set on the right asks us to consider all the symbols of the form $a+b\sqrt{-5}$, where $a$ and $b$ are integers. The more familiar integers are elements of this set. Just let $b$ equal zero. We can add and multiply within this set by following the normal algebraic rules for such things. This implies that our new set, just like the integers, is a "ring", meaning simply that it is an environment in which you can add and multiply in a way that satisfies the normal axioms for those operations. (Technically we should say these sets are "commutative rings", which means that multiplication is commutative. We also have properties like associativity, and a distributive property that relates addition to multiplication. Those are the sorts of things I mean when I talk about the normal axioms.)

So we can ask the question, does unique factorization still hold when we enlarge the integers in this way? The answer is no! Consider:

6 = 2 \times 3 = \left(1+\sqrt{-5}\right)\left(1-\sqrt{-5}\right)

I have obscured a fair number of technical details here. It can be shown that those factors on the right are "irreducible", meaning roughly that they cannot be expressed as the product of two other elements of the ring in a nontrivial way. In rings more general than the integers there is an important distinction between "irreducible" and "prime". Perhaps we will explore these differences in a future post. The bottom line is that all of the normal questions you might ask of the integers concerning primes, factorization and divisibility can also be asked in any commutative ring. In these more general environments you cannot take prime factorization for granted. But once you start investigating these sorts of questions you enter the realm of algebraic number theory, and that is definitely a different post!
Ah, the failure of unique factorization for a general complex-valued ring... Wasn't this originally discovered through the failure of a proof for Fermat's Last Theorem?

Kummer tried to prove Fermat's Last Theorem by working with an extension of the integers within which he could factor the Fermat equation. Specifically, if we consider the case of FLT with exponent p, he adjoined a p-th root of unity to the integers, in much the same way as I adjoined the square root of -5 to the integers above. His proof was correct so long as his extended ring was a unique factorization domain. Sadly, that is not true in general. On the other hand, it is true for primes smaller than 23, so Kummer's efforts were not completely wasted. I don't know if that was the actual discovery of non-unique factorization, but it certainly gave new relevance to the idea.

What's purple and commutes? An abelian grape!

Also: Dammit, doesn't render properly in Opera! Off to play with settings, I suppose...

One neat thing about the Fundamental Theorem of Arithmetic is that it follows rather quickly that all non-square integers have irrational square roots. Let a be a nonsquare integer, and let's assume that m/n is a square root of a. Then m^2 = an^2. By the Fundamental Theorem, m, n, and a are factorable into powers of primes. The powers of primes in m^2 must be even (since m^2 = m*m, every prime that divides m appears in m^2 with twice the exponent it has in m), and similarly the powers of primes in n^2 must be even. Not all powers of primes in a can be even, for if they were, then a would be square. Therefore, there must exist a prime p that divides a with an odd power b. Therefore, p must divide an^2 with an odd power. By assumption m^2 = an^2, and the uniqueness of the Fundamental Theorem implies that p divides m^2 and an^2 to the same power. But p is an even-powered divisor of m^2 and an odd-powered divisor of an^2, which is a contradiction. Therefore, a cannot have a rational square root m/n.

Dammit, doesn't render properly in Opera!

Huh. So it doesn't. The script was changed to render the images as SVG rather than PNG, but it looks like Opera has no problem with SVG in and of itself. Indeed, I just copied and pasted the call to appspot.com for one of the equations (that is, this string: into my Opera address bar, and it does render OK in Opera. So it looks like the problem is with the underlying Javascript. I was having problems before with some characters being rendered ridiculously large (in Firefox), but that is no longer as bad as it was (although some lines are still larger than others). Browser bug or server bug? Chromium actually works the best of all -- characters are rendered darker and heavier than in Firefox, and they are all of them a reasonable size.

Geoffrey Landis and Jonathan Vos Post (personal communication) show that this constant equals ((P2)^2 + P4)/2, where P2 and P4 are constants in A085548 and A085964, respectively. A117543 Decimal expansion of the sum of the reciprocals of squared semiprimes. Semiprimes (or biprimes) being products of two primes, i.e. numbers of the form p*q where p and q are primes, not necessarily distinct.
{4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34, 35, 38, 39, 46, 49, 51, 55, 57, 58, 62, 65, 69, 74, 77, 82, 85, 86, 87, 91, 93, 94, 95, 106, 111, 115, 118, 119, 121, 122, 123, 129, 133, 134, 141, 142, 143, 145, 146, 155, 158, 159, 161, 166, 169, 177, 178, 183, 185, 187, ...} The graph of this sequence of semiprimes appears to be a straight line with slope 4. However, the asymptotic formula shows that the linearity is an illusion and in fact a(n)/n ~ log n / log log n goes to infinity. See also the graph of A066265 = number of semiprimes < 10^n. Followup: Looks like Firefox has some sort of browser bug. I cleared my cache and refreshed, and the page once again has some characters ridiculously large. Some, but not all, instances of x, p1, and b, look like they are 72 pts or so. ... and immediately after posting that last, everything is the right size, and the same size. Including the stuff that was large but not as large as 72 pts before. Damn intermittent bugs. Owlmirror - Yes, the inline math does not format quite right. That's why as much as possible I tried to stick with display math. I'm afraid I have never heard of Opera, at least not in this context. Is it another browser? I'm afraid I have never heard of Opera, at least not in this context. Is it another browser? Last of the Big 5 (IE, FF, Chrome, Safari, Opera) If anybody ever wants one or two hundred tabs open at the same time, then Opera is the way to go, hands down. Just sayin! Thanks all for putting up with the joys of accidental beta testing. :) Math displays on Opera now. On Opera, my code was creating image tags with an SVG src=, which Safari and Chrome allow (and actually need for scaling to work right). Instead, it needed to make object tags like it does for Firefox, and now it does. The Firefox size problem is a fun one -- I didn't run into it in my testing, but Firefox sometimes gets SVG sizes wrong according to http://e.metaclarity.org/52/cross-browser-svg-issues/ , so I'll just have to include explicit width and height in the markup. The relevant bit is: "Firefox does fine with the -version most of the time and obtains its size correctly (most of the time). Scaling works fine (CTRL+Mousewheel) in that case. I say âmost of the timeâ because sometimes, correct object dimensions are only applies after reloading the page (when this bug happens, some SVGs will be vastly bigger than they should be, and others a lot smaller)." (emphasis mine) I also noticed the problem with vertical alignment of inline math -- I don't know if I can get that perfect but I can definitely make it better than it is now. Randall -- Once again, thanks for the help. Tweaked the script some more. Works for me in the latest versions of everything and vertical alignment is closer to right. Made lots of changes, so if it doesn't work for somebody, let me know what browser and version.
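Returning from the comment thread to the headline claim of the post: here is a small Python sketch (my own addition, not part of the original article) that makes the slow divergence of the sum of prime reciprocals visible. It sieves the primes up to N and compares the partial sum with ln ln N plus the Mertens constant, which I quote from memory as roughly 0.2615.

import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, flag in enumerate(sieve) if flag]

for n in (10 ** 3, 10 ** 5, 10 ** 7):
    s = sum(1.0 / p for p in primes_up_to(n))
    # Mertens' theorem: sum_{p <= n} 1/p ~ ln ln n + M with M ~ 0.2615, so the
    # sum diverges, but only as slowly as log log n.
    print(n, round(s, 4), round(math.log(math.log(n)) + 0.2615, 4))

The point is not the constant but the growth rate: multiplying N by ten thousand adds only a sliver to the sum, yet the sum never stops growing.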
{"url":"https://www.scienceblogs.com/evolutionblog/2010/07/26/monday-math-the-fundamental-th","timestamp":"2024-11-14T10:55:08Z","content_type":"text/html","content_length":"68796","record_id":"<urn:uuid:665c75e5-8739-459e-8c02-9208ad7dce12>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00507.warc.gz"}
Analytic diffuse shading
No big news yet, I've been studying analytic methods for shading and occlusion a little bit. I can't report anything really now, partly because I'm not yet satisfied with what I've done. But I'd like to share this link: , differential element to finite area is what you'll need. Also, you might find this useful, if you're starting to play around with spherical harmonics; it's a small snippet of what I've been doing, with Mathematica:

(* analytic solution for real spherical harmonics test *)
shIndices[level_] := (Range[-#1, #1] & ) /@ Range[0, level]
shGetNormFn[l_, m_] := Sqrt[((2*l + 1)*(l - m)!)/(4*Pi*(l + m)!)]
shGetFn[l_, m_] := Piecewise[{{shGetNormFn[l, 0]*LegendreP[l, 0, Cos[\[Theta]]], m == 0},
  {Sqrt[2]*shGetNormFn[l, m]*Cos[m*\[Phi]]*LegendreP[l, m, Cos[\[Theta]]], m > 0},
  {Sqrt[2]*shGetNormFn[l, -m]*Sin[(-m)*\[Phi]]*LegendreP[l, -m, Cos[\[Theta]]], m < 0}}]
shFunctions[level_] := Function[{list, currlevel}, (shGetFn[currlevel - 1, #1] & ) /@ list], shIndices[level]]
shGenCoeffs[shfns_, fn_] := Map[Integrate[#1*fn[\[Theta], \[Phi]]*Sin[\[Theta]], {\[Theta], 0, Pi}, {\[Phi], 0, 2*Pi}] & , shfns, {2}]
shReconstruct[shfns_, shcoeffs_] := Simplify[Plus @@ (Flatten[shcoeffs]*Flatten[shfns]), Assumptions -> {Element[\[Theta], Reals], Element[\[Phi], Reals], \[Theta] >= 0, \[Phi] >= 0, \[Theta] <= Pi, \[Phi] <= 2*Pi}]
shIsZonal[shcoeffs_, level_] := Plus @@ (Flatten[shIndices[level]] Flatten[shcoeffs]) == 0
shGetSymConvolveNorm[level_] := Function[{list, currlevel}, Table[Sqrt[(4 \[Pi])/(2 currlevel + 1)], {Length[list]}]],
shGetSymCoeffs[shcoeffs_] := Table[#1[[Ceiling[Length[#1]/2]]], {Length[#1]}] & /@ shcoeffs
shSymConvolve[shcoeffs_, shsymkerncoeffs_, level_] := (Check[shIsZonal[shsymkerncoeffs], err]; shGetSymConvolveNorm[level] shcoeffs shGetSymCoeffs[shsymkerncoeffs])

(* tests.... *)
testnumlevels = 2
testfn[a_, b_] := Cos[a]^10*UnitStep[Cos[a]] (*symmetric on the z axis*)
(*testfn[a_,b_]:= (a/Pi)^4*)
shfns = shFunctions[testnumlevels]
testfncoeffs = shGenCoeffs[shfns, testfn]
shIsZonal[testfncoeffs, testnumlevels]
testfnrec = {\[Theta], \[Phi]} \[Function] Evaluate[shReconstruct[shfns, testfncoeffs]]
SphericalPlot3D[{testfn[\[Theta], \[Phi]], testfnrec[\[Theta], \[Phi]]}, {\[Theta], 0, Pi}, {\[Phi], 0, 2 Pi}, Mesh -> False, PlotRange -> Full]
testfn2[a_, b_] := UnitStep[Cos[a] Sin[b]] (*asymmetric*)
testfn2coeffs = shGenCoeffs[shfns, testfn2]
testfn3coeffs = shSymConvolve[testfn2coeffs, testfncoeffs, testnumlevels]
testfn2rec = {\[Theta], \[Phi]} \[Function] Evaluate[shReconstruct[shfns, testfn2coeffs]]
testfn3rec = {\[Theta], \[Phi]} \[Function] Evaluate[shReconstruct[shfns, testfn3coeffs]]
SphericalPlot3D[{testfn2[\[Theta], \[Phi]], (*testfn2rec[\[Theta],\[Phi]],*) testfn3rec[\[Theta], \[Phi]]}, {\[Theta], 0, Pi}, {\[Phi], 0, 2 Pi}, Mesh -> False, PlotRange -> Full]

6 comments:
I'm playing around with spot lights but I don't have Mathematica; I use the free open-source Maxima. Can you explain what test suite you set up?
uhh I meant spherical harmonics - I wrote spot lights because my coworker said that as I was typing the comment :P
Wolfram offers a 30 day trial version of Mathematica that you can use. The functions I posted are just the bare minimum for SH: I wrote the projection, the convolution and the reconstruction functions, but done analytically.
I just got Mathematica. Unfortunately, the code doesn't run because of syntax errors. It also lost all of its formatting, so deciphering it is a nightmare to a new Mathematica user.
I'm going to play with it tonight and try and get it working though. The code should have no errors, probably pasting it into or from the blog added a few carriage returns at the end of the lines. Check the lines you've pasted... Also Mathematica can convert from one visualization format to another, just select the text you've entered and on the right mouse button menu you have the options to convert to a better format. This comment has been removed by a blog administrator.
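For anyone following along without Mathematica (as one commenter above mentions), here is a rough Python counterpart to the snippet, a sketch of my own rather than a translation of the author's exact code: real spherical harmonics built from associated Legendre functions, with the projection integrals done by brute-force quadrature instead of analytically. The helper names mirror the Mathematica version but are otherwise assumed, and with only two bands the reconstruction is necessarily just a smooth approximation of the clamped cosine lobe.

import numpy as np
from math import factorial, pi
from scipy.special import lpmv

def sh_norm(l, m):
    # normalization factor sqrt((2l+1)(l-m)! / (4 pi (l+m)!)), same structure as the post
    return np.sqrt((2 * l + 1) * factorial(l - m) / (4 * pi * factorial(l + m)))

def sh_real(l, m, theta, phi):
    """Real spherical harmonic built from the associated Legendre function."""
    if m == 0:
        return sh_norm(l, 0) * lpmv(0, l, np.cos(theta))
    if m > 0:
        return np.sqrt(2) * sh_norm(l, m) * np.cos(m * phi) * lpmv(m, l, np.cos(theta))
    return np.sqrt(2) * sh_norm(l, -m) * np.sin(-m * phi) * lpmv(-m, l, np.cos(theta))

def project(f, lmax, n=200):
    """Brute-force midpoint quadrature of the projection integrals over the sphere."""
    theta = (np.arange(n) + 0.5) * pi / n
    phi = (np.arange(2 * n) + 0.5) * pi / n
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    dA = (pi / n) * (pi / n) * np.sin(TH)          # sin(theta) dtheta dphi
    return {(l, m): np.sum(f(TH, PH) * sh_real(l, m, TH, PH) * dA)
            for l in range(lmax + 1) for m in range(-l, l + 1)}

def reconstruct(coeffs, theta, phi):
    return sum(c * sh_real(l, m, theta, phi) for (l, m), c in coeffs.items())

# Same zonal test function as in the post: cos(theta)^10 clamped to the upper hemisphere.
f = lambda th, ph: np.cos(th) ** 10 * (np.cos(th) > 0)
coeffs = project(f, lmax=2)
for th in (0.2, 0.7, 1.2):
    # low order, so the reconstruction only roughly tracks the original function
    print(round(f(np.array(th), 0.0).item(), 4), round(float(reconstruct(coeffs, th, 0.0)), 4))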
{"url":"https://c0de517e.blogspot.com/2009/07/analytic-diffuse-shading.html?showComment=1249972424847","timestamp":"2024-11-07T06:16:38Z","content_type":"text/html","content_length":"95374","record_id":"<urn:uuid:f239fe1c-432f-4a37-b521-b974fc45beae>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00869.warc.gz"}
Math Multiplication Tables Flash Cards Math Multiplication Tables Flash Cards – Are you the mother or father of your kid? When you are, there is a pretty good possibility that you may possibly be curious about making your child for preschool or kindergarten. If you are, you may be enthusiastic about acquiring a number of the “coolest,” top quality educational playthings for the young child. While several of these games are nice instructional, they can get pretty pricey. If you are searching to get a cheap way to instruct your kid at home, it is advisable to take time to analyze Math Multiplication Tables Flash Cards. Why you need Math Multiplication Tables Flash Cards Flash card units, while you likely already know, are available from numerous stores. For instance, flash cards can be found equally on and offline from numerous merchants; merchants including publication retailers, toy retailers, and classic department shops. Also, when you probably know, flash card packages come in many different styles. If you are the father or mother of any young child, you should search for Math Multiplication Tables Flash Cards that are designed for toddlers, because they will demonstrate one of the most useful. These types of units are frequently available in groups marked colors and styles, numbers, very first phrases, and Getting Math Multiplication Tables Flash Cards With regards to acquiring flash cards for your personal kid, you really should think of getting numerous packages. Numerous preschoolers get bored with playing using the same games. Having different groups of Math Multiplication Tables Flash Cards readily available could help to reduce the feeling of boredom linked to flash cards. You can even want to take into account buying a few a similar sets of flash cards. Flash cards can often be flimsy in general, which makes it not too difficult to allow them to present signs of deterioration.
{"url":"https://www.printablemultiplication.com/math-multiplication-tables-flash-cards/","timestamp":"2024-11-06T15:23:16Z","content_type":"text/html","content_length":"59733","record_id":"<urn:uuid:767453a8-b56d-4f00-a6f6-a21e0c414367>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00261.warc.gz"}
The Godfather of Computing - Charles Babbage "Linux Gazette...making Linux just a little more fun!" The Godfather of Computing - Charles Babbage In the beginning... Some of the most often used cliches about writing and telling stories turn out to be good advice as well. A writer is told to "write what she knows" and a storyteller is to "begin at the beginning." And so, I hope to focus on "our" beginnings and the things that "we" know; the beginnings of network and hardware engineers, computer scientists, system administrators and others among a host of geeks, hackers, and phreaks that exist in our world. The original idea came out of an interchange between Chris Campbell and I about an articled titled Adventures in Babysitting that he had written for Binary Freedom. The article described Chris' foray among our would-be next generation at a local 2600 meeting and featured his utter disbelief at their lack of interest or understanding of their technology history. Names like Admiral Grace Hopper and Ken Thompson didn't come close to ringing a bell. You can tell how disheartening an experience it was for him. These people were his heroes after all (mine as well) and they should be looked up to and seen as the mentors that they are. In order to showcase and explore them, I proposed this column. So, to begin somewhere near the beginning, let's investigate the Godfather of Computing, Charles Babbage. A Beginning of Another Sort... Charles Babbage is variously called the Father, Grandfather and Patron Saint of Computing. To many that care, he began it all. I prefer to think of him as the Godfather of Computing and to see why is all part of his story. Babbage was born into a wealthy, but undistinguished, family in Devonshire, England, in 1791. While still a young boy, Babbage was concerned with questions of "how" over those of "why." The expression of this concern saw the boy dismantling his fair share of toys and mechanical objects around his family's home. A "Personality Disorder" Explored... My father-in-law, an engineer, likes to say that engineering is the expression of a personality disorder. The way he sees it, all engineers think and see the world such an odd, but similar, way that it can only be attributed to some sort of mental disorder. When they see something new, they want to pull it apart. When they hear of a problem, whether in their realm of control or not, they will offer "the most efficient" solution. In general, the world is seen as a broken puzzle that only some good solid, and sustained, engineering will fix. I can see his point. Besides, breaking Aunt Edna's antique clock just to see how it works can be considered rude at the very least. On top of which, "normal" social graces are generally thrown out the window, placing the final nail in the coffin of diagnosis. The funny thing is that the expression of this "disorder" can be fingered early in life. One can watch for the early warning signs. Children that take apart watches or have a penchant for building elaborate structures from blocks may just be engineers in their pupae stage. By all accounts, Babbage definitely was afflicted by the time of his boyhood. His tinkering with things, his dismantling of gadgets, and his inquisitiveness as to how things worked are all sure signs. While the draw of engineering can be sublimated if caught early and treated with care, Charles had no such luck. His fate was sealed when he stumbled upon a copy of the Young Mathematician's Guide in the school library. 
From that point on, Babbage devoted himself to the pursuit of rational thought and scientific After boarding school, Babbage headed to Cambridge to attend Trinity College. While at Trinity, the precocious student tended to test the patience and abilities of his instructors, a manner that may be familiar to a few among our readers. One rebellious episode saw Babbage and his Analytical Society taking on the very way math was done in England. At the time, most of England preferred to do complex mathematics using Sir Isaac Newton's "dot notation." The choice of notation was more out of civic pride than actual utilitarianism. Babbage considered this an affront to the way things should be. It went against efficiency and clarity and was a general affront to Babbage's rational senses. He favored instead the scientific notation perfected by Leibniz and used throughout Europe. The Analytical Society, which Babbage helped found, championed the fight to switch to scientific calculation by translating Lacroix's Examples to the Differential and Analytical Calculus from its original French. This achievement is considered one of the main events that helped bring modern mathematics to England. The Beginnings of an Idea... Though stories about the first notion of Babbage's calculating machine vary, they all seem to focus on Babbage's unwillingness to suffer inefficiency and undue complexity. It seems that Babbage was reviewing some of the many "look-up" tables that were used to aid in calculating complex equations in his day. The number of errors that were contained therein quickly exasperated him and his partners. Since the tables were generally copied by hand or transcribed to plates for printing, it was inevitable that errors would get introduced into the tables during the process. Those errors then just percolated through all the calculations that they were used to perform. One error made hundreds of years ago could potentially misroute ships or hurt financial projections. Babbage is said to have complained to his colleague that he wished these calculations could be carried out by steam. In that simple complaint lies the beginning of the first programmable mechanical calculator. It would later see life as the Difference Engine and still later as plans for the much more ambitious, and versatile, Analytical Engine. It was 1820. Calculating Machines... Babbage's first attempt at a calculating machine took the form of a small six-wheeled model that took advantage of number differences to aid in complex calculations. The machine, dubbed the Difference Engine, was powerful and elegant in its simplicity. Babbage realized that any process that could be distilled into a repeatable algorithm probably could be mechanized. It's entirely likely that he was inspired to this line of reasoning through his fascination with automata at an early age. Automata were mechanical creations and figurines that imitated life in the form of animals, ballerinas and musicians and such. By following complex, but repeatable, mechanical tricks, some automata were able to seem extraordinarily lifelike. It was this controlled, and nearly invisible, complexity that interested Babbage. Babbage's table problem was similar to that of the automata. While fixing the errors in copying tables was a complex problem, he realized that embracing the complexity and wrapping it in elegant mechanics was a likely solution. 
Babbage decided that by using the method of differences, he could create a calculating machine that would aid in these complex calculations. This is how it worked.

Method of Differences...

First, take a set of consecutive numbers and apply a fixed function to each. For the sake of ease, let's use the squares of the starting numbers. Then successively look at the differences between the results until you arrive at a common number. It is then possible to work the process in reverse, using only addition (something that machines can easily be engineered to do), to fill in the answer to the function for successive starting numbers in the table. The only requirement is that you begin with a certain number of "known" starting values that will, following the process, eventually come to a form of stasis.

For our example we will use 1, 2, 3, and 4 as our starting points. These numbers will form our x column. The function column, f(x), is then determined by applying the chosen function, squaring in this case, to each number of the x column. This gives 1, 4, 9, and 16 in order. For the next column, we find the differences between each f(x), giving us 3, the difference between 1 and 4; 5, the difference between 4 and 9; and 7, the difference between 9 and 16. We line these numbers up in a column, delta 1, positioned vertically, for ease of calculation, about halfway between the two numbers in the preceding column. Next we calculate the differences between the numbers in delta 1. The answers, placed in a fourth column, delta 2, are 2, the difference between 3 and 5, and 2, the difference between 5 and 7. We have now reached a stasis point where the differences are the same. Once we have reached this point we can work our way backwards and fill in the table. But first, the starting table looks something like this (here each difference is written on the row of the lower of the two numbers it came from):

x    f(x)    delta 1    delta 2
1      1
2      4        3
3      9        5          2
4     16        7          2

Now we just work our way backwards on the table to fill in the values of the function for new values of x. First we can check our work. Starting at the top value in the delta 2 column (2), we can add it to the top value in the delta 1 column (3) and should get the next value in line in the delta 1 column. If you don't get 5, check your addition. If you do get the second value in delta 1, then you did your calculations for those two rows correctly and you can move on (see, it's self-checking). Now take the value at the top of delta 1 and add it to the top of f(x). The result is the value of the function applied to the next value of x in the table. You can carry this on for any value of x as long as you know the values of the function for a few numbers before x, and you only have to use addition to fill in the table after that point. Here is the table with the values for x = 5 added, to show how it works for "new" table additions:

x    f(x)    delta 1    delta 2
1      1
2      4        3
3      9        5          2
4     16        7          2
5     25        9          2

As you can see, it becomes very easy to add new values to the table. Working backwards from delta 2, two (2) plus delta 1's value of seven (7) yields nine (9) for delta 1, which added to sixteen (16) in turn yields twenty-five (25) for the function of x, or f(x). Pretty straightforward. So much so, in fact, that it can be carried out mechanically. Herein was Babbage's genius. He understood, perhaps innately, before any objective proof existed, that complex calculations could be carried out by machine.
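For readers who want to see the bookkeeping spelled out, here is a minimal Python sketch (not part of the original article; the function names are only illustrative). It builds the difference table for f(x) = x² from the four known values and then extends it using only addition, just as the Difference Engine was meant to do.

def difference_table(values):
    """Build successive difference columns until a column is constant."""
    table = [list(values)]
    while len(set(table[-1])) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return table

def extend(table, steps):
    """Extend the table by 'steps' rows using only addition."""
    results = []
    for _ in range(steps):
        # Work backwards: add the constant difference up the columns.
        for level in range(len(table) - 2, -1, -1):
            table[level].append(table[level][-1] + table[level + 1][-1])
        results.append(table[0][-1])
    return results

known = [1, 4, 9, 16]                 # f(x) = x^2 for x = 1..4
table = difference_table(known)       # [[1, 4, 9, 16], [3, 5, 7], [2, 2]]
print(extend(table, 3))               # [25, 36, 49] -> f(5), f(6), f(7)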
In order to avoid transcription errors when users of the machine copied the results of the calculation, Babbage's goal was to create a printer of sorts that would copy out the results by itself. The methods that Babbage devised would successfully skirt the sources of table errors that so infuriated the inventive Babbage. It was this understanding, this internal realization of the "correctness" of his solution, that would drive him in the pursuit of the ultimate manifestation of his ideas until the day he died. The Analytical Engine... Babbage's early prototype of the Difference Engine was met with great public excitement. He became the hit of London's social circle and it was often the mark of a party's success or failure as to whether Babbage had accepted an invitation to attend. This prototype also brought him some initial funding, to the tune of 15,000 to 17,000 pounds (accounts vary), from the British government. This money was to be put into the development of a fully functional Difference Engine and, later, a more complex calculating machine dubbed the Analytical Engine. Babbage had been able to prove his ideas and gained general acceptance of his theories. His major problem with creating a version beyond his proof-of-concept prototype for the Difference Engine was his constant learning and tinkering. As Babbage worked on the project he was constantly discovering more efficient ways to accomplish his goals and overcome the problems with precision machining that hampered his progress. It is said that as soon as new plans had left his shop for the machinists, he had already come up with a revision of the previous idea. This constant tinkering would be Babbage's undoing and would defeat the progress of nearly every project he undertook. It was as if his mind were so active, that it couldn't slow down long enough to take a snapshot of an idea from which he could work to physical completion. Babbage never completed a full Difference or Analytical Engine. He died in his London home to a cacophony of street musicians (a group that Babbage sought to have abolished from the city's streets) who had come from across the country to serenade him on his way outside his window. Let's just say that he didn't make many friends among that group (lawsuits will do that). But we still remember him. Beyond the idea that complex calculations could be carried out mechanically, an idea that seems inevitable, what did he contribute? The beauty of Babbage's ideas and their overall contribution to computer science lies in their completeness. Babbage envisioned a system that was programmable through punch card inputs. It could carry out many varied types of calculations and was as versatile as the instructions that it received; versatility through "software". With his printing ideas, Babbage had basically pioneered the idea of input/output (IO) via punch cards and printers. Taking it a step further, his conception of Analytical engine could store calculations (by punching cards) and continue them later or use the results of certain calculations to continue in different directions based on the outcome; the stored program and programmatic logic respectively. Unfortunately, Babbage never saw his most dramatic ideas reach reality, though he maintained his vision going so far as to work with Ada, the Countess of Lovelace (and mathematical wunderkind) to work out the proper functioning and use of the machines. 
It is thanks to Ada's copious, annotated notes on some of Babbage's lectures that his ideas weren't lost as a footnote in history and that the magnitude of his achievement, at least as a feat of the mind, came to be appreciated. Her notes and Babbage's unearthed plans furthered this vindication when a working, and more complex, Difference Engine No. 2, the precursor to the Analytical Engine, was constructed by the Science Museum in London in 1991. He should be appreciated for his persistence and his ideas. The world could have been wildly different if only he had been moderately successful (read The Difference Engine by Gibson and Sterling for one possible outcome). Babbage is the godfather of computing because he beat everyone to the punch. Using the technology that was available to him, metalworking, engineering, and steam, he was able to approximate the early "computers" of the electrical age. He was a visionary before his time. We should all hope to be as much.

© 2001 G. James Jones is a Microcomputer Network Analyst for a mid-sized public university in the Midwest. He writes on topics ranging from Open Source Software to privacy to the history of technology and its social ramifications. Verbatim copying and redistribution of this entire article is permitted in any medium if this notice is preserved.

Copyright © 2001, The Binary Freedom Project, LLC. Copying license as above.

Published in Issue 72 of Linux Gazette, November 2001
{"url":"http://ftp2.de.freebsd.org/pub/linux/misc/gazette/issue72/jones.html","timestamp":"2024-11-07T17:28:32Z","content_type":"text/html","content_length":"26484","record_id":"<urn:uuid:b3c70ea5-e1b9-4107-9da5-aecdd03fa62e>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00572.warc.gz"}
The term percentage matters in almost every area of life. Percentages are used for discounts in shops, for bank loans and interest rates, and for describing marks. Good marks are important for getting admission to well-reputed colleges and for winning scholarships, and a higher percentage is achieved by getting good marks in exams. Raw marks alone do not say much about a student's performance; they have to be converted into percentages. Although percentages can be calculated manually, a student marks percentage calculator can be used for convenience. An exam percentage calculator can find the percentage of the total grade, or the percentage for each subject separately.

What is the percentage?

A percentage is a number or ratio expressed as a fraction of 100. The percentage sign is %, and a common abbreviation is "pct." It is a dimensionless figure with no unit of measurement. The word traces back to the Latin per centum, meaning "by the hundred." A percentage can also describe the proportional change of one number relative to another; in other words, it expresses a part out of a total. There are several online tools for calculating percentages, such as a marks percentage calculator.

Percentages appear constantly in daily life: the share of votes gained in an election, the proportion of people employed in a country, or the percentage of marks in a result. Discounts are often shown in percentage form on shop items, and the results of many experiments are reported as percentages.

Formula to find Percentage of Marks

• This formula works for marks in grades 9, 10, 11, and 12.
• Percentage = (total marks gained / total possible marks) x 100
• Total marks gained = the sum of the marks achieved in all subjects
• Total possible marks = the sum of the maximum possible marks in each subject

For example, if you received 487 out of 500, the percentage is 487/500 x 100 = 97.4%. A marks percentage calculator gives the same result without the manual arithmetic. A more advanced result percentage calculator can also compute subject-wise percentages, and if you take an extra subject it can be included in the calculation. Instead of working through the formula by hand, you can simply type the marks into the calculator.

How to Calculate a Number's Percentage?

Numbers (or marks) come in two forms when calculating a percentage: fractions and decimals. A fraction has to be converted to a decimal first and then multiplied by 100; a decimal only has to be multiplied by 100. For example, if the fraction is 45/50, its decimal form is 0.9, and multiplying by 100 gives 90%. For the decimal case, 0.754 x 100 = 75.4%.

A percentage is a mathematical quantity expressed in terms of 100 and is used to compare two quantities. Percentages have a wide range of applications in everyday life, in business, science, and many other fields. They are a standardized way of expressing proportions that makes data easy to understand, and they are used for everything from simple comparisons to more involved calculations such as rates of change and growth.
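For readers who want to script the formula above, here is a minimal Python sketch (not part of the original article; the function name is only illustrative) that reproduces both worked examples.

def marks_percentage(marks_obtained, total_marks):
    """Percentage = (total marks gained / total possible marks) x 100."""
    return marks_obtained / total_marks * 100

print(marks_percentage(487, 500))   # 97.4
print(marks_percentage(45, 50))     # 90.0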
{"url":"https://www.rakeshmgs.in/2023/09/how-to-calculate-percentage-of-marks.html","timestamp":"2024-11-13T11:30:32Z","content_type":"application/xhtml+xml","content_length":"163001","record_id":"<urn:uuid:988cb3f7-fc10-45b5-950f-0c01922ddbaa>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00068.warc.gz"}
Uniformly accelerated motion (UAM) - formulas and example

Uniformly accelerated motion (UAM) is a type of motion in which an object moves in a straight line with constant acceleration. This means that the object's velocity increases or decreases by a constant amount in each unit of time. In this article, we'll explore the definition, the formulas, and an example of how to calculate velocity and position in UAM.

Definition of uniformly accelerated motion

In kinematics, UAM is a type of motion in which an object moves along a straight line with constant acceleration. Acceleration is the rate of change of an object's velocity, that is, how fast the velocity changes over time. In UAM, the acceleration is constant and is represented by the letter "a."

Formulas of uniformly accelerated motion

Several formulas are used to calculate different aspects of UAM. These are:

Final velocity: v = v0 + a·t
Average velocity: vm = (v0 + v) / 2
Distance traveled: d = v0·t + 1/2·a·t²
Final velocity squared: v² = v0² + 2·a·d

• v is the final velocity of the object
• v0 is the initial velocity of the object
• a is the acceleration of the object
• t is the elapsed time
• vm is the average velocity of the object
• d is the distance traveled by the object

Uses and applications of UAM calculations

Uniformly accelerated motion (UAM) has numerous applications in physics and engineering. Some examples include:

1. Calculation of trajectories of moving objects: for example, if a projectile is fired with an initial velocity under a constant acceleration due to gravity, the UAM equations can determine its trajectory.
2. Propulsion system design: many propulsion systems, such as rocket engines and aircraft engines, use UAM principles to calculate their velocity at each instant.
3. Traffic accident analysis: in investigating traffic accidents, it is helpful to determine the speed and acceleration of the vehicles involved. This can help investigators determine the causes of the accident and prevent future incidents.
4. Study of forces in mechanical systems: UAM is used in physics to study the relationship between the force applied to an object and its resulting acceleration.

Example of a uniformly accelerated motion exercise

Suppose a car moves along a straight road with an initial speed of 30 m/s and a constant acceleration of 5 m/s². What is the car's speed after 10 seconds, and how far has it traveled?

To calculate the speed of the car after 10 seconds, we use the final velocity formula:

v = v0 + a·t
v = 30 m/s + (5 m/s² x 10 s)
v = 80 m/s

Therefore, the speed of the car after 10 seconds is 80 m/s.

To calculate the distance traveled by the car, we use the distance formula:

d = v0·t + 1/2·a·t²
d = (30 m/s x 10 s) + 1/2 x 5 m/s² x (10 s)²
d = 300 m + 250 m
d = 550 m

Therefore, the car covered a distance of 550 meters in 10 seconds.
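As a quick check of the worked example, here is a small Python sketch (not from the original article; the function names are only illustrative) that evaluates the two formulas for v0 = 30 m/s, a = 5 m/s², t = 10 s.

def final_velocity(v0, a, t):
    """v = v0 + a*t"""
    return v0 + a * t

def distance_traveled(v0, a, t):
    """d = v0*t + (1/2)*a*t**2"""
    return v0 * t + 0.5 * a * t ** 2

v0, a, t = 30.0, 5.0, 10.0            # m/s, m/s^2, s
print(final_velocity(v0, a, t))       # 80.0 (m/s)
print(distance_traveled(v0, a, t))    # 550.0 (m)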
{"url":"https://nuclear-energy.net/physics/kinematics/uniformly-accelerated-motion","timestamp":"2024-11-15T02:48:45Z","content_type":"text/html","content_length":"63750","record_id":"<urn:uuid:f12df2df-d1db-4585-81bc-26291d1ea531>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00808.warc.gz"}
88,715 research outputs found Assuming SO(3)-spherical symmetry, the 4-dimensional Einstein equation reduces to an equation conformally related to the field equation for 2-dimensional gravity following from the Lagrangian L = R^ (1/3). Solutions for 2-dimensional gravity always possess a local isometry because the traceless part of its Ricci tensor identically vanishes. Combining both facts, we get a new proof of Birkhoff's theorem; contrary to other proofs, no coordinates must be introduced. The SO(m)-spherically symmetric solutions of the (m+1)-dimensional Einstein equation can be found by considering L = R^(1/m) in two dimensions. This yields several generalizations of Birkhoff's theorem in an arbitrary number of dimensions, and to an arbitrary signature of the metric.Comment: 17 pages, LaTeX, no figures, Grav. and Cosm. in prin The weak-field slow-motion limit of fourth-order gravity will be discussed.Comment: 5 pages, LaTe We answer the following question: Let l, m, n be arbitrary real numbers. Does there exist a 3-dimensional homogeneous Riemannian manifold whose eigenvalues of the Ricci tensor are just l, m and n ? Comment: 2 pages, LaTeX, reprinted from Proc. Conf. Brno (1995 Recently obtained results on linear energy bounds are generalized to arbitrary spin quantum numbers and coupling schemes. Thereby the class of so-called independent magnon states, for which the relative ground-state property can be rigorously established, is considerably enlarged. We still require that the matrix of exchange parameters has constant row sums, but this can be achieved by means of a suitable gauge and need not be considered as a physical restriction We show that solutions of the Bach equation exist which are not conformal Einstein spaces.Comment: 3 pages, LaTeX, no figur A contribution linear in r to the gravitational potential can be created by a suitable conformal duality transformation: the conformal factor is 1/(1+r)^2 and r will be replaced by r/(1+r), where r is the Schwarzschild radial coordinate. Thus, every spherically symmetric solution of conformal Weyl gravity is conformally related to an Einstein space. This result finally resolves a long controversy about this topic. As a byproduct, we present an example of a spherically symmetric Einstein space which is a limit of a sequence of Schwarzschild-de Sitter space-times but which fails to be expressable in Schwarzschild coordinates. This example also resolves a long controversy.Comment: 11 pages, LaTeX, no figure The equation of motion announced in the title was already deduced for the cases the inner metric being flat and the shell being negligibly small (test matter), using surface layers and geodesic trajectories resp. Here we derive the general equation of motion and solve it in closed form for the case of parabolic motion. Especially the motion near the horizon and near the singularity are examined.Comment: Reprinted from: 10th International Conference on General Relativity and Gravitation, Padova (Italy) July 4 - 9, 1983. Eds.: B. Bertotti, F. de Felice, A. Pascolini, Contributed papers Vol. 1, Roma (1983) page 339-34 We deduce a new formula for the perihelion advance of a test particle in the Schwarzschild black hole by applying a newly developed non-linear transformation within the Schwarzschild space-time. 
By this transformation we are able to apply the well-known formula valid in the weak-field approximation near infinity also to trajectories in the strong-field regime near the horizon of the black hole.Comment: 22 pages, new results added at the end of scts. 4 and 5, accepted for Phys. Rev. For the non-tachyonic curvature squared action we show that the expanding Bianchi-type I models tend to the dust-filled Einstein-de Sitter model for t tending to infinity if the metric is averaged over the typical oscillation period. Applying a conformal equivalence between curvature squared action and a minimally coupled scalar field (which holds for all dimensions > 2) the problem is solved by discussing a massive scalar field in an anisotropic cosmological model.Comment: 9 pages, LaTeX, no figur The space of all Riemannian metrics is infinite-dimensional. Nevertheless a great deal of usual Riemannian geometry can be carried over. The superspace of all Riemannian metrics shall be endowed with a class of Riemannian metrics; their curvature and invariance properties are discussed. Just one of this class has the property to bring the lagrangian of General Relativity into the form of a classical particle's motion. The signature of the superspace metric depends in a non-trivial manner on the signature of the original metric, we derive the corresponding formula. Our approach is a local one: the essence is a metric in the space of all symmetric rank-two tensors, and then the space becomes a warped product of the real line with an Einstein space.Comment: 10 pages, LaTeX, reprinted from Proc. Conf. Diff. Geom. Appl., Brno, Czechoslovakia 1989, WSPC Singapore, Eds. J. Janyska, D. Krupk
{"url":"https://core.ac.uk/search/?q=author%3A(Schmidt%20H-J)","timestamp":"2024-11-04T12:03:13Z","content_type":"text/html","content_length":"116207","record_id":"<urn:uuid:d30826cc-0644-4fc5-87b2-23d51004ab14>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00675.warc.gz"}
Counting Processes 11.1.1 Counting Processes In some problems, we count the occurrences of some types of events. In such scenarios, we are dealing with a counting process . For example, you might have a random process $N(t)$ that shows the number of customers who arrive at a supermarket by time $t$ starting from time $0$. For such a processes, we usually assume $N(0)= 0$, so as time passes and customers arrive, $N(t)$ takes positive integer values. A random process $\{N(t), t \in [0,\infty) \}$ is said to be a counting process if $N(t)$ is the number of events occurred from time $0$ up to and including time $t$. For a counting process, we assume 1. $N(0)=0$; 2. $N(t) \in \{0,1,2,\cdots\}$, for all $t \in [0,\infty)$; 3. for $0 \leq s \lt t$, $N(t)-N(s)$ shows the number of events that occur in the interval $(s,t]$. Since counting processes have been used to model arrivals (such as the supermarket example above), we usually refer to the occurrence of each event as an "arrival". For example, if $N(t)$ is the number of accidents in a city up to time $t$, we still refer to each accident as an arrival. Figure 11.1 shows a possible realization and the corresponding sample function of a counting process. Figure 11.1 - A possible realization and the corresponding sample path of a counting process.. By the above definition, the only sources of randomness are the arrival times $T_i$. Before introducing the Poisson process, we would like to provide two definitions. Let $\{X(t), t \in [0, \infty)\}$ be a continuous-time random process. We say that $X(t)$ has independent increments if, for all $0 \leq t_1 \lt t_2 \lt t_3 \cdots \lt t_n$, the random variables \begin{align*} X(t_2)-X(t_1), \; X(t_3)-X(t_2), \; \cdots, \; X(t_n)-X(t_{n-1}) \end{align*} are independent. Note that for a counting process, $N(t_i)-N(t_{i-1})$ is the number of arrivals in the interval $(t_{i-1},t_i]$. Thus, a counting process has independent increments if the numbers of arrivals in non-overlapping (disjoint) intervals \begin{align*} (t_1,t_2], (t_2,t_3], \; \cdots, \; (t_{n-1},t_n] \end{align*} are independent. Having independent increments simplifies analysis of a counting process. For example, suppose that we would like to find the probability of having $2$ arrivals in the interval $(1,2]$, and $3$ arrivals in the interval $(3,5]$. Since the two intervals $(1,2]$ and $(3,5]$ are disjoint, we can write \begin{align*} P\bigg(\textrm{$2$ arrivals in $(1,2]$ $\;$ and $\;$ $3$ arrivals in $(3,5]$}\bigg)&=\\ & \hspace{-30pt} P\bigg(\textrm{$2$ arrivals in $(1,2]$}\ bigg) \cdot P\bigg(\textrm{$3$ arrivals in $(3,5]$}\bigg). \end{align*} Here is another useful definition. Let $\{X(t), t \in [0, \infty)\}$ be a continuous-time random process. We say that $X(t)$ has stationary increments if, for all $t_2>t_1\geq0$, and all $r>0$, the two random variables $X(t_2)-X(t_1)$ and $X(t_2+r)-X(t_1+r)$ have the same distributions. In other words, the distribution of the difference depends only on the length of the interval $(t_1,t_2]$, and not on the exact location of the interval on the real line. Note that for a counting process $N(t)$, $N(t_2)-N(t_1)$ is the number of arrivals in the interval $(t_1,t_2]$. We also assume $N(0)=0$. Therefore, a counting process has stationary increments if for all $t_2>t_1\geq0$, $N(t_2)-N(t_1)$ has the same distribution as $N(t_2-t_1)$. 
This means that the distribution of the number of arrivals in any interval depends only on the length of the interval, and not on the exact location of the interval on the real line. A counting process has independent increments if the numbers of arrivals in non-overlapping (disjoint) intervals are independent. A counting process has stationary increments if, for all $t_2>t_1\geq0$, $N(t_2)-N(t_1)$ has the same distribution as $N(t_2-t_1)$.
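As a concrete, hypothetical illustration of these definitions, the sketch below simulates a counting process with i.i.d. exponential interarrival times (the Poisson process that the text introduces next) and counts its increments over the disjoint intervals $(1,2]$ and $(3,5]$ used in the example above. The rate, horizon, and seed are arbitrary choices for the demonstration, and the function names are only illustrative.

import random

def simulate_arrival_times(rate, horizon, seed=1):
    """Generate arrival times in [0, horizon] with i.i.d. exponential gaps."""
    random.seed(seed)
    arrivals, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return arrivals
        arrivals.append(t)

def N(arrivals, t):
    """Counting process: number of arrivals up to and including time t."""
    return sum(1 for a in arrivals if a <= t)

arrivals = simulate_arrival_times(rate=2.0, horizon=10.0)

# Numbers of arrivals in the disjoint intervals (1, 2] and (3, 5]
print(N(arrivals, 2) - N(arrivals, 1))
print(N(arrivals, 5) - N(arrivals, 3))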
{"url":"https://www.probabilitycourse.com/chapter11/11_1_1_counting_processes.php","timestamp":"2024-11-09T14:35:39Z","content_type":"text/html","content_length":"13517","record_id":"<urn:uuid:fb513596-c97d-422b-ad1c-397160734589>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00401.warc.gz"}
Distance Between Two Points Word Problems Worksheets

Distance between two points word problems can be challenging for students to solve. However, with practice and a clear understanding of the concepts involved, these problems become much easier to tackle. In this article, we will explore distance between two points word problems and provide detailed examples to help you master this topic.

What are Distance Between Two Points Word Problems?

Distance between two points word problems involve finding the distance between two given points in a coordinate plane. These problems often require the distance formula: the square root of the sum of the squares of the differences between the x-coordinates and the y-coordinates of the two points.

How to Solve Distance Between Two Points Word Problems

To solve distance between two points word problems, follow these steps:
1. Identify the coordinates of the two given points.
2. Plug the coordinates into the distance formula: √((x2 – x1)^2 + (y2 – y1)^2).
3. Simplify the equation and calculate the distance.

Example Problem: Find the distance between the points (3, 4) and (-2, 1).

Step 1: Identify the coordinates of the two given points.
Point 1: (3, 4)
Point 2: (-2, 1)

Step 2: Plug the coordinates into the distance formula: √((-2 – 3)^2 + (1 – 4)^2).

Step 3: Simplify the equation.
√((-2 – 3)^2 + (1 – 4)^2) = √((-5)^2 + (-3)^2) = √(25 + 9) = √34

Therefore, the distance between the points (3, 4) and (-2, 1) is √34.

Distance between two points word problems can be solved by applying the distance formula. By identifying the coordinates of the given points, plugging them into the formula, and simplifying the equation, you can find the distance between the points accurately. Practice solving various word problems to enhance your understanding and proficiency in this topic.
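For readers who want to check such problems programmatically, here is a minimal Python sketch (not part of the original worksheet page; the function name is only illustrative) that evaluates the distance formula for the example above.

import math

def distance(p1, p2):
    """Distance between two points (x1, y1) and (x2, y2) in the plane."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance((3, 4), (-2, 1)))   # sqrt(34) ≈ 5.8309...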
{"url":"https://www.worksheetsday.com/distance-between-two-points-word-problems-worksheets/","timestamp":"2024-11-05T02:45:24Z","content_type":"text/html","content_length":"50306","record_id":"<urn:uuid:85415953-3aa5-4da7-9d1f-d27ed99609c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00836.warc.gz"}
Avishay Tal - CS 294-92: Analysis of Boolean Functions CS 294-92: Analysis of Boolean Functions (Spring 2023) Boolean functions are central objects of study in theoretical computer science and combinatorics. Analysis of Boolean functions, and in particular Fourier analysis, has been a successful tool in the areas of circuit lower bounds, hardness of approximation, social choice, threshold phenomena, pseudo-randomness, property testing, learning theory, cryptography, quantum computing, query complexity, and others. These applications are derived by understanding fundamental, beautiful concepts in the study of Boolean functions, such as influence, noise-sensitivity, approximation by polynomials, hyper-contractivity, and the invariance principle (connecting the discrete Boolean domain with the continuous Gaussian domain). We will study these foundational concepts of Boolean function and their applications to diverse areas in TCS and combinatorics. Undergraduate students who wish to take this class in Spring 2023 should fill out the following Google Form. The course will be mainly based on the wonderful book by Ryan O'Donnell. The book is available for free download via this link, or available for purchase on Amazon. In addition, we will highlight some recent exciting results that are not covered in the book. Semester: Spring 2023 Time and Place: Tuesday, Thursday 12:30-2:00 PM -- 306 Soda Hall (lecture will not be recorded) Instructor: Avishay Tal, Soda 635, atal "at" berkeley.edu Office Hours: Thursday 2-3 PM (start at Berkeley time) - 306 Soda Hall (or fix an appointment by email). TA: Xin Lyu, Soda 634, xinlyu "at" berkeley.edu Office Hours: Monday 11 AM - 12 PM, 634 Soda Hall Grading: Homework - 40% (4 assignments), Lecture Scribe - 10%, Final Project & Presentation - 50%. A link to Google Drive's with all PSets and lecture notes for this semester (Spring 2023). Discussions on Ed HW submissions on Gradescope For each lecture - please take a look at the relevant chapters in O'Donnell's book & additional resources & lecture notes. -- Spring Break 21. Apr 4 - The Invariance Principle - Chapter 11 - Lecture Notes (Xuandi Ren) 22. Apr 6 - Majority is Stablest & Hardness of Max-CUT - Chapter 11 - Lecture Notes (Bhavesh Kalisetti) 23. Apr 11 - Query Complexity - [Buhrman, de Wolf'00] 24. Apr 13 - The Sensitivity Theorem - [Huang'19] - Lecture Notes (David Wu) 25. Apr 18 - Extremal Combinatorics, The Sunflower Lemma - [Alweiss-Lovett-Wu-Zhang'19] [Rao'19] - Lecture Notes (Shilun Li) 26. Apr 20 - Threshold Phenomena, Proof of the Kahn-Kalai Conjecture - [Park-Pham'22] [Frankston-Kahn-Narayanan-Park'19] - Lecture Notes (Angelos Pelecanos) 27. Apr 28 - Presentations # 1 + Open Problems (Meghal Gupta) 28. Apr 30 - Presentations # 2 RRR Week: More Presentations
{"url":"https://www.avishaytal.org/cs294-analysis-of-boolean-functions","timestamp":"2024-11-06T03:55:47Z","content_type":"text/html","content_length":"117818","record_id":"<urn:uuid:778d0858-b57c-4c4a-9731-e916c3f0f139>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00117.warc.gz"}
How to Calculate Precision, Recall, F1, and More for Deep Learning Models - MachineLearningMastery.comHow to Calculate Precision, Recall, F1, and More for Deep Learning Models - MachineLearningMastery.com Once you fit a deep learning neural network model, you must evaluate its performance on a test dataset. This is critical, as the reported performance allows you to both choose between candidate models and to communicate to stakeholders about how good the model is at solving the problem. The Keras deep learning API model is very limited in terms of the metrics that you can use to report the model performance. I am frequently asked questions, such as: How can I calculate the precision and recall for my model? How can I calculate the F1-score or confusion matrix for my model? In this tutorial, you will discover how to calculate metrics to evaluate your deep learning neural network model with a step-by-step example. After completing this tutorial, you will know: • How to use the scikit-learn metrics API to evaluate a deep learning model. • How to make both class and probability predictions with a final model required by the scikit-learn API. • How to calculate precision, recall, F1-score, ROC AUC, and more with the scikit-learn API for a model. Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples. Let’s get started. • Mar/2019: First publish • Update Jan/2020: Updated API for Keras 2.3 and TensorFlow 2.0. Tutorial Overview This tutorial is divided into three parts; they are: 1. Binary Classification Problem 2. Multilayer Perceptron Model 3. How to Calculate Model Metrics Binary Classification Problem We will use a standard binary classification problem as the basis for this tutorial, called the “two circles” problem. It is called the two circles problem because the problem is comprised of points that when plotted, show two concentric circles, one for each class. As such, this is an example of a binary classification problem. The problem has two inputs that can be interpreted as x and y coordinates on a graph. Each point belongs to either the inner or outer circle. The make_circles() function in the scikit-learn library allows you to generate samples from the two circles problem. The “n_samples” argument allows you to specify the number of samples to generate, divided evenly between the two classes. The “noise” argument allows you to specify how much random statistical noise is added to the inputs or coordinates of each point, making the classification task more challenging. The “random_state” argument specifies the seed for the pseudorandom number generator, ensuring that the same samples are generated each time the code is run. The example below generates 1,000 samples, with 0.1 statistical noise and a seed of 1. 1 # generate 2d classification dataset 2 X, y = make_circles(n_samples=1000, noise=0.1, random_state=1) Once generated, we can create a plot of the dataset to get an idea of how challenging the classification task is. The example below generates samples and plots them, coloring each point according to the class, where points belonging to class 0 (outer circle) are colored blue and points that belong to class 1 (inner circle) are colored orange. 
1 # Example of generating samples from the two circle problem 2 from sklearn.datasets import make_circles 3 from matplotlib import pyplot 4 from numpy import where 5 # generate 2d classification dataset 6 X, y = make_circles(n_samples=1000, noise=0.1, random_state=1) 7 # scatter plot, dots colored by class value 8 for i in range(2): 9 samples_ix = where(y == i) 10 pyplot.scatter(X[samples_ix, 0], X[samples_ix, 1]) 11 pyplot.show() Running the example generates the dataset and plots the points on a graph, clearly showing two concentric circles for points belonging to class 0 and class 1. Multilayer Perceptron Model We will develop a Multilayer Perceptron, or MLP, model to address the binary classification problem. This model is not optimized for the problem, but it is skillful (better than random). After the samples for the dataset are generated, we will split them into two equal parts: one for training the model and one for evaluating the trained model. 1 # split into train and test 2 n_test = 500 3 trainX, testX = X[:n_test, :], X[n_test:, :] 4 trainy, testy = y[:n_test], y[n_test:] Next, we can define our MLP model. The model is simple, expecting 2 input variables from the dataset, a single hidden layer with 100 nodes, and a ReLU activation function, then an output layer with a single node and a sigmoid activation function. The model will predict a value between 0 and 1 that will be interpreted as to whether the input example belongs to class 0 or class 1. 1 # define model 2 model = Sequential() 3 model.add(Dense(100, input_shape=(2,), activation='relu')) 4 model.add(Dense(1, activation='sigmoid')) The model will be fit using the binary cross entropy loss function and we will use the efficient Adam version of stochastic gradient descent. The model will also monitor the classification accuracy 1 # compile model 2 model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) We will fit the model for 300 training epochs with the default batch size of 32 samples and evaluate the performance of the model at the end of each training epoch on the test dataset. 1 # fit model 2 history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=300, verbose=0) At the end of training, we will evaluate the final model once more on the train and test datasets and report the classification accuracy. 1 # evaluate the model 2 _, train_acc = model.evaluate(trainX, trainy, verbose=0) 3 _, test_acc = model.evaluate(testX, testy, verbose=0) Finally, the performance of the model on the train and test sets recorded during training will be graphed using a line plot, one for each of the loss and the classification accuracy. 1 # plot loss during training 2 pyplot.subplot(211) 3 pyplot.title('Loss') 4 pyplot.plot(history.history['loss'], label='train') 5 pyplot.plot(history.history['val_loss'], label='test') 6 pyplot.legend() 7 # plot accuracy during training 8 pyplot.subplot(212) 9 pyplot.title('Accuracy') 10 pyplot.plot(history.history['accuracy'], label='train') 11 pyplot.plot(history.history['val_accuracy'], label='test') 12 pyplot.legend() 13 pyplot.show() Tying all of these elements together, the complete code listing of training and evaluating an MLP on the two circles problem is listed below. 
1 # multilayer perceptron model for the two circles problem 2 from sklearn.datasets import make_circles 3 from keras.models import Sequential 4 from keras.layers import Dense 5 from matplotlib import pyplot 6 # generate dataset 7 X, y = make_circles(n_samples=1000, noise=0.1, random_state=1) 8 # split into train and test 9 n_test = 500 10 trainX, testX = X[:n_test, :], X[n_test:, :] 11 trainy, testy = y[:n_test], y[n_test:] 12 # define model 13 model = Sequential() 14 model.add(Dense(100, input_shape=(2,), activation='relu')) 15 model.add(Dense(1, activation='sigmoid')) 16 # compile model 17 model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) 18 # fit model 19 history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=300, verbose=0) 20 # evaluate the model 21 _, train_acc = model.evaluate(trainX, trainy, verbose=0) 22 _, test_acc = model.evaluate(testX, testy, verbose=0) 23 print('Train: %.3f, Test: %.3f' % (train_acc, test_acc)) 24 # plot loss during training 25 pyplot.subplot(211) 26 pyplot.title('Loss') 27 pyplot.plot(history.history['loss'], label='train') 28 pyplot.plot(history.history['val_loss'], label='test') 29 pyplot.legend() 30 # plot accuracy during training 31 pyplot.subplot(212) 32 pyplot.title('Accuracy') 33 pyplot.plot(history.history['accuracy'], label='train') 34 pyplot.plot(history.history['val_accuracy'], label='test') 35 pyplot.legend() 36 pyplot.show() Running the example fits the model very quickly on the CPU (no GPU is required). Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome. The model is evaluated, reporting the classification accuracy on the train and test sets of about 83% and 85% respectively. 1 Train: 0.838, Test: 0.850 A figure is created showing two line plots: one for the learning curves of the loss on the train and test sets and one for the classification on the train and test sets. The plots suggest that the model has a good fit on the problem. How to Calculate Model Metrics Perhaps you need to evaluate your deep learning neural network model using additional metrics that are not supported by the Keras metrics API. The Keras metrics API is limited and you may want to calculate metrics such as precision, recall, F1, and more. One approach to calculating new metrics is to implement them yourself in the Keras API and have Keras calculate them for you during model training and during model evaluation. For help with this approach, see the tutorial: This can be technically challenging. A much simpler alternative is to use your final model to make a prediction for the test dataset, then calculate any metric you wish using the scikit-learn metrics API. Three metrics, in addition to classification accuracy, that are commonly required for a neural network model on a binary classification problem are: • Precision • Recall • F1 Score In this section, we will calculate these three metrics, as well as classification accuracy using the scikit-learn metrics API, and we will also calculate three additional metrics that are less common but may be useful. They are: This is not a complete list of metrics for classification models supported by scikit-learn; nevertheless, calculating these metrics will show you how to calculate any metrics you may require using the scikit-learn API. 
For a full list of supported metrics, see: The example in this section will calculate metrics for an MLP model, but the same code for calculating metrics can be used for other models, such as RNNs and CNNs. We can use the same code from the previous sections for preparing the dataset, as well as defining and fitting the model. To make the example simpler, we will put the code for these steps into simple First, we can define a function called get_data() that will generate the dataset and split it into train and test sets. 1 # generate and prepare the dataset 2 def get_data(): 3 # generate dataset 4 X, y = make_circles(n_samples=1000, noise=0.1, random_state=1) 5 # split into train and test 6 n_test = 500 7 trainX, testX = X[:n_test, :], X[n_test:, :] 8 trainy, testy = y[:n_test], y[n_test:] 9 return trainX, trainy, testX, testy Next, we will define a function called get_model() that will define the MLP model and fit it on the training dataset. 1 # define and fit the model 2 def get_model(trainX, trainy): 3 # define model 4 model = Sequential() 5 model.add(Dense(100, input_shape=(2,), activation='relu')) 6 model.add(Dense(1, activation='sigmoid')) 7 # compile model 8 model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) 9 # fit model 10 model.fit(trainX, trainy, epochs=300, verbose=0) 11 return model We can then call the get_data() function to prepare the dataset and the get_model() function to fit and return the model. 1 # generate data 2 trainX, trainy, testX, testy = get_data() 3 # fit model 4 model = get_model(trainX, trainy) Now that we have a model fit on the training dataset, we can evaluate it using metrics from the scikit-learn metrics API. First, we must use the model to make predictions. Most of the metric functions require a comparison between the true class values (e.g. testy) and the predicted class values (yhat_classes). We can predict the class values directly with our model using the predict_classes() function on the model. Some metrics, like the ROC AUC, require a prediction of class probabilities (yhat_probs). These can be retrieved by calling the predict() function on the model. For more help with making predictions using a Keras model, see the post: We can make the class and probability predictions with the model. 1 # predict probabilities for test set 2 yhat_probs = model.predict(testX, verbose=0) 3 # predict crisp classes for test set 4 yhat_classes = model.predict_classes(testX, verbose=0) The predictions are returned in a two-dimensional array, with one row for each example in the test dataset and one column for the prediction. The scikit-learn metrics API expects a 1D array of actual and predicted values for comparison, therefore, we must reduce the 2D prediction arrays to 1D arrays. 1 # reduce to 1d array 2 yhat_probs = yhat_probs[:, 0] 3 yhat_classes = yhat_classes[:, 0] We are now ready to calculate metrics for our deep learning neural network model. We can start by calculating the classification accuracy, precision, recall, and F1 scores. 
1 # accuracy: (tp + tn) / (p + n) 2 accuracy = accuracy_score(testy, yhat_classes) 3 print('Accuracy: %f' % accuracy) 4 # precision tp / (tp + fp) 5 precision = precision_score(testy, yhat_classes) 6 print('Precision: %f' % precision) 7 # recall: tp / (tp + fn) 8 recall = recall_score(testy, yhat_classes) 9 print('Recall: %f' % recall) 10 # f1: 2 tp / (2 tp + fp + fn) 11 f1 = f1_score(testy, yhat_classes) 12 print('F1 score: %f' % f1) Notice that calculating a metric is as simple as choosing the metric that interests us and calling the function passing in the true class values (testy) and the predicted class values (yhat_classes). We can also calculate some additional metrics, such as the Cohen’s kappa, ROC AUC, and confusion matrix. Notice that the ROC AUC requires the predicted class probabilities (yhat_probs) as an argument instead of the predicted classes (yhat_classes). 1 # kappa 2 kappa = cohen_kappa_score(testy, yhat_classes) 3 print('Cohens kappa: %f' % kappa) 4 # ROC AUC 5 auc = roc_auc_score(testy, yhat_probs) 6 print('ROC AUC: %f' % auc) 7 # confusion matrix 8 matrix = confusion_matrix(testy, yhat_classes) 9 print(matrix) Now that we know how to calculate metrics for a deep learning neural network using the scikit-learn API, we can tie all of these elements together into a complete example, listed below. # demonstration of calculating metrics for a neural network model using sklearn from sklearn.datasets import make_circles from sklearn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score from sklearn.metrics import cohen_kappa_score from sklearn.metrics import roc_auc_score from sklearn.metrics import confusion_matrix from keras.models import Sequential from keras.layers import Dense # generate and prepare the dataset def get_data(): # generate dataset X, y = make_circles(n_samples=1000, noise=0.1, random_state=1) # split into train and test n_test = 500 trainX, testX = X[:n_test, :], X[n_test:, :] trainy, testy = y[:n_test], y[n_test:] return trainX, trainy, testX, testy # define and fit the model def get_model(trainX, trainy): # define model model = Sequential() model.add(Dense(100, input_shape=(2,), activation='relu')) model.add(Dense(1, activation='sigmoid')) # compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # fit model model.fit(trainX, trainy, epochs=300, verbose=0) return model # generate data trainX, trainy, testX, testy = get_data() # fit model model = get_model(trainX, trainy) # predict probabilities for test set yhat_probs = model.predict(testX, verbose=0) # predict crisp classes for test set yhat_classes = model.predict_classes(testX, verbose=0) # reduce to 1d array yhat_probs = yhat_probs[:, 0] yhat_classes = yhat_classes[:, 0] # accuracy: (tp + tn) / (p + n) accuracy = accuracy_score(testy, yhat_classes) print('Accuracy: %f' % accuracy) # precision tp / (tp + fp) precision = precision_score(testy, yhat_classes) print('Precision: %f' % precision) # recall: tp / (tp + fn) recall = recall_score(testy, yhat_classes) print('Recall: %f' % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(testy, yhat_classes) print('F1 score: %f' % f1) # kappa kappa = cohen_kappa_score(testy, yhat_classes) print('Cohens kappa: %f' % kappa) # ROC AUC auc = roc_auc_score(testy, yhat_probs) print('ROC AUC: %f' % auc) # confusion matrix matrix = confusion_matrix(testy, yhat_classes) Note: Your results may vary given the stochastic nature of the 
algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome. Running the example prepares the dataset, fits the model, then calculates and reports the metrics for the model evaluated on the test dataset. 1 Accuracy: 0.842000 2 Precision: 0.836576 3 Recall: 0.853175 4 F1 score: 0.844794 5 Cohens kappa: 0.683929 6 ROC AUC: 0.923739 7 [[206 42] 8 [ 37 215]] If you need help interpreting a given metric, perhaps start with the “Classification Metrics Guide” in the scikit-learn API documentation: Classification Metrics Guide Also, checkout the Wikipedia page for your metric; for example: Precision and recall, Wikipedia. Further Reading This section provides more resources on the topic if you are looking to go deeper. In this tutorial, you discovered how to calculate metrics to evaluate your deep learning neural network model with a step-by-step example. Specifically, you learned: • How to use the scikit-learn metrics API to evaluate a deep learning model. • How to make both class and probability predictions with a final model required by the scikit-learn API. • How to calculate precision, recall, F1-score, ROC, AUC, and more with the scikit-learn API for a model. Do you have any questions? Ask your questions in the comments below and I will do my best to answer. 141 Responses to How to Calculate Precision, Recall, F1, and More for Deep Learning Models 1. JG April 3, 2019 at 10:14 pm # Very useful scikit-learn library modules (API), to avoid construct and develop your owns functions. Thanks !!. I would appreciate if you can add to this snippet (example) the appropriate code to plot (to visualize) the ROC Curves, confusion matrix, (to determine the best threshold probability to decide where to put the “marker” to decide when it is positive or negative or 0/1). Also I understand, those metrics only apply for binary classification (F1, precision, recall, AOC curve)? But I know Cohen`s kappa and confusion matrix also apply for multiclass !. Thank you. □ Jason Brownlee April 4, 2019 at 7:56 am # Great suggestion, thanks. 2. scander90 May 2, 2019 at 7:27 pm # i used the code blow to get the model result for F1-score nn = MLPClassifier(activation=’relu’,alpha=0.01,hidden_layer_sizes=(20,10)) print (“F1-Score by Neural Network, threshold =”,threshold ,”:” ,predict(nn,train, y_train, test, y_test)) now i want to get all the other matrices result accuracy and prediction with Plot but i dont know how i can used in the code above □ Jason Brownlee May 3, 2019 at 6:19 am # What problem are you having exactly? ☆ scander90 May 4, 2019 at 1:49 pm # thank you so much about your support .. from sklearn.neural_network import MLPClassifier threshold = 200 train, y_train, test, y_test = prep(data,threshold) nn = MLPClassifier(activation=’relu’,alpha=0.01,hidden_layer_sizes=(20,10)) print (“F1-Score by Neural Network, threshold =”,threshold ,”:” ,predict(nn,train, y_train, test, y_test)) i used the code above i got it from your website to get the F1-score of the model now am looking to get the accuracy ,Precision and Recall for the same model ○ Jason Brownlee May 5, 2019 at 6:22 am # Perhaps check this: 3. Thb DL May 2, 2019 at 7:53 pm # Hello, thank you very much for your website, it helps a lot ! I have a problem related to this post, may be you can halp me 🙂 I try to understand why I obtain different metrics using “model.evaluate” vs “model.predict” and then compute the metrics… I work on sementic segmentation. 
I have an evaluation set of 24 images. I have a custom DICE INDEX metrics defined as : def dice_coef(y_true, y_pred): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum (y_true_f * y_pred_f) result =(2 * intersection)+1 / (K.sum(y_true_f) + K.sum(y_pred_f))+1 return result When I use model.evaluate, I obtain a dice score of 0.9093835949897766. When I use model.predict and then compute the metrics, I obtain a dice score of 0.9092264051238695. To give more precisions : I set a batchsize of 24 in model.predict as well as in model.evaluate to be sure the problem is not caused by batch size. I do not know what happen when the batch size is larger (ex: 32) than the number of sample in evaluation set… Finaly, to compute the metrics after model.prediction, I run : dice_result = 0 for y_i in range(len(y)): dice_result += tf.Session().run(tf.cast(dice_coef(y[y_i], preds[y_i]), dice_result /= (len(y)) I thought about the tf.float32 casting to be the cause of the difference ? (Maybe “model.evaluate” computes all with tensorflow tensor and return a float at the end whereas I cast tensor in float32 at every loop ? …) Do you think about an explanation ? Thank you for your help. Cheers ! □ Jason Brownlee May 3, 2019 at 6:20 am # I suspect the evaluate score is averaging across batches. Perhaps take use predict then calculate the score on all predictions. ☆ Thb DL May 7, 2019 at 5:42 am # Thank you for your reply. I just have 24 images in my evaluation set, so if “model.evaluate” compute across batches, with a batch size of 24, it will compute the metric in one time on the whole evaluation set. So it will normally gives the same results than “model.predict” followed by the metric computation on the evaluation set ? That’s why I do not understand my differences here. Have a good day. ○ Jason Brownlee May 7, 2019 at 6:21 am # I recommend calling predict, then calling the sklearn metric of choice with the results. ■ Thb DL May 9, 2019 at 6:46 pm # Ok 🙂 If I finally decide not to use my dice personal score, but rather to trust Sklearn, is it possible to use this biblioteque with Keras during the training? Indeed, at the end of the training I get a graph showing the loss and the dice during the epochs. I would like these graphs to be consistent with the final results? Thanks again for help! Have a good day ■ Jason Brownlee May 10, 2019 at 8:15 am # I would expect the graphs to be a fair summary of the training performance. For presenting an algorithm, I recommend using a final model to make predictions, and plot the results anew. ■ Thb DL May 10, 2019 at 12:06 am # Ok, I worked on this today. I fixed this problem. Just in case someone alse has a similar problem. The fact was that when I resized my ground truth masks before feeding the network with, I did not threshold after the resizing, so I got other values than 0 and 1 at the edges, and my custom dice score gives bad results. Now I put the threshold just after the resizing and have same results for all the functions I use ! Also, be careful with types casting (float32 vs float64 vs int) ! Anyway, I thank you very much for your disponibility. Have a good daye ■ Jason Brownlee May 10, 2019 at 8:18 am # Well done! 4. Jianhong Cheng May 14, 2019 at 11:02 am # How to calculate Precision, Recall, F1, and AUC for multi-class classification Problem □ Jason Brownlee May 14, 2019 at 2:29 pm # You can use the same approach, the scores are averaged across the classes. 
☆ Erica Rac July 17, 2019 at 5:34 am # Your lessons are extremely informative, Professor. I am trying to use this approach to calculate the F1 score for a multi-class classification problem but I keep receiving the error “ValueError: Classification metrics can’t handle a mix of multilabel-indicator and binary targets” I would very much appreciate if you please guide me to what I am doing wrong? Here is the relevant code: # generate and prepare the dataset def get_data(): n_test = 280 Xtrain, Xtest = X[:n_test, :], X[n_test:, :] ytrain, ytest = y[:n_test], y[n_test:] return X_train, y_train, X_test, y_test # define and fit the model def get_model(Xtrain, ytrain): model = Sequential() model.add(Embedding(max_words, embedding_dim, input_length=max_sequence_length)) model.add(LSTM(150, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(5, activation=’softmax’)) model.compile(loss=’categorical_crossentropy’, optimizer= “adam”, metrics=[‘accuracy’]) model.fit(X_train, y_train, epochs=2, batch_size=15,callbacks=[EarlyStopping(monitor=’loss’)]) return model # generate data X_train, y_train, X_test, y_test = get_data() # fit model model = get_model(X_train, y_train) # predict probabilities for test set yhat_probs = model.predict(X_test, verbose=0) # predict crisp classes for test set yhat_classes = model.predict_classes(X_test, verbose=0) # reduce to 1d array yhat_probs = yhat_probs.flatten() yhat_classes = yhat_classes.flatten() # accuracy: (tp + tn) / (p + n) accuracy = accuracy_score(y_test, yhat_classes) print(‘Accuracy: %f’ % accuracy) # precision tp / (tp + fp) precision = precision_score(y_test, yhat_classes) print(‘Precision: %f’ % precision) # recall: tp / (tp + fn) recall = recall_score(y_test, yhat_classes) print(‘Recall: %f’ % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(y_test, yhat_classes) print(‘F1 score: %f’ % f1) # kappa kappa = cohen_kappa_score(testy, yhat_classes) print(‘Cohens kappa: %f’ % kappa) # ROC AUC auc = roc_auc_score(testy, yhat_probs) print(‘ROC AUC: %f’ % auc) # confusion matrix matrix = confusion_matrix(y_test, yhat_classes) ○ Jason Brownlee July 17, 2019 at 8:31 am # Perhaps check your data matches the expectation of the measures you intend to use? ■ Erica Rac July 17, 2019 at 11:51 am # I see my error in preprocessing. Thanks for the quick reply! ■ Jason Brownlee July 17, 2019 at 2:24 pm # Happy to hear that. ○ Purnima Khurana May 11, 2020 at 4:29 pm # hi @Eric Rac . I am getting the same error. How you have corrected it for multiclass classification. ☆ Gilbert Gutabaga December 26, 2019 at 1:37 pm # Hello i tried the same approach but i end up getting error message ‘Classification metrics can’t handle a mix of multilabel-indicator and multiclass targets ‘ 5. Despina M May 17, 2019 at 4:03 am # Hello! Another great post of you! Thank you! I want to calculate Precision, Recall, F1 for every class not only the average. Is it possible? Thank you in advance □ Jason Brownlee May 17, 2019 at 5:59 am # Yes, I believe the sklearn classification report will provide this information. I also suspect you can configure the sklearn functions for each metric to report per-class scores. ☆ Despina M May 17, 2019 at 6:49 am # Thank you so much for the quick answer! I will try to calculate them. ○ Jason Brownlee May 17, 2019 at 2:52 pm # No problem. Let me know how you go. 6. 
Despina M May 19, 2019 at 6:07 am # I used from sklearn.metrics import precision_recall_fscore_support precision_recall_fscore_support(y_test, y_pred, average=None) print(classification_report(y_test, y_pred, labels=[0, 1])) It works fine for me. Thanks again! □ Jason Brownlee May 19, 2019 at 8:07 am # Nice work! 7. Vani May 23, 2019 at 10:28 pm # How is that accuracy calculated using “history.history[‘val_acc’]” provides different values as compared to accuracy calculated using “accuracy = accuracy_score(testy, yhat_classes)” ? □ Jason Brownlee May 24, 2019 at 7:51 am # It should be the same, e.g. calculate score at the end of each epoch. ☆ Vani June 3, 2019 at 1:51 pm # thank you 8. usama May 25, 2019 at 9:53 pm # hi jason, i need a code of RNN through which i can find out the classification and confusion matrix of a specific dataset. □ Jason Brownlee May 26, 2019 at 6:45 am # There are many examples you can use to get started, perhaps start here: 9. vani venk June 3, 2019 at 1:57 pm # I calculated accuracy, precision,recall and f1 using following formulas. accuracy = metrics.accuracy_score(true_classes, predicted_classes) precision=metrics.precision_score(true_classes, predicted_classes) recall=metrics.recall_score(true_classes, predicted_classes) f1=metrics.f1_score(true_classes, predicted_classes) The metrics stays at very low value of around 49% to 52 % even after increasing the number of nodes and performing all kinds of tweaking. precision recall f1-score support nu 0.49 0.34 0.40 2814 u 0.50 0.65 0.56 2814 avg / total 0.49 0.49 0.48 5628 The confusion matrix shows very high values of FP and FN confusion= [[ 953 1861] [ 984 1830]] What can I do to improve the performance? □ Vani June 3, 2019 at 2:02 pm # For the low values of accuracy, precision, recall and F1, the accuracy and loss plot is also weird. The accuracy of validation dataset remains higher than training dataset; similarly, the validation loss remains lower than that of training dataset; whereas the reverse is expected. How to overcome this problem? ☆ Jason Brownlee June 3, 2019 at 2:35 pm # Better results on the test set than the training set may suggest that the test set is not representative of the problem, e.g. is too small. ○ Vani June 4, 2019 at 9:05 pm # ■ Jason Brownlee June 5, 2019 at 8:40 am # No problem. □ Jason Brownlee June 3, 2019 at 2:35 pm # I offer some suggestions here: ☆ Arnav Andraskar November 26, 2021 at 4:20 pm # Hey very nice explanation, but i want to calculate f1 score and auc at every epoch how will i get that. ○ Adrian Tam November 29, 2021 at 8:33 am # Depends on what library you use – if Keras, you just add them as “metrics” in your compile() function. See https://keras.io/api/metrics/ for a long list. 10. onyeka July 27, 2019 at 5:23 pm # ValueError: Error when checking input: expected dense_74_input to have shape (2,) but got array with shape (10,) i got this error and i dont know what to do next □ Jason Brownlee July 28, 2019 at 6:40 am # Sorry to hear that, I have some suggestions here that might help: 11. Khalil September 5, 2019 at 11:05 pm # I’m doing a binary text classification, my X_val shape is (85, 1, 62, 300) and my Y_val shape is (85, 2). 
I get an error when executing this line: yhat_classes = saved_model.predict_classes(X_val, verbose=0) AttributeError: ‘Model’ object has no attribute ‘predict_classes’ My snippet code bellow: cv_scores, models_history = list(), list() start_time = time.time() for train, test in myCViterator: # Spliting our data X_train, X_val, y_train, y_val = df_claim.loc[train].word.tolist(), df_claim.loc[test].word.tolist(), df_label.loc[train].fact.tolist(), df_label.loc[test].fact.tolist() X_train = np.array(X_train) X_val = np.array(X_val) y_train = np.array(y_train) y_val = np.array(y_val) # Evaluating our model model_history, val_acc, saved_model = evaluate_model(X_train, X_val, y_train, y_val) # plot loss during training pyplot.plot(model_history.history[‘loss’], label=’train’) pyplot.plot(model_history.history[‘val_loss’], label=’test’) # plot accuracy during training pyplot.plot(model_history.history[‘acc’], label=’train’) pyplot.plot(model_history.history[‘val_acc’], label=’test’) print(“\n Metrics for this model:”) print(‘> Accuracy: %.3f’ % val_acc) # Scikit-learn metrics: # predict probabilities for test set yhat_probs = saved_model.predict(X_val, verbose=0) # predict crisp classes for test set #yhat_classes = np.argmax(yhat_probs, axis=1) yhat_classes = saved_model.predict_classes(X_val, verbose=0) # reduce to 1d array yhat_probs = yhat_probs[:, 0] #yhat_classes = yhat_classes[:, 0] # accuracy: (tp + tn) / (p + n) accuracy = accuracy_score(y_val, yhat_classes) print(‘> Accuracy: %f’ % accuracy) # precision tp / (tp + fp) precision = precision_score(y_val, yhat_classes) print(‘> Precision: %f’ % precision) # recall: tp / (tp + fn) recall = recall_score(y_val, yhat_classes) print(‘> Recall: %f’ % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(y_val, yhat_classes) print(‘> F1 score: %f’ % f1) # kappa kappa = cohen_kappa_score(y_val, yhat_classes) print(‘> Cohens kappa: %f’ % kappa) # ROC AUC auc = roc_auc_score(y_val, yhat_probs) print(‘> ROC AUC: %f’ % auc) # confusion matrix matrix = confusion_matrix(y_val, yhat_classes) print(“— %s seconds —” % (time.time() – start_time)) print(‘Estimated Accuracy for 5-Folds Cross-Validation: %.3f (%.3f)’ % (np.mean(cv_scores), np.std(cv_scores))) □ Jason Brownlee September 6, 2019 at 5:02 am # If your model is wrapped by scikit-learn, then predict_classes() is not available, it is function on the Keras model. Instead, you can use predict(). ☆ Khalil September 7, 2019 at 5:58 am # I’ve tried to use the predict() method and then get the argmax of the vector (with yhat_classes = np.argmax(yhat_probs, axis=1) ) but then it gives me another error when trying to get the accuracy = accuracy_score(y_val, yhat_classes) ValueError: Classification metrics can’t handle a mix of multilabel-indicator and binary targets ○ Khalil September 7, 2019 at 6:06 am # I found the solution, obviously I need to reduce ‘y_val’ to 1d array as well lol. Thank you so much for your help and for this great post! ■ Jason Brownlee September 8, 2019 at 5:07 am # I’m happy to hear that! ■ vinc March 31, 2020 at 9:17 am # Hello, can you explain me how you fixed the problem? maybe I have the same problem but I am not able to fix it. Really thanks! 12. NguWah September 14, 2019 at 12:47 am # Hello! I trained and got the different accuracy form the model.fit() and model.evaluate() methods. What is the problem? How can I get the right accuracy between this? □ Jason Brownlee September 14, 2019 at 6:21 am # During fit, scores are estimated averaged over batches of samples. 
Use evaluate() to get a true evaluation of the model’s performance. ☆ NguWah September 14, 2019 at 7:19 pm # Thanks Jason 13. Ze Rar September 14, 2019 at 7:25 pm # Hi Jason I got the validation accuracy and test accuracy which are better than the train accuracy without dropout. What can be the problems? □ Jason Brownlee September 15, 2019 at 6:20 am # Perhaps the test or validation dataset are too small and the results are statistically noisy? ☆ Ze Rar September 15, 2019 at 2:10 pm # I also set 40%(0.4) to the test size 14. Ze Rar September 14, 2019 at 7:50 pm # I also set the test size 0.4 15. Mesho October 3, 2019 at 9:24 pm # Thanks a lot for this useful tutorial. I was wondering How to Calculate Precision, Recall, F1 in Multi-label CNN. I mean having these Metrics for each label in the output. Many thanks for your help. □ Jason Brownlee October 4, 2019 at 5:41 am # I believe the above tutorial shows you how to calculate these metrics. Once calculated, you can print the result with your own labels. 16. PC November 17, 2019 at 3:55 pm # Hi Jason, Whenever I have doubts related to ML your articles are always there to clarify those. Thank you very much. My question is : Can I plot a graph of the Kappa error metric of classifiers? □ Jason Brownlee November 18, 2019 at 6:44 am # Yes, you may need to implement it yourself for Keras to access, see here for an example with RMSE that you can adapt: 17. HSA December 8, 2019 at 2:01 am # I wonder how to upload a figure in my response, However, my Line Plot Showing Learning Curves of Loss and Accuracy is very different the training and testing lines do not appear above each other like your plot, they have totally different directions opposite each other. what could be the problem given that I tested your code on two different datasets, one is balanced (with 70% f1-score ) and the other is not (with 33% f1-score)? □ Jason Brownlee December 8, 2019 at 6:15 am # You can upload an image to social media, github or an image hosting service like imgur. Not sure I follow your question, sorry. Perhaps you can elaborate? 18. Miao February 6, 2020 at 3:06 pm # Thank you for your nice post. I have one question. if we use the one-hot encoder to process labels by using np_utils.to_categorical of Keras in the preprocessing, how to use model.predict()? □ Jason Brownlee February 7, 2020 at 8:08 am # Sorry I don’t understand, they are not related. What is the problem exactly? ☆ Miao February 7, 2020 at 12:52 pm # Sorry I did not describe my question clearly. In the example of the post, if the label was one hot encoded, and the argmax value was taken to predict when using model.predict(). the code is as below, in this code, the yhat_classes can not be taken argmax, so I think the model.predict_classes() can not used in the one-hot encoder labels. And the results here is not better than the results in your post. My question is that whether to use on-hot encoder when using model.predict()? 
from sklearn.datasets import make_circles from sklearn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score from sklearn.metrics import cohen_kappa_score from sklearn.metrics import roc_auc_score from sklearn.metrics import confusion_matrix from keras.models import Sequential from keras.layers import Dense import keras import numpy as np # generate and prepare the dataset def get_data(): # generate dataset X, y = make_circles(n_samples=1000, noise=0.1, random_state=1) # split into train and test n_test = 500 trainX, testX = X[:n_test, :], X[n_test:, :] trainy, testy = y[:n_test], y[n_test:] return trainX, trainy, testX, testy # define and fit the model def get_model(trainX, trainy): # define model model = Sequential() model.add(Dense(100, input_dim=2, activation=’relu’)) model.add(Dense(num_classes, activation=’sigmoid’)) # compile model model.compile(loss=’binary_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’]) # fit model model.fit(trainX, trainy, epochs=300, verbose=0) return model # generate data trainX, trainy, testX, testy = get_data() # One hot encode labels num_classes = 2 trainy = keras.utils.to_categorical(trainy, num_classes) testy = keras.utils.to_categorical(testy, num_classes) # fit model model = get_model(trainX, trainy) # predict probabilities for test set yhat_probs = model.predict(testX, verbose=0) # predict crisp classes for test set yhat_classes = model.predict_classes(testX, verbose=0).reshape(-1,1) yhat_probs_inverse = np.argmax(yhat_probs,axis=1).reshape(-1,1) testy_inverse = np.argmax(testy, axis=1).reshape(-1,1) # reduce to 1d array yhat_probs = yhat_probs_inverse[:, 0] yhat_classes = yhat_classes[:, 0] # accuracy: (tp + tn) / (p + n) accuracy = accuracy_score(testy_inverse, yhat_classes) print(‘Accuracy: %f’ % accuracy) # precision tp / (tp + fp) precision = precision_score(testy_inverse, yhat_classes) print(‘Precision: %f’ % precision) # recall: tp / (tp + fn) recall = recall_score(testy_inverse, yhat_classes) print(‘Recall: %f’ % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(testy_inverse, yhat_classes) print(‘F1 score: %f’ % f1) # kappa kappa = cohen_kappa_score(testy_inverse, yhat_classes) print(‘Cohens kappa: %f’ % kappa) # ROC AUC auc = roc_auc_score(testy_inverse, yhat_probs_inverse) print(‘ROC AUC: %f’ % auc) # confusion matrix matrix = confusion_matrix(testy_inverse, yhat_classes) >>Accuracy: 0.852000 Precision: 0.858871 Recall: 0.845238 F1 score: 0.852000 Cohens kappa: 0.704019 ROC AUC: 0.852055 [[213 35] [39 213]] ○ Jason Brownlee February 7, 2020 at 1:50 pm # If you one hot encode your target, a call to predict() will output the probability of class membership, a call to predict_classes() will return the classes directly. To learn more about the difference between predict() and predict_classes() see this: ■ Miao February 10, 2020 at 2:04 pm # Thank you very much. useful reply for me. ■ Jason Brownlee February 11, 2020 at 5:06 am # You’re welcome. 19. HSA February 28, 2020 at 2:39 am # I used Cohen Kappa to find the inner annotator agreement between two annotator rater1 = [0,1,1] rater2 = [1,1,1] print(“cohen_kappa_score”,cohen_kappa_score(rater1, rater2, labels=labels)) why Iam getting 0 result? □ Jason Brownlee February 28, 2020 at 6:17 am # I don’t know off hand, perhaps the prediction has no skill? 
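For reference, zero is exactly what the kappa formula gives for these two ratings: whenever one rater assigns the same label to every item, the observed agreement equals the agreement expected by chance, so kappa is 0. A quick check of the arithmetic for the two lists above:

from sklearn.metrics import cohen_kappa_score

rater1 = [0, 1, 1]
rater2 = [1, 1, 1]
# observed agreement p_o = 2/3 (they agree on the last two items)
# chance agreement   p_e = (1/3)*(0/3) + (2/3)*(3/3) = 2/3
# kappa = (p_o - p_e) / (1 - p_e) = 0
print(cohen_kappa_score(rater1, rater2))  # 0.0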
☆ HSA February 29, 2020 at 12:50 am # mmm, I am not working on classification problem, I am working on measuring how the raters agree with each other this is called inner annotator agreement as mentioned here https:// en.wikipedia.org/wiki/Cohen%27s_kappa cohen kappa is one of the ways to do that, I am expecting to have a very high value because the annotators opinion is almost similar but I am surprised to have a negative values ○ Jason Brownlee February 29, 2020 at 7:16 am # I’m not familiar wit that task. 20. Pankaj March 3, 2020 at 9:24 pm # I was working with SVHN data base and after using the above code i was getting precison\Recall\F1 of same value. which does not looks correct. □ Jason Brownlee March 4, 2020 at 5:54 am # Perhaps try debugging your code to discover the cause of the fault. 21. Giovanna Fernandes March 15, 2020 at 2:24 am # Thank. you, this is very helpful and clear. My only difficulty, which I haven’t found a solution for yet, is how to apply this to a multi-label classification problem? My Keras Model (not Sequential) outputs from a Dense layer with a sigmoid activation for 8 possible classes. Samples can be of several classes. If I do a model.predict( ) I get the probabilities for each class: array([0.9876269 , 0.08615541, 0.81185186, 0.6329404 , 0.6115263 , 0.11617774, 0.7847705 , 0.9649658 ], dtype=float32) My y looks something like this though, a binary classification for each of the 8 classes: [1 0 1 1 1 0 1 1] predict_classes is only for Sequential, so what can I do in this case in order to get a classification report with precision, recall and f-1 for each class? □ Giovanna Fernandes March 15, 2020 at 2:31 am # Há, nevermind! Sometimes the simplest solutions are right there in front of us and we fail to see them… predicted[predicted>=0.5] = 1 predicted[predicted<0.5] = 0 Problem solved! 😀 ☆ Jason Brownlee March 15, 2020 at 6:19 am # □ Jason Brownlee March 15, 2020 at 6:18 am # Yes, for multi-label classification, you get a binary prediction for each label. If you want a multi-class classification (mutually exclusive clases), use a softmax activation function instead and an arg max to get the single class. 22. Rony Sharma March 28, 2020 at 9:24 pm # # define model model = Sequential() model.add(Conv1D(filters=128, kernel_size=5, activation=’relu’)) model.add(Dense(1, activation=’sigmoid’)) # compile network model.compile(loss=’binary_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’]) # fit network model.fit(Xtrain, ytrain, epochs=15, verbose=2) # evaluate loss, acc = model.evaluate(Xtest, ytest, verbose=0) # predict probabilities for test set yhat_probs = model.predict(Xtest, verbose=0) # predict crisp classes for test set yhat_classes = model.predict_classes(Xtest, verbose=0) # reduce to 1d array yhat_probs = yhat_probs[:, 0] yhat_classes = yhat_classes[:, 0] # accuracy: (tp + tn) / (p + n) accuracy = accuracy_score(ytest, yhat_classes) print(‘Accuracy: %f’ % accuracy) # precision tp / (tp + fp) precision = precision_score(ytest, yhat_classes) print(‘Precision: %f’ % precision) # recall: tp / (tp + fn) recall = recall_score(ytest, yhat_classes) print(‘Recall: %f’ % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(ytest, yhat_classes) print(‘F1 score: %f’ % f1) # confusion matrix matrix = confusion_matrix(ytest, yhat_classes) but my output show: Accuracy: 0.745556 Precision: 0.830660 Recall: 0.776667 TypeError: ‘numpy.float64’ object is not callable how to solve this problem?? 
□ Jason Brownlee March 29, 2020 at 5:53 am # Sorry to hear that, this will help: 23. Hesham April 10, 2020 at 8:10 pm # How to replace make_circles by my data (file.csv) … how to change the code □ Jason Brownlee April 11, 2020 at 6:16 am # This will show you how to load a CSV: 24. Law April 18, 2020 at 4:27 am # Thank you so much Jason i do enjoy codes a lot, please i will like to know if these metrics Precision, F1 score and Recall can also be applied to Sequence to Sequence prediction with RNN □ Jason Brownlee April 18, 2020 at 6:12 am # Perhaps, but not really. If the output is text, look at metrics like BLEU or ROGUE. ☆ Law April 20, 2020 at 3:04 am # Ok thanks for your reply, what will your advice for the choice of metrics in RNN sequence to sequence, where the output is number. ○ Jason Brownlee April 20, 2020 at 5:29 am # If you are predicting one value per sample, then MAE or RMSE are great metrics to start with. 25. JONATA PAULINO DA COSTA April 21, 2020 at 6:33 am # Hello. I’m doing an SVM algorithm together with a library to learn binary classification. How could I make a f1_score chart with this algorithm? □ Jason Brownlee April 21, 2020 at 7:43 am # What is an f1 score chart? What would you plot exactly? ☆ JONATA PAULINO DA COSTA April 21, 2020 at 10:28 am # My base is composed of tweeters and has two classes, crime and non-crime. Intend to generate the f1_score because the base is unbalanced, then, or more advisable to measure the effectiveness of the serious model in the f1_score metric. ○ Jason Brownlee April 21, 2020 at 11:45 am # Good question, this will help you choose a metric for your project: 26. JONATA PAULINO DA COSTA April 21, 2020 at 1:04 pm # In reality, I know which metric to use, I already know that it is a f1_score, however, I was unable to do it with SVM. □ Jason Brownlee April 21, 2020 at 1:25 pm # Why not? ☆ JONATA PAULINO DA COSTA April 21, 2020 at 11:41 pm # f1_score em rede neural eu pego cada época e mostro no gráfico, já no SVM não sei como fazer. ○ Jason Brownlee April 22, 2020 at 5:57 am # Sorry, I don’t have examples of working with graph data. 27. JONATA PAULINO DA COSTA April 21, 2020 at 11:42 pm # f1_score in neural network I take each season and show it in the graph, already in SVM I don’t know how to do it. obs:sorry for the message replication, it was an error. □ Jason Brownlee April 22, 2020 at 5:58 am # If someone has created a report or plot you like, perhaps ask them how they made it? ☆ JONATA PAULINO DA COSTA April 22, 2020 at 6:01 am # Muito obrigado. ○ Jason Brownlee April 22, 2020 at 6:09 am # You’re very welcome! 28. nandini May 11, 2020 at 8:19 pm # hi sir, How can we achieve 100% recall in deep learning , please suggest any tips to improve the recall part in deep learning. more over i am trying on text classification , all datasets having imbalanced , we have applied smote method to overcome imbalanced one ,not getting error smote method . is that good way to apply smote method for imbalanced text classification is their any other methods are available to improve recall of imbalanced text classification . □ Jason Brownlee May 12, 2020 at 6:43 am # Here are suggestions for improving model performance: 29. nkm May 22, 2020 at 9:15 pm # Hi Mr. Jason, Function predict_classes is not available for Keras functional API. Any suggestions please. How to calculate matrices for functional API case? □ Jason Brownlee May 23, 2020 at 6:21 am # Yes, use predict() then argmax on the result: 30. 
Karl Demree June 10, 2020 at 3:04 am # Some metrics, like the ROC AUC, require a prediction of class probabilities (yhat_probs). These can be retrieved by calling the predict() function on the model. I really dont get why yhat_classes isn’t used for ROC AUC as well. It would be great if you could explain this. Also when you say “prediction of class probabilities” shouldn’t we use “predict_proba” rather than just “predict”?. Many thanks □ Jason Brownlee June 10, 2020 at 6:20 am # In keras the predict() function returns probabilities on classification tasks: ☆ Karl Demree June 11, 2020 at 1:25 am # Thanks much! Great tutorial ○ Jason Brownlee June 11, 2020 at 6:00 am # 31. nkm June 16, 2020 at 4:02 pm # Hi Mr. Jason, thanks for your great support. I am working on four class classification of images with equal number of images in each class (for testing, total 480 images, 120 in each class). I am calculating metrics viz. accuracy, Precision, Recall and F1-score from test dataset. I used three options to calculate these metrics, first scikit learn API as explained by you, second option is printing classification summary and third using confusion matrix. In all three ways, I am getting same value (0.92) for all fours metrics. Is it possible to get same value for all four metrics or I am doing something wrong. From your experience, kindly clarify and suggest way ahead. Thanks ans Regards □ Jason Brownlee June 17, 2020 at 6:17 am # Perhaps. Check that you don’t have a bug in your test harness. Also, I recommend selecting one metric and optimize that. ☆ nkm June 18, 2020 at 3:06 am # Thanks for your quick Reply. I am attaching my test code in hope that your experience will definitely show some solution: #Four class classification problem: class_labels = list(test_it.class_indices.keys()) y_true = test_it.classes test_it.reset() # Y_pred = model.predict(test_it, STEP_SIZE_TEST,verbose=1) y_pred1 = np.argmax(Y_pred, axis=1, out=None) target_names = [‘Apple’, ‘Orange, ‘Mango’,’Guava’] cm = confusion_matrix(y_true, y_pred1) print(‘Confusion Matrix’) print(‘Classification Report’) print(classification_report(test_it.classes, y_pred1, target_names=class_labels)) ax= plt.subplot() sns.heatmap(cm, annot=True, ax = ax, cmap=’Blues’, fmt=’d’ ); ax.set_xlabel(‘Predicted labels’);ax.set_ylabel(‘True labels’); ax.set_title(‘Confusion Matrix’); ax.xaxis.set_ticklabels([‘Apple’, ‘Orange, ‘Mango’,’Guava’]); ax.yaxis.set_ticklabels([‘Apple’, ‘Orange, ‘Mango’,’Guava’]); accuracy = accuracy_score(y_true, y_pred1) print(‘Accuracy: %f’ % accuracy) precision = precision_score(y_true, y_pred1, average=’micro’) print(‘Precision: %f’ % precision) recall = recall_score(y_true, y_pred1, average=’micro’) print(‘Recall: %f’ % recall) f1 = f1_score(y_true, y_pred1,average=’micro’) print(‘F1 score: %f’ % f1) With thanks and Regards ○ Jason Brownlee June 18, 2020 at 6:29 am # I don’t have the capacity to review/debug your code, sorry. Perhaps this will help: ■ nkm June 19, 2020 at 4:57 am # Thanks for quick reply. My question is : is it possible to have all metrics same? For example in scikit learn classification report (example for Recognizing hand-written digits) all are shown same to 0.97. I am putting link for reference:- Comments are requested. ■ Jason Brownlee June 19, 2020 at 6:21 am # Sorry, I don’t understand your question. Perhaps you can rephrase it? 32. nkm June 19, 2020 at 1:18 pm # I multiclass classification, can three evaluation metrices (accuracy, precision, recall) converge to the same value? 
In this reference, there is a classification report shown, which has same values for all the three metrices:- □ Jason Brownlee June 20, 2020 at 6:04 am # 33. salim August 31, 2020 at 6:05 am # you are the best □ Jason Brownlee August 31, 2020 at 6:17 am # 34. William Mitiku February 25, 2021 at 4:30 am # What about for multiclass classification?? □ Jason Brownlee February 25, 2021 at 5:37 am # You can use the metrics for binary or multi-class classification directly. 35. Anwar May 23, 2021 at 4:29 pm # • There are 1400 fish and 300 shrimps in a pool. In order to catch fish, we cast a net. 700 fish and 200 shrimps are caught. Please calculate the precision, recall and F1 score. can someone find me answer □ Jason Brownlee May 24, 2021 at 5:42 am # Perhaps this will help: 36. VISHNURAJ KR June 15, 2021 at 11:43 am # how i can print precision, recall for if training using one dataset and testing using another dataset □ Jason Brownlee June 16, 2021 at 6:16 am # Fit your model on the first dataset, make predictions for the second and calculate your metrics with the predictions. ☆ VISHNURAJ KR June 16, 2021 at 11:02 pm # sir that already done. can you please give reference code for this. ○ Jason Brownlee June 17, 2021 at 6:17 am # Perhaps start here: 37. YP Lai October 25, 2021 at 2:17 pm # Hi Jason, Thank you very much for your tutorials which help me a lot. I am wondering if it is possible to use Tensorflow/Keras built-in APIs (e.g. tf.keras.metrics.Precision) to get the metrics (precision/ recall) for a sequence to sequence model, whose output label of each time-step is one-hot encoded as a multi-classes case? If the answer is Yes, do you have any reference or example I can refer to. If No, do you have any suggestion that I can try to get them. Thanks. □ Adrian Tam October 27, 2021 at 2:14 am # Yes, but no example yet. Thanks for your suggestion and we will consider that. ☆ YP Lai October 27, 2021 at 10:25 am # Got it. Thanks. 38. Beny November 16, 2021 at 8:36 pm # I got a new PC with tensorflow 2.7.0 and when I tried my old binary classification code (which was created by following this tutorial) I got the following error: AttributeError: ‘Sequential’ object has no attribute ‘predict_classes’ Any clue to solve this? □ Adrian Tam November 17, 2021 at 6:48 am # Yes, whenever you see: 1 y = model.predict_classes(X) replace it with 1 v = model.predict(X) y = np.argmax(v) This is because predict_classes() is removed in recent version of Keras API. ☆ Emml December 22, 2022 at 2:41 am # y = np.argmax(v) does not return the classes. ○ James Carmichael December 22, 2022 at 8:08 am # Hi Emml…What was the result you encountered from execution of the code? 39. Greg Werner November 20, 2021 at 10:27 am # So here is what I get from the article. A train/test split is performed. Then, the test set is used as both validation data during training and as test data, reporting the F1 score on what was called the test set in the original split. Is this proper to have the test set double as a validation set? Effectively I could say that you are reporting F1 score on the validation set (which happens to be the same as the test set). □ Adrian Tam November 20, 2021 at 1:42 pm # Personally I would say it is acceptable. The reason you want to do a validation or test is to get a metric to tell how good is your model. Hence you do not want your training set used for scoring as well because otherwise you can’t tell it is overfitting. 
But even so, the test during training is sampled on the test set, while the validation after training is using the full test set. Hence you're not looking at the same thing. 40. Lawrence March 6, 2022 at 1:44 am # I truly appreciate you Jason, you just made it possible for me to finish my programme, you are affecting a lot of us positively. Thanks a bunch □ James Carmichael March 6, 2022 at 1:06 pm # Awesome feedback Lawrence! 41. Sadegh July 8, 2022 at 6:28 am # Hello there, I'm really wondering how to use f1_score from the sklearn library as a metric in the Keras compile step, so that it can be used as ModelCheckpoint's monitor argument to save the model with the best f1_score and use it afterwards for prediction. I really need to find out how, or if you can tell me how to do something like this by making custom metrics? Best regards! 42. Asif Munir July 25, 2022 at 9:09 pm # AttributeError: ‘Sequential’ object has no attribute ‘predict_classes’ □ James Carmichael July 26, 2022 at 8:41 am # Hi Asif…Please elaborate on the error. Did you copy and paste the code or type it in? 43. Juan October 30, 2022 at 6:07 am # If we want to apply a classification model to imbalanced data we should – Use a cost matrix or apply weights for the classes. – Avoid using the Accuracy to optimize the model, and use the F1 instead, or even better the AUC ROC or the AUC PR. My question is… should we use only one of these recommendations or both simultaneously? I mean, if we are using the AUC_PR… do we still need to apply weights to the input? And vice versa, if we are using weights… is it still recommended to use the AUC_PR instead of simply the Accuracy? 44. Andrea July 28, 2024 at 1:33 am #

n_test = 500
trainX, testX = X[:n_test, :], X[n_test:, :]
trainy, testy = y[:n_test], y[n_test:]
return trainX, trainy, testX, testy

Cell In[62], line 4
return trainX, trainy, testX, testy
SyntaxError: ‘return’ outside function

How can I solve this?? Well done this tutorial □ James Carmichael July 28, 2024 at 4:36 am # Hi Andrea…The SyntaxError: 'return' outside function error occurs because the return statement is used outside of a function. In Python, return can only be used within a function to send back a result to the caller. To fix this, you need to wrap your code in a function definition. Here's an example of how to define a function that splits the data into training and test sets and then returns them:

def split_data(X, y, n_test=500):
    trainX, testX = X[:n_test, :], X[n_test:, :]
    trainy, testy = y[:n_test], y[n_test:]
    return trainX, trainy, testX, testy

# Example usage
trainX, trainy, testX, testy = split_data(X, y)

This way, the return statement is within the split_data function, and you can call this function to get your desired split of the data.
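On the question of monitoring F1 during training: one common pattern, sketched below with made-up names (X_val, y_val, best_f1_model.h5) and under the assumption of a binary classifier with a sigmoid output, is to compute F1 with scikit-learn inside a custom Keras callback at the end of each epoch and save the best model manually. This avoids the batch-averaging issue discussed earlier in the thread:

import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras

class F1Checkpoint(keras.callbacks.Callback):
    def __init__(self, X_val, y_val, path='best_f1_model.h5'):
        super().__init__()
        self.X_val, self.y_val, self.path = X_val, y_val, path
        self.best = -np.inf

    def on_epoch_end(self, epoch, logs=None):
        # predict probabilities on the whole validation set
        probs = self.model.predict(self.X_val, verbose=0)
        # threshold to crisp 0/1 labels (binary sigmoid output assumed)
        preds = (probs > 0.5).astype('int32').ravel()
        f1 = f1_score(self.y_val, preds)
        print(' - val_f1: %.4f' % f1)
        # keep the model with the best validation F1 seen so far
        if f1 > self.best:
            self.best = f1
            self.model.save(self.path)

# usage, assuming a compiled binary classifier called model:
# model.fit(X_train, y_train, epochs=20, callbacks=[F1Checkpoint(X_val, y_val)])

For a multi-class model, the thresholding line would be replaced by an argmax over the predicted probabilities, and f1_score would need an average argument such as average='macro'.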
{"url":"https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/","timestamp":"2024-11-09T09:40:08Z","content_type":"text/html","content_length":"782480","record_id":"<urn:uuid:2646ce6e-e4e8-4462-aec9-34547122e8b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00268.warc.gz"}
Bézier Award Thomas W. Sederberg, the 2013 Pierre Bézier Award Recipient The Bézier award committee has chosen to award the 2013 Bézier Award to Thomas W. Sederberg for his pioneering contributions to solid and physical modeling ranging from algebraic techniques and FFD to subdivision and T-splines. Thomas W. Sederberg was introduced to geometric modeling at Brigham Young University during his masters degree research on surface reconstruction from contour lines. His PhD thesis at Purdue University applied tools from classical algebraic geometry to computer aided geometric design. That thesis showed how to compute an exact implicit equation for Bézier curves and surfaces, and revealed that the implicit equation for a generic bicubic patch is degree 18, so two generic bicubic patches intersect in a curve of algebraic degree 324. He next invented piecewise algebraic surfaces, a technique for creating low-degree implicit surfaces suitable for free-form design. Piecewise algebraic surfaces are defined using trivariate Bézier solids, a tool he also used in Free-Form Deformation. The Bézier representation was also central to his invention of the method of Bézier-clipping, a series of algorithms for robustly computing intersections. A problem with implicitization is that the early methods, based on multivariate resultants, fail for surfaces that have base points, which many do. He solved this problem by inventing a method called “moving surfaces” that can elegantly implicitize a surface with base points. Applying this method to curves led to his discovery of the so-called mu-basis for parametric curves. His innovation of non-uniform Catmull-Clark surfaces provided a surface representation that is a superset of both Catmull-Clark surfaces and non-uniform bicubic B-spline surfaces. His 2003 invention of T-Splines allows for local refinement of a spline surface of arbitrary topology. A subsequent paper presented a method for representing the union of two NURBS models as a single watertight T-spline. He co-founded a company to commercialize T-splines, which was acquired by Autodesk in December 2011. Because T-splines provide local refinement and watertight models, they have proven to be ideal for isogeometric analysis, which avoids the need for meshing.
{"url":"http://solidmodeling.org/awards/bezier-award/thomas-w-sederberg/","timestamp":"2024-11-05T19:38:09Z","content_type":"text/html","content_length":"17293","record_id":"<urn:uuid:87154f66-8582-4cea-b66b-99b556e597f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00038.warc.gz"}
Scripting API For a given plane described by planeNormal and a given vector vector, Vector3.ProjectOnPlane generates a new vector orthogonal to planeNormal and parallel to the plane. Note: planeNormal does not need to be normalized. ''The red line represents vector, the yellow line represents planeNormal, and the blue line represents the projection of vector on the plane.'' The script example below makes Update generate a vector position, and a planeNormal normal. The Vector3.ProjectOnPlane static method receives the arguments and returns the Vector3 position. using System.Collections; using System.Collections.Generic; using UnityEngine; // Vector3.ProjectOnPlane - example // Generate a random plane in xy. Show the position of a random // vector and a connection to the plane. The example shows nothing // in the Game view but uses Update(). The script reference example // uses Gizmos to show the positions and axes in the Scene. public class Example : MonoBehaviour private Vector3 vector, planeNormal; private Vector3 response; private float radians; private float degrees; private float timer = 12345.0f; // Generate the values for all the examples. // Change the example every two seconds. void Update() if (timer > 2.0f) // Generate a position inside xy space. vector = new Vector3(Random.Range(-1.0f, 1.0f), Random.Range(-1.0f, 1.0f), 0.0f); // Compute a normal from the plane through the origin. degrees = Random.Range(-45.0f, 45.0f); radians = degrees * Mathf.Deg2Rad; planeNormal = new Vector3(Mathf.Cos(radians), Mathf.Sin(radians), 0.0f); // Obtain the ProjectOnPlane result. response = Vector3.ProjectOnPlane(vector, planeNormal); // Reset the timer. timer = 0.0f; timer += Time.deltaTime; // Show a Scene view example. void OnDrawGizmosSelected() // Left/right and up/down axes. Gizmos.color = Color.white; Gizmos.DrawLine(transform.position - new Vector3(2.25f, 0, 0), transform.position + new Vector3(2.25f, 0, 0)); Gizmos.DrawLine(transform.position - new Vector3(0, 1.75f, 0), transform.position + new Vector3(0, 1.75f, 0)); // Display the plane. Gizmos.color = Color.green; Vector3 angle = new Vector3(-1.75f * Mathf.Sin(radians), 1.75f * Mathf.Cos(radians), 0.0f); Gizmos.DrawLine(transform.position - angle, transform.position + angle); // Show the projection on the plane as a blue line. Gizmos.color = Color.blue; Gizmos.DrawLine(Vector3.zero, response); Gizmos.DrawSphere(response, 0.05f); // Show the vector perpendicular to the plane in yellow Gizmos.color = Color.yellow; Gizmos.DrawLine(vector, response); // Now show the input position. Gizmos.color = Color.red; Gizmos.DrawSphere(vector, 0.05f); Gizmos.DrawLine(Vector3.zero, vector);
{"url":"https://docs.unity3d.com/ScriptReference/Vector3.ProjectOnPlane.html","timestamp":"2024-11-14T01:24:46Z","content_type":"text/html","content_length":"21493","record_id":"<urn:uuid:164ee9ae-f69f-4cc8-9a38-ebe948250af2>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00119.warc.gz"}
I use ) and a bit, starting from when I did my PhD. It's a great document preparation system which has some annoying sides, but I won't go down that path. I tried to set up a glossary in a document recently, and it wasn't obvious to me how to do that. I did get it working though. Here's what I did, so that others (including myself) don't have to figure it out again: 1. Follow the steps provided at LaTeX Matters, namely: 1. Put two lines into your preamble (before \begin{document}): 2. When you define or introduce certain terms, wrap them in a \glossary command like so: \glossary{name={entry name}, description={entry description}} The inside curly braces are only necessary if you have commas in the entries. 3. Place this command where you want the glossary to appear: 4. If you want your glossary to have an entry in the Table of Contents, then put this with the previous line: 5. Then, instead of running a command as described in the post, follow the next step, which is equivalent, but for TeXnicCenter. 2. Prompted by Archiddeon's comment in a LaTex forum, in TeXnicCenter, add a post-processing step to a build profile: 1. Select the Build menu then choose Define Output Profiles... 2. On the left, select a profile you want to start from 3. Click the Copy button down below, enter a new name for the profile and click OK 4. Then click on the Postprocessor tab 5. Create a new postprocessor step by clicking the little "new folder" icon at the top right 6. Then enter a name like "Prepare the glossary" and the following: Executable: makeindex Arguments: "%tm.glo" -s "%tm.ist" -t "%tm.glg" -o "%tm.gls" 3. Click OK until you're back at the main window! Then you should be able to select the new profile and generate the document with a glossary! I'd run it 2-3 times to let the page numbers settle. Joy! Programming Praxis is a great site. The 147 problem was recently posted there, and I tackled it with a bit too much time... Following is the code, with discussion. (defun sum1/ (s) "Read the name as Sum the Inverses" (apply #'+ (mapcar #'/ s))) (defun limits (n) "I took the limits discussion on the site a bit further..." (loop for i from n downto 1 for prod = (apply #'* prods); Produces 1 to start with for prods = (cons (1+ prod) prods) for min = (min (car prods) n) for max = (max (* i prod) n) collect (list min max))) When calling limits with n = 5, we get ((2 5) (3 8) (5 18) (5 84) (5 1806)) which shows the maximum ranges of each number we'll search for. The actual search for solutions to 1/a+1/b+1/c+1/d+1/e = 1 then looks like this: (defun search1 (n &optional givens limits) "Actually finds all of the valid solutions to the 147 problem with n numbers. Call it without the optional arguments to start it off." (let ((limits (if (null givens) (limits n) limits))) ;; Limits is whittled down during the recursion, ;; so detect starting off by looking at givens. (if (= n (length givens)) (if (= 1 (sum1/ givens)) (list (reverse givens))) (loop for next from (max (caar limits) (if givens (car givens) 0)) to (cadar limits) for s = (cons next givens) until (or (every (lambda (i) (> i n)) s) (and (not (cdr limits)) (< (sum1/ s) 1))) appending (search1 n s (cdr limits)))))) This code times how long it takes to solve the first few solutions before the times get too big: (time (loop for n from 0 to 5 collect (nreverse (search1 n)))) Which gave 4.1 seconds on my computer. Note that the code works for n = 0 and 1 (producing the empty list, and a single solution: 1, respectively). 
As some other people have mentioned, Common Lisp is well thought out. I think this also supports that thought, as I didn't design the algorithm specifically to handle cases n = 0 and 1. And lastly, the following code finds the smallest distinct solutions, as described at Programming Praxis: (remove nil (mapcar (lambda (s) (if (apply #'/= s); Distinctness (cons s (apply #'+ s)))); Sum the denominators (search1 5))) #'< :key #'cdr); Sort by the sum of denominators Which gives (3 4 5 6 20), confirming our solution. Happy coding! This post follows on from when we discretised our traffic model equations. We now solve them in this post. The solving process is quite simple since we avoided most references to the future values of variables. If we did, we'd have to use matrices to solve the discretised model equations. As it is, we can simply calculate each value separately. The code is modeled around each cell, so here's the discretised cell equation: $$\rho(x_i,t_f) = \rho(x_i,t_c) - \frac{t_f-t_c}{x_{i+}-x_{i-}}\left(\rho(x_{i+},t_c) v(x_{i+},t_c) - \rho(x_{i-},t_c) v (x_{i-},t_c)\right).$$ where \(\rho\) = vehicular density (veh/m); \(t\) = time (s); \(t_c\) = the current time (s); \(t_f\) = the future time (s); \(x\) = distance (m) along road; \(i\) = the index of the current cell, ranging from 1 to \(n\); \(x_i\) = the location of the 'centroid' of the current cell (m); \(x_{i-}\) = the left (upstream) boundary of the current cell \(x_{i+}\) = \(v_f\) = free flow speed (m/s); \(\rho_j\) = jam density (veh/m), the average vehicles per metre in stationary traffic; and \(v\) = vehicle speed (m/s), and is given by: \(v = v_f(1-\rho/\rho_j)\). The left boundary is given by: $$\rho(0,t) = \rho_j/4.$$ Oh, and in case it isn't clear, the flow of traffic is in the direction of increasing \(x\) , which is to the right. That means that upstream is to the left, and downstream is to the right. The above equations have been implemented in Common Lisp, with MathP (but I'll wait till the end to show the final code). It outputs the values scaled up by 1000 for convenience. To call it, we evaluate something like this: (step-simulation :road-length 1000 :n 20 :dt 1.0 :time-run 200) Great! So, let's look at the results. Here is a visualisation of the results for the above simulation: Initial traffic simulation results. Time increases from top to bottom, distance increases from left to right. Light traffic is green, medium traffic is yellow, and heavier traffic is red. Not so good... there are fluctuations in the solution, causing negative vehicle densities. Thankfully, that can be fixed with an Upwind scheme. In our case, we just move the centroid more upstream within the cell. This removes the fluctuations, but probably increases error elsewhere in the simulation. The results with the Upwind scheme are: "Upwind" traffic simulation results. Time increases from top to bottom, distance increases from left to right. Light traffic is green, medium traffic is yellow, and heavier traffic is red. Looking good. The traffic on the left fills the road over time, and there are no fluctuations. 
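As a quick sanity check on the inflow boundary condition \(\rho(0,t) = \rho_j/4\), using the parameter values that appear in the code below (\(v_f\) = 60 km/h and \(\rho_j\) = 1 vehicle every 7 m): the inflow density \(\rho_j/4 \approx 0.036\) veh/m travels at \(v = v_f(1 - 1/4) = 45\) km/h, so the boundary feeds the road at \(\rho v = \tfrac{3}{16} v_f \rho_j \approx 0.45\) veh/s, roughly 1600 veh/h. The model's maximum possible flow, reached at \(\rho = \rho_j/2\), is \(v_f \rho_j/4 \approx 0.60\) veh/s, so the inflow sits at about three quarters of capacity, which is consistent with the way traffic fills in from the left in the plots above.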
The final code to implement this is: (defparameter *free-flow-speed* (/ 60 3.6); = 60 km/h "Speed (m/s) of freely flowing, low density traffic.") (defparameter *jam-density* (/ 7.0); = 1 veh every 7 m "Jam density (veh/m), the average vehicles per metre in stationary traffic.") (defparameter *road-length* 1000.0; = 1 km "Length (m) of road being simulated.") (defparameter *cell-count* 10 "Number of cells road is split into for simulation.") (defun step-simulation (&key (vf *free-flow-speed*) (pj *jam-density*) (road-length *road-length*) (n *cell-count*) (dt 1.0); Time step in seconds. (time-run 10.0); Time to run simulation for. (t0 0.0); Starting time for simulation. (upwind 0.5); Upwinding parameter [0 = only upstream, 1 = only downstream] p0; Initial road density for each cell. pin); Input density function (veh/m). Should take one argument - the time. dx = roadLength/(n-1); Spacing between cell centres. pin = if pin pin fn(_) pj/4.0 ;; Set the boundary density in the initial state. unless(p0 p0 := makeArray(n :initialElement 0.0) p0[0] := #!pin(t0)) v(p) = vf*(1-p/pj); Velocity as function of density. x(i) = (i-1)*dx; Cell centroid locations. xp(i) = if i<n (i-0.5)*dx roadLength; Plus, or right (downstream) boundary locations. xm(i) = if i>1 (i-1.5)*dx 0; Minus, or left (upstream) boundary locations. stepsim(pold atime dt) = makeVector(n fn(i) { pm = if i>1 (1-upwind)*pold[i-2]+upwind*pold[i-1] pold[0] pp = if i<n (1-upwind)*pold[i-1]+upwind*pold[i] pold[n-1] in = max(0 dt*pm*v(pm)) out= max(0 dt*pp*v(pp)) if i==1 #!pin(atime) pold[i-1] + (in - out)/(xp(i)-xm(i))}) #L(loop for atime from t0 to (+ t0 time-run) by dt for ps = p0 then (stepsim ps atime dt) do (loop for p across ps do (format t " ~4,1,3F" p) finally (terpri)))})
{"url":"http://blog.metalight.net/2013/02/","timestamp":"2024-11-10T19:02:08Z","content_type":"text/html","content_length":"72621","record_id":"<urn:uuid:15b4427d-c755-41f6-a111-d68440208a79>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00140.warc.gz"}
Items where Subject is "Mathematical Science" Number of items at this level: 73. A. Bolarinwa, Ismaila and T. Bolarinwa, Bushirat (2021) Incidental Parameters Problem: The Case of Gompertz Model. Asian Journal of Probability and Statistics, 15 (1). pp. 8-14. ISSN 2582-0230 Adebimpe, O. (2013) Stability Analysis of a SEIV Epidemic Model with Saturated Incidence Rate. British Journal of Mathematics & Computer Science, 4 (23). pp. 3358-3368. ISSN 22310851 Ahmed, Hanan and Shedeed, Howida and Hegazy, Doaa (2015) Enhanced Ray Tracing Algorithm for Depth Image Generation. British Journal of Mathematics & Computer Science, 11 (3). pp. 1-11. ISSN 22310851 Ajileye, G and Amoo, S and Ogwumu, O (2018) Two-Step Hybrid Block Method for Solving First Order Ordinary Differential Equations Using Power Series Approach. Journal of Advances in Mathematics and Computer Science, 28 (1). pp. 1-7. ISSN 24569968 An, Guoqiang and Chen, Miaochao (2022) Intelligent Image Analysis and Recognition Method for Art Design Majors. Advances in Mathematical Physics, 2022. pp. 1-9. ISSN 1687-9120 Andualem, Mulugeta and Asfaw, Atinafu (2020) Approximate solution of nonlinear ordinary differential equation using ZZ decomposition method. Open Journal of Mathematical Sciences, 4 (1). pp. 448-455. ISSN 26164906 Awoyemi, D. O. and Olanegan, O. O. and Akinduko, O. B. (2015) A 2-Step Four-Point Hybrid Linear Multistep Method for Solving Second Order Ordinary Differential Equations Using Taylor’s Series Approach. British Journal of Mathematics & Computer Science, 11 (3). pp. 1-13. ISSN 22310851 Baoku, I (2018) Influence of Chemical Reaction, Viscous Dissipation and Joule Heating on MHD Maxwell Fluid Flow with Velocity and Thermal Slip over a Stretching Sheet. Journal of Advances in Mathematics and Computer Science, 28 (1). pp. 1-20. ISSN 24569968 Baumann, Cyrill and Martinoli, Alcherio (2022) Spatial microscopic modeling of collective movements in multi-robot systems: Design choices and calibration. Frontiers in Robotics and AI, 9. ISSN Bolotin, Alexander (2014) The Possible Structure in the Distribution of Decimals in the Euler’s Number Expansion. British Journal of Mathematics & Computer Science, 4 (23). pp. 3328-3333. ISSN Carreras-Badosa, Gemma and Puerto-Carranza, Elsa and Mas-Parés, Berta and Gómez-Vilarrubla, Ariadna and Cebrià-Fondevila, Helena and Díaz-Roldán, Ferran and Riera-Pérez, Elena and de Zegher, Francis and Ibañez, Lourdes and Bassols, Judit and López-Bermejo, Abel (2023) Circulating free T3 associates longitudinally with cardio-metabolic risk factors in euthyroid children with higher TSH. Frontiers in Endocrinology, 14. ISSN 1664-2392 Chang, Chen-Hao and Casas, Jonathan and Brose, Steven W. and Duenas, Victor H. (2022) Closed-Loop Torque and Kinematic Control of a Hybrid Lower-Limb Exoskeleton for Treadmill Walking. Frontiers in Robotics and AI, 8. ISSN 2296-9144 Charles, Wachira M. and George, Lawi O. and Malinzi, J. (2018) A Spatiotemporal Model on the Transmission Dynamics of Zika Virus Disease. Asian Research Journal of Mathematics, 10 (4). pp. 1-15. ISSN Czarnecki, Maciej and Walczak, Szymon (2013) De Sitter Space as a Computational Tool for Surfaces and Foliations. American Journal of Computational Mathematics, 03 (01). pp. 1-5. ISSN 2161-1203 Dahbi, L and Meftah, M (2016) Mayer's Formula for Black Hole Thermodynamics in Constant Magnetic Field. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-12. 
ISSN 22310851 Dookhitram, Kumar (2024) Structural Equation Modelling of Time Banditry under the Theory of Planned Behaviour. B P International. ISBN 978-81-973924-5-0 Dumont, Ludovic and Lopez Maestre, Hélène and Chalmel, Frédéric and Huber, Louise and Rives-Feraille, Aurélie and Moutard, Laura and Bateux, Frédérique and Rondanino, Christine and Rives, Nathalie (2023) Throughout in vitro first spermatogenic wave: Next-generation sequencing gene expression patterns of fresh and cryopreserved prepubertal mice testicular tissue explants. Frontiers in Endocrinology, 14. ISSN 1664-2392 Eze, Nnaemeka M. and Ossai, Everestus O. and Ohanuba, Felix O. and Ezra, Precious N. and Ugwu, Samson O. and Asogwa, Oluchukwu C. (2023) Categorical Analysis of Variance on Knowledge, Compliance and Impact of Hand Hygiene among Healthcare Professionals during COVID-19 Outbreak in South-East, Nigeria. Asian Research Journal of Mathematics, 19 (2). pp. 36-53. ISSN 2456-477X Fujita, Takaaki (2024) Novel Idea on Edge-Ultrafilter and Edge-Tangle. Asian Research Journal of Mathematics, 20 (4). pp. 18-22. ISSN 2456-477X Gad, Moran and Lev-Ari, Ben and Shapiro, Amir and Ben-David, Coral and Riemer, Raziel (2022) Biomechanical knee energy harvester: Design optimization and testing. Frontiers in Robotics and AI, 9. ISSN 2296-9144 Garcia-Beltran, Cristina and Navarro-Gascon, Artur and López-Bermejo, Abel and Quesada-López, Tania and de Zegher, Francis and Ibáñez, Lourdes and Villarroya, Francesc (2023) Meteorin-like levels are associated with active brown adipose tissue in early infancy. Frontiers in Endocrinology, 14. ISSN 1664-2392 Gheorghe, Munteanu Bogdan (2021) Max Weibull-G Power Series Distributions. Asian Journal of Probability and Statistics, 15 (1). pp. 15-29. ISSN 2582-0230 Hsieh, Tsung-Cheng and Deng, Guang-Hong and Chang, Yung-Ching and Chang, Fang-Ling and He, Ming-Shan (2023) A real-world study for timely assessing the diabetic macular edema refractory to intravitreal anti-VEGF treatment. Frontiers in Endocrinology, 14. ISSN 1664-2392 I. Jingyi, L. (2018) On the Center Conditions of Certain Fifth Systems. Asian Research Journal of Mathematics, 10 (4). pp. 1-5. ISSN 2456477X Iberedem Aniefiok, Iwok and Nwikpe, Barinaadaa John (2021) The Iwok-Nwikpe Distribution: Statistical Properties and Its Application. Asian Journal of Probability and Statistics, 15 (1). pp. 35-45. ISSN 2582-0230 Ionescu, Adela and Coman, Daniela and Degeratu, Sonia (2015) Computational Analysis for the Dynamical System Associated to an Access Control Structure. British Journal of Mathematics & Computer Science, 11 (3). pp. 1-13. ISSN 22310851 Joshi, Mahesh C. and Kumar, Raj and Singh, Ram Bharat (2014) On a Weighted Retro Banach Frames for Discrete Signal Spaces. British Journal of Mathematics & Computer Science, 4 (23). pp. 3334-3344. ISSN 22310851 Kabir, K. H. and Alim, M. A. and Andallah, L. S. (2013) Effects of Viscous Dissipation on MHD Natural Convection Flow along a Vertical Wavy Surface with Heat Generation. American Journal of Computational Mathematics, 03 (02). pp. 91-98. ISSN 2161-1203 Kaleeswari, S and Selvaraj, B (2015) Oscillation Criteria for Higher Order Nonlinear Functional Difference Equations. British Journal of Mathematics & Computer Science, 11 (3). pp. 1-8. ISSN 22310851 Khalil, M and Ibrahim, Ahmed (2015) Quick Techniques for Template Matching by Normalized Cross-Correlation Method. British Journal of Mathematics & Computer Science, 11 (3). pp. 1-9. 
ISSN 22310851 Kim, Aeran (2018) The Convolution Sums with the Number of Representations of a Positive Integer as Sum of 6 Squares. Asian Research Journal of Mathematics, 10 (4). pp. 1-17. ISSN 2456477X Kong, Yuan and Liu, Xing-Chen and Dong, Huan-He and Liu, Ming-Shuo and Wei, Chun-Ming and Huang, Xiao-Qian and Fang, Yong and Ma, Wen-Xiu (2022) Single-Soliton Solution of KdV Equation via Hirota’s Direct Method under the Time Scale Framework. Advances in Mathematical Physics, 2022. pp. 1-8. ISSN 1687-9120 Li, Yanpeng (2016) A Possible Exact Solution for the Newtonian Constant of Gravity. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-25. ISSN 22310851 Li, Yue and Hu, Jian and Cao, Danqian and Wang, Stephen and Dasgupta, Prokar and Liu, Hongbin (2022) Optical-Waveguide Based Tactile Sensing for Surgical Instruments of Minimally Invasive Surgery. Frontiers in Robotics and AI, 8. ISSN 2296-9144 Liu, Ting and Lu, Weilin and Zhao, Xiaofang and Yao, Tianci and Song, Bei and Fan, Haohui and Gao, Guangyu and Liu, Chengyun (2023) Relationship between lipid accumulation product and new-onset diabetes in the Japanese population: a retrospective cohort study. Frontiers in Endocrinology, 14. ISSN 1664-2392 Luca, Rodica and Tudorache, Alexandru (2016) Existence and Nonexistence of Positive Solutions for a System of Higher-Order Differential Equations with Integral Boundary Conditions. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-10. ISSN 22310851 Mahama, François and Boahen, Patience and Saviour, Akuamoah and Tumaku, John (2016) Modeling Satisfaction Factors that Predict Students Choice of Private Hostels in a Ghanaian Polytechnic. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-11. ISSN 22310851 Malhotra, Kashish and Pan, Carina Synn Cuen and Davitadze, Meri and Kempegowda, Punith (2023) Identifying the challenges and opportunities of PCOS awareness month by analysing its global digital impact. Frontiers in Endocrinology, 14. ISSN 1664-2392 Mofarreh, Fatemah and Srivastava, Sachin Kumar and Dhiman, Mayrika and Othman, Wan Ainun Mior and Ali, Akram and Alomari, Mohammad (2022) Inequalities for the Class of Warped Product Submanifold of Para-Cosymplectic Manifolds. Advances in Mathematical Physics, 2022. pp. 1-13. ISSN 1687-9120 Mutlu, Ali and Mutlu, Berrin and Akda, Sevinç (2016) Using C-Class Function on Coupled Fixed Point Theorems for Mixed Monotone Mappings in Partially Ordered Rectangular Quasi Metric Spaces. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-9. ISSN 22310851 Mécheri, H. and Saadi, S. (2013) Overlapping Nonmatching Grid Method for the Ergodic Control Quasi Variational Inequalities. American Journal of Computational Mathematics, 03 (01). pp. 27-31. ISSN Ngari, Cyrus (2018) Modelling Vaccination and Treatment of Childhood Pneumonia and Their Implications. Journal of Advances in Mathematics and Computer Science, 28 (1). pp. 1-24. ISSN 24569968 Ngiangia, A. T. and Orukari, M. A. and Jim-George, F. (2018) Application of JWKB Method on the Effect of Magnetic Field on Alpha Decay. Asian Research Journal of Mathematics, 10 (4). pp. 1-9. ISSN Okyere, Gabriel and Bruku, Silverius and Tawiah, Richard and Biney, Gilbert (2018) Adaptive Robust Profile Analysis of a Longitudinal Data. Journal of Advances in Mathematics and Computer Science, 28 (1). pp. 1-18.
ISSN 24569968 Pecune, Florian and Callebert, Lucile and Marsella, Stacy (2022) Designing Persuasive Food Conversational Recommender Systems With Nudging and Socially-Aware Conversational Strategies. Frontiers in Robotics and AI, 8. ISSN 2296-9144 Prágr, Miloš and Bayer, Jan and Faigl, Jan (2022) Autonomous robotic exploration with simultaneous environment and traversability models learning. Frontiers in Robotics and AI, 9. ISSN 2296-9144 R., Tejaskumar and Ismayil, A. Mohamed (2024) Bandwagon Eccentric Domination Polynomial and its Energy in Graphs. Asian Research Journal of Mathematics, 20 (2). pp. 12-26. ISSN 2456-477X Roche, Christopher D. and Iyer, Gautam R. and Nguyen, Minh H. and Mabroora, Sohaima and Dome, Anthony and Sakr, Kareem and Pawar, Rohan and Lee, Vincent and Wilson, Christopher C. and Gentile, Carmine (2022) Cardiac Patch Transplantation Instruments for Robotic Minimally Invasive Cardiac Surgery: Initial Proof-of-concept Designs and Surgery in a Porcine Cadaver. Frontiers in Robotics and AI, 8. ISSN 2296-9144 Romano, Daniel A. and Jun, Young Bae (2020) Weak implicative UP-filters of UP-algebras. Open Journal of Mathematical Sciences, 4 (1). pp. 442-447. ISSN 26164906 Sarafian, Haiduke (2021) Alternate Cooling Model vs Newton’s Cooling. American Journal of Computational Mathematics, 11 (01). pp. 64-69. ISSN 2161-1203 Sarafian, Haiduke (2021) Nonlinear Electrostatic “Hesitant” Oscillator. American Journal of Computational Mathematics, 11 (01). pp. 42-52. ISSN 2161-1203 Sarafian, Haiduke (2021) What Projective Angle Makes the Arc-Length of the Trajectory in a Resistive Media Maximum? A Reverse Engineering Approach. American Journal of Computational Mathematics, 11 (02). pp. 71-82. ISSN 2161-1203 Sarduana, Apanapudor, Joshua and Ozioma, Ogoegbulem and Newton, Okposo, (2024) Modeling the Effect of Random Environmental Perturbation on Data Precision in Niger Delta Crude Oil Production. Asian Journal of Probability and Statistics, 26 (7). pp. 57-74. ISSN 2582-0230 Sasaki, Hiroo and Suga, Hidetaka and Takeuchi, Kazuhito and Nagata, Yuichi and Harada, Hideyuki and Kondo, Tatsuma and Ito, Eiji and Maeda, Sachi and Sakakibara, Mayu and Soen, Mika and Miwata, Tsutomu and Asano, Tomoyoshi and Ozaki, Hajime and Taga, Shiori and Kuwahara, Atsushi and Nakano, Tokushige and Arima, Hiroshi and Saito, Ryuta (2023) Subcutaneous transplantation of human embryonic stem cells-derived pituitary organoids. Frontiers in Endocrinology, 14. ISSN 1664-2392 Sen, Pulakesh and Datta, Sanjib Kumar (2014) A Computational Study on the Viscous in Compressible Laminar Separated Flow in a Unit Square Cavity. British Journal of Mathematics & Computer Science, 4 (23). pp. 3312-3327. ISSN 22310851 Shihab, Mahmood (2016) Square-Normal Operator. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-7. ISSN 22310851 Siddiqi, Shahid S. and Younis, Muhammad (2013) The m-Point Quaternary Approximating Subdivision Schemes. American Journal of Computational Mathematics, 03 (01). pp. 6-10. ISSN Singh, Sukh and Ughade, Manoj and Daheriya, R and Jain, Rashmi and Shrivastava, Suraj (2016) Coincidence Points & Common Fixed Points for Multiplicative Expansive Type Mappings. British Journal of Mathematics & Computer Science, 19 (3). pp. 1-14. ISSN 22310851 Sumbul-Sekerci, Betul and Sekerci, Abdusselam and Pasin, Ozge and Durmus, Ezgi and Yuksel-Salduz, Zeynep Irem (2023) Cognition and BDNF levels in prediabetes and diabetes: A mediation analysis of a cross-sectional study.
Frontiers in Endocrinology, 14. ISSN 1664-2392 Taher, A. H. S. and Malek, A. and Thabet, A. S. A. (2014) Semi-analytical Approximation for Solving High-order Sturm-Liouville Problems. British Journal of Mathematics & Computer Science, 4 (23). pp. 3345-3357. ISSN 22310851 Tang, Yuelong (2021) Convergence and Superconvergence of Fully Discrete Finite Element for Time Fractional Optimal Control Problems. American Journal of Computational Mathematics, 11 (01). pp. 53-63. ISSN 2161-1203 Tharwat, Assem and ZeinEldin, Ramadan and Khalifa, Hamiden and Saleim, Ahmed (2018) Fuzzy Risk Measure for Operational Risk. Journal of Advances in Mathematics and Computer Science, 28 (1). pp. 1-17. ISSN 24569968 Usubamatov, Ryspek and Kapayeva, Sarken and Fellah, Zine El Abiddine (2022) Inertial Forces and Torques Acting on a Spinning Annulus. Advances in Mathematical Physics, 2022. pp. 1-10. ISSN 1687-9120 Vazquez, Eduardo and Torres, Stephanie and Sanchez, Giovanny and Avalos, Juan-Gerardo and Abarca, Marco and Frias, Thania and Juarez, Emmanuel and Trejo, Carlos and Hernandez, Derlis (2022) Confidentiality in medical images through a genetic-based steganography algorithm in artificial intelligence. Frontiers in Robotics and AI, 9. ISSN 2296-9144 Verma, Ayush (2021) A Short Study on Bias Present in Classical Random Processes. Asian Journal of Probability and Statistics, 15 (1). pp. 30-34. ISSN 2582-0230 Wang, Peiheng and Wang, Shulei and Huang, Bo and Liu, Yiming and Liu, Yingchun and Chen, Huiming and Zhang, Junjun (2023) Clinicopathological features and prognosis of idiopathic membranous nephropathy with thyroid dysfunction. Frontiers in Endocrinology, 14. ISSN 1664-2392 Zeng, Lijiang (2018) Study for Uniform Convergence and Power Series. Asian Research Journal of Mathematics, 10 (4). pp. 1-13. ISSN 2456477X Zenkour, Ashraf M. (2010) Rotating Variable-Thickness Inhomogeneous Cylinders: Part II—Viscoelastic Solutions and Applications. Applied Mathematics, 01 (06). pp. 489-498. ISSN 2152-7385 Zenkour, Ashraf M. (2010) Rotating Variable-Thickness Inhomogeneous Cylinders: Part I—Analytical Elastic Solutions. Applied Mathematics, 01 (06). pp. 481-488. ISSN 2152-7385 Zhang, Gaopeng and Ding, Zhe and Yang, Junping and Wang, Tianqi and Tong, Li and Cheng, Jian and Zhang, Chao (2023) Higher visceral adiposity index was associated with an elevated prevalence of gallstones and an earlier age at first gallstone surgery in US adults: the results are based on a cross-sectional study. Frontiers in Endocrinology, 14. ISSN 1664-2392 Zhao, Xiaoxuan and Jiang, Yuepeng and Luo, Shiling and Zhao, Yang and Zhao, Hongli (2023) Intercellular communication involving macrophages at the maternal-fetal interface may be a pivotal mechanism of URSA: a novel discovery from transcriptomic data. Frontiers in Endocrinology, 14. ISSN 1664-2392 Zhou, Li and Zhu, Chuanxi and Shmarev, Sergey (2022) Ground State Solution for a Fourth Order Elliptic Equation of Kirchhoff Type with Critical Growth in ℝ N. Advances in Mathematical Physics, 2022. pp. 1-7. ISSN 1687-9120 Zoramawa, A. B. and Gulumbe, S. U. (2021) On Sequential Probability Sampling Plan for a Truncated Life Tests Using Rayleigh Distribution. Asian Journal of Probability and Statistics, 15 (1). pp. 1-7. ISSN 2582-0230
{"url":"http://library.2pressrelease.co.in/view/subjects/Mathematical=5FScience.html","timestamp":"2024-11-10T14:18:02Z","content_type":"application/xhtml+xml","content_length":"44400","record_id":"<urn:uuid:a37d5750-793b-49c2-9b27-ca074d8a18fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00345.warc.gz"}
The Grandaddy of Trig Identities!

The challenge was issued by the most brilliant of educators: BlueCereal – a genius among us smart folks! He suggested a post about our favorite content – challenge accepted, Mr. Cereal!

I taught high school math for over 18 years before embarking on my current journey as a full-time Ph.D. student at Oklahoma State University. During that time I taught almost every math course offered in high school: Geometry, Applied Geometry, Applied Math II and III, Advanced Algebra Trig, PreAP Precalculus, Calculus, Math Concepts, Algebra II, PreAP Algebra II, Algebra I, PreAP Algebra I, Algebra I-First Half, Algebra II Support… You get the picture – lots of math. Across all of those classes there is one I just love to teach – Calculus! I could teach this course 24/7/365! It is so darn interesting! The relationship of Calculus all the way back to the beginnings of Algebra I is fascinating! I could go on and on…

However, my most favorite lesson to teach, and it's not even close, is the Pythagorean trigonometric identities. These things have fascinated me since Lu Ireton introduced them to me in her Math Analysis class my junior year of high school. She is the teacher who propelled me to where I am today! THANK YOU Mrs. Ireton!!

It all starts with the Pythagorean theorem, a² + b² = c². Since a, b, and c are all variables, we can use any letter (or thing) to represent them. For our trigonometric purposes we will use the horizontal leg as x, the vertical leg as y, and the hypotenuse as r, and represent the triangle on the coordinate axes. The point (x, y) represents the two legs of our right triangle, and these two legs create the angle θ. If you were to draw a perpendicular line from the point (x, y) to the x-axis you would have a right triangle. Our equation has changed just a little bit to x² + y² = r².

Hang on – this is where stuff just gets super duper amazing! Let's investigate the specific value r = 1; this specific value will help some patterns be more visible. Comparing the side of the triangle with x, θ, and r, the ratio created is the cosine – remember r = 1, so cos θ = x/r = x. Using the same steps again, this time comparing y, θ, and r, we get a different ratio called the sine – and since we are choosing r = 1, sin θ = y/r = y. Notice what is happening in both of these instances: because r = 1, the ratios reduce and we have an equality for both the sine and the cosine.

cos θ = x and sin θ = y

When you have equality, you can exchange the items that are equal. Using and replacing only those things that are equal (see the cosine and sine equality statements above) in x² + y² = 1, you have a new and very powerful trigonometric identity: cos²θ + sin²θ = 1! This identity is the foundation for all other trigonometric identities! It can be used to make complicated problems simple!

Did I make a typo? Have a question? Want to expound on how brilliant this identity is? Leave a comment below…
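P.S. For anyone who wants the substitution written out step by step, here it is in LaTeX form – nothing beyond the steps described above, with r = 1:

```latex
% Right triangle with hypotenuse r on the coordinate axes, then set r = 1
x^{2} + y^{2} = r^{2} \;\xrightarrow{\,r=1\,}\; x^{2} + y^{2} = 1,
\qquad \cos\theta = \tfrac{x}{r} = x, \qquad \sin\theta = \tfrac{y}{r} = y.

% Substitute the equal quantities into x^2 + y^2 = 1:
\cos^{2}\theta + \sin^{2}\theta = 1.
```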
{"url":"http://teachingfromhere.com/the-grandaddy-of-trig-identities/","timestamp":"2024-11-09T09:35:50Z","content_type":"text/html","content_length":"97168","record_id":"<urn:uuid:63c965a1-4231-4777-9f0f-ef249677929c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00671.warc.gz"}
Disabling use of libimf and resolving missing math functions

10-12-2023 01:55 AM

Hi there,

I'm currently working on a project to move away from use of the Intel math library (libimf) on an Intel Fortran-compiled codebase. The aim is to switch to an open-source math library, as this gives us (better) consistency of results across CPU manufacturers and operating systems. I'm able to integrate the alternate math library into the Intel Fortran compilation. However, as soon as I disable the linking of libimf through the link option '-no-intel-lib=libimf', I get a lot of linking errors for undefined math functions.

error: undefined reference to '__powr8i4'
error: undefined reference to 'f_pow2i'
error: undefined reference to 'pow2o3'
error: undefined reference to '__powq'
error: undefined reference to '__sqrtq'

I presume that the Intel Fortran compiler assumes use of libimf and therefore emits calls to these optimised math functions. Is there a way to modify the compilation to avoid pulling in libimf math functions? Or can you tell me where I can find the libimf header file so I can reproduce such functions for my project?
{"url":"https://community.intel.com/t5/Intel-Fortran-Compiler/Disabling-use-of-libimf-and-resolving-missing-math-functions/td-p/1532986","timestamp":"2024-11-03T00:48:56Z","content_type":"text/html","content_length":"426878","record_id":"<urn:uuid:bc744e4b-3724-40fb-afe0-c785c4f65640>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00347.warc.gz"}
Formalism (philosophy of mathematics)

In the philosophy of mathematics, formalism is the view that holds that statements of mathematics and logic can be considered to be statements about the consequences of the manipulation of strings (alphanumeric sequences of symbols, usually as equations) using established manipulation rules. A central idea of formalism "is that mathematics is not a body of propositions representing an abstract sector of reality, but is much more akin to a game, bringing with it no more commitment to an ontology of objects or properties than ludo or chess."^[1] According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, or triangles or any other coextensive subject matter — in fact, they aren't "about" anything at all. Rather, mathematical statements are syntactic forms whose shapes and locations have no meaning unless they are given an interpretation (or semantics). In contrast to mathematical realism, logicism, or intuitionism, formalism's contours are less defined due to broad approaches that can be categorized as formalist. Along with realism and intuitionism, formalism is one of the main theories in the philosophy of mathematics that developed in the late nineteenth and early twentieth century. Among formalists, David Hilbert was the most prominent advocate.^[2]

Early formalism

The early mathematical formalists attempted "to block, avoid, or sidestep (in some way) any ontological commitment to a problematic realm of abstract objects."^[1] German mathematicians Eduard Heine and Carl Johannes Thomae are considered early advocates of mathematical formalism.^[1] Heine and Thomae's formalism can be found in Gottlob Frege's criticisms in The Foundations of Arithmetic. According to Alan Weir, the formalism of Heine and Thomae that Frege attacks can be "describe[d] as term formalism or game formalism."^[1] Term formalism is the view that mathematical expressions refer to symbols, not numbers. Heine expressed this view as follows: "When it comes to definition, I take a purely formal position, in that I call certain tangible signs numbers, so that the existence of these numbers is not in question."^[3] Thomae is characterized as a game formalist who claimed that "[f]or the formalist, arithmetic is a game with signs which are called empty. That means that they have no other content (in the calculating game) than they are assigned by their behaviour with respect to certain rules of combination (rules of the game)."^[4] Frege provides three criticisms of Heine and Thomae's formalism: "that [formalism] cannot account for the application of mathematics; that it confuses formal theory with metatheory; [and] that it can give no coherent explanation of the concept of an infinite sequence."^[5] Frege's criticism of Heine's formalism is that his formalism cannot account for infinite sequences. Dummett argues that more developed accounts of formalism than Heine's account could avoid Frege's objections by claiming they are concerned with abstract symbols rather than concrete objects.^[6] Frege objects to the comparison of formalism with that of a game, such as chess.^[7] Frege argues that Thomae's formalism fails to distinguish between game and theory.
Hilbert's formalism

A major figure of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics.^[8] Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent (i.e. no contradictions can be derived from the system). The way that Hilbert tried to show that an axiomatic system was consistent was by formalizing it using a particular language.^[9] In order to formalize an axiomatic system, you must first choose a language in which you can express and perform operations within that system. This language must include five components:
• It must include variables such as x, which can stand for some number.
• It must have quantifiers such as the symbol for the existence of an object.
• It must include equality.
• It must include connectives such as ↔ for "if and only if."
• It must include certain undefined terms called parameters. For geometry, these undefined terms might be something like a point or a line, which we still choose symbols for.
By adopting this language, Hilbert thought that we could prove all theorems within any axiomatic system using nothing more than the axioms themselves and the chosen formal language. Gödel's conclusion in his incompleteness theorems was that you cannot prove consistency within any consistent axiomatic system rich enough to include classical arithmetic. On the one hand, you must use only the formal language chosen to formalize this axiomatic system; on the other hand, it is impossible to prove the consistency of this language in itself.^[9] Hilbert was originally frustrated by Gödel's work because it shattered his life's goal to completely formalize everything in number theory.^[10] However, Gödel did not feel that he contradicted everything about Hilbert's formalist point of view.^[11] After Gödel published his work, it became apparent that proof theory still had some use, the only difference is that it could not be used to prove the consistency of all of number theory as Hilbert had hoped.^[10] Hilbert was initially a deductivist, but he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.

Further developments

Other formalists, such as Rudolf Carnap, considered mathematics to be the investigation of formal axiom systems.^[12] Haskell Curry defines mathematics as "the science of formal systems."^[13] Curry's formalism is unlike that of term formalists, game formalists, or Hilbert's formalism. For Curry, mathematical formalism is about the formal structure of mathematics and not about a formal system.^[13] Stewart Shapiro describes Curry's formalism as starting from the "historical thesis that as a branch of mathematics develops, it becomes more and more rigorous in its methodology, the end-result being the codification of the branch in formal deductive systems."^[14]

Criticisms of formalism

Kurt Gödel indicated one of the weak points of formalism by addressing the question of consistency in axiomatic systems.
Bertrand Russell has argued that formalism fails to explain what is meant by the linguistic application of numbers in statements such as "there are three men in the room".^[15]
{"url":"https://wiki2.org/en/Formalism_(mathematics)","timestamp":"2024-11-08T04:11:31Z","content_type":"application/xhtml+xml","content_length":"87267","record_id":"<urn:uuid:9c7c8409-e4cf-417f-8857-cc7306269103>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00153.warc.gz"}
Certificateless Signature Schemes
Posted on: 2020-09-07 | Degree: Master | Type: Thesis | Country: China | Candidate: C L Liu | Full Text: PDF | GTID: 2428330605950810 | Subject: Information and Communication Engineering

With the increasing dependence on the internet, people are paying ever more attention to information security issues. As an important technology of information security, the digital signature provides users with three technical guarantees: identity authentication, data integrity, and non-repudiation. Current research on digital signatures is mainly based on three different public key cryptography settings: traditional PKI-based public key cryptography, identity-based public key cryptography, and certificateless public key cryptography. PKI-based public key cryptography suffers from the public key (certificate) management problem, and identity-based public key cryptography suffers from the key escrow problem, while certificateless public key cryptography can effectively avoid both problems, so its security is comparatively higher. In addition, it is more efficient and has important practical research significance. The advantages of the certificateless public key cryptosystem — high reliability and high efficiency — are continuously reflected in practical applications. This thesis proposes two digital signature schemes: a linkable certificateless ring signature and a certificateless aggregate signature. Both inherit the high reliability and high efficiency of certificateless public key cryptography, and their privacy properties are further strengthened. The research work of this thesis is as follows:

(1) The research status of ring signatures and aggregate signatures is reviewed, and the mathematical background of the signature algorithms is introduced, including basic algebra, elliptic curves, bilinear maps, and the related hard problems.

(2) A linkable certificateless ring signature scheme is proposed. Combining the linkability property with a certificateless ring signature, the scheme overcomes the certificate management problem and the key escrow problem; it protects the privacy of the user while preventing abuse of the signing right, which reduces the dependence on trusted third parties and increases efficiency. Because bilinear pairings are computationally expensive, the scheme is built on the discrete logarithm problem without using bilinear pairings, which gives higher computational efficiency. In addition, the unforgeability of the scheme is proved in the random oracle model.

(3) A certificateless aggregate signature scheme is proposed. Combining the aggregate signature algorithm with certificateless public key cryptography, the scheme can aggregate n signatures on n different messages from n users into a single short signature, and it reduces the verification of n signatures to a single verification. This not only eliminates the reliance on trusted third parties but also greatly reduces computational overhead and bandwidth usage in resource-constrained environments. Based on the hardness of the computational Diffie-Hellman problem, the unforgeability of the scheme under the random oracle model is proved. The proposed certificateless aggregate signature scheme saves two bilinear pairing computations in the aggregate verification stage, so its computational efficiency is greatly improved and it is more practical.
Keywords/Search Tags: Certificateless Public Key Cryptosystem, Ring Signature, Aggregate Signature, Elliptic Curves, Bilinear Pairings
{"url":"https://globethesis.com/?t=2428330605950810","timestamp":"2024-11-13T22:34:53Z","content_type":"application/xhtml+xml","content_length":"8952","record_id":"<urn:uuid:a4e7c637-3ccb-474b-922a-91f2e763b5d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00123.warc.gz"}
Chapter 51: Pedagogical Integration As The Method Of Constructing Cross-Curriculum Modules Of Training Courses

The authors of the article present the significance of cross-curriculum modules in Russia and abroad for obtaining a new quality of education: a holistic perception of the world and the assimilation of universal educational activities and meta-subject competencies. The possibilities of pedagogical integration in the formation of cross-curriculum modules of physics and computer science have been analyzed. The cross-curriculum module is defined as a didactic superstructure over the learning subjects necessary for the formation of meta-subject competencies. A list of cross-curriculum modules of physics and computer science for general education is provided, obtained on the basis of the method of inter-structural integration - the integration of methods and means of computer science with the content of the physics course. The following cross-curriculum modules are shown: "Modeling in physics", "Tabular information model of the domain 'Dynamics'", "Solving tasks on the laws of conservation of momentum and energy using the computer", "Creating algorithms and spreadsheets of oscillatory motion", and "Creating a website layout on the topic 'Atomic and nuclear physics'". The content of the cross-curriculum module "Graphical Information Models of Mechanical Motion" is revealed. Examples of graphic information models of mechanical motion and the tasks necessary for students to learn them are provided. Methodical possibilities for including cross-curriculum modules in the system of teaching physics and computer science in general education are explained. It is shown how the study of these modules leads to the formation of meta-subject competencies in students, which proves the advantages of integrated modules over traditional teaching.

Starting from 2010, the Federal State Educational Standard for Basic General Education (FSES), among other requirements, establishes the formation of universal training activities (UTA), meta-subject competencies and expertise. The basic educational programme, according to FSES, should contain "... the description of the concepts, functions, structure and characteristics of the UTA (personal, regulative, cognitive and communicative) and their connection with the content of separate academic subjects, extracurricular activities, and the position of separate components of universal educational activities in the structure of educational activities" (Federal State Educational Standard of Basic General Education, 2010). The formation of such educational elements exceeds the educational potential of a single academic discipline. The formation of meta-subject competencies within integrated courses or cross-curriculum training modules is therefore considered the most effective approach.
It has been proven that cross-curriculum courses contribute to the development of cross-curriculum thinking among students (Pike & Selby, 1995; Тye, 2003). For example, the module "International Education", the subject of which provides a new quality of knowledge, contributes to a holistic perception of the world, assimilation of European competences has been created (Grzybowski, 2008). In our work, we will deal with the construction of cross-curriculum modules of physics and computer science based on the method of pedagogical integration. "Pedagogical integration is the restoration in the process of cognition of the naturally existing integrity of the object - an object, event, phenomenon or process, separated by a description in different sciences" (Federal'nyy gosudarstvennyy obrazovatel'nyy standart osnovnogo obshchego obrazovaniya, 2014). Pedagogical integration is a kind of scientific integration, which is carried out within the framework of pedagogical theory and practice [1, p.15]. The works of many outstanding scientists - didacticians and methodologists in the field of specific educational disciplines, in particular, physics and computer science (Bezrukova, 1990; Chapaev, 1992) are devoted to the pedagogical integration. The main ideas of scientific research include highlighting basic opportunities for the integration of academic disciplines based on their methodological foundations. Training courses with different integrity levels, including cross-curriculum modules are created by integrating academic disciplines. The problem of developing cross-curriculum modules of physics and computer science remains relevant today. Solving this problem contributes to the formation of universal learning activities, meta-subject competencies and the application of IT technologies in all spheres of the future adult life of a pupil. Problem Statement There is a contradiction between the requirements of the FSES to the formation of meta-subject competencies and the absence of cross-curriculum modules of academic disciplines, within which these competencies could be successfully formed. The cross-curriculum module is a didactic superstructure over the subjects which is necessary for meta-subject competencies formation. The development and implementation of cross-curriculum modules should, on the one hand, provide the implementation of FSES requirements, and on the other hand, be basedupon integral didactic capabilities of specific subjects. Each cross-curriculum module should provide the development of at least one meta-subject competence and respective cross-curriculum knowledge and skills. We have chosen the method of pedagogical integration for constructing a cross-curriculum module of physics and computer science, since it enables us to restore the unity and integrity of the objects of study by selecting individual subject elements. Research Questions In our study, we searched for answers to the questions: • What is the content, and what are the possibilities of pedagogical integration of school subjects? • What are the advantages of modular integration of physics and computer science? • What technological steps are required for cross-curriculum modules formation? • How many cross-curriculum modules of physics and computer science should be included for grade 9? • How to form the content of a specific cross-curriculum module of physics and computer science (on the example of the module "Graphical and informational models of mechanical motion")? 
Purpose of the Study Formation and justification of cross-curriculum modules of physics and computer science, which allow to form subject and meta-subject knowledge and skills of the above disciplines, based on pedagogical integration method. Research Methods The methods applied in our study are: • method of pedagogical integration for construction of cross-curriculum modules of training courses in physics and computer science; • methods of analysis and synthesis for identification of the content, capabilities and types of pedagogical integration of academic subjects, in particular - the integration of physics and computer science; • the method of generalization and concretization is used to justify the stages of creating cross-curriculum modules of academic disciplines; • the method of pedagogical experiment is used to test the quality of students' mastering subject and meta-subject knowledge and skills in the study of cross-curriculum modules by schoolchildren. As a result of the research, it was revealed that the difficulty of understanding mathematical terms and statements and the belief in the truth of mathematical statements based on verbal conformity to standards can be attributed to the mass difficulties of schoolchildren in solving mathematical problems. The positive experience of teaching mathematics in the school and teaching students in a pedagogical university, analyzed by us, suggests that both these problems are solved by reviewing the content of school mathematical education. Therefore, it seems reasonable to structure the content of education in relation to some mathematical facts, more precisely speaking, with respect to a system of (possibly excessive) mathematical artifacts specially selected so that the practice of handling them allows the child to accumulate the experience necessary for understanding mathematical terms and statements. For this, it is necessary to "legalize" the experimental method in teaching mathematics. Russian scientists distinguish three levels of pedagogical integration: the educational process, the content of education and the content of a particular subject (Bezrukova, 1990). The main feature of the content-based subject level of integration is the fusion of elements of different subjects (knowledge, skills, etc.) in one synthesized course, which lose their structural independence in the integrated academic subject. It is substantiated that educational subjects with a common object or theoretical concept, for example, natural science disciplines: physics, chemistry, biology, are well integrated. They have a common object of research - nature and general theoretical concepts - molecular-kinetic theory, electron theory of matter. The leading role in this integration belongs to physics, since many chemical and biological laws, phenomena and processes are explained on the basis of physical laws. Views on cross-curriculum integration have in many ways been expanded by the concepts of intra-structural, inter-structural and external integration. (Chapaev, 1992). Academic disciplines include didactic elements - what students should learn: knowledge, skills, while at the present time it is competences. Intra-structural integration involves the integration of the same elements from different disciplines, for example, knowledge with knowledge, skills with skills, etc. Inter-structural integration involves the integration of different elements from different disciplines, for example, knowledge with skills, etc. 
External integration means the combination of skills with organizational forms, or skills with teaching methods. As a result of cross-curriculum integration, all components of the content of objects are transformed or assimilated. In other words, a different degree of integrity can be achieved. Levels of integration of cross-curriculum modules for disciplines from different blocks will be lower, for example, natural science and polytechnics. Let us, as an example, consider the possibilities and results of integrating physics and computer science. Physics and computer science have different objects. Physics is the science of inanimate nature, in which the most general laws of nature are studied. Computer science is a science that systematizes the methods of creating, storing, processing and transferring information by means of computer technology, as well as the principles of the functioning of these tools and the methods of managing them. The object of physics is nature; the object of computer science is information. These disciplines have different methods of research: in physics - observation and experiment, in computer science - methods of working with information (accumulation, storage, processing, transmission, etc.). In this respect, the methods of these sciences are different. It is clear that these sciences do not have shared theoretical concepts. Physical science is based upon four fundamental physical theories: classical mechanics, statistical physics, classical electrodynamics and quantum physics. The theoretical basis of computer science is a group of fundamental sciences such as: information theory, theory of algorithms, mathematical logic, theory of formal languages and grammars, combinatorial analysis, etc. Thus, these sciences cannot form cross-curriculum modules with a high degree of integrity. The only exception to the entire computer science course is the study of physical media devices and the fundamentals of computer technology. In different classes this material is represented in one or several paragraphs of textbooks Computer science and ICT. The physical foundations of computer technology are studied throughout all years of computer science, but at different levels of content. In this topic, a schematic diagram of the computer and peripheral devices are presented as electrical devices. Their ability to receive, process, transform and transmit light, sound, electrical and magnetic signals - physical media - is explained. Integration of this topic with the topics of the physics course: "Laws of direct current", "Magnetic phenomena", "Light phenomena" (8-th grade), "Sound vibrations and waves" (9-th grade) should have the highest degree. However, it is generally accepted that the methods and means of computer science reach the end user as information technology (IT). The term "information technology" became widely used in the late 70's - early 80's due to the development of electronic technology, which allowed to quickly and efficiently process a wide range of information. According to UNESCO definition, "Information technology is a complex of interrelated scientific, technological, engineering disciplines, which study methods of effective labor organization of people engaged in processing and storing information; computing techniques and methods of organization and interaction with people and production machinery, their practical applications, as well as related social, economic and cultural issues " (Definition of information technologies adopted by UNESCO). 
The above-mentioned possibilities for integrating sciences physics and computer science are also characteristic of the corresponding academic disciplines. In terms of pedagogical integration, these subjects are not related, thus their external integration is possible, suggesting the transfer of methods and means of one discipline to another. In practice, there is a transfer of methods, technologies and means of computer science to the content area of the educational subject physics, which makes it possible to learn physical concepts and laws more successfully or a transfer of teaching aids, mainly physical tasks, to the informative part of computer science, which allows students to successfully form skills and competencies in the information technology. This approach was previously considered in our works (Mashinyan & Kochergina, 2017a, 2017b). When you combine the methods of one discipline with the content of the other, the inter-structural integration is realized. The development of cross-curriculum modules involves modeling simultaneous (or close-in-time) learning of related topics in two or more subject areas on the basis of a single module content, agreed methodology, and considering the interpenetration of tasks into each subject area. Stages of creating across-curriculum module: • Identification the FSES requirements, the implementation of which is complicated in the context of a separate study of physics and computer science. • Determining the content of the module, the learning of which is necessary for the development of FSES meta-subject competencies. • Methodical development of module implementation. In accordance with the above stages, a list of cross-curriculum physics and computer science modules for grade 9 has been constructed (table). Table 1 - See Full Size > The introduction of cross-curriculum modules involves solving a number of fundamental issues. First - at the expense of what hours and in the study of what topics is it possible to implement the suggested module? Second, is there a need for the time correction of the study of the subject modules for a coordinated study of the cross-curriculum module? Third - how in the methodical plan to provide the fulfilment of the principle of complementarity in the process of implementation of the cross-curriculum module in terms of separate subjects (physics and computer science)? The fourth is how to provide technological correction of the results of studying the cross-curriculum module in two academic subjects? Fifth - what is the final result of studying the cross-curriculum module for physics and for computer science? We will try to respond to all the questions. Cross-curriculum modules of physics and computer science (there are 7 in total) can be implemented at the expense of the hours devoted to revision and summarizing knowledge in each academic subject. This is quite realistic, since most school teachers teach both physics and computer science. Another possible way of cross-curriculum modules implementation is making them part of elective courses. Elective courses are an integral component of the educational process of the modern school, especially in lyceums. As can be seen from Table 1 , the topics of physics and computer science courses are studied almost concurrently during the school year. Therefore, there is no need for the correction of time of the study of subject modules. 
In the methodical terms, the principle of complementarity is already fulfilled when the topics of the physics and computer science courses are coordinated. Technological correction of the results of the study of the cross-curriculum module can be performed using inverse connection of two subjects, during diagnostic work at intermediate stages. Based on diagnostic results, corrections and additions are made to the content and methodology of the cross-curriculum module. The formation of the meta-subject competencies specified in the Federal state educational standard is considered as the final result of the study of the cross-curriculum module for physics and computer science. Let us consider an example of the cross-curriculum module of physics and computer science using the module "Graphical and informational models of mechanical motion" for the 9th form of the general education school: In the process of studying this module, students learn the basic concepts of computer science and physics, meta-subject competencies, including knowledge and skills. Basic concepts of computer science: scheme, map, drawing, graphic, diagram, graph, network, tree. Basic concepts of physics: material point, coordinate system, reference frame, trajectory, path, displacement, speed, uniform motion, uneven movement, acceleration, uniformly accelerated motion, laws of velocity and displacement change at uniformly accelerated motion, uniform motion along the circumference. Basic meta-project concepts: modeling, formalization. Meta-project skills: the ability to build graphic information models in the study of mechanical motion. The main methodological idea is to study the mechanical movement using all graphic information models, concentrating the students' attention on the content of both physical and computer science concepts, generate in students the ability to build graphic information models. The application of modeling in the teaching of natural sciences is widely used in foreign schools (Afzal, Safdar & Ambreen 2015; Gaidule & Heidingers, 2015; Wee & Leong, 2015), especially when organizing problem training (Ageorges, Bacila, Poutot & Blandin, 2014; Kipkorir, 2015; Zeitoun & Hajo, 2015; Schütte & Köller, 2015). By the time of training in the 9th form, students have a conception of the simplest graphic models: a map a geographical or topographic map as an example, a drawing, an exact picture of technical details, various schemes, as less accurate than the drawing examples. The position of material points, the movement of the body as a segment that connects its initial and final position, the trajectory of the body's movement, the path travelled can be shown on a simple geographical map. To form meta-subject skills, the following tasks are offered to students: 1. The plane is flying from Moscow to Vladivostok with a landing in Yakutsk. Draw on the map of the Russian Federation the trajectory and vector of the movement of the plane. Given the scale of the map, determine the displacement module and the path of the aircraft. 2. Build a flat XOY reference system in the graphic editor. Depict the displacement of a material point from A (2.3) to B (5.6) in it. Construct a displacement vector, find the projections of this vector on the axis OX and OY. Find the modulus of the displacement vector. Perform this task when moving a material point from A (7, 8) to B (1, 1). To illustrate the concepts of displacement, projection of movement on a computer in a given reference system, a drawing is performed (Fig. 1). 
Figure 1: Drawing of the displacement vector in the coordinate system

At the very beginning of the physics course, students should learn how to build schemes. A scheme or schematic drawing reflects the basic concepts and the connections between them at a qualitative level. The scheme shows the physical quantities in the form of vectors, and their location in space is taken into account. Schemes are particularly essential in the study of physical phenomena, the derivation of physical laws and the solution of physical tasks. Skill in constructing schemes is repeatedly worked out when solving physical tasks. The scheme (a schematic drawing of the condition and requirements of the physical task) is its simplest model. Without the use of schemes, a physical task, especially one with a high degree of difficulty, cannot be solved.

Graphs are more complex models of physical phenomena that reflect the connections between physical quantities. Reading and constructing graphs is a most important polytechnical, and therefore cross-curriculum, skill. For example, the figure shows a graph of the dependence of the projection of velocity on time during uniformly accelerated motion (Figure 2):

Figure 2: Graph of the dependence of the projection of the velocity of uniformly accelerated motion on time.

Graphs are used to derive physical laws (Figure 3). For example, the graph of the dependence of speed on time during uniformly accelerated motion, $v_x = v_{0x} + a_x t$, is used to derive the equation of uniformly accelerated motion: $s_x = v_{0x} t + \frac{a_x t^2}{2}$

Figure 3: Derivation of the law of uniformly accelerated motion on the basis of the graph.

The geometric meaning of the area of the figure located under the graph of the dependence of the projection of velocity on time is the projection of the displacement. To develop the skills of building graphs, students are given tasks:
1. In the coordinate system, plot the velocity versus time curves: v = 5; v = 1 + 2t; v = 10 - 5t. Describe each of these types of movement.
2. In the coordinate system, plot the motion graphs: x = 5 + 3t; x = 2 + 3t + t². What kinds of movement are these?

A diagram is a kind of graphical information model that reflects the quantitative relationships between homogeneous objects. There are the following types of diagrams: columnar, linear, with regions, circular, XY graph, radial, and also point, bubble, speedometer, columnar-linear, and pyramid. In general education physics, all the considered types of diagrams should be applied. To learn the concept of the diagram and the ability to build them, students get tasks:
• In the graphical editor, draw a diagram comparing the speeds of the pedestrian, bicyclist, car, airplane and missile. Show in the diagram the approximate values of the speeds of movement of these bodies.
• Using a point diagram of the dependence of the path on time for uniformly accelerated motion with zero initial velocity, formulate the regularity of this type of movement.

Another kind of model is the graph. A graph consists of vertices connected by lines (edges). The vertices of the graph are represented by circles, ovals, dots, rectangles, etc. The objects are represented as the vertices of the graph, and the links between them are represented as its edges. A graph is called weighted if its vertices or edges are characterized by some additional information - the weights of vertices or edges.
For example, the figure can depict the cities and the distances between them in kilometers, the relationship between physical quantities in physical laws. Basic concepts of graphs: a chain and a cycle. A chain is a path along the vertices and edges of a graph into which any edge of the graph enters no more than once. A cycle is a chain whose initial and final vertices coincide. The graph model has been effectively used in physics for a very long time. For example, a network depicting a graph with a cycle is used to represent any cyclic process (uniform motion of a material point along a circle). A tree is a graph without cycles - it is used to establish a hierarchy of physical concepts. Thus, all kinds of mechanical motion and the connection between them can be represented as a graph-tree (Fig. 4) Figure 4: Graph-tree "Types of mechanical motion." See Full Size > To learn the concept of graph and the ability to construct graphs, following assignments are offered to students: • Build a graph on the computer for the concepts of uniformly accelerated motion. Describe the connections between concepts. • Construct a graph that reflects the relationship between the angular and linear characteristics of the uniform motion of a material point along the circumference. The pedagogical experiment on the students' acquiring the knowledge of subject and skills in physics and computer science, as well as meta-subject knowledge and skills related to the study of integrated modules, was conducted in the schools of Moscow - the experimental sites of the Institute for the Development of Education during 2016-2018. The results of the formation of the subject knowledge and skills in physics and computer science, meta-subject knowledge in experimental and control classes convincingly prove the advantages of implementing integrated modules. To summarize, the research that was conducted shows the evidence of both similarities and differences among the employers/media professionals’ versus higher education professors’ versus students’ understanding of modernizing the current journalist education process. According to the employers, the young journalists’ lack of applicable skills needs to be compensated by expanding the types of hands-on training, which should focus on new multimedia specializations within the journalist profession. About this article Publication Date 21 September 2018 Article Doi eBook ISBN Edition Number 1st Edition Education, educational equipment, educational technology, computer-aided learning (CAL), Study skills, learning skills, ICT Cite this article as: Mashinyan, A. A., & Kochergina, N. V. (2018). Pedagogical Integration As The Method Of Constructing Cross-Curriculum Modules Of Training Courses. In S. K. Lo (Ed.), Education Environment for the Information Age, vol 46. European Proceedings of Social and Behavioural Sciences (pp. 433-444). Future Academy. https://doi.org/10.15405/epsbs.2018.09.02.51
{"url":"https://www.europeanproceedings.com/article/10.15405/epsbs.2018.09.02.51","timestamp":"2024-11-04T18:16:35Z","content_type":"text/html","content_length":"76041","record_id":"<urn:uuid:e1269416-646c-4697-80de-e2206d405dc4>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00482.warc.gz"}
Continued fractions with various algorithms

Hi, can I find a package that allows computing continued fractions and convergents with an algorithm different from the Euclidean division? Nearest integer, for example, or others...

I have code to deal with Multidimensional Continued Fraction Algorithms in my optional package but nothing about other 1D variants, sorry.

Thanks Sebastien, good to know anyway.
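In the meantime, the nearest-integer variant is short enough to code directly. The sketch below is plain Python rather than a Sage package; the function name and the convergent bookkeeping are just for illustration, and ties at .5 are resolved by Python's round().

```python
from fractions import Fraction

def nicf(x, max_terms=12):
    """Nearest-integer continued fraction of x:
    x = a0 + 1/(a1 + 1/(a2 + ...)), each ak = nearest integer to the tail
    (so the ak may be negative). Returns partial quotients and convergents."""
    x = Fraction(x)
    quotients, convergents = [], []
    p_prev, p = 0, 1          # p_{-2}, p_{-1}
    q_prev, q = 1, 0          # q_{-2}, q_{-1}
    for _ in range(max_terms):
        a = round(x)                     # nearest integer partial quotient
        quotients.append(a)
        p_prev, p = p, a * p + p_prev    # standard convergent recurrence
        q_prev, q = q, a * q + q_prev
        convergents.append(Fraction(p, q))
        if x == a:                       # exact tail: expansion terminates
            break
        x = 1 / (x - a)
    return quotients, convergents

qs, cs = nicf(Fraction(649, 200))
print(qs)
print([str(c) for c in cs])
```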
{"url":"https://ask.sagemath.org/question/50749/continued-fractions-with-various-algorithms/","timestamp":"2024-11-07T00:12:26Z","content_type":"application/xhtml+xml","content_length":"49358","record_id":"<urn:uuid:79dbcb38-24c4-422a-af7d-31cf43566939>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00125.warc.gz"}
Handbook of Methods

The Job Openings and Labor Turnover Survey (JOLTS) program uses the following methodologies to generate its estimates. The methodologies below are presented in their order of operation.

National-level estimation

Estimation of JOLTS data at the national level involves the following processes: unit nonresponse adjustment, item nonresponse adjustment, monthly benchmarking and estimation, automatic outlier detection, birth and death model estimation, estimates review and outlier selection, alignment, seasonal adjustment, and variance estimation. Establishment size class levels are also produced. These processes are described in detail below.

Unit nonresponse adjustment

A multiplicative nonresponse adjustment factor (NRAF) is used to inflate the weight of respondents in an estimation cell to adjust for nonrespondents. The weight of all nonrespondents is redistributed among the respondents to preserve the total weighted employment of the cell. The NRAF is calculated by dividing the weighted frame employment of the viable establishments in the cell by the weighted frame employment of the usable sample units in the cell:

NRAF_cell = [ Σ over viable units i in the cell of (w_i × emp_i) ] / [ Σ over usable units i in the cell of (w_i × emp_i) ]

where
• the subscript "cell" denotes the industry division, census region, and establishment size,
• i designates the i-th establishment,
• viable designates those in-scope sampled units which are capable of reporting; that is, sampled units that are not out of business, out of scope, or duplicates,
• usable designates a subset of viable units, that is, those units which responded to the JOLTS with usable data,
• emp_i is the sample frame employment of the i-th unit, and
• w_i is the sampling weight of the i-th unit.

Note: By definition, NRAF >= 1, since the usable units are a subset of the viable units.

Item nonresponse adjustment

Item nonresponse occurs when a respondent reports some of the JOLTS data elements, but not others. When a respondent only partially reports JOLTS data, the missing data must be replaced. The replacement of missing data mitigates bias, increases statistical efficiency, and increases the ease of data analysis. Imputation is the process by which missing values are replaced by an estimate based on other available data. To impute data elements that have not been reported, the JOLTS program classifies establishments based on their employment dynamic—expanding, stable, or contracting—and imputes items within those groups. Thus, expanding establishments donate estimated item values to expanding establishments, stable to stable, and contracting to contracting. Drawing imputed values from a model-based donor distribution derived from reported data within a dynamic grouping reduces variation in the estimates. The imputation model also ensures that imputed data within dynamic groups are consistent with reported data within the corresponding groups without biasing the means of the data elements or substantially lowering their variances.

Imputation methodology

The imputation methodology produces three separate models for each of the JOLTS industry imputation cells. One model is based on the respondent rate distribution of stable establishments, a second on the respondent rate distribution of expanding establishments, and a third on the respondent rate distribution of contracting establishments. The employment dynamics classification is based on the reported over-the-month employment change of the respondents.
The purpose of the models is to estimate vital characteristics of the entire distribution (mean, standard deviation, skewness) based on full-respondent data and then to impute missing values using a random draw from the estimated distribution. Suppose that establishment i belongs to industry imputation cell id for a given month t. JOLTS item imputation is concerned only with those sampled establishments that reported at least employment; complete nonrespondents are accounted for in JOLTS using the nonresponse adjustment factor (NRAF). Therefore, for each establishment i in cell id, let e(i,t) denote the reported employment for month t and e(i,t−1) the reported employment for the previous month. We can then define the employment change as

Δe(i,t) = e(i,t) − e(i,t−1).

The JOLTS imputation methodology subdivides the current industry imputation cell into three parts based on the reported employment change of each respondent establishment:
• Δe(i,t) = 0 defines the stable group,
• Δe(i,t) > 0 defines the expanding group, and
• Δe(i,t) < 0 defines the contracting group,
with donor rates r(i), i = 1,…,n, given n donor establishments within the group for each industry.

A simple model of the item respondent rate distribution is then constructed based on the known distribution characteristics of the item respondent rates for each group. Independent draws from the model distribution are then applied to the imputation recipient's employment to produce imputed levels. The variables of interest within the actual reported donor data used to construct the model distribution are as follows:
1) the mean rate of the item within each group (µ), calculated as the sum of the item over all item respondents divided by the employment of all item respondents;
2) the standard deviation (σ) of the absolute differences between each item respondent's rate and the cell mean within each group, that is, of ABS(r(i) − µ) for each i in each id/group/item cell; and
3) the percentage of item respondents who report rates below the cell mean within each group (β).

It is important to note that the distribution of model rates is non-Normal. The distributions more resemble a skewed Uniform distribution, typically with the bulk of the distribution below the mean and with a thin long tail above the mean. Hence, the data from respondents are used to estimate the model attributes for each model (expanding, contracting, and stable) as in the following illustration:

µ: the mean rate of the item within each group.
β: the percentage of item respondents who report rates below the cell mean within each group.
1−β: the percentage of item respondents who report rates above the cell mean within each group.
ψ: this variable is based on the observed standard deviation of the absolute value of the distance between reported rates and the cell mean, and it takes into account establishment size class. The value of ψ, in effect, determines the level of potential skewness of the model.
U(0,µ): a random uniform distribution of rates from 0 to µ.
U(µ,ψ): a random uniform distribution of rates from µ to ψ.

Imputed rate values are generated using two random draws from uniform distributions. As the first step, a random draw from U(0,1) is made:
a. If the U(0,1) draw is <= β, then a random draw from U(0,µ) is made.
   i. The imputed rate is equal to the random draw from U(0,µ).
b. If the U(0,1) draw is > β, then a random draw from U(µ,ψ) is made.
   i. The imputed rate is equal to the random draw from U(µ,ψ).

1. Example of the imputation method for contracting establishments, i.e., those establishments for which employment declined over the month.
This example uses establishment data for a randomly chosen month (ID 44 Retail Trade, April 2011). We begin by sorting item respondents and item nonrespondents into their proper group based on their over-the-month employment change. We next summarize the set of item respondents with respect to the mean rate of the items within each group (), the standard deviation of the absolute differences between item respondent rate and the cell mean within each group (), and the percentage of item respondents who report rates above the cell mean within each group ( ). Group: Contracting establishments ID: 44 Retail trade Month: April 2011 Data element N μ Σ β Job openings 133 3.99% 5.43% 76% Hires 198 3.59% 5.61% 40% Quits 174 2.40% 3.98% 59% Layoffs and discharges 174 1.34% 31.29% 80% Other separations 170 0.62% 49.76% 92% Group: Expanding establishments ID: 44 Retail trade Month: April 2011 Data element N μ Σ β Job openings 173 4.70% 3.05% 85% Hires 257 4.43% 13.56% 50% Quits 218 1.61% 2.07% 84% Layoffs and discharges 218 1.01% 1.21% 85% Other separations 212 0.07% 0.59% 94% Group: Stable establishments ID: 44 Retail trade Month: April 2011 Data element N μ Σ β Job openings 327 1.22% 3.05% 94% Hires 351 2.05% 4.08% 85% Quits 342 1.42% 3.54% 85% Layoffs and discharges 342 0.69% 5.40% 93% Other separations 340 0.14% 0.36% 97% Each dynamic group differs with respect to the relationship between the cell means of hires and total separations. In the contracting group, the mean hires rate (3.59 percent) is less than the sum of the separations component means (4.36 percent). The expanding group has a mean hires rate (4.43 percent) greater than the sum of the separations component means (2.68 percent), while the mean hires rate of stable establishments (2.05 percent) is approximately equal to the sum of the separations component means (2.25 percent). The JOLTS variable total separations represents the summation of its components of quits, layoffs and discharges, and other separations. If total separations are reported and any component of total separations is not, then the imputed components levels will be prorated to the reported total separations level. Next, in step 2, we construct the length of the proposed model distribution () using the observed deviations (σ). The length (ψ) is equal to the value 1.645*(σ). Group: Contracting establishments ID: 44 Retail trade Month: April 2011 Data element N μ σ ψ β Job openings 133 3.99% 5.43% 8.93% 76% Hires 198 3.59% 5.61% 9.23% 40% Quits 174 2.40% 3.98% 8.37% 59% Layoffs and discharges 174 1.34% 31.29% 51.89% 80% Other separations 170 0.62% 49.76% 81.86% 92% Group: Expanding establishments ID: 44 Retail trade Month: April 2011 Data element N μ σ ψ β Job openings 173 4.70% 3.05% 5.01% 85% Hires 257 4.43% 13.56% 22.30% 50% Quits 218 1.61% 2.07% 3.41% 84% Layoffs and discharges 218 1.01% 1.21% 1.99% 85% Other separations 212 0.07% 0.59% 0.97% 94% Group: Stable establishments ID: 44 Retail trade Month: April 2011 Data element N μ σ ψ β Job openings 327 1.22% 3.05% 5.02% 94% Hires 351 2.05% 4.08% 6.71% 85% Quits 342 1.42% 3.54% 5.83% 85% Layoffs and discharges 342 0.69% 5.40% 8.88% 93% Other separations 340 0.14% 0.36% 0.97% 97% Finally, in step 3, we construct the two uniform distributions from which imputed rates can be drawn. 
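Written compactly — this is only a restatement of steps 1–3 above in formula form, not additional methodology (and ψ may later be capped by the size-class length, as the worked example shows) — the model distribution for an imputed rate r is:

```latex
% Imputation model for one id/group cell: a two-piece uniform mixture
r \sim
\begin{cases}
U(0,\mu) & \text{with probability } \beta \quad (\text{below the cell mean}),\\[2pt]
U(\mu,\psi) & \text{with probability } 1-\beta \quad (\text{above the cell mean}),
\end{cases}
\qquad \text{imputed level} = r \times \text{reported employment}.
```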
Finally, in step 3, we construct the two uniform distributions from which imputed rates can be drawn. With probability β (the probability that an item respondent within the group has reported less than the cell mean) we will draw from U(0,µ), and with probability 1−β we will draw from U(µ,ψ).

Group: Contracting establishments, ID: 44 Retail trade, Month: April 2011

Data element              μ       ψ        β     Below            (1−β)   Above
Job openings              3.99%   8.93%    76%   U(0,3.99%)       24%     U(3.99%,8.93%)
Hires                     3.59%   9.23%    40%   U(0,3.59%)       30%     U(3.59%,9.23%)
Quits                     2.40%   8.37%    59%   U(0,3.98%)       41%     U(3.98%,8.37%)
Layoffs and discharges    1.34%   51.89%   80%   U(0,1.34%)       20%     U(1.34%,51.89%)
Other separations         0.62%   81.86%   92%   U(0,0.62%)       8%      U(0.62%,81.86%)

Group: Expanding establishments, ID: 44 Retail trade, Month: April 2011

Data element              μ       ψ        β     Below            (1−β)   Above
Job openings              4.70%   5.01%    85%   U(0,4.70%)       15%     U(4.70%,5.01%)
Hires                     4.43%   22.30%   50%   U(0,4.43%)       50%     U(4.43%,22.30%)
Quits                     1.61%   3.41%    84%   U(0,1.61%)       36%     U(1.61%,3.41%)
Layoffs and discharges    1.01%   1.99%    85%   U(0,1.01%)       15%     U(1.01%,1.99%)
Other separations         0.07%   0.97%    94%   U(0,0.07%)       6%      U(0.07%,0.97%)

Group: Stable establishments, ID: 44 Retail trade, Month: April 2011

Data element              μ       ψ        β     Below            (1−β)   Above
Job openings              1.22%   5.02%    94%   U(0,1.22%)       6%      U(1.22%,5.02%)
Hires                     2.05%   6.71%    85%   U(0,2.05%)       15%     U(2.05%,8.16%)
Quits                     1.42%   5.83%    85%   U(0,1.42%)       15%     U(1.42%,5.83%)
Layoffs and discharges    0.69%   8.88%    93%   U(0,0.69%)       7%      U(0.69%,8.88%)
Other separations         0.14%   0.97%    97%   U(0,0.14%)       3%      U(0.14%,0.59%)

The final data required to produce imputed rates are the lengths of the six size class distributions (ψ by size class) for each variable. These lengths are calculated the same way as the group lengths; however, these lengths are based on the data across all industries and groups for a given month. The following matrix details the size class lengths (size × variable matrix), where ψ is based on the observed standard deviations:

Size class   Job openings   Hires    Quits    Layoffs and discharges   Other separations
1            72.60%         28.11%   10.42%   28.52%                   61.01%
2            38.07%         27.40%   5.90%    26.05%                   2.58%
3            24.15%         11.45%   3.04%    10.50%                   0.56%
4            15.54%         9.38%    2.30%    21.17%                   0.55%
5            5.01%          7.11%    2.54%    4.41%                    0.39%
6            2.43%          1.63%    0.72%    1.27%                    0.37%

We can now begin imputing rates. In this example, the establishment has a reported employment of 2,648 employees (classifying it as size class 5), and its over-the-month employment change is negative 3. It is therefore classified as a contracting establishment. The truncated table below will then be used to impute job openings for this nonrespondent.

Group: Contracting establishments, ID: 44 Retail trade, Month: April 2011

Data element    μ       ψ       β     Below        (1−β)   Above
Job openings    3.99%   8.93%   76%   U(0,3.99%)   24%     U(3.99%,8.93%)

In the matrix above, we note that the job openings length (ψ) for size class 5 is 5.01 percent. Since the value of ψ for the ID 44/contracting group (8.93 percent) is greater than the size class 5 job openings length (5.01 percent), we replace the upper bound value for the id/group with the value for the size class. The following table illustrates this adjustment:

Group: Contracting establishments, ID: 44 Retail trade, Month: April 2011

Data element    μ       ψ       β     Below        (1−β)   Above
Job openings    3.99%   5.01%   76%   U(0,3.99%)   24%     U(3.99%,5.01%)

We begin item imputation by drawing from a Standard Uniform distribution, U(0,1), to determine if the imputed rate is to be drawn from the "below" uniform distribution or the "above" uniform distribution. If the value of the U(0,1) is less than β, then we will draw from the "below" uniform distribution; otherwise, we will draw from the "above" distribution.

If U(0,1) <= β, then draw from U(0,3.99%)
If U(0,1) > β, then draw from U(3.99%,5.01%)
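The size-class cap and the resulting draw can be sketched as follows, using the contracting-group job openings parameters from the tables above (µ = 3.99 percent, β = 76 percent, group ψ = 8.93 percent, size class 5 length = 5.01 percent). The fixed random seed and variable names are illustrative assumptions, not part of the JOLTS methodology.

```python
import random

# Parameters taken from the contracting-group tables above.
mu, beta = 0.0399, 0.76
psi_group = 0.0893          # ID 44 / contracting group length
psi_size5 = 0.0501          # size class 5 job openings length

# The upper bound actually used is the smaller of the two lengths.
psi_used = min(psi_group, psi_size5)     # 0.0501

rng = random.Random(2011)                # fixed seed, for illustration only
u = rng.random()
if u <= beta:
    imputed_rate = rng.uniform(0.0, mu)          # "below" segment
else:
    imputed_rate = rng.uniform(mu, psi_used)     # "above" segment, capped at 5.01%
print(round(imputed_rate, 5))
```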
Suppose that the value drawn from U(0,1) is 0.1616. Since 16 percent is lower than 76 percent, we will draw from the "below" distribution, U(0,.0399). Suppose that the random value drawn from the U(0,.0399) distribution is .02854. The imputed rate for job openings for this establishment is then .02854, while the imputed level is the reported employment of the establishment being imputed multiplied by the imputed rate, 2648*.02854, so the imputed job openings level is 75.57.

2. Example of the imputation method for expanding establishments, i.e., those establishments for which employment increased over the month.

In the second example, the establishment is an item nonrespondent for quits. The reported employment is 111 employees (classifying it as size class 3), and its over-the-month employment change is positive 1. It is therefore classified as an expanding establishment. The truncated table below will then be used to impute quits for this nonrespondent.

Group: Expanding establishments, ID: 44 Retail trade, Month: April 2011

Data element    μ       ψ       β     Below        (1−β)   Above
Quits           1.61%   3.41%   64%   U(0,1.61%)   36%     U(1.61%,3.41%)

In the matrix, we note that the quits length (ψ) for size class 3 is .0304. Since the value of ψ for the ID 44/expanding group (.0341) is greater than the size class 3 quits length (.0304), we replace the upper bound value for the id/group with the value for that size class. The table below illustrates this adjustment:

Group: Expanding establishments, ID: 44 Retail trade, Month: April 2011

Data element    μ       ψ       β     Below        (1−β)   Above
Quits           1.61%   3.04%   64%   U(0,1.61%)   36%     U(1.61%,3.04%)

As before, we begin item imputation by drawing from a Standard Uniform distribution, U(0,1), to determine if the imputed rate is to be drawn from the "below" uniform distribution or the "above" uniform distribution. If the value of the U(0,1) is less than β, then we will draw from the "below" uniform distribution; otherwise, we will draw from the "above" distribution.

If U(0,1) <= β, then draw from U(0,1.61%)
If U(0,1) > β, then draw from U(1.61%,3.04%)

Suppose that the value drawn from U(0,1) is 0.9876. Since 0.9876 is greater than 0.64, we will draw from the "above" distribution, U(.0161,.0304). Suppose that the random value drawn from the U(.0161,.0304) distribution is .01705. The imputed rate for quits for this establishment is thus .01705, while the imputed level is the reported employment of the establishment being imputed multiplied by the imputed rate, 111*.01705, so the imputed quits level is 1.89.

Monthly benchmarking and estimating procedures

The JOLTS weighted employment for each estimation cell is adjusted through a process called benchmarking. JOLTS estimation cells are benchmarked monthly to the current employment level from the BLS Current Employment Statistics (CES) program. The resulting factor is the Benchmark Factor (BMF). In addition, the weights are adjusted to account for aggregation and disaggregation of establishments. The sample weight is adjusted to ensure that JOLTS weighted employment is equal to CES employment. JOLTS estimates are calculated by multiplying the establishment weight, the estimation cell nonresponse adjustment factor (NRAF), the estimation cell BMF, and any necessary aggregation adjustment. The product is summed across the estimation cell to produce the estimate for that cell.
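The cell-level computation just described can be sketched as follows. The record layout and function names are assumptions for illustration; the aggregation adjustment is omitted, and the BMF is computed as described in the next paragraphs (CES employment divided by summed weighted employment).

```python
def benchmark_factor(ces_employment, records):
    """BMF for one region/industry cell.

    records -- list of dicts with keys 'weight', 'nraf', 'employment'
    SWTE is the sum of weight * NRAF * reported employment over the cell.
    """
    swte = sum(r["weight"] * r["nraf"] * r["employment"] for r in records)
    return ces_employment / swte

def cell_estimate(records, bmf, element):
    """Weighted estimate of one data element (e.g., 'hires') for the cell."""
    return sum(r["weight"] * r["nraf"] * bmf * r[element] for r in records)
```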
A Horvitz–Thompson estimator with a ratio adjustment is used to produce estimates of levels of the surveyed data elements at different degrees of geographical and industrial detail. To calculate the estimated level for each data element for a given month in a basic estimation cell, the following steps are performed. To ratio-adjust JOLTS employment to Current Employment Statistics (CES) employment, it is necessary to calculate the Summed Weighted Total Employment (SWTE) for each JOLTS industry division within a census region (region/id). The final weighted JOLTS employment for each record in a region/id cell is calculated by multiplying the following: sample weight*NRAF*reported JOLTS employment for that record. The SWTE is calculated in each region/id cell by summing the final weighted JOLTS employment in that cell. The benchmark factor (BMF) is calculated by dividing CES employment (at the region/id level) by the SWTE (at the region/id level). The CES program produces an industry employment estimate using a much larger sample than JOLTS. Ratio adjusting JOLTS data element estimates to CES industry employment increases the statistical reliability of all JOLTS data element estimates. In the notation below:

• the subscript id,cr denotes industry division and census region,
• BMF[id,cr] is the benchmark factor for industry and census region,
• CES[id,cr] designates industry division and census region employment,
• w[i] is the sampling weight reflecting all adjustments (NRAF, atypical data adjustment, etc.) for sample unit i, and
• e[i] = reported employment from sample unit i.

The benchmark factor is therefore

BMF[id,cr] = CES[id,cr] / Σ(w[i] × e[i]),

with the sum taken over the sample units i in the region/id cell. Thus, the equation used to compute the estimate of a characteristic is

Estimate[id,cr] = Σ(BMF[id,cr] × weight[i] × y[i]),

where weight is the recomputed (i.e., reweighted) sampling weight and y[i] is the reported value of the data element for sample unit i. Each data element—job openings, hires, and separation type—is estimated this way. The resulting levels are converted to rates. The hires and separations rates are computed by dividing the number of hires or separations by employment and multiplying that quotient by 100. The job openings rate is computed by dividing the number of job openings by the sum of employment and job openings and multiplying that quotient by 100.

Automated outlier detection

Winsorization is a statistical process commonly used to reset outlier values to a predetermined threshold value, also called the cutoff value. In JOLTS, an independent cutoff value is established for each employment size and data element (job openings, hires, etc.). Any reported value exceeding the cutoff is reset to the cutoff value.

Birth–death model

As with any sample survey, the JOLTS sample can only be as current as its sampling frame. The time lag from the birth of an establishment until its appearance in the sampling frame is approximately 1 year. In addition, many new establishments fail within the first year. Because new and short-lived universe establishments cannot be reflected in the sampling frame immediately, the JOLTS sample cannot capture job openings, hires, and separations from these establishments during their early existence. BLS has developed a model for estimating birth and death activity in current months by examining data on birth and death activity in previous years as collected by the Quarterly Census of Employment and Wages (QCEW) and projecting forward to the present using over-the-year change in the Current Employment Statistics (CES). The birth–death model also uses historical JOLTS data to calculate the amount of churn (meaning the rates of hires and separations) that exists in establishments of various sizes.
The model then combines the calculated churn with the projected employment change to estimate the number of hires and separations that take place in these establishments that cannot be measured through sampling. The model-based estimate of total separations is distributed to the three components of total separations—quits, layoffs and discharges, and other separations—in proportion to their contribution to the sample-based estimate of total separations. In addition, job openings in the establishments modeled are estimated by computing the ratio of openings to hires in the collected data and applying that ratio to the modeled hires. The estimates of job openings, hires, and separations produced by the birth–death model are then added to the sample-based estimates produced from the survey to arrive at the estimates for job openings, hires, and separations.

Estimates review and outlier selection

During monthly estimates review and annual processing, JOLTS staff conduct a manual review of not seasonally adjusted estimates containing potential outliers. Estimates are examined for atypical or large movements. Those estimates that need investigation are flagged. To investigate a flagged estimate, JOLTS staff examine the microdata by establishment that contributed to that estimate. If the microdata for a reporting establishment are confirmed atypical, the establishment is flagged as an outlier. After estimates review, the not seasonally adjusted estimates are re-run and again reviewed. Conceptually, the JOLTS estimates of hires minus separations should be comparable to the CES over-the-month net employment change. The CES series is considered a highly accurate measure of net employment change due to its large sample and annual benchmark to universe counts of employment from the QCEW program. However, definitional differences, as well as sampling and nonsampling errors between the two surveys, have caused JOLTS to diverge from the CES survey over time. To limit the divergence and to improve the quality of the JOLTS hires and separations series, BLS implemented a monthly alignment method. Simply put, there are four steps to this method: seasonally adjust, align, back out the seasonal adjustment factors, and re-seasonally adjust. The monthly alignment method applies the seasonally adjusted CES employment trend to the seasonally adjusted JOLTS implied employment trend (hires minus separations), keeping the two trends consistent while preserving the seasonality of the JOLTS data. First, the two series are seasonally adjusted and the difference between the JOLTS implied employment trend and the CES net employment change is calculated. Next, the JOLTS implied employment trend is updated to equal the CES net employment change through a proportional adjustment. This proportional adjustment procedure modifies the two components (hires and separations) in proportion to their contribution to the total churn (hires plus separations). For example, if the hires estimate makes up 40 percent of the churn for a given month, it will receive 40 percent, and separations will receive 60 percent, of the needed adjustment. The following is an example of the alignment method. Let Hires denote the number of hires, Seps denote the number of separations, and Cesemp represent CES employment, with the subscript sa indicating seasonally adjusted values and Δ denoting an over-the-month change:

Hires_sa = 40
Seps_sa = 60
ΔCesemp = –25

In this case, hires minus separations (40 – 60 = –20) does not equal the change in CES employment (–25). The divergence is

D = ΔCesemp – (Hires_sa – Seps_sa) = –25 – (–20) = –5,

where D denotes the divergence between the CES employment trend and JOLTS hires minus separations.
Let PAHires_sa denote the proportionally adjusted seasonally adjusted hires and PASeps_sa denote the proportionally adjusted seasonally adjusted separations. Let Hires_A denote aligned hires and Seps_A denote aligned separations. Applying the proportional adjustment to the example above yields the following:

PAHires_sa = Hires_sa + [Hires_sa ∕ (Hires_sa + Seps_sa)] × D = 40 + (0.40)(–5) = 38
PASeps_sa = Seps_sa – [Seps_sa ∕ (Hires_sa + Seps_sa)] × D = 60 – (0.60)(–5) = 63

Seasonally adjusted hires minus seasonally adjusted separations is now equal to the change in CES employment: 38 – 63 = –25. Resulting in Hires_A = 38 and Seps_A = 63.

Job openings are aligned based on the ratio of job openings to hires from the not seasonally adjusted estimates. This ratio of job openings to hires is applied to the updated hires to compute the updated job openings. The adjusted job openings, hires, and separations are converted back to not seasonally adjusted data by reversing the application of the original seasonal adjustment factors. Let JO denote job openings and JO_A denote aligned job openings. To obtain aligned job openings,

JO_A = (JO ∕ Hires) × Hires_A.

The monthly alignment procedure assures a close match of the JOLTS implied employment trend with the CES employment trend for the not seasonally adjusted data. The aligned not seasonally adjusted estimates are then published.

Seasonal adjustment

After alignment, the seasonal adjustment program (X-13-ARIMA-SEATS) is used to seasonally adjust the JOLTS series. Seasonal adjustment is the process of estimating and removing periodic fluctuations caused by events such as weather, holidays, and the beginning and ending of the school year. Seasonal adjustment makes it easier to observe fundamental changes in data series, particularly those associated with general economic expansions and contractions. Each month, a concurrent seasonal adjustment methodology uses all relevant data, up to and including the data for the current month, to calculate new seasonal adjustment factors. Moving averages are used as seasonal filters in seasonal adjustment. JOLTS seasonal adjustment includes both additive and multiplicative models, as well as regression with autocorrelated errors (REGARIMA) modeling, to improve the seasonal adjustment factors at the beginning and end of the series and to detect and adjust for outliers in the series.

Variance estimation

The estimation of sample variance for the JOLTS survey is accomplished by using the balanced half samples (BHS) method. This replication technique uses half samples of the original sample to calculate estimates. The sample variance is calculated by measuring the variability of the subsample estimates. The sample units in each cell—where a cell is based on region, industry, and size class—are divided into two random groups. The basic BHS method is applied to both groups. The cells are subdivided systematically, in the same order as the initial sample selection. Weights for units in the half sample are multiplied by a factor of 1 + α, whereas weights for units not in the half sample are multiplied by a factor of 1 – α, where α = 1 – γ, in which γ is Fay's factor (0.5). Fay's method is a generalized form of BHS which uses the full sample but with unequal weights for each half sample. Sample weights are thus multiplied by 0.5 (that is, 1 – α) for those units outside the half-sample and by 1.5 (that is, 1 + α) for those units within the half-sample. The finite population correction factor (f) is calculated from the following quantities: r[t,h], the number of units reporting employment in allocation stratum h at time t; n[h], the number of sample units in allocation stratum h; and w[i]^SEL, the sample selection weight of sample unit i.
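Two of the computations described in this section can be sketched compactly: the proportional alignment of hires and separations to the CES trend, and the Fay's-method replicate weighting used in the BHS variance. The function names are illustrative, and the variance scaling uses the standard Fay's-method formula, which is assumed here rather than taken from the JOLTS documentation.

```python
def align(hires_sa, seps_sa, d_ces_emp):
    """Proportionally adjust hires and separations so that hires - seps = d_ces_emp."""
    divergence = d_ces_emp - (hires_sa - seps_sa)
    churn = hires_sa + seps_sa
    hires_a = hires_sa + (hires_sa / churn) * divergence
    seps_a = seps_sa - (seps_sa / churn) * divergence
    return hires_a, seps_a

def align_job_openings(jo_nsa, hires_nsa, hires_aligned):
    """Aligned job openings via the not seasonally adjusted openings-to-hires ratio."""
    return (jo_nsa / hires_nsa) * hires_aligned

def fay_replicate_weights(weights, in_half_sample, alpha=0.5):
    """Replicate weights: multiply by (1 + alpha) inside the half sample, (1 - alpha) outside."""
    return [w * ((1 + alpha) if inside else (1 - alpha))
            for w, inside in zip(weights, in_half_sample)]

def bhs_variance(full_estimate, replicate_estimates, gamma=0.5):
    """Fay's-method BHS variance: mean squared replicate deviation scaled by 1/(1-gamma)^2."""
    r = len(replicate_estimates)
    return sum((est - full_estimate) ** 2 for est in replicate_estimates) / (r * (1 - gamma) ** 2)

print(align(40, 60, -25))   # (38.0, 63.0), matching the worked example above
```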
Annual estimates and benchmarking

The JOLTS estimates are revised annually to reflect annual updates to the CES employment estimates and to incorporate new seasonal adjustment factors. The JOLTS employment levels (not published) are ratio-adjusted to the CES employment levels, and the resulting ratios are applied to all JOLTS data elements. This annual benchmarking process results in revisions to both the seasonally adjusted and not seasonally adjusted JOLTS series, for the period since the last benchmark was established. The seasonally adjusted estimates are recalculated for the most recent 5 years to reflect updated seasonal adjustment factors. Further, the alignment methodology creates a dependency of the not seasonally adjusted estimates on the seasonal adjustment process. Therefore, the data series that are not seasonally adjusted are also recalculated for the most recent 5 years to reflect the effect of the updated seasonal adjustment factors on the alignment process.

Establishment size class estimates

The JOLTS program produces estimates for job openings, hires, and separations by establishment size class. These estimates can help to better explain some of the internal dynamics of the labor market. The size class series are available back to December 2000. The estimates provide users with job openings, hires, and total separations, as well as the components of total separations: quits (voluntary separations), layoffs and discharges (involuntary separations), and other separations. (See size class definitions in the Concepts section.) Size classes are estimated at the total private industry level. The estimation process for size class estimates uses the same processes that generate national estimates for industry and region, with two differences:

• Size class estimates are not reviewed for outliers.
• Estimates are aligned at the total private level based on proportions of size classes to CES total employment.

State-level estimation

The JOLTS program produces estimates for all 50 states and the District of Columbia at the total nonfarm level for job openings, hires, and separations. The JOLTS sample of 21,000 establishments does not directly support the production of sample-based state estimates. However, state estimates have been produced using other BLS program data by combining the available sample with model-based estimates. The estimation approach consists of four major estimating models: the Composite Regional model (an unpublished intermediate model), the Synthetic model (an unpublished intermediate model), the Composite Synthetic model (published historical series through the most current benchmark year), and the Extended Composite Synthetic model (published current-year monthly series). The Composite Regional model uses JOLTS microdata, JOLTS regional published estimates, and Current Employment Statistics (CES) employment data. The Composite Synthetic model uses JOLTS microdata and Synthetic model estimates derived from monthly employment changes in microdata from the Quarterly Census of Employment and Wages (QCEW), and JOLTS published regional data. The Extended Composite Synthetic extends the Composite Synthetic estimates by ratio-adjusting the Composite Synthetic by the ratio of the current Composite Regional model estimate to the Composite Regional model estimate from the previous year.
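The extension ratio just described can be written compactly. This is an illustrative sketch; the function and argument names are assumptions, not JOLTS system code.

```python
def extended_composite_synthetic(cs_prior, cr_current, cr_prior):
    """Extend a Composite Synthetic estimate forward for one state/industry cell.

    cs_prior   -- Composite Synthetic estimate from the prior period
    cr_current -- current Composite Regional estimate for the same cell
    cr_prior   -- Composite Regional estimate for the same cell one year earlier
    """
    return cs_prior * (cr_current / cr_prior)
```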
The Extended Composite Synthetic model (and its major component, the Composite Regional model) is used to extend the Composite Synthetic estimates because all of the inputs required by this model are available at the time monthly estimates are produced. In contrast, the Composite Synthetic model (and its major component, the Synthetic model) can only be produced when the latest QCEW data are available. The Extended Composite Synthetic model estimates are used to extend the Composite Synthetic model estimates during the annual JOLTS re-tabulation process. The extension of the Composite Synthetic model using current data-based Composite Regional model estimates ensures that the Composite Synthetic model estimates reflect current economic trends.

Composite Regional model

The Composite Regional approach calculates state-level JOLTS estimates from JOLTS microdata using sample weights and the adjustments for nonresponse (NRAF). The Composite Regional estimate is then benchmarked to CES state-supersector employment to produce state-supersector estimates. The JOLTS sample, by itself, cannot ensure a reasonably sized sample for each state-supersector cell. The small JOLTS sample results in several state-supersector cells that lack enough data to produce a reliable estimate. To overcome this issue, the state-level estimates derived directly from the JOLTS sample are augmented using JOLTS regional estimates when the number of respondents is low (that is, less than 30). This approach is known as a composite estimate, which leverages the small JOLTS sample to the greatest extent possible and supplements it with a model-based estimate. Previous research has found that regional industry estimates are a good proxy at finer levels of geographical detail. That is, one can make a reliable prediction of JOLTS estimates at the regional level using only national industry-level JOLTS rates. The assumption in this approach is that one can make a good prediction of JOLTS estimates at the state level using only regional industry-level JOLTS rates. In this approach, the JOLTS microdata-based estimate is used, without model augmentation, in all state-supersector cells that have 30 or more respondents. The JOLTS regional estimate is used, without a sample-based component, in all state-supersector cells that have fewer than five respondents. In all state-supersector cells with 5 to 30 respondents, an estimate is calculated that is a composition of a weighted estimate of the microdata-based estimate and a weighted estimate of the JOLTS regional estimate. The weight assigned to the JOLTS data in those cells is proportional to the number of JOLTS respondents in the cell (weight = n∕30, where n is the number of respondents).

Composite model inputs

The following are the inputs into the Composite Model:
• All JOLTS microdata records
• All weights from JOLTS estimation (final weights that account for sampling weight, NRAF, agg-codes, etc.)
• JOLTS published regional rate estimates (regional JO, H, Q, LD, and TS rates)
• CES state-supersector employment

Composite model aggregation

1. All JOLTS microdata are weighted using final weights. A weighted estimate is made for each JOLTS respondent.
2. Counts are made for each state-supersector cell.
3. Each JOLTS respondent is paired with its regional rate estimate for all variables.
4. Based on the count of respondents in the state-supersector cell to which the JOLTS respondent belongs, a Composite Model Weight (CMW) is calculated.
4.1. If the count is > 30, then the CMW for the respondent data = 1 and the CMW for the regional estimate = 0.
4.2. If the count is < 5, then the CMW for the respondent data = 0 and the CMW for the regional estimate = 1.
4.3. If the count is 5–30, then the CMW for the respondent data = n∕30, where n is the number of respondents, and the CMW for the regional estimate = (30−n)∕30.
5. The state-level rate estimate is therefore the final weighted respondent-based JOLTS rate times its CMW added to the regional rate times its CMW, benchmarked to the CES state-level employment:
5.1. FINAL ESTIMATE = CES STATE EMP × ((final weighted JOLTS rate × respondent CMW) + (regional rate × regional CMW))
5.2. The Composite Regional supersector estimates are summed across state industry supersectors to the nonfarm level.
6. To stabilize the estimate, the sum of state Composite Regional estimates within each region is then benchmarked to the published JOLTS regional estimates.

Composite model output

This model produces state-level estimates of job openings (JO), hires (H), quits (Q), layoffs and discharges (LD), and total separations (TS). These estimates are available for the most current month and can be produced during monthly JOLTS estimation production.

Composite model limitations

JOLTS data are somewhat volatile at the national and regional levels due to the small sample size, which in turn results in volatile state estimates. The Composite Regional estimates can vary substantially from Composite Synthetic estimates for states that exhibit seasonal employment patterns that differ substantially from the JOLTS region to which they belong. For example, Alaska has a pronounced seasonal employment pattern that differs from the West region in which it resides. Consequently, the Composite Regional estimates derived using West region JOLTS rates substantially understate the JOLTS rates in that state. These estimates are based upon a model. BLS constructed a methodology to produce error measures of these estimates, which are updated annually in June.
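A sketch of the composite weighting rule in steps 4–5 above. The function names and input shapes are illustrative assumptions; the 5- and 30-respondent thresholds and the n∕30 weighting come from the text (which states both "30 or more" and "> 30"; the sketch uses 30 or more).

```python
def composite_rate(n_respondents, jolts_rate, regional_rate):
    """Blend the microdata-based rate with the regional rate using the n/30 rule."""
    if n_respondents >= 30:
        cmw_jolts, cmw_regional = 1.0, 0.0
    elif n_respondents < 5:
        cmw_jolts, cmw_regional = 0.0, 1.0
    else:
        cmw_jolts = n_respondents / 30.0
        cmw_regional = (30.0 - n_respondents) / 30.0
    return jolts_rate * cmw_jolts + regional_rate * cmw_regional

def state_supersector_estimate(ces_state_emp, n_respondents, jolts_rate, regional_rate):
    """Step 5.1: benchmark the blended rate to CES state-supersector employment."""
    return ces_state_emp * composite_rate(n_respondents, jolts_rate, regional_rate)
```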
Synthetic model

The Synthetic model differs fundamentally from the Composite Regional model. The Synthetic approach does not use JOLTS microdata; rather, it uses data from the QCEW that have been linked longitudinally (the Longitudinal Database, or QCEW–LDB). The Synthetic model attempts to convert QCEW–LDB monthly employment change microdata into JOLTS job openings, hires, quits, layoffs and discharges, and total separations data.

Synthetic model inputs

The following are inputs into the Synthetic model:
• All monthly employment changes for each record on the QCEW–LDB
• JOLTS published regional estimates for job openings, hires, quits, layoffs and discharges, and total separations

Synthetic model aggregation

1. Every record on the QCEW-LDB is classified as expanding, contracting, or stable based on monthly employment change.
1.1. For expanding records, the amount of employment growth is converted to JOLTS hires. They are given no separations.
1.2. For contracting records, the amount of employment decline is converted to JOLTS separations. They are given no hires.
1.3. For stable records, no attribution of JOLTS hires or separations is made.
2. The entire QCEW-LDB is summarized to the US Census regional level.
3. The QCEW-LDB regional summary is ratio adjusted to the JOLTS published regional estimates for hires and total separations.
3.1. For each region, the ratio of QCEW-LDB-based regional hires and total separations to JOLTS published hires and total separations is calculated (Ratio-H for hires and Ratio-TS for total separations).
3.2. Each record on the QCEW-LDB within each US Census region has its converted JOLTS data multiplied by Ratio-H and Ratio-TS, by region.
3.2.1. For expanding records, the amount of employment growth becomes (JOLTS hires × Ratio-H). They remain with no separations.
3.2.2. For contracting records, the amount of employment decline becomes (JOLTS separations × Ratio-TS). They remain with no hires.
3.2.3. Stable records remain with no JOLTS hires or separations.
4. To produce state-level estimates, sum the regional hires × Ratio-H by state to produce a state-level JOLTS hires estimate, and sum the TS × Ratio-TS by state to produce a state-level JOLTS total separations estimate.

Synthetic model output

State-level JOLTS estimates for hires and total separations come directly from the model outlined above. Synthetic job openings are a function of the ratio of industry-regional job openings to hires. This ratio of published job openings to hires is applied to the model hires estimates to derive model job openings estimates. Ratio-adjusting the JOLTS model hires and separations to the regional published JOLTS hires and separations estimates ensures that the JOLTS published churn rate is fully accounted for. Synthetic quits and layoffs and discharges are a function of the relative percentages of the individual components of total separations at the industry-regional level. The relative percentages of each component are applied to the model separations estimates to derive model quits and layoffs and discharges.

Synthetic model limitations

This approach is not meant to model individual QCEW-LDB data records. It would not be prudent to use this approach to model small populations (30 or fewer establishments). The model works best at the state level, and while it is possible to model smaller populations, there potentially is a reduction in the strength of the model proportionate to the reduction in the size of the population being modeled. The model does generate state-level job openings and separations breakouts. However, these estimates are based upon ratios that are common across the region to which a state belongs. If there are significant differences in the ratio of job openings to hires or in the separations breakouts for any particular state (or set of states) within a region, the model cannot detect that, and estimates will not reflect those differences. Since the model is based on QCEW-LDB data, the model cannot produce current state-level estimates, because QCEW-LDB data lag current JOLTS estimation production by 6–9 months.
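A condensed sketch of the Synthetic model aggregation above. The record layout and function name are assumptions for illustration, and the ratios are oriented so that the adjusted regional sums match the published JOLTS regional estimates (the text describes the same adjustment in words); regional totals are assumed to be nonzero.

```python
def synthetic_state_estimates(ldb_records, jolts_regional_hires, jolts_regional_ts):
    """ldb_records: list of dicts with keys 'state', 'region', 'emp_change'."""
    # Step 1: convert employment change into raw hires / separations per record.
    for r in ldb_records:
        r["hires"] = max(r["emp_change"], 0)
        r["seps"] = max(-r["emp_change"], 0)

    # Steps 2-3: regional totals and ratio adjustment to published JOLTS estimates.
    ratio_h, ratio_ts = {}, {}
    for reg in {r["region"] for r in ldb_records}:
        reg_hires = sum(r["hires"] for r in ldb_records if r["region"] == reg)
        reg_seps = sum(r["seps"] for r in ldb_records if r["region"] == reg)
        ratio_h[reg] = jolts_regional_hires[reg] / reg_hires
        ratio_ts[reg] = jolts_regional_ts[reg] / reg_seps

    # Step 4: sum ratio-adjusted values by state.
    hires_by_state, ts_by_state = {}, {}
    for r in ldb_records:
        s, reg = r["state"], r["region"]
        hires_by_state[s] = hires_by_state.get(s, 0.0) + r["hires"] * ratio_h[reg]
        ts_by_state[s] = ts_by_state.get(s, 0.0) + r["seps"] * ratio_ts[reg]
    return hires_by_state, ts_by_state
```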
Composite Synthetic model

The Composite Synthetic model is nearly identical to the Composite Regional model. The primary difference is the use of the Synthetic model estimates (described above) rather than JOLTS published regional estimates when there is an insufficient amount of JOLTS microdata to produce a state-supersector estimate. Like the Composite Regional approach, the JOLTS microdata-based estimate is used in all state-supersector cells that have 30 or more respondents. However, in contrast to the Composite Regional approach, the Composite Synthetic approach uses the Synthetic estimate in all state-supersector cells that have fewer than five respondents. In all state-supersector cells with 5–30 respondents, an estimate is calculated that is a composition of a weighted estimate of the microdata-based estimate and a weighted estimate of the Synthetic estimate. The weight assigned to the JOLTS data in those cells is proportional to the number of JOLTS respondents in the cell (weight = n∕30, where n is the number of respondents). The Composite Synthetic supersector estimates are summed across state-supersectors to the nonfarm level.

Composite Synthetic model inputs

The following are inputs into the Composite Synthetic model:
• All JOLTS microdata records
• All weights from JOLTS estimation (final weights that account for sampling weight, NRAF, agg-codes, etc.)
• Synthetic estimates (regional JO, H, Q, LD, and TS rates)
• JOLTS regional-level estimates (to benchmark the state estimates)
• CES state-supersector employment

Composite Synthetic model aggregation

1. All JOLTS microdata are weighted using final weights. A weighted estimate is made for each JOLTS respondent.
2. Counts are made for each state-supersector cell.
3. Each JOLTS respondent is paired with its Synthetic rate estimate for all variables.
4. Based on the count of respondents in the state-supersector cell to which the JOLTS respondent belongs, a Composite Model Weight (CMW) is calculated.
4.1. If the count is > 30, then the CMW for the respondent data = 1 and the CMW for the Synthetic estimate = 0.
4.2. If the count is < 5, then the CMW for the respondent data = 0 and the CMW for the Synthetic estimate = 1.
4.3. If the count is 5–30, then the CMW for the respondent data = n∕30, where n is the number of respondents, and the CMW for the Synthetic estimate = 1−n∕30.
5. The state-level rate estimate is therefore the final weighted respondent-based JOLTS rate times its CMW added to the Synthetic rate times its CMW, benchmarked to the CES state-level employment:
5.1. FINAL ESTIMATE = CES STATE EMP × ((final weighted JOLTS rate × respondent CMW) + (synthetic rate × synthetic CMW))
6. To stabilize the estimate, the sum of state Composite Synthetic estimates within each region is then benchmarked to the published JOLTS regional estimates.

Composite Synthetic model output

State-level JOLTS estimates for hires and total separations come directly from the model outlined above. Synthetic job openings are a function of the ratio of industry-regional job openings to hires. This ratio of published job openings to hires is applied to the model hires estimates to derive model job openings estimates. Ratio-adjusting the JOLTS model hires and separations to the regional published JOLTS hires and separations estimates ensures that the JOLTS published churn rate is fully accounted for. Synthetic quits and layoffs and discharges are a function of the relative percentages of the individual components of total separations at the industry-regional level. The relative percentages of each component are applied to the model separations estimates to derive model quits and layoffs and discharges.

Composite Synthetic model limitations

This approach is not meant to model individual QCEW-LDB data records. It would not be prudent to use this approach to model small populations (30 or fewer establishments). The model works best at the state level, and while it is possible to model smaller populations, there potentially is a reduction in the strength of the model proportionate to the reduction in the size of the population being modeled. The model does generate state-level job openings and separations breakouts. However, these estimates are based upon ratios that are common across the region to which a state belongs.
If there are significant differences in the ratio of job openings to hires or in the separations breakouts for any particular state (or set of states) within a region, the model cannot detect that, and estimates will not reflect those differences. Since the model is based on QCEW-LDB data, the model cannot produce current state-level estimates, because QCEW-LDB data lag current JOLTS estimation production by 6–9 months. These estimates are based upon a model. BLS constructed a methodology to produce error measures of these estimates, which are updated annually in June.

Extended Composite Synthetic model

The Extended Composite Synthetic model is designed to project the Composite Synthetic forward until QCEW-LDB data are available to produce Composite Synthetic estimates. The Composite Synthetic estimates are extended using the ratio of the current Composite Regional state-industry estimate to the Composite Regional state-industry estimate from 1 year ago. This approach ensures that the Extended Composite Synthetic state estimates reflect current JOLTS regional and industry-level economic conditions. The Extended Composite Synthetic estimates reflect current JOLTS state economic conditions to the extent that sufficient JOLTS microdata are available.

Extended Composite Synthetic model inputs

The following are inputs into the Extended Composite Synthetic model:
• The historical series of Composite Synthetic model estimates at the state-industry level
• The historical series of Composite Regional model estimates at the state-industry level

Extended Composite Synthetic model aggregation

The Composite Synthetic model estimates are produced at a lag, since QCEW-LDB data are only available at a 6- to 9-month lag relative to JOLTS data. The Composite Regional model estimates, in contrast, are not produced at a lag and are available concurrent with JOLTS data. Therefore, Composite Synthetic estimates can be extended by ratio-adjusting the Composite Synthetic estimates by the ratio of current Composite Regional estimates to the Composite Regional estimates from the previous year at the state-industry level; that is, each Composite Synthetic estimate is multiplied by the ratio of the current Composite Regional estimate to the Composite Regional estimate for the same state and industry one year earlier. State-level estimates are produced by summing the Extended Composite Synthetic estimates over industry.

Extended Composite Synthetic model limitations

This model produces state-level estimates of job openings, hires, quits, layoffs and discharges, and total separations. These estimates are produced without lag. The methodology allows the Extended Composite Synthetic data to reflect current economic trends at the CES Industry–JOLTS Region level. The projection reflects current state economic trends where sufficient JOLTS microdata are available. These estimates are based upon a model. BLS constructed a methodology to produce error measures of these estimates, which are updated annually in June.

Response rates

Unit and item response rates are tracked monthly to measure data quality and usability. Refusal rates, initiation rates, and collection rates are also calculated and monitored.

Reliability of the estimates

JOLTS estimates are subject to two types of error: sampling error and nonsampling error. Sampling error can result when a sample, rather than an entire population, is surveyed. There is a chance that the sample estimates may differ from the true population values they represent. The exact difference, or sampling error, varies with the sample selected, and this variability is measured by the standard error of the estimate. BLS analysis is generally conducted at the 90-percent level of confidence.
This means that there is a 90-percent chance that the true population mean will fall into the interval created by the sample mean plus or minus 1.65 standard errors. Estimates of the median standard errors are released monthly as part of the significant change tables on the JOLTS webpage and are available upon request. Standard errors are updated annually with the most recent 5 years of data. The JOLTS estimates are also affected by nonsampling error. Nonsampling error can occur for many reasons including the failure to include a segment of the population, the inability to obtain data from all units in the sample, the inability or unwillingness of respondents to provide data on a timely basis, mistakes made by respondents, errors made in the collection or processing of the data, and errors from the employment benchmark data used in estimation. The JOLTS program uses quality control procedures to reduce nonsampling error in the survey’s design. See the Data Sources section.
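As a small illustration of the 90-percent confidence statement above, an interval can be formed from an estimate and its standard error. The numeric inputs below are hypothetical, not JOLTS figures.

```python
def confidence_interval_90(estimate, standard_error, z=1.65):
    """90-percent interval: estimate +/- 1.65 standard errors, as stated above."""
    return estimate - z * standard_error, estimate + z * standard_error

print(confidence_interval_90(7_400_000, 250_000))   # hypothetical level and standard error
```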
{"url":"https://blsmon1.bls.gov/opub/hom/jlt/calculation.htm","timestamp":"2024-11-08T21:30:08Z","content_type":"text/html","content_length":"149568","record_id":"<urn:uuid:88bcfe09-2ce2-4a94-b078-4f65cf245e05>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00583.warc.gz"}
A computational model of red blood cells using an isogeometric formulation with T-splines and a lattice Boltzmann method

The red blood cell (RBC) membrane is often modeled by Skalak strain energy and Helfrich bending energy functions, for which a high-order representation of the membrane surface is required. We develop a numerical model of RBCs using an isogeometric discretization with T-splines. A variational formulation is applied to compute the external load on the membrane with a direct discretization of second-order parametric derivatives. For fluid–structure interaction, the isogeometric analysis is coupled with the lattice Boltzmann method via the immersed boundary method. An oblate spheroid with a reduced volume of 0.95 and zero spontaneous curvature is used for the reference configuration of RBCs. The surface shear elastic modulus is estimated to be G[s]=4.0×10^−6 N/m, and the bending modulus is estimated to be E[B]=4.5×10^−19 J by numerical tests. We demonstrate that for a physiological viscosity ratio, the typical motions of the RBC in shear flow are rolling and complex swinging, but simple swinging or tank-treading appears at very high shear rates. We also show that the computed apparent viscosity of the RBC channel flow is in reasonable agreement with an empirical equation. We finally show that the maximum membrane strain of RBCs for a large channel (twice the RBC diameter) can be larger than that for a small channel (three-quarters of the RBC diameter). This is caused by a difference in the strain distribution between the slipper and parachute shapes of RBCs in the channel flows.
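The Helfrich bending energy mentioned in the abstract can be sanity-checked against the analytic value for a sphere, for which the energy equals 8πκ when the spontaneous curvature is zero. The snippet below is an independent illustration using the bending modulus quoted above; it is not code from the paper, and the radius is an assumed representative length scale.

```python
import math

kappa = 4.5e-19          # bending modulus E_B from the abstract, in joules
radius = 4.0e-6          # assumed representative RBC length scale, in metres

# Helfrich energy of a sphere with zero spontaneous curvature:
# E = (kappa / 2) * (2/R)^2 * 4*pi*R^2 = 8*pi*kappa, independent of R.
mean_curvature = 1.0 / radius
area = 4.0 * math.pi * radius ** 2
energy = 0.5 * kappa * (2.0 * mean_curvature) ** 2 * area
print(energy, 8.0 * math.pi * kappa)   # both ~1.13e-17 J
```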
{"url":"https://waseda.elsevierpure.com/ja/publications/a-computational-model-of-red-blood-cells-using-an-isogeometric-fo","timestamp":"2024-11-05T15:39:21Z","content_type":"text/html","content_length":"54188","record_id":"<urn:uuid:510b9516-66db-4b8f-8b7d-787147d968ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00411.warc.gz"}
Computational mathematics

Computational mathematics may refer to two different aspects of the relation between computing and mathematics. Computational applied mathematics consists roughly of using mathematics for allowing and improving computer computation in applied mathematics. Computational mathematics may also refer to the use of computers for mathematics itself. This includes the use of computers for mathematical computations (computer algebra), the study of what can (and cannot) be computerized in mathematics (effective methods), which computations may be done with present technology (complexity theory), and which proofs can be done on computers (proof assistants). Both aspects of computational mathematics involve mathematical research in mathematics as well as in areas of science where computing plays a central and essential role, that is, almost all sciences, and emphasize algorithms, numerical methods, and symbolic computations.[1]

Areas of computational mathematics

Computational mathematics emerged as a distinct part of applied mathematics by the early 1950s. Currently, computational mathematics can refer to or include several overlapping areas.
{"url":"https://static.hlt.bme.hu/semantics/external/pages/tud%C3%A1sreprezent%C3%A1ci%C3%B3_(KR)/en.wikipedia.org/wiki/Computational_mathematics.html","timestamp":"2024-11-05T22:32:59Z","content_type":"text/html","content_length":"75625","record_id":"<urn:uuid:2b1b2d88-fb81-4944-b1ec-8ea26d972dd0>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00601.warc.gz"}
4.3.1 Postponing the pension age – the biggest economic nonsense ever

These days it is common to hear from politicians that, because of the underfunding of pension systems, present workers will have to postpone their retirement: there is no money, and so in order to secure a decent pension workers will have to work a bit longer. And so they are slowly, quietly approving laws which make this a reality and postpone the start of the pension. It sounds logical, simple and sound – since you have saved too little money, you will have to save a bit more. But it is a great lie and demagogy, and the steadily growing unemployment (mostly among young people) is its direct consequence.

Imagine a mini economy in which there is one company producing cakes. This company employs 120 people, and its production is enough to satisfy the needs of 220 people (120 employees, 50 children and 50 pensioners). How the financial system which enables this works is not important at this moment. Simply, there is a distributional mechanism which ensures that every member of society gets his fair share. After a time, technological progress occurs which enables this company to substitute 20 employees with machines and to produce, instead of 220 pieces, say 250 cakes.

How does the situation look now? The result is quite worrying: the company produced more, but in reality this surplus has no market, as buying power decreased by 20 customers, who became unemployed, and the additional production which arose as a result of automation has no customers at all. If before the automation the pension system (whatever type it is) was getting contributions from 120 people, now it is just 100 – so it is logically underfunded by 20 percent. But this deficit in the pension system is purely FINANCIAL, and has to do solely with the way financial resources are distributed. The REAL, resources-based economy produced the same, even bigger, amount of goods as before, and therefore it is possible for people to go into pension at the previous age or even sooner! What needs to be done for this scenario to materialize is to recalibrate the FINANCIAL economy, so that the redistribution of financial resources is adequate to the new production capacities of the REAL economy.

Because in reality there is no problem: the people in this society did nothing wrong. Quite the opposite – through technological advancement they reached a higher level of production, which provides the basis for higher consumption. But it is just a potential, and to make it a reality a change in financial flows is needed; otherwise, due to technological advancement, there will be quite the opposite, illogical situation, where citizens will suffer. While in the previous situation the company was getting 220 financial units (50 + 120 from employees and their children, 50 from pensioners) and was producing 220 units of production, now it is in a situation where it gets 200 financial units (50 + 100 from employees and children, 50 from pensioners) and is producing 250 production units – and this represents a loss against the previous variant. What logically follows is a reduction of production, as there are no customers for it.

The result of unmanaged technological progress is an increase of unemployment and a finding that the newly built production capacities (not such a small investment) were in reality quite useless. There comes a disinvestment, which is even bigger than the previous increase of production capacity, and its accompanying factor is growing unemployment.
This reduction of REAL production potential is a real threat to society as a whole, as it genuinely diminishes the amount of goods the economy will be able to provide in the future. It is not just an optical illusion which arises as a result of a wrongly calibrated financial system. (We can see it on a daily basis, whether it is the case of automobile producers, steel makers or other industrial factories which are shedding thousands of employees, or cities going bust which were once the pride of the nation and home to millions.)

Of course, such an approach is total stupidity. And imagine that the reaction to such a problem is prolonging the amount of available working time through postponing the retirement age! It is not enough that unemployment is rising in the society, signaling a surplus of available labour (due to rising automation); we are going to increase this available working time even more! Because postponing the pension age is exactly this: increasing the available working time for individuals and for society as a whole. In reality, our problem is quite the opposite and requires quite the opposite solution: to decrease the available working time, which does not need to be so high thanks to technological advancement, and to redistribute the financial resources so that everybody gets a fair share of the increased production capacity of our economy. So postponing the retirement age is stupidity squared!!! (And indeed it leads to a deepening of existing problems, drifting further away from the real solution.)

Imagine a hypothetical society in a distant future, where all the work is done by robots. Robots are working in factories, robots are serving in restaurants, providing all sorts of services... But this production is not for free. There is still private ownership of the means of production, and therefore their owners ask money for their production. Who will give it to them? And where will they take it from? Buying power comes from wages, and entirely so. Pensions are just transformed wages, and savings are just wages not consumed so far. As all work is done by robots, who take no salary, citizens who have no income have no buying power and are not able to buy even basic necessities. Such a society would soon go bust, because after exhausting the savings, which would be flowing in just one direction (towards the owners of the robots), no more transactions could occur – simply, there would be no money for them, and the technological advancement of such a world would soon show itself to be quite useless. The only way such a world could survive would be 100% taxation, which would regularly take away all sales going to the robots' owners and redistribute them back among citizens. 100% taxation is inevitable, because with a lower rate (say 90%) the citizens would get only 90% of the resources back to revitalize their buying power, which would mean only 90% of future sales. In the following year it would be only 90% of 90%, so 81%, and so on – a rather easily understood recession of such an over-robotized world. Such a society is so far highly utopian and represents pure communism, which corresponds to the level of taxation. But with such a level of technological advancement it would be the only possible economic system.

The second extreme is some prehistoric society, where there are no production tools and so all people have to work to survive (to really produce what their society needs).
As everyone's work is essentially indispensable for survival and all have to consume rather equal amounts of goods (food) to survive, everybody gets a share of the common production (food = wage) and taxation is 0%. If there were any taxation, some members of such a society would not get their full share and they would perish by hunger. As everybody's work is essential for the survival of society as a whole, there is no taxation and everybody uses his full wage (share) to maintain his life.

Our present position is somewhere in the middle. Production is partially automated, and is moving more and more to the right of the chart. During the last 20-30 years there has been enormous technological progress, mainly because of computerization and automation in production. That means a shifting of profits towards the owners of the means of production. If such a shift is not matched with higher taxation, there comes an automatic decrease of sales and a fall of the economic system into recession, as buying power was reduced due to technological progress and its revival to the previous level is not possible. The rise of the capital share of society's product (so clearly visible during the last 20-30 years) inevitably calls for higher taxation, which will shift part of the profits (and so buying power) back towards employees. Otherwise the increased production capacity of our economy will come in vain, and the temporary substitution of regular buying power (wages) by personal debt will inevitably lead to a bust. Personal debts are not a sustainable form of aggregate demand.

So if you hear the argument of the Right that taxes were lower in the past and now they are too high, and so to start the economy we have to lower them again, you can understand why this reasoning is wrong. Those asking for lower taxes often go in their reasoning back to medieval times, where they quote 10% taxation as the then prevailing tax to landlords and compare it with today's 25-30%. Of course! This historical increase of taxation is an inevitable reaction to technological progress, which is more and more removing the need for human work. Production remains (increases), but the number of people working towards achieving it is continuously falling. Therefore we need higher and higher redistribution, so that the production output reaches the previous number of members of society (unless the Right is asking for the removal of a certain number of citizens by war, which would of course clearly depict its agenda for potential voters. And not to mention that even such a barbaric solution would not bring equilibrium to the economic system. Why build an industry if its builders will get only destruction and its production capacity will remain useless?)

Another argument you may come across is the following: technological progress, and the unemployment that comes with it, is OK; those who are against it are idiots, and society has always managed to cope with this problem and moved to a higher level. They will tell you about the demolishing of machinery in Great Britain during the industrial revolution, and they will point to the indisputable progress and higher living standards of today. It is a matter of course that technological progress is OK, and this theory has never disputed it. Technological progress increases the production capacity of the REAL, resources-based economy as the only possible way to provide higher consumption to the people. But these critics will never tell you HOW the society actually managed to cope with this problem!!!
Before the industrial revolution people commonly worked 10-12 hours per day, 6 days a week; child labor was a matter of fact, and no common employee even dreamed about paid vacation. After the industrial revolution, and as a means to solve the problems related to unemployment, the situation of employees changed dramatically. What happened:

• Shortening of working time to 10, later just 8 hours per day
• Ban on child labor
• Shortening of the working week to 5 days
• Introduction of paid vacation and increasing its duration
• Introduction of the paid pension
• Continuous shortening of weekly working time

So all measures logically aimed at shortening working time (as the need for human labor is continuously decreasing) and at a parallel increase of taxation (in order to pass the increase in productivity on to all members of society, not just the owners of the means of production). Without these measures, society would quickly deteriorate into chaos, revolutions and civil wars.

So which way shall we go now? Will it be the way of enlightenment, further passing the benefits of increased productivity on to the people in the form of decreased working time? We can choose from: further lowering daily working time, more weeks off, earlier pension. Of course, it is historically inevitable and in line with technological progress to further increase the rate of taxation, which will enable, on the FINANCIAL side, the redistribution that will enable consumption for all. Or will it be the path of barbarity? An increase of unemployment, the non-utilization and destruction of already existing production capacities, or even worse – war? And all of this just because we are not able to understand the necessity of redistribution accompanying technological progress? The situation in Europe seems as if we had forgotten all the knowledge accumulated during the 1930s and are going to repeat all the cruel mistakes again and again. Postponing the retirement age is definitely one of the wrong ways, which will lead only to further suffering and decline.
{"url":"https://www.genomofcapitalism.com/index.php/ru/4-3-1","timestamp":"2024-11-12T03:51:26Z","content_type":"application/xhtml+xml","content_length":"242954","record_id":"<urn:uuid:02d304af-13cd-4c52-8ea4-d5e57ed95262>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00432.warc.gz"}