I developed an FEM toolkit in Java: FuturEye. Here is how to use WeakFormBuilder:

package edu.uta.futureye.tutorial;

import java.util.HashMap;

import edu.uta.futureye.algebra.SolverJBLAS;
import edu.uta.futureye.algebra.intf.Matrix;
import edu.uta.futureye.algebra.intf.Vector;
import edu.uta.futureye.core.Element;
import edu.uta.futureye.core.Mesh;
import edu.uta.futureye.core.NodeType;
import edu.uta.futureye.function.AbstractFunction;
import edu.uta.futureye.function.Variable;
import edu.uta.futureye.function.basic.FC;
import edu.uta.futureye.function.basic.FX;
import edu.uta.futureye.function.intf.Function;
import edu.uta.futureye.function.intf.ScalarShapeFunction;
import edu.uta.futureye.io.MeshReader;
import edu.uta.futureye.io.MeshWriter;
import edu.uta.futureye.lib.assembler.AssemblerScalar;
import edu.uta.futureye.lib.element.FEBilinearRectangle;
import edu.uta.futureye.lib.element.FEBilinearRectangleRegular;
import edu.uta.futureye.lib.element.FELinearTriangle;
import edu.uta.futureye.lib.weakform.WeakFormBuilder;
import edu.uta.futureye.util.container.ElementList;

public class UseWeakFormBuilder {
    /**
     * <blockquote><pre>
     * Solve
     *   -k*\Delta{u} = f  in \Omega
     *   u(x,y) = 0,       on boundary x=3.0 of \Omega
     *   u_n + u = 0.01,   on other boundary of \Omega
     * where
     *   \Omega = [-3,3]*[-3,3]
     *   k = 2
     *   f = -4*(x^2+y^2)+72
     *   u_n = \frac{\partial{u}}{\partial{n}}
     *   n: outer unit normal of \Omega
     * </pre></blockquote>
     */
    public static void rectangleTest() {
        //1.Read in a rectangle mesh from an input file with
        //  format ASCII UCD generated by Gridgen
        MeshReader reader = new MeshReader("rectangle.grd");
        Mesh mesh = reader.read2DMesh();

        //2.Mark border types
        HashMap<NodeType, Function> mapNTF = new HashMap<NodeType, Function>();
        //Robin type on boundary x=3.0 of \Omega
        mapNTF.put(NodeType.Robin, new AbstractFunction("x","y"){
            public double value(Variable v) {
                if(3.0 - v.get("x") < 0.01)
                    return 1.0; //this is the Robin condition
                return -1.0;
            }
        });
        //Dirichlet type on other boundary of \Omega
        mapNTF.put(NodeType.Dirichlet, null);

        //3.Use element library to assign degrees of
        //  freedom (DOF) to elements
        ElementList eList = mesh.getElementList();
        //FEBilinearRectangle bilinearRectangle = new FEBilinearRectangle();
        //If the boundary of the element is parallel with the coordinate axes,
        //use this one instead. It will be faster than the old one.
        FEBilinearRectangleRegular bilinearRectangle = new FEBilinearRectangleRegular();
        for(int i=1;i<=eList.size();i++)
            bilinearRectangle.assignTo(eList.at(i));

        //4.Weak form. We use WeakFormBuilder to define the weak form
        WeakFormBuilder wfb = new WeakFormBuilder() {
            /**
             * Override this function to define the weak form
             */
            public Function makeExpression(Element e, Type type) {
                ScalarShapeFunction u = getScalarTrial();
                ScalarShapeFunction v = getScalarTest();
                //Call param() to get parameters; do NOT define functions here
                //except for constant functions (class FC), because functions
                //will be transformed to the local coordinate system by param()
                Function fk = param(e,"k");
                Function ff = param(e,"f");
                switch(type) {
                    case LHS_Domain:
                        // k*(u_x*v_x + u_y*v_y) in \Omega
                        return fk.M( u._d("x").M(v._d("x")).A(u._d("y").M(v._d("y"))) );
                    case LHS_Border:
                        // k*u*v on Robin boundary
                        return fk.M(u.M(v));
                    case RHS_Domain:
                        return ff.M(v);
                    case RHS_Border:
                        return v.M(0.01);
                }
                return null;
            }
        };
        Function fx = FX.fx;
        Function fy = FX.fy;
        wfb.addParamters(FC.c(2.0), "k");
        //Right hand side (RHS): f = -4*(x^2+y^2)+72
        wfb.addParamters(fx.M(fx).A(fy.M(fy)).M(-4.0).A(72.0), "f");

        //5.Assembly process
        AssemblerScalar assembler = new AssemblerScalar(mesh, wfb.getScalarWeakForm());
        System.out.println("Begin Assemble...");
        Matrix stiff = assembler.getStiffnessMatrix();
        Vector load = assembler.getLoadVector();
        //Boundary condition
        System.out.println("Assemble done!");

        //6.Solve linear system
        SolverJBLAS solver = new SolverJBLAS();
        Vector u = solver.solveDGESV(stiff, load);
        for(int i=1;i<=u.getDim();i++)
            System.out.println(String.format("%.3f", u.get(i)));

        //7.Output results to a Tecplot format file
        MeshWriter writer = new MeshWriter(mesh);
        writer.writeTechplot("./tutorial/UseWeakFormBuilder2.dat", u);
    }

    public static void main(String[] args) {
        rectangleTest();
    }
}
{"url":"https://www.cfd-online.com/Forums/main/87820-i-developed-fem-toolkit-java-futureye.html","timestamp":"2024-11-03T02:21:37Z","content_type":"application/xhtml+xml","content_length":"121999","record_id":"<urn:uuid:af3b3a11-eeba-4239-94db-0d2aecd8fa17>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00163.warc.gz"}
BCS-054 Solved Assignment 2019-20 | Computer Oriented Numerical Techniques - KHOJINET | IGNOU Solved Assignments 2023-2024 at Discounted Price

Course Code : BCS-054
Course Title : Computer Oriented Numerical Techniques
Assignment Number : BCA(5)/054/Assignment/19-20
Maximum Marks : 100
Weightage : 25%
Last Date of Submission : 15th October, 2019 (for July 2019 session); 15th April, 2020 (for January 2020 session)
Solution Type : Softcopy (PDF File)

This assignment has eight questions for a total of 80 marks; the remaining 20 marks are for viva voce. Answer all the questions. You may use illustrations and diagrams to enhance your explanations. Please go through the guidelines regarding assignments given in the Programme Guide for the format of presentation. Illustrations/examples, wherever required, should be different from those given in the course material. You must use only a simple calculator to perform the calculations.

Q1 (a) Use the eight-decimal-digit floating point representation as given in your Block 1, Unit 1, Section 1.3.1, page 29, to perform the following operations:
(i) Represent 0.0006374845 and 5749855743 as floating point numbers in normalised form, using chopping for the first number and rounding for the second.
(ii) Given the above two numbers, what is the absolute and relative error in their representation?
(iii) Subtract the smaller number from the bigger number. What is the error in the resulting number?
(iv) Divide the first number by the second number. Convert the result into normalised form in the given format.
(v) Take the first number as 586309 and assume any second number to demonstrate the concepts of overflow or underflow for the given representation.
(vi) Explain the term bias in the context of binary floating point representation.
(b) Explain the terms Unstable Algorithm and Unstable Problem with the help of one example of each, other than the examples given in the course material.
(c) Find the Maclaurin series for (1 - 2x)^(-1) about x = 0. Use the first four terms of this series to calculate the value of (1 - 2x)^(-1) at any value of x. Also find the bounds of the truncation error for such cases.
(d) What is Taylor's series? Explain with the help of an example. Explain truncation errors in this context.

Q2 (a) Solve the system of equations
x + y + 6z = 6
7x + 3y - 4z = 4
2x - 7y + 3z = 21
using the Gauss elimination method with partial pivoting. Show all the steps.
(b) Perform four iterations (rounded to four decimal places) using (i) the Jacobi method and (ii) the Gauss-Seidel method for the following system of equations:
4x + y - 2z = 15
x - 6y + 2z = -10
-2x + 4y + 8z = -24
with x^(0) = (0, 0, 0)^T. The exact solution is (2, 1, -3)^T. Which method gives the better approximation to the exact solution?

Q3 Determine the largest negative root of the equation
f(x) = 4x^3 - 6x^2 - 8x + 11 = 0
correct up to 2 decimal places, using
(a) the Regula-falsi method
(b) the Newton-Raphson method
(c) the Bisection method
(d) the Secant method

Q4 (a) Find Lagrange's interpolating polynomial that fits the following data. Hence obtain the value of f(3.5).
x:    1  3  6   10
f(x): 1  7  31  91
(b) Using Lagrange's inverse interpolation method, find the value of x when y is 7.
x:      4  16  36  81
y=f(x): 1  3   5   8

Q5 (a) The population of a State for the last 20 years is given in the following table:
Year (x):                  1998  2003  2008  2013  2018
Population (y) (in Lakhs): 19    40    79    142   235
(i) Using Stirling's central difference formula, estimate the population for the year 2007.
(ii) Using Newton's forward formula, estimate the population for the year 2000.
(iii) Using Newton's backward formula, estimate the population for the year 2015.
(b) Derive an expression for the forward difference operator in terms of δ.

Q6 (a) Find the values of the first and second derivatives of y = x^2 + x - 1 for x = 2.25 using the following table. Use the forward difference method. Also find the Truncation Error (TE) and actual errors.
x: 2     2.5   3      3.5
y: 5.00  7.75  11.00  14.75
(b) Find the values of the first and second derivatives of y = x^2 + x - 1 for x = 2.25 from the following table using Lagrange's interpolation formula. Compare the results with part (a) above.
x: 2     2.5   3      3.5
y: 5.00  7.75  11.00  14.75

Q7 Compute the value of the integral
∫_0^6 (2x^3 + 5x^2 - 11) dx
by taking 12 equal subintervals, using (a) the Trapezoidal Rule and then (b) Simpson's 1/3 Rule. Compare the results with the actual value.

Q8 (a) Solve the Initial Value Problem, using Euler's method, for the differential equation
y' = 1 + x^2·y, given that y(0) = 1.
Find y(1.0) taking (i) h = 0.25 and then (ii) h = 0.1.
(b) Solve the following Initial Value Problem using (i) the R-K method of O(h^2) and (ii) the R-K method of O(h^4):
y' = xy + x^2, y(0) = 1. Find y(0.4) taking h = 0.2, where y' means dy/dx.
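Q8(a) asks for Euler's method by hand; as a cross-check, the scheme is short enough to sketch in a few lines of Python (the function name euler and the step loop are our own illustration, not part of the assignment):

```python
def euler(f, x0, y0, x_end, h):
    """Advance y' = f(x, y) from (x0, y0) to x_end using Euler steps of size h."""
    x, y = x0, y0
    while x < x_end - 1e-12:   # small guard against floating-point drift in x
        y += h * f(x, y)       # Euler update: follow the tangent for one step
        x += h
    return y

# Q8(a): y' = 1 + x^2 * y with y(0) = 1
f = lambda x, y: 1 + x**2 * y
print(euler(f, 0.0, 1.0, 1.0, 0.25))  # four steps of h = 0.25: y(1) ≈ 2.3767
print(euler(f, 0.0, 1.0, 1.0, 0.1))   # ten steps of h = 0.1, a closer estimate
```

Since the exact solution curve is convex and increasing here, the finer step gives the larger (and more accurate) estimate, which is the comparison the question is driving at.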
{"url":"https://shop.khoji.net/p/bcs-054-solved-assignment-2020/","timestamp":"2024-11-13T12:22:49Z","content_type":"text/html","content_length":"155598","record_id":"<urn:uuid:3874d51d-2b18-4561-b65b-80b9cd8c150f>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00792.warc.gz"}
Smarter Grids With Sass And Susy — Smashing Magazine

This article provides a small taste of what is possible with a Sass-based grid framework named “Susy”. Once you’ve spent a little time getting the hang of it, you’ll see how easy and fast it is to create simple or complex layouts without a huge amount of code.

If you’re a designer, you’ll know that grids are your friends. More often than not, they’re the vital architecture that holds a beautiful design together; they create rhythm, structure your page, lead the eye, and prevent the whole thing collapsing in a sloppy mess. I’m a firm advocate for designing with the browser: prototyping with HTML and CSS has many clear advantages over static Photoshop comps, which have less value for the responsive web. Unfortunately, HTML, CSS and grids aren’t natural bedfellows: the progression of web standards over the years has been lacking in this area, meaning we have to grapple with floats (which were never designed to be used this way) and clearfixes — not ideal when you want to spend less time debugging layout and more time crafting experiences. The relatively new implementations of the flexbox and CSS grid layout modules are on course to free us from the constraints of floats and allow us to create some pretty interesting layouts — which I’m excited about — but they’re not a magic bullet. Even with those, coding up a layout can still be a lengthy process for a designer, and require you to remember a bunch of not very intuitive syntax.

Happily, lots of tools currently exist to enable you to quickly design grids for the web, from CSS frameworks like Bootstrap and Foundation, to sites like Responsive Grid System. However, frameworks have their drawbacks, such as requiring you to add a large number of classes to your markup, and delivering a bloated codebase that can be bad for performance.
Luckily for us, solutions exist to the problem of creating fast, responsive, fully customizable grids: Susy is a Sass-based grid framework. It’s very lightweight and enables you to create entirely custom grids in CSS without touching your markup. Susy was developed by Eric M. Suzanne, with support from Chris Eppstein — creator of Compass and co-creator of Sass — so it’s very well supported in the Sass community and it’s rapidly gaining in popularity.

Framework Or Not?

I tend to refer to Susy as a framework, but it may be more accurate to call it a grid system. Unlike more traditional frameworks, Susy doesn’t require you to add classes to your HTML elements. Susy is written entirely in your SCSS file by adding mixins and functions, using media queries to customize your layout at your own specified breakpoints. It enables you to keep content and style entirely separate — not essential all the time, but widely viewed as best practice. Before we continue, I should point out that Susy isn’t the only solution: Zen Grids by John Albin and Jeet by Cory Simmons are two others. Singularity is also a pretty interesting framework, which works in a similar way to Susy, using Sass mixins rather than adding classes to your HTML. However, Susy has a growing community and helpful documentation, making it a good choice to get started with.

Why might you want to use Susy in your next project? There are several advantages:

1. (Relatively) Easy to Learn

If you’ve used other frameworks (like Bootstrap or Foundation), and if you’re familiar with Sass at all, Susy shouldn’t be too difficult for you to pick up. Even if you’re fairly new to Sass, Susy doesn’t require in-depth knowledge and is a great way to start! The examples in this article assume a working knowledge of Sass, so it’s worth reading up a little if you’re not familiar.

2. Speed Up Your Workflow

Unlike many other frameworks, Susy doesn’t come with a bunch of default styling that you need to overwrite.
In fact, Susy has no styling: it’s purely a grid layout system. Its purpose is to do the maths for you — anything else is for you to add. Once you’re familiar with a few of Susy’s mixins, you’ll find it’ll save you time and free you up to concentrate on design. 3. Use as Much or as Little as You Like As with Sass, you can pick and choose what works for you. We’ll focus on a few fairly simple examples here, but you can use Susy to do some pretty complex things if you’re so inclined! I should also point out that while Susy currently relies on traditional floats to position your grid, the documentation indicates that flexbox and CSS grid layout could well be forthcoming, which will make it even more powerful! Getting Started Susy was designed to work with Compass, so if you already have Compass installed then setting up Susy is straightforward. You don’t actually need to use Compass in order to use Susy — Susy is compatible with just about any Sass workflow — but for the purpose of getting started I’ll be using Compass as a primary example. To install Susy, simply run the following in the command line: $ gem install susy (If you get an error, you may need to prefix this command with sudo.) Then set up your new project. If you’re using Compass, in your config.rb you need to add require 'susy'. Then, in your main SCSS file (in this case screen.scss) add @import "susy"; to import Susy into your project. Alternatively, CodeKit is an excellent app for getting up and running with everything you need and allows you to add Susy to your project quickly and easily. Finally, we need to create an index.html file in our project folder to house our markup, and link it to our style sheet. Building Our First Susy Grid Assuming you’ve gone through the necessary steps to run Susy in your project, we’re ready to create our first layout. First of all, we need to define some parameters for our grid in a map at the beginning of our main SCSS file. 
If you’ve come across Sass maps, you’ll be familiar with the syntax:

$susy: (
  columns: 12,
  container: 1200px,
  gutters: 1/4,
  global-box-sizing: border-box,
  debug: (image: show)
);

In this map you can define pretty much any of the global settings for your grid, from the number of columns to gutter width, all of which are listed in the Susy documentation. If you don’t enter a setting in your map, Susy will simply use its default settings. There are a few things we’ll need to define to create our grid:
• The number of columns we’ll use.
• The maximum width of the container. If you don’t specify a width, your container will be 100% of the width of the viewport, like any block element. You might want this in some cases but, especially while we’re learning, setting a maximum width allows us to see more clearly what’s going on.
• Gutters. By default Susy includes gutters as right-hand margins on your columns, at one quarter (1/4) of the column width. You can change the gutter ratio here, or use gutter-position (https://susydocs.oddbird.net/en/latest/settings/?highlight=gutters#gutter-position) to decide how you want gutters to behave.
• Box-sizing. I always prefer to set this to border-box. (Susy’s default is content-box.)
• The debug image. Setting this to show displays a background image showing your column grids, useful for making sure everything is aligned correctly and your elements behave as they should.

Creating A Basic Layout

We’re going to start by creating this simple layout using Susy:

A basic webpage layout. (View large version)

See the Pen Susy Grid Example 1A by Michelle Barker (@michellebarker) on CodePen.

We’ll start with some markup containing a header, a main content area with article and sidebar, and a footer.

<main class="wrapper">

As I previously mentioned, Susy depends entirely on CSS and Sass to customize your grid, so you don’t need to add anything else to your HTML.
The most important feature for creating grids in Susy is the span mixin. Use @include to include the mixin in your Sass. As you can see from the image, we want our header and footer to take up 100% of the container width, so we don’t need to do anything here. But we need our <article> and <aside> elements to take up eight columns and four columns respectively in our twelve-column grid. In order to achieve this we need to use the span mixin:

/* SCSS */
article {
  @include span(8 of 12);
  /* More styles here */
}
aside {
  @include span(4 of 12 last);
  /* More styles here */
}

There are a couple of things to note here:
1. Susy depends on context: we could easily write @include span(8) for the <article> element, which would produce the same result, because we already defined our layout as twelve columns in the map. However, if we wanted to override the map for this particular element (say, subdivide this area of our layout into a greater number of columns), we need to specify the context — in this case, twelve columns.
2. We want the <aside> to be the last item in the row, so we’ve added the word last to the mixin. This tells Susy to remove the right-hand margin on that element so that it fits on the row.

If we take a look at our CSS file, we’ll see the above Sass compiles to:

article {
  width: 66.10169%;
  float: left;
  margin-right: 1.69492%;
}
aside {
  width: 32.20339%;
  float: right;
  margin-right: 0;
}

You can see that Susy has calculated our column widths and gutters based on the settings we specified in our map. In the Codepen example I’ve included some dummy content, as well as a little padding and a background color on our elements. Without these our grid would simply be invisible, as Susy has no default styling.

As we’re floating elements, we also need to remember to clear our footer:

header {
  padding: 2em;
  background-color: #FF4CA5;
  /* Of course, you can define your colours as variables if you prefer!
  */
}
article {
  @include span(8);
  padding: 2em;
  background-color: #ff007f;
}
aside {
  @include span(4 last);
  padding: 2em;
  background-color: #CC0066;
}
footer {
  clear: both;
  padding: 2em;
  background-color: #7F2653;
}

Finally, we’ll include the container mixin in our main element to give our content a maximum width and position it in the center of the page by setting the left and right margins to auto:

main.wrapper {
  @include container;
}

With these additions, we get this result:

A basic webpage layout with dummy content. (View large version)

Refining Our Grid

We could do with separating the elements to make our layout more pleasing. Let’s add a bottom margin to our <header>, <article> and <aside> elements. What would make an even more appealing layout would be to make our bottom margins the same width as our column gutters. With Susy, we can do this using the gutter function (as opposed to the mixin):

header, article, aside {
  margin-bottom: gutter();
}

As we haven’t specified a value in the gutter function, Susy will use our map settings; that is, 1/4 column-width in a twelve-column layout. But if, for instance, the section we were working on was only eight columns wide, we may want to specify gutter(8) to create the same effect. Our SCSS file now looks like this:

main {
  @include container;
}
header, article, aside {
  margin-bottom: gutter();
}
header {
  padding: 2em;
  background-color: #FF4CA5;
}
article {
  @include span(8);
  padding: 2em;
  background-color: #ff007f;
}
aside {
  @include span(4 last);
  padding: 2em;
  background-color: #CC0066;
}
footer {
  clear: both;
  padding: 2em;
  background-color: #7F2653;
}

Now our layout looks like this:

A webpage layout refined with gutters. (View large version)

Mixins Vs. Functions

We just used gutter as a function, as opposed to including it in a mixin. It’s worth noting that span, gutter and container can all be used as both mixins and functions.
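The percentages Susy compiles to are just ratio arithmetic, and it can be reassuring to verify them by hand. The following Python snippet is our own back-of-the-envelope reconstruction of the calculation for this particular configuration (12 columns, gutters of 1/4, gutters as right-hand margins); it is not Susy’s actual implementation:

```python
def span_width(n, columns=12, gutters=0.25):
    """% width of an n-column span: n columns plus the n-1 gutters inside it,
    relative to a full row of `columns` columns and columns-1 gutters."""
    total = columns + (columns - 1) * gutters
    return (n + (n - 1) * gutters) / total * 100

def gutter_width(columns=12, gutters=0.25):
    """% width of a single gutter relative to the same full row."""
    total = columns + (columns - 1) * gutters
    return gutters / total * 100

print(round(span_width(8), 5))    # 66.10169 -> the compiled article width
print(round(span_width(4), 5))    # 32.20339 -> the compiled aside width
print(round(gutter_width(), 5))   # 1.69492  -> the compiled margin-right
```

The figures match the compiled CSS earlier in the article exactly, which makes this a handy sanity check when a layout misbehaves.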
The Susy documentation outlines use cases for each, but our next example should help give you an understanding of the circumstances in which a function might be useful.

Creating A Gallery

There’s one more thing that will no doubt be extremely handy to you as a designer: the gallery mixin. As the Susy documentation succinctly puts it, “Gallery is a shortcut for creating gallery-style layouts, where a large number of elements are layed [sic] out on a consistent grid.” It’s great for unordered lists, for example, if you want to create a portfolio page.

A gallery page layout. (View large version)

Below is the CodePen for this gallery layout.

See the Pen Susy Grid gallery example by Michelle Barker (@michellebarker) on CodePen.

We’ll use the following markup for a gallery of twelve items — again, I’ve added some placeholder content in the Codepen example:

<ul class="gallery">

We’ll keep the same SCSS from the previous example, with the addition of the following:

ul.gallery {
  padding: span(1 of 8);
  list-style: none;
}
.gallery li {
  @include gallery(2 of 6);
  margin-bottom: gutter(6);
  &:nth-last-child(-n + 3) {
    margin-bottom: 0;
  }
}

There are a few things going on here:
1. First, we’re using span as a function to add one column-width of padding all the way around our gallery. As the element is eight columns wide, taking up one column-width on either side leaves us with the remaining six columns for our gallery items.
2. Using the gallery mixin on our <li> element (@include gallery(2 of 6)), we’re telling Susy that each item should take up two columns in our six-column width. That means that each row will hold three gallery items.
3. Using the gutter function (margin-bottom: gutter(6)) we’re adding a bottom margin the equivalent of one gutter-width in our six-column context to each item in our gallery.
I’m using the :nth-last-child pseudo-class to remove the bottom margin from our last row, giving us a perfectly even amount of spacing around our gallery.
4. As we’re floating elements, we’ll also need a clearfix on the parent element (in this case the ul element). In the example, I’m using Compass’s clearfix, but you could create your own mixin, or write it longhand.

ul.gallery {
  @include clearfix;
}

A gallery page layout populated with content. (View large version)

Susy For Responsive Web Design

Although the examples we’ve walked through so far are based on fluid grids (Susy’s default), they aren’t responsive — yet. At the beginning of this article I mentioned that Susy is a great tool for designing responsively. Let’s look at one more example to see how we can use Susy to create a layout that adapts to different viewport sizes. In this demo we’ll use media queries in our Sass, along with Susy’s layout mixin, to customize our grid for different breakpoints. You’ll recall that at the beginning of our SCSS file we created a map with our global settings. In fact, we can create any number of maps and summon them at will into our layout with this mixin. This is useful if at a certain breakpoint you want to switch from, for instance, a twelve-column to a 16-column layout with no gutters. Our second map may look something like this:

$map-large: (
  columns: 16,
  container: auto,
  gutters: 0,
  global-box-sizing: border-box
);

For this example I’ve created a simple gallery webpage displaying a collection of photos of found typography. Taking a mobile-first approach, our first step is to create a single-column view, where our main elements take up the full width of the page. However, in our gallery section, we want our images to display in rows of two once our viewport gets wider than, say, 480px, and in rows of three after 700px.
We’re using a twelve-column grid in our global settings, so all we need to do is instruct Susy to set our gallery items at six columns out of twelve, and four columns out of twelve respectively, in min-width media queries:

li {
  @media (min-width: 480px) {
    @include gallery(6);
    margin-bottom: gutter();
  }
  @media (min-width: 700px) {
    @include gallery(4);
    margin-bottom: gutter();
  }
}

At widths above 700px, this is what our webpage looks like:

Responsive gallery layout at 700px wide. (View large version)

At desktop sizes we’re going to make our layout a little more complex. Here we want our header to display as a bar on the left-hand side and our gallery items to display in rows of four on the right of the screen, with no gutters. To achieve this we’re going to switch to a 16-column layout, as defined in the new map we created a moment ago. We’re going to use the layout mixin as follows:

@media (min-width: 1100px) {
  @include layout($map-large);
  header {
    @include span(4);
  }
  main {
    @include span(12 last);
    li {
      @include gallery(3 of 12);
      margin-bottom: 0;
    }
  }
}

The layout mixin sets a new layout for our grid. Any code following this mixin will be affected by the 16-column layout we specified. (Note: if you want to revert back to a different layout, you’ll need to use this mixin again to call a different map, as your code will be affected by this mixin until you specify otherwise!) In this instance, the <header> element will span 4 columns out of 16 and the main content area will span the remaining 12 out of 16 columns. Because our gallery items are nested within this 12-column section, we should specify that they take up 3 columns out of 12 (rather than 16). The above code gives us this layout at desktop sizes:

Desktop view of our webpage. (View large version)

Here’s the CodePen for this responsive layout.

See the Pen Responsive layout with Susy by Michelle Barker (@michellebarker) on CodePen.
Dealing With Sub-Pixel Rounding If you’re working with percentage-based grids you’re going to come up against sub-pixel rounding issues. These occur when percentages result in column widths that subdivide a pixel, so the browser rounds your column width up or down by a pixel to compensate. This may mean your layouts don’t always behave as you might expect. For a full explanation of sub-pixel rounding, read “Responsive Design’s Dirty Little Secret” by John Albin Wilkins. Susy provides a workaround for dealing with sub-pixel rounding via its isolate technique, and it’s a handy port of call when your layouts aren’t playing nice. I hope this article has helped you get started with Susy and provided a small taste of what is possible. Once you spend a little time getting the hang of it, you’ll see how easy and fast it is to create simple or complex layouts without a huge amount of code; and it really will save you time in the long run. Here are some useful links and resources to help you get started with Susy: Further Reading (ds, ml, og, mrn)
{"url":"https://www.smashingmagazine.com/2015/07/smarter-grids-with-sass-and-susy/","timestamp":"2024-11-01T20:28:30Z","content_type":"text/html","content_length":"243813","record_id":"<urn:uuid:625e947e-1c44-4862-ae99-6268976aeaa7>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00688.warc.gz"}
Is Second Order Logic Logic?

First-order predicate calculus, a logician might argue, is an effective way of symbolically representing those aspects of sentences in natural language pertinent to their use in arguments and related to the preservation of truth value when considered together. By first-order predicate calculus I mean that logical system involving the standard connectives - &, v, ¬, --> - name letters - a, b, c, etc. - and the quantifiers ∀ ('for all') and ∃ ('there is a'). There is of course some debate as to whether identity - = - is to be included in first-order logic, but that is the subject for another debate, and I shall simply assume that it can be included, and that this assumption does not have a vast effect on the current discussion^1. I shall argue, however, that it is clear that there are groups of sentences which first-order logic is unable to correctly represent, yet which we use quite happily in natural language. I shall go on to discuss whether extending our logic, allowing quantification over predicates as well as name variables, in order to allow ourselves to formally represent these sentences, is worthwhile, or whether the various losses in doing so - moving to a logic which is incomplete and involving arguments which cannot be systematically assessed for validity - outweigh the gains, and whether we should, as Quine asserts, deny that these higher-order logics are logic at all. In his article 'To be is to be a value of a variable (or to be some values of some variables)' Boolos discusses a number of these sentences. He focuses in various examples on terms like 'some', 'most' and 'the same number as' - showing that these sorts of 'mathematical' terms cannot be fully represented using first-order logic alone.
Whilst in first-order logic it is possible to allude to specific numbers of objects having a property (for example 'There are exactly two chickens'):

∃x ∃y ∀z ( Cx & Cy & ¬(x = y) & ( Cz --> (z = x v z = y) ) )

it is not possible to make more general claims about the numbers of the objects, such as that there are the same (unspecified) number of one kind as another, or that there are more of one than the other. To do this would require not just a discussion of the objects belonging to the group, but a discussion of the group itself. This should be fairly clear - the number of objects belonging to a group must be a property of the group, and not of the objects themselves. Perhaps the most famous sentence for logicians that cannot be expressed in first-order logic is about that group with just one member - namely Leibniz's law - which states:

∀x ∀y ( ∀F ( Fx <--> Fy ) --> x = y )

If x and y share all properties, x is y. It would seem an attractive prospect, therefore, to allow quantified predicates into our logical system, so that we are able to encapsulate these sorts of sentences. Particularly appealing is the idea that, if in discussing groups we can compare their size as well as simply their members, we might be able to capture the whole of mathematics in a second-order system. In spite of the great advantages looming if we accept quantified predicates into the logical fold, Quine argues that we should not do so. We have already seen that one of the powers of second-order logic comes from its ability to discuss groups of objects together - and in fact this can reasonably be seen as being just what quantified predicate variables do: to say '∃F Fx' is to say '...there is a characteristic defining a group which...'; to say '∀F Fx' is to say '...every characteristic defines a group which...' - allowing us to discuss the sections of our domain which characteristics mark out.
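The nontrivial direction of Leibniz's law can be rendered directly in a modern proof assistant, which makes the second-order step visible: the quantifier ranges over predicates, not over objects. Here is a sketch in Lean 4 (the name leibniz is ours, purely for illustration):

```lean
-- If x and y satisfy exactly the same predicates, they are identical.
-- Note that `∀ P : α → Prop` quantifies over predicates: this is the
-- second-order move the essay is discussing.
theorem leibniz {α : Type} (x y : α)
    (h : ∀ P : α → Prop, P x ↔ P y) : x = y :=
  (h (fun z => x = z)).mp rfl   -- instantiate P with "is identical to x"
```

The converse direction is immediate (identical objects trivially satisfy the same predicates), so the biconditional form of the law follows.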
Therein, however, Quine argues, lies the rub - for in making claims about groups of objects we presuppose that such groupings exist. To discuss quantified predicates, Quine suggests, is to discuss sets, assuming them to be real and existent, and this is to multiply entities beyond necessity, at least for logic, which should presuppose nothing, relying rather on a priori definitions of its terms for the validity of its claims^2. Furthermore, Quine argues, logic is supposed to be topic-neutral, so the inference from 'A; A --> B' to 'B' is as valid for sentences about rabbits running away from foxes as it is for sentences discussing paracetamol and headaches. Clearly, however, if second-order logic is about sets, it isn't topic-neutral - it's about, well, sets. We can't hope to substitute, say, people for sets in every sentence of second-order logic in a validity-preserving way. The mistake, says Quine, is to claim that 'Fx' can be unpacked as 'x has the property F', as if there are such things as 'attributes' which are what give some object that property. Surely, though, anyone who believes in Universals will agree that this is what 'Fx' really means, and would even claim to have good reasons why this is the only position one could hold (for example because their opponents would have no way to compare objects if there were not common, universal properties that the two objects either shared or did not). We should, however, be wary of an account that commits us to something as metaphysically heavy as Universals, and the supporter of second-order logic as logic should prefer to argue their case without resorting to such baggage-laden theses if at all possible. Quine does, however, see a way out of the problems caused by interpreting second-order logic as discussing sets: namely to take it non-literally. He sets about constructing 'pseudo-sets' which follow the rules and positions of real sets, but are not to be interpreted as discussing something 'really out there in the world'.
He defines a symbol 'ε' corresponding to the notion of membership (but not indicating membership of any actual set), and shows that the other set-theoretic connectives can be defined simply in terms of this symbol, in combination with the standard connectives of first-order logic. By this method he argues that we can avoid the ontological excesses of set-theory, and use only 'pseudo-set-theory', which can be counted as true logic. Boolos, in his article 'On second-order logic', argues, quite sensibly in my mind, that the actual question being asked, of whether second-order logic is logic, is simply a 'quasi-terminological' question, and that 'It is of little significance whether second-order logic may bear the (honorific) label 'logic' or must bear 'set theory''. He is also quite right, in my mind, to dismiss Quine's objection that quantified variables can only stand in the places where a name might stand, and thus that we should not quantify predicates. Clearly first-order quantified variables stand in the places where names might stand, but this is precisely because in first-order logic only names (or perhaps rather the things to which the names refer) are quantified. In second-order logic, where we allow quantified predicates, clearly we are going to have quantifiers appearing in positions where previously they could not. Surely we should allow this, going with the natural language example, which is perfectly able to establish where these new quantifiers may appear and what they would mean in these positions, rather than following Quine's example and denying them outright, simply because when we extend our language to include new sets of symbols, the language involves unfamiliar sets of symbols. The meat of Quine's argument, therefore, seems to lie in his claim that second-order logic presupposes sets.
Boolos takes issue with this claim - suggesting that whilst it may seem that second-order logic alone is guilty of forcing sets upon us, there are implicit assumptions even within first-order logic which require that at least a certain number of sets exist. If we take ∀x ( x = x ) to mean 'Everything is identical with itself' and believe it to be valid (i.e. true on every interpretation) we must be assuming a non-empty domain. Boolos himself, however, accepts that if we unpack ∀x ( x = x ) more cautiously, as 'Everything in the domain is identical with itself', we have a valid sentence which does not assume there is anything in the domain. He asserts also that, since the truth of a sentence even in first-order logic invariably depends on the domain, we should subscript our quantifiers to state to which domain they allude - and yet it seems to me this assumes just what he is trying to prove: most logicians would take issue with the claim that all logical truths are dependent on the domain, and it is certainly the hope of logic that it is able to assert truths, such as modus ponens for example, which are true regardless of the domain in which we are debating. Furthermore, it is all very well claiming that truth-value depends on the domain of discourse, but what Quine is objecting to is that it seems to be the case with second-order logic that meaningfulness depends on the domain - namely that sentences of second-order logic don't have any meaning in domains of anything but sets. Boolos does, however, take issue with the claim that second-order logic's lack of topic-neutrality is a shortcoming, again by suggesting that this is the case even with first-order logic. First-order logic, he claims, is not topic-neutral because it is about the notions of '&', 'v', '¬', '-->', '∀', and '∃'. Whilst this might initially seem a tempting line to take, I don't think that realistically it is a terribly profitable one, in that I don't think it's true.
Whilst validity and truth may be defined in terms of, and discussed using, these symbols, it is the intention of logic that they are the means and not the end - it is the sentences and truths expressed that are the interest of all but the meta-logician. To suggest that, whilst first-order logic may involve connectives and quantifiers, it is not at that level that the work of logic is done - truths being expressed instead in the places taken up by sentences - might initially seem a claim like Quine's about variables in second-order logic occurring in the wrong positions, but I think it is not. Whilst we might slot entities in around connectives to discuss them, the connectives themselves are not the topic of discussion, and since they are simply symbols, defined in a particular way, they do not seem to have ontological weight; they are thus a vehicle for the expression of truth, rather than expressing truths simply by being used, as Quine asserts the use of second-order logic does. Is this whole problem to be escaped, then, simply by taking Quine's line, and using pseudo-set-theory rather than the genuine article? I don't think so, and I am somewhat confused as to why Quine might think so. If indeed second-order logic can be interpreted pseudo-set-theoretically, it isn't clear how Quine's claim that it commits us to set-theory in the first place can stand up. It seems a total about-turn for him to claim that there is an interpretation of second-order logic free of the ontological claims of set-theory, since then there is no reason for him ever to have asserted that these ontological claims would be a burden for second-order logic. Overall, therefore, must I agree with Quine's assertion that second-order logic is not logic? Well, no.
As Boolos argued as soon as he entered this argument, the way the claim is phrased seems to make it simply a trivial terminological dispute - one side can say that the word logic should include higher-order logics, the other side may want to use the word simply to refer to first-order logic, and there can be no serious argument which should decide how a word can be used. The second-order logician (evidently open to new experiences!) should be perfectly willing simply to coin a new term - say 'bogic' - to refer to his system, and the argument could be settled that way. The issue, however, of why we might decide to accept or refuse second-order logic's entrance into logic is an interesting one, focusing more on value-laden judgments of whether we should, effectively, taint our perfect, complete logic with this new-fangled, somehow less rigorous kind - and it is this debate with which Boolos engages. In the end, however, I think Boolos comes to the right conclusion. He points out that Quine's intuitions seem in the end to come down to an unwillingness to accept second-order logic as logic because of its incompleteness, and that, whilst this is an important consideration, it isn't clear why we should draw the line here. We have already sacrificed total decidability in order to have a fuller, more abundant logic than monadic logic in our commitment to first-order logic, and this seems as great a loss as that of completeness would be. Obviously a large part of it comes down to how much we get back in return for this further loss - and yet it seems very plausible that the gains are enormous, and that the wealth of sentences that can be formulated using second-order logic is great enough to justify these setbacks.

^1 If the question of whether identity is part of first-order logic does enter the debate, it is presumably only as a matter of degree.
Whilst first-order logic is even more severely limited if identity is not allowed, the serious inconveniences which Quine highlights will still only appear upon the introduction of quantified predicates.

^2 Perhaps a more than contentious claim, and yet it seems fair to argue that a great advantage of logic is that it never appears to appeal to things beyond itself, rather laying its own foundations and then walking on them.
Bayesian approach to analyzing goalies, Part 3: Updating estimates with more data

In Part 1 of this series, we gave an introduction to the series and the motivation behind this new method of analyzing goalies. In Part 2, we started with an example of Luongo's first 10 shots to build some intuition for what estimates this method gives and the advantages it has over traditional SV% or even strength save percentage (ESSV%). We got a picture like this: After 10 shots, Luongo's curve (black) is still very similar to what our prior expectation was based on typical ESSV% in the NHL. Since he saved 10 out of his first 10 shots, his curve (and the corresponding dot) is shifted slightly to the right. In this article, we'll continue this example and discuss the animation that we showed at the end of Part 2. The animation shows what Luongo's curve looks like after 20 shots, 30 shots, 40 shots, and so on. Luongo's distribution curve goes to the left, but quickly moves back to the right, and seems to stabilize pretty quickly... the black dot seems to bounce around the .930-ish area after the first 500 or 1000 shots. As we add more and more shots, the curve gets narrower, and the peak of the curve gets higher. This indicates that as time goes on, and we get more and more information (i.e. we use more and more shots in the model), we are becoming more and more certain of our estimate of Luongo's true ESSV%. In other words, our estimate is getting more and more precise. We added a blue dot to this animation which represents the observed ESSV%. In the very beginning, it starts out off the screen to the right. (Recall that in Part 2, we said that with 10 saves in his 10 shots, Luongo's ESSV% was a perfect 1.000.) Luongo allowed 3 goals in the first 7 shots of his second game, so the blue dot swings off the screen to the left. The blue dot then swings back to the right and bounces around there for a while, way up past .940. After a while, it stabilizes a bit, and is pretty close to the black dot.
This blue dot is kind of showing what we know about ESSV%: extreme values when the number of shots is small, very unstable in the beginning, but with a significant number of shots, it begins to stabilize. Another observation is that the black dot is always between the blue dot and the red dot. This is true in the beginning when the blue dot is off the screen to the left, and when the blue dot crosses over to the right half of the figure, so does the black dot. This is illustrating the "regression to the mean" that is going on. Our Bayesian estimate for Luongo's true ESSV% is closer to the mean than his observed ESSV%, whether his ESSV% is higher or lower than the mean. During the beginning of the animation, the regression is pretty heavy. In other words, the black dot is pulled towards the red dot. The prior distribution is dominating and has a big influence on our estimate. This is probably preferred... we don't have much data, so our prior information has more influence on our estimate. The blue dot, which is based solely on the limited data, is pretty erratic in the beginning with pretty extreme values. But as time goes on, the black dot moves further from the red dot and gets closer and closer to the blue dot. The more data we get, the less influence the prior has, and the more influence the data has. This method is automatically regressing Luongo to the mean by an appropriate amount based on how much data there is. It might be helpful to look at a non-animated figure to emphasize what is going on. Here's the position of the blue dot and black dot over time: The horizontal axis is time in the animation. More precisely, it is the number of shots used in the model. The vertical axis is ESSV%. So the curves are showing ESSV% after n shots, where n ranges from 10 to 5230. The colors are the same as in the animation: the blue curve corresponds to the blue dot (observed ESSV%), the gray/black curve corresponds to our Bayesian estimate, and the red line is league average.
The black appears to be pretty well-behaved and stabilizes quickly. The blue is much more erratic in the beginning, with large swings, and it appears as a pretty noisy signal before stabilizing. The black is always between the blue and the red. In the beginning, the regression to the mean is heavy and the black is closer to the red. After a while, when we have more data, the black is closer to the blue, and the two curves pretty much move in tandem. In Part 4, we'll talk about regression to the mean a little more, and start showing some results for the rest of the league's goalies. We might even compare Luongo and Schneider, since that's never been done before. This will be an example of a couple of other benefits of our Bayesian ESSV%: 1. it is useful when comparing goalies who have faced different numbers of shots. 2. it is useful for answering questions like "what is the probability that Luongo's true ESSV% is greater than Schneider's?"

Links to other Parts:
Part 1 - Introduction to the series
Part 2 - An example using only 10 shots
Part 3 - Updating estimates with more information
Part 4 - Regression to the mean, Luongo vs Schneider, Thomas vs Rask, Hiller vs Fasth
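The post doesn't spell out its exact model, but the behavior it describes - an estimate pulled toward league average, with the shrinkage fading as shots accumulate - is exactly what a Beta-Binomial update produces. A minimal sketch; the prior parameters below are illustrative stand-ins for a .920 league-average ESSV%, not the authors' values:

```python
def posterior_mean(saves, shots, prior_a=92.0, prior_b=8.0):
    """Posterior mean of true save % under a Beta(prior_a, prior_b) prior.

    The prior mean, prior_a / (prior_a + prior_b) = .920, stands in for
    league-average ESSV%; the actual analysis may use different values.
    """
    return (prior_a + saves) / (prior_a + prior_b + shots)

# After 10 saves on 10 shots: observed SV% is 1.000, but the estimate
# is pulled most of the way back toward the .920 prior mean.
early = posterior_mean(10, 10)

# With thousands of shots, the data dominate and the estimate sits
# very close to the observed rate.
late = posterior_mean(4850, 5230)
```

Note that the estimate is always between the prior mean ("red dot") and the observed rate ("blue dot"), which is the regression-to-the-mean behavior the animation shows.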
fahrenheit Archives - Online Unit Converter. Free Conversion Tool.

6 °F to °C converter. How many Celsius are in 6 °F? The question "What is 6 °F to °C?" is the same as "How many Celsius are in 6 Fahrenheit?" or "Convert 6 Fahrenheit to Celsius" or "What is 6 Fahrenheit to Celsius?" or "6 Fahrenheit to °C".

The remaining entries on this archive page repeat the same template for 4 °F, 3 °F, 2 °F, 82.8 °F, 37.2 °F, 104.6 °F, -267 °F, 572 °F and 356 °F.
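Every one of these converter pages applies the same one-line formula, °C = (°F − 32) × 5⁄9:

```python
def fahrenheit_to_celsius(f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(104.6))  # ~40.33 °C
print(fahrenheit_to_celsius(-267))   # ~-166.11 °C
```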
Cartoon Calculator – Fun Interactive Math Tool

This tool helps you quickly and easily perform basic arithmetic calculations.

How to Use the Cartoon Calculator

Enter two numbers into the "Number 1" and "Number 2" fields. Select an operator from the dropdown menu. Click the "Calculate" button to see the result of the calculation. The result will be displayed in the "Result" field.

Calculation Method

The calculator uses standard arithmetic operations:
• +: Adds the two numbers.
• –: Subtracts the second number from the first.
• *: Multiplies the two numbers.
• /: Divides the first number by the second.

This calculator performs basic arithmetic operations. It does not handle complex mathematical functions, error checking beyond invalid operator inputs, or division by zero. For more advanced calculations, consider using a scientific calculator.

Use Cases for This Calculator

Use Case 1: Addition for Kids
Help kids improve their math skills by allowing them to practice simple addition with fun cartoon characters. Let them input numbers and see the cute characters animate the addition process step by step.

Use Case 2: Subtraction for Beginners
Make subtraction learning engaging for beginners by using colorful cartoons in a calculator. Visualize the subtraction process with animated characters to keep kids interested in the math concept.

Use Case 3: Multiplication Made Fun
Engage students in learning multiplication tables by incorporating interactive cartoons. Show groups of cartoon characters to represent multiplication, making it easier for children to grasp the concept of repeated addition.

Use Case 4: Division Explained Playfully
Explain division to kids in a playful way with the help of cartoon illustrations. Use cute characters to demonstrate division as sharing or grouping to make the mathematical operation more accessible and enjoyable.

Use Case 5: Interactive Mathematics Quiz
Create an interactive math quiz using cartoon characters to test kids' arithmetic skills.
Display questions with cartoon illustrations and provide instant feedback on the correctness of their answers to enhance the learning experience.

Use Case 6: Fraction Calculator with Visuals
Teach kids about fractions using cartoon visuals in a calculator interface. Show fraction operations with animated characters to help children understand how fractions work and how they relate to whole numbers.

Use Case 7: Geometry Calculator with Shapes
Incorporate cartoon geometrical shapes into a calculator to assist kids in solving geometry problems. Use colorful characters representing shapes like squares, circles, and triangles to aid in calculating areas, perimeters, and angles.

Use Case 8: Time Calculator with Animated Clocks
Enhance learning about time concepts by using animated clocks in a calculator design. Enable kids to add, subtract, multiply, and divide time units with the help of cartoon clock characters displaying different times.

Use Case 9: Measurement Converter with Cartoon Comparisons
Develop a measurement converter calculator with cartoon visuals to help kids compare different units. Use animated characters showcasing size comparisons to make converting lengths, weights, and volumes more engaging and understandable.

Use Case 10: Math Puzzles and Brain Teasers
Integrate math puzzles and brain teasers into the cartoon calculator to challenge kids' problem-solving skills. Present animated puzzles that require arithmetic operations to solve, making learning math a fun and interactive experience.
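The page doesn't show its implementation, but the behavior it describes - two numbers, a four-operator dropdown, a check for invalid operators - reduces to a dispatch like the sketch below. The explicit ZeroDivisionError guard is my addition for clarity; the tool itself states it does not handle division by zero:

```python
def calculate(a, operator, b):
    """Apply one of the four basic arithmetic operators to a and b."""
    if operator == "+":
        return a + b
    if operator == "-":
        return a - b
    if operator == "*":
        return a * b
    if operator == "/":
        if b == 0:
            # Hypothetical guard; the described tool has no such handling.
            raise ZeroDivisionError("cannot divide by zero")
        return a / b
    raise ValueError(f"invalid operator: {operator!r}")
```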
PROC UCM: SLOPE Statement :: SAS/ETS(R) 9.22 User's Guide

SLOPE <options> ;

The SLOPE statement is used to include a slope component in the model. The slope component cannot be used without the level component (see the LEVEL statement). The level and slope specifications jointly define the trend component of the model. A SLOPE statement without the accompanying LEVEL statement is ignored; the equations of the trend are defined jointly by the LEVEL and SLOPE statements.

The VARIANCE= option of the SLOPE statement is used to specify the value of the disturbance variance, and the NOEST option holds it fixed at that value rather than estimating it. For example, the statement

   slope variance=0 noest;

together with a LEVEL statement fits a model with a locally linear trend in which the slope disturbance variance is fixed at zero.

Other options: PLOT=( <FILTER> <SMOOTH> ), PRINT=( <FILTER> <SMOOTH> )
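The trend that LEVEL and SLOPE jointly define is the standard locally linear trend of structural time-series models: mu[t+1] = mu[t] + beta[t] + xi[t] and beta[t+1] = beta[t] + zeta[t]. A simulation sketch of that recursion (the textbook form, not code taken from SAS):

```python
import random

def simulate_trend(n, mu0=0.0, beta0=1.0, level_var=0.0, slope_var=0.0, seed=0):
    """Simulate a locally linear trend:
        mu[t+1]   = mu[t] + beta[t] + xi[t],   xi   ~ N(0, level_var)
        beta[t+1] = beta[t] + zeta[t],         zeta ~ N(0, slope_var)
    With both disturbance variances 0 (cf. `slope variance=0 noest;`),
    the slope stays constant and the trend is an exact straight line.
    """
    rng = random.Random(seed)
    mu, beta = mu0, beta0
    out = []
    for _ in range(n):
        out.append(mu)
        mu = mu + beta + rng.gauss(0.0, level_var ** 0.5)
        beta = beta + rng.gauss(0.0, slope_var ** 0.5)
    return out
```

Fixing the slope variance at zero but leaving the level variance free gives a level that drifts around a deterministic slope, which is why the `noest` example above is described as a locally linear trend.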
Permutation groups arising from pattern involvement
Lehtonen, Erkko
Journal of Algebraic Combinatorics, 52 (2020), 251–298

For an arbitrary finite permutation group G, a subgroup of the symmetric group S_l, we determine the permutations involving only members of G as l-patterns, i.e. avoiding all patterns in the set S_l \ G. The set of all n-permutations with this property constitutes again a permutation group. We consequently refine and strengthen the classification of sets of permutations closed under pattern involvement and composition that is due to Atkinson and Beals.
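Pattern involvement here is the usual order-isomorphism notion: a permutation involves an l-pattern if some length-l subsequence has the same relative order as the pattern. A brute-force check (exponential in l, but fine for illustrating the definition):

```python
from itertools import combinations

def involves(perm, pattern):
    """True if perm contains a subsequence order-isomorphic to pattern."""
    k = len(pattern)
    rank = lambda seq: [sorted(seq).index(v) for v in seq]
    target = rank(pattern)
    return any(rank(sub) == target for sub in combinations(perm, k))

def avoids_all(perm, patterns):
    """True if perm involves none of the given patterns
    (i.e. involves only patterns outside that set)."""
    return not any(involves(perm, p) for p in patterns)
```

For example, [3, 2, 1] avoids the pattern [1, 2] because it has no increasing subsequence of length two.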
som: Create and train a self-organizing map (SOM) in RSNNS: Neural Networks using the Stuttgart Neural Network Simulator (SNNS)

This function creates and trains a self-organizing map (SOM). SOMs are neural networks with one hidden layer. The network structure is similar to LVQ, but the method is unsupervised and uses a notion of neighborhood between the units. The general idea is that the map develops by itself a notion of similarity among the input and represents this as spatial nearness on the map. Every hidden unit represents a prototype. The goal of learning is to distribute the prototypes in the feature space such that the (probability density of the) input is represented well. SOMs are usually built with 1d, 2d quadratic, 2d hexagonal, or 3d neighborhood, so that they can be visualized straightforwardly. The SOM implemented in SNNS has a 2d quadratic neighborhood. As the computation of this function might be slow if many patterns are involved, much of its output is made switchable (see comments on return values).

Usage:

som(x, ...)

## Default S3 method:
som(
  x,
  mapX = 16,
  mapY = 16,
  maxit = 100,
  initFuncParams = c(1, -1),
  learnFuncParams = c(0.5, mapX/2, 0.8, 0.8, mapX),
  updateFuncParams = c(0, 0, 1),
  shufflePatterns = TRUE,
  calculateMap = TRUE,
  calculateActMaps = FALSE,
  calculateSpanningTree = FALSE,
  saveWinnersPerPattern = FALSE,
  targets = NULL,
  ...
)

Arguments:

x - a matrix with training inputs for the network
... - additional function parameters (currently not used)
mapX - the x dimension of the som
mapY - the y dimension of the som
maxit - maximum of iterations to learn
initFuncParams - the parameters for the initialization function
learnFuncParams - the parameters for the learning function
updateFuncParams - the parameters for the update function
shufflePatterns - should the patterns be shuffled?
calculateMap - should the som be calculated?
calculateActMaps - should the activation maps be calculated?
calculateSpanningTree - should the SNNS kernel algorithm for generating a spanning tree be applied?
saveWinnersPerPattern - should a list with the winners for every pattern be saved?
targets - optional target classes of the patterns

Details:

Internally, this function uses the initialization function Kohonen_Weights_v3.2, the learning function Kohonen, and the update function Kohonen_Order of SNNS.

Value:

an rsnns object. Depending on which calculation flags are switched on, the som generates some special members:

map - the som. For each unit, the amount of patterns where this unit won is given.
componentMaps - a map for every input component, showing where in the map this component leads to high activation.
actMaps - a list containing for each pattern its activation map, i.e. all unit activations. The actMaps are an intermediary result from which all other results can be computed. This list can be very long, so normally it won't be saved.
winnersPerPattern - a vector where for each pattern the number of the winning unit is given. Also an intermediary result that normally won't be saved.
labeledUnits - a matrix which - if the targets parameter is given - contains for each unit (rows) and each class present in the targets (columns) the amount of patterns of the class where the unit has won. From the labeledUnits, the labeledMap can be computed, e.g. by voting of the class labels for the final label of the unit.
labeledMap - a labeled som that is computed from labeledUnits using decodeClassLabels.
spanningTree - the result of the original SNNS function to calculate the map. For each unit, the last pattern where this unit won is present. As the other results are more informative, the spanning tree is only interesting if the other functions are too slow or if the original SNNS implementation is needed.
References:

Kohonen, T. (1988), Self-organization and associative memory, Vol. 8, Springer-Verlag.

Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen.
https://www.ra.cs.uni-tuebingen.de/SNNS/

Examples:

## Not run: demo(som_iris)
## Not run: demo(som_cubeSnnsR)

data(iris)
inputs <- normalizeData(iris[,1:4], "norm")
model <- som(inputs, mapX=16, mapY=16, maxit=500,
             calculateActMaps=TRUE, targets=iris[,5])

par(mfrow=c(3,3))
for(i in 1:ncol(inputs)) plotActMap(model$componentMaps[[i]],
                                    col=rev(topo.colors(12)))
plotActMap(model$map, col=rev(heat.colors(12)))
plotActMap(log(model$map+1), col=rev(heat.colors(12)))
persp(1:model$archParams$mapX, 1:model$archParams$mapY, log(model$map+1),
      theta = 30, phi = 30, expand = 0.5, col = "lightblue")
plotActMap(model$labeledMap)

model$componentMaps
model$labeledUnits
model$map
names(model)
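Independent of the SNNS internals, the core of Kohonen learning is simple: find the best-matching unit (BMU) for an input, then pull it and its grid neighbours toward that input. A minimal sketch of one update step on a 2d quadratic grid - an illustration of the idea, not RSNNS code:

```python
import math

def som_update(weights, x, lr=0.5, radius=1.0):
    """One Kohonen step on a dict {(i, j): weight_vector}.

    Finds the BMU for input x, then moves every unit toward x, weighted
    by a Gaussian of its squared grid distance to the BMU.
    """
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(weights, key=lambda ij: dist(weights[ij]))
    for (i, j), w in weights.items():
        g = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2   # squared grid distance
        h = math.exp(-g / (2 * radius ** 2))        # neighbourhood kernel
        weights[(i, j)] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return bmu
```

Repeating this over shuffled patterns while shrinking `lr` and `radius` is what spreads the prototypes out to mirror the input density.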
Circular Arc – Doughnut Charts

A few weeks back Jhouz asked a question in the Chandoo.org Forums: "Is it possible to create a doughnut chart like this one in Excel?" This post will examine how to make it. Alert: It isn't as straightforward as you may first think! A couple of users responded with a Doughnut Chart, which at first glance looks quite similar. But the original author wanted round ends on the ends of the Doughnut segment. He also wanted a smooth chart. A quick scan through the properties of a Doughnut Chart reveals there is no option to control the ends of the Doughnut segments. An alternative approach was required.

A Solution

Before starting, if you want to you can follow along using a sample file with the worked examples shown below: Download Here

The solution I posed was to use an X-Y Scatter chart for the line segments and apply a thick Line style. The part of this approach that makes it work is that Line Styles have a property for the Line's End, including an option for a round end. The solution chart above consists of 2 lines. The first is the Background (Grey) line, which is a complete circle. The second line is the green line, which is a segment of the circle equal to, in this case, 45% of a circle or 162 Degrees (0.45 x 360). It is in front of the Grey line.

To apply this technique I used a number of Named Formula, and based the chart on these named formula. First for the Background Grey chart segment. To define the Grey segment I applied 3 Named Formula:

c1_Rad =RADIANS(-(ROW(OFFSET(Sheet1!$A$1,,,360+1,1))-91))
_x1 =COS(c1_Rad)
_y1 =SIN(c1_Rad)

The Grey circle is defined by an Array of Radians of each degree between 0 and 360 of a circle.

c1_Rad =RADIANS(-(ROW(OFFSET(Sheet1!$A$1,,,360+1,1))-91))

This works by using the Excel Row() and Offset() functions to generate an array of Degrees from 0 to 360. The formula

ROW(OFFSET(Sheet1!$A$1,,,360+1,1))

Will return ={1;2;3;4;5;6; ….
;358;359;360;361}

Note that we have taken the array 1 degree past 360 because ROW's lowest value is Row 1, not Row 0. We then subtract 91 degrees from this to allow the chart to start at the top of the circle. The adjusted formula

ROW(OFFSET(Sheet1!$A$1,,,360+1,1))-91

returns: ={-90;-89;-88;-87; … ;268;269;270}

Finally, the - in front of the array changes the direction of the circle from anticlockwise to clockwise, and returns: ={90;89;88;87; … ;-268;-269;-270}

The RADIANS() function is then used to convert the array of Degrees into an array of Radians, returning: ={1.57;1.55;1.53; … ;-1.22;-1.23;-1.25}

The Radians above were rounded to 2 decimal places for display on this post, but Excel internally is using the full 15 decimal place precision.

We can now use this array of Radians to draw the background circle. To do this, set up 2 new Named Formulas:

_x1: =COS(c1_Rad)
_y1: =SIN(c1_Rad)

Each of these will return an array of the X and Y values corresponding to each of the Radians from the previous c1_Rad array. The X and Y values will vary between -1 and 1. You may need these for Chart Scaling later. If you want a circle of a different radius, simply multiply the x and y formulas, like _x1: =COS(c1_Rad)*5 for a radius of 5, and the same for the _y1 named formula.

To plot these we add an X-Y Scatter Chart. Select a single cell, then goto the Insert, Chart, Scatter Chart menu and select a Scatter Chart with Smooth lines. This will give you a blank chart.

With the Chart selected, right click on the chart area and choose Select Data… Add a Series using the Add button. Use the Worksheet Name Sheet1 and the Named Formulas _x1 & _y1 for the X and Y values. You can leave the Series Name blank or enter a value like "Background Circle". Note that you must enter the Sheet Name including the ! preceding the Named Formula name. Once you have accepted the inputs, if you return to the Edit Series dialog, notice that Excel now displays the Workbook's name instead of the Worksheet's name. That's quite ok.
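Outside Excel, the arrays these named formulas produce can be sketched for checking. Below is a minimal Python equivalent; the names mirror the named formulas (c1_Rad/_x1/_y1 for the full background circle, and the c2_Rad/_x2/_y2 arc that is built later from _pct), with _pct hard-coded to 45% for illustration:

```python
import math

def arc_points(pct):
    """Mirror of the named formulas: one (x, y) point per degree,
    starting at the top of the circle and sweeping clockwise
    through pct * 360 degrees."""
    # ROW(OFFSET(...)) yields 1..pct*360+1; subtract 91 and negate,
    # exactly as c1_Rad / c2_Rad do
    rads = [math.radians(-(row - 91)) for row in range(1, int(pct * 360) + 2)]
    return [math.cos(r) for r in rads], [math.sin(r) for r in rads]

x1, y1 = arc_points(1.0)    # background circle (c1_Rad, _x1, _y1)
x2, y2 = arc_points(0.45)   # 45% foreground arc (c2_Rad, _x2, _y2)
```

Plotting x1/y1 and x2/y2 as two smoothed scatter series reproduces the chart; both series start at (0, 1), the top of the circle, confirming the -91 degree shift does what the text describes.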
You will now have a chart which looks like:

Finally, right click on the first series and select Format Data Series. Set the Line Color to a Light Grey and set the Line Width to 12. Check that Markers are set to None.

Next, the Foreground Green chart segment

To draw the front arc of the circle we add a few more Named Formulas:

_pct =Sheet1!$C$6
c2_Rad =RADIANS(-(ROW(OFFSET(Sheet1!$A$1,,,_pct*360+1,1))-91))
_x2 =COS(c2_Rad)
_y2 =SIN(c2_Rad)

_pct stores the value of the percentage of the circle directly from the reference cell on the worksheet, eg: 45%.

To draw an arc we only need to factor the 360 Degrees for a full circle back to the percentage required for the arc: ie: from 0 to 45% x 360 degrees = 162 Degrees, hence drawing an Arc from 0 degrees to 162 Degrees. To do this we use the same formula as before, except that we set the range to 45% of 360 degrees using the Named Formula:

c2_Rad: =RADIANS(-(ROW(OFFSET(Sheet1!$A$1,,,_pct*360+1,1))-91))

Add another series to the chart. With the Chart selected, right click on the chart area and choose Select Data…

X values: =Sheet1!_x2
Y values: =Sheet1!_y2

Next, select the chart and ensure that the 45% circle is in front of the full circle. Select the Chart's 2nd series and change the line width and line color to suit the impact you want. Finally, select the 45% line, goto the Line's properties and set the Cap type to Round.

Add the Measurement

With the Chart selected, goto the Insert, Text Box dialog, select a text box style and insert it. With the text box selected, goto the Formula Bar, enter the formula =_pct and press Enter or click the Tick icon to accept. Finally, with the text box selected, change the Font Size to suit, eg: 64, and format the text using an appropriate style from the Drawing Tools, Format menu. Ensure the Text Box is wide enough to display up to 100% including the percentage sign.

The Final Chart

and with another value…

Other line type endings

Experiment with other Line Ends and see what you can make?
and Line Styles and Thicknesses?

Multiple Series

By careful use of chart series you can add multiple measurements to the same chart and use a combination of display properties to enhance your chart.

In conclusion

I have demonstrated a successful solution to Jhouz's original post and then extended it a bit further. The Author acknowledges that there is limited use for doughnut charts and only recommends them in limited circumstances. I hope these enhancements allow you to better use and emphasise your data in your situation as well as add another Excel technique to your arsenal.

The post Circular Arc – Doughnut Charts appeared first on Chandoo.org – Learn Excel, Power BI & Charting Online. Original source: http://feedproxy.google.com/~r/PointyHairedDilbert/~3/0svhfhWjgVM/
Stonehenge and Pi

ArchaeoBlog - Pi Day 2012

2012.03.14 - Pi appeared while considering the arc distances between the largest Neolithic stone circles and Stonehenge. The pi numeric string expressing the ratio of arc distances between three sites previously occurred in relation to Stonehenge. Pi has also surfaced when considering astronomy correlations and monument properties. In this most recent instance, the accuracy of the ratio is so impressive that a question is posed: "Did the Neolithic builders of the megalithic stone circles intend to express pi?"

The largest stone circles include the very largest, Avebury, the outstanding outlier at 344 meters in diameter. Another half-dozen stone rings nearer 100 to 115 meters in diameter comprise the "great stone circles," large outliers among the nearly one thousand smaller circles. The great stone circles include Newgrange, Long Meg, the Ring of Brodgar, Stanton Drew, and the two inner circles within the massive Avebury circle and surrounding henge. While the great circles are geographically dispersed, most certainly they are not independent inventions. While it is unlikely to be coincidental that five of the seven have nearly the same diameters, precise placement on the landscape to express pi is both a far more difficult task and an ability not commonly attributed to Neolithic cultures.

Geodetic science and the ability to accurately determine the coordinates of ancient monuments is itself a recent development. The current world geodetic system, adopted in 1984 (WGS84), readily allows accurate determination of the relationships of points across the globe. In recent years, the Global Positioning System (GPS) and Google Earth (GE) aerial and satellite imagery have facilitated accurate geographic definition of monument coordinates. Individual megaliths are clearly visible in Google Earth today. Recent Google Earth image updates provided the coordinate data in Table 1 (refining previous coordinate data).

Table 1.
The Neolithic Monuments Considered.

Site                    Latitude     Longitude    Code
Stonehenge              51.178865    -1.826189    stonc
Ring of Brodgar         59.001476    -3.229740    rbrod
Avebury Obelisk         51.428035    -1.853392    avebo
Avebury Cove Megalith   51.429087    -1.854061    avebc
Avebury Circles Mean    51.428561    -1.853727    avecm

The arc distances ratio of 1.0 to 31.4168 caught my attention when Stonehenge, Ring of Brodgar, and Avebury were the three site variables in my research applet, archaeogeodesy.xls. Avebury's "centerpoint" is a bit ambiguous because the "circle" is not a true circle. The Cove and the Obelisk are internal features of the two Avebury inner circles. While these immense menhirs are not necessarily the precise centers of their respective circles, the mean of the two immense stones (avecm) approximates one plausible center for Avebury while providing a reference to specific stone settings. Avebury's north inner circle (an ellipse with the central Cove setting) and south inner circle (with the central Obelisk) are sized nearly the same diameter as the Ring of Brodgar, a true circle.

The accuracy of pi is expressed with greater refinement than the methodology employed to determine the coordinates: the margin of error in determining coordinates exceeds the error factor in the hypothetical representation of pi.

Download Google Earth placemarks file: stonehenge_pi.kml

Move the centerpoint at Avebury less than a meter from the avecm coordinate, and the value '10 pi' is precise! Avebury is about one-quarter degree from Stonehenge, almost 28,000 meters to the north and just over 3,000 meters west. The Stonehenge to Brodgar center-on-center arc distance of 875,200 meters is known to within a few meters accuracy, say within 10 meters. Divide 875,200 m by 31.4159 and the hypothetical Avebury to Stonehenge arc is thus determined to an accuracy of plus or minus 2/3 meter.
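The two arc distances, Stonehenge to Brodgar and Stonehenge to Avebury, and their ratio can be checked to first order from the Table 1 coordinates. The sketch below is a spherical (haversine) approximation with a mean Earth radius rather than the WGS84 geodesic the author uses, so the distances come out a few hundred meters short of the quoted values, but the near-10-pi ratio survives; the helper name arc_m is illustrative:

```python
import math

def arc_m(lat1, lon1, lat2, lon2, R=6371008.8):
    """Great-circle distance in meters on a mean-radius sphere (haversine).
    An approximation: WGS84 geodesics differ by roughly 0.1%."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

stonc = (51.178865, -1.826189)   # Stonehenge
rbrod = (59.001476, -3.229740)   # Ring of Brodgar
avecm = (51.428561, -1.853727)   # Avebury circles mean

long_arc = arc_m(*stonc, *rbrod)     # roughly 874 km on the sphere
short_arc = arc_m(*stonc, *avecm)    # roughly 27.8 km on the sphere
ratio = long_arc / short_arc         # close to 10 * pi
```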
From the center of Stonehenge, that line (27,857.4 +/- 0.3 m) lands not only precisely between the Cove and the Obelisk but also within a meter of the latitude equaling one-seventh of circumference.

There is no doubt pi is irrational. Is it also irrational to assume the builders intended this 10 pi distance ratio? The implication of the intentional hypothesis is counter-paradigmatic, implying that by 4,500 years ago the builders knew the value of pi and either precisely surveyed the British Isles or could astronomically point-position accurately. It would be easy to dismiss this as coincidence were it not so precise. Precision is a valuable tool when doing statistics on a sample of one (the only statistical tool that comes to mind for samples of one). Having independent probabilities is another useful tool for probability analysis. To determine the probability of two coincidences, each probability is multiplied by the other, resulting in an astronomical number. Is it also a coincidence that Avebury, the largest stone circle with three of the great stone circles, is so accurately situated at one-seventh of circumference latitude?

You can test these ideas and other numbers yourself with my applets, downloads: Astronomy Page

Further reading related to these ideas: Ancient Astronomy, Integers, Great Ratios, and Aristarchus | Stonehenge and Astronomy

2012.08.13 - Mark Vidler did the math for himself to test the ideas above and noted I confused two numbers. Thanks for the correction Mark.
Ground Heat Transfer Calculations using Site:GroundDomain:Slab

In order to simulate heat transfer with horizontal building surfaces in contact with the ground, a general finite difference ground model has been implemented. The model simulates heat transfer with the horizontal building surfaces through the ground domain. The domain can simulate heat transfer for slab-in-grade or slab-on-grade scenarios. In all scenarios, the ground domain must interact with the zone through an OtherSideConditionsModel as the horizontal surface's outside boundary condition.

This model is generalized to be able to handle a number of different slab and insulation configurations. It uses an implicit finite difference formulation to solve for the ground temperatures. As a result the simulation is stable for all timesteps and grid sizes, but an iteration loop must be employed to converge the temperatures in the domain for each domain timestep.

Multiple horizontal surfaces can be coupled to each ground domain object. The model determines which surfaces are coupled to the ground domain and creates a surface of equivalent surface area within the ground domain as a representation of the horizontal surfaces coupled to the ground domain. This surface then interacts with the ground, providing updated other side conditions model temperatures to the coupled surfaces for use in their surface heat balance calculations.

Boundary Conditions

At the interface surface, the average surface conduction heat flux from all surfaces connected to the ground domain is imposed as a GroundDomain boundary condition at the Surface/GroundDomain interface cells. Heat flux to each cell is weighted as was recommended by Pinel & Beausoleil-Morrison 2012, which is shown in Equation [eq:Pinel-Beausoleil-Morrison]. Far-field temperatures are applied as boundary temperature at the GroundDomain sides and lower surface.
The ground temperature profile at the domain sides and lower surface is taken from Kusuda & Achenbach 1965. The correlation requires annual ground surface temperature data. Ground surface cells are treated as a heat balance, where long and short wave radiation, conduction, and convection are considered. Evapotranspiration is also considered. The evapotranspiration rate is calculated as a moisture loss by using the Allen et al. (2005) model, and translated into a heat loss by multiplication with the density and latent heat of evaporation of water. The evapotranspiration rate is dependent on the type of vegetation at the surface; the user can vary the surface vegetation from anywhere between a concrete surface and a fairly tall grass (about 7").

Once the ground model has run, the updated cells with zone surface boundary conditions will update the OtherSideConditionsModel temperatures, which are then used at the next timestep in the surface heat balance calculations.

Simulation Methodology

The ground domain is updated at each zone timestep, or hourly as specified by the user. For situations when the ground domain is updated at each timestep, the domain is simulated by applying the surface heat flux boundary conditions from the previous timestep and calculating a new OtherSideConditionsModel temperature. At this point, the surface heat balance algorithms can then take the new outside surface temperatures to update their surface heat fluxes. For situations when the user has elected to have the domain update on an hourly basis, the surface heat balance for each coupled surface is aggregated and passed to the domain as an average surface heat flux from the previous hour, which will then update the outside surface temperatures for the surface heat balance's next update.

Both in-grade and on-grade scenarios are simulated with the GroundDomain object.
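The implicit formulation with its per-timestep iteration loop can be illustrated on a toy 1-D column of ground. This is only a sketch of the numerical idea, not EnergyPlus's actual 3-D domain: an imposed surface heat flux stands in for the zone-side boundary, a fixed far-field temperature pins the bottom, and Gauss-Seidel sweeps play the role of the iteration loop that converges the domain temperatures each timestep; all names and parameter values are illustrative:

```python
def step_implicit(T, q_top, T_deep, dz, dt, k, rho_cp, tol=1e-8):
    """One backward-Euler timestep of 1-D conduction in a soil column.

    T is the node temperature list (mutated in place). q_top is the
    surface heat flux into the ground (W/m2); the bottom node is held
    at the far-field temperature T_deep. Unconditionally stable, but
    the implicit system is converged by Gauss-Seidel iteration."""
    r = (k / rho_cp) * dt / dz ** 2
    T_old = T[:]
    n = len(T)
    while True:
        worst = 0.0
        # half-cell energy balance on the surface node with imposed flux
        new0 = (T_old[0] + 2 * r * T[1] + 2 * dt * q_top / (rho_cp * dz)) / (1 + 2 * r)
        worst = max(worst, abs(new0 - T[0]))
        T[0] = new0
        for i in range(1, n - 1):
            new = (T_old[i] + r * (T[i - 1] + T[i + 1])) / (1 + 2 * r)
            worst = max(worst, abs(new - T[i]))
            T[i] = new
        T[-1] = T_deep  # far-field boundary
        if worst < tol:
            return T

# march daily steps to steady state; the profile must become linear,
# with T_surface = T_deep + q * L / k
T = [10.0] * 11
for _ in range(5000):
    step_implicit(T, q_top=10.0, T_deep=10.0, dz=0.5, dt=86400.0,
                  k=1.5, rho_cp=2.0e6)
```

The steady-state check is a useful property of the scheme: a constant flux through a 5 m column at k = 1.5 W/m-K must raise the surface node by q*L/k = 33.3 K above the far-field temperature, regardless of timestep size.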
The key difference being that for in-grade situations, the slab and horizontal insulation are simulated by the ground domain, whereas for the on-grade situations the slab and horizontal insulation must be included in the floor construction object. All possible insulation/slab configurations are seen in Table 1.

Possible insulation/slab configurations for Site:GroundDomain

Situation      Vert. Ins.   Horiz. Ins. (Full)   Horiz. Ins. (Perim.)
In-Grade 1     X
In-Grade 2     X            X
In-Grade 3     X                                 X
In-Grade 4
In-Grade 5                  X
In-Grade 6                                       X
On-Grade 7     X
On-Grade 8*    X            X
On-Grade 9
On-Grade 10*                X

* Horizontal insulation must be included in the floor construction

For the slab-in-grade scenarios, a thin surface layer must be included in the floor construction. This can be a very thin layer of the slab or other floor covering materials above the slab. This provides a zone boundary condition for the GroundDomain while still allowing 3-dimensional heat transfer effects to occur within the slab.

References:

Allen, R.G., Walter, I.A., Elliott, R.L., Howell, T.A., Itenfisu, D., Jensen, M.E., Snyder, R.L. 2005. The ASCE Standardized Reference Evapotranspiration Equation. Reston, VA: American Society of Civil Engineers.

Kusuda, T. & Achenbach, P. 1965. Earth Temperature and Thermal Diffusivity at Selected Stations in the United States, ASHRAE Transactions 71(1): 61-75.

Pinel, P. & Beausoleil-Morrison, I. 2012. Coupling soil heat and mass transfer models to foundation models in whole-building simulation packages. In Proceedings of eSim 2012, Halifax, Canada.
Discounted Cash Flow Calculator (DCF) - fide2020.eu

Our Discounted Cash Flow Calculator helps you assess the intrinsic value of stocks by calculating the present value of expected future cash flows. This tool is essential for making informed investment decisions, allowing you to determine whether a stock is undervalued or overvalued. Use it to guide your long-term investment strategy and optimize portfolio management.

How the Discounted Cash Flow Calculator Works

The Discounted Cash Flow calculator helps you estimate the intrinsic value of a stock by considering expected future cash flows. It discounts those cash flows back to the present using a discount rate, providing investors with an informed basis for investment decisions. By inputting projected cash flows and the discount rate, you can calculate the fair price of a stock based on its future financial performance.

Calculating DCF: Formula and Explanation

The DCF formula discounts future cash flows to present value using the following equation:

\( DCF = \sum_{t=1}^{n} \frac{CF_t}{(1 + r)^t} + \frac{TV}{(1 + r)^n} \)

• \(CF_t\): Cash flow in year t
• \(r\): Discount rate (in decimal)
• \(t\): Time period in years
• \(TV\): Terminal Value (company's value after the forecast period)
• \(n\): Number of forecast years

The terminal value represents the value of the company beyond the forecasted years and is discounted just like the cash flows. Adjustments can be made based on expected growth rates.
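The formula translates directly into code. A minimal Python sketch of the same calculation; the function name and the five example cash flows, rate, and terminal value are illustrative, not values from the calculator:

```python
def dcf(cash_flows, r, terminal_value=0.0):
    """Present value of a series of annual cash flows plus a terminal
    value, both discounted at rate r (decimal), per the DCF formula."""
    n = len(cash_flows)
    pv_flows = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    pv_terminal = terminal_value / (1 + r) ** n
    return pv_flows + pv_terminal

# five years of 100 per year at a 10% discount rate, plus a 1000 terminal value
value = dcf([100, 100, 100, 100, 100], r=0.10, terminal_value=1000)
```

Dividing the result by the number of outstanding shares gives a per-share intrinsic value to compare against the market price, which is what the calculator's shares input is for.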
Application Areas and Use Cases

The DCF calculator is primarily used in stock valuation, especially for companies with predictable cash flows. Investors use the DCF model to determine a stock's "fair" price and make informed investment decisions. It is particularly useful for long-term investments where future cash flows play a critical role in valuing the company.

Using DCF for Long-Term Financial Planning

The Discounted Cash Flow (DCF) model is a valuable tool in strategic financial planning, helping businesses and investors project long-term value. By estimating future cash flows and discounting them to their present value, companies can assess the feasibility of long-term projects, investments, or business expansions. This method provides insights into how current financial decisions will impact future performance and profitability, making it a cornerstone of strategic planning.
A mountain's peak is 880 m above sea level while the valley is 139 m below sea level. What is the elevation drop between the top of the mountain and the bottom of the valley?
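The drop is the difference between the two elevations, with the valley's depth taken as a negative number; a one-line check:

```python
peak = 880      # meters above sea level
valley = -139   # meters below sea level, as a signed elevation
drop = peak - valley  # 880 - (-139) = 1019 meters
```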
Clearhat - Mathematical infinity at the beginning instead of at the ends
2021-04-15 (updated 2024-05-19)
Tags: Aristotle, certainty, emptiness, empty set, horror vacui, infinity, negative infinity, zero

<p><em>The following is a "thinking-out-loud" kind of thought experiment which sort of went off the rails... and then got righted again.</em></p> <h4>The simplicity of emptiness</h4> <p><img alt="" class="media media-right" src="https://www.clearhat.org/public/glass-161034_640.png" style="width: 30%; float: right;" />We accept the simplicity of emptiness as a reliable foundation upon which to establish all of mathematics without question. The logic is plain: clearly, there can be nothing more simple than the empty set. We start there, and, knowing that we have begun with the most logically solid foundation possible, develop the rest of set theory. For a similar reason, Peano's axioms, widely understood to be fundamental, start with zero as the first number, and follow this same pattern; from simplicity to complexity.</p> <p>Note that we entail a few little-considered intuitions about mathematics when we begin with emptiness in this way. For example, one of the little-realized implications of starting with emptiness is that it is then easier to accept the assumption that&nbsp;<em>none of mathematics has any physical weight whatsoever</em>.
In other words, the full continuum of real numbers, in all its vast and infinitely divisible infinitude -- as well as its related dividing techniques like Dedekind cuts or Cantorian diagonals which we use to separate reals and infinities from each other -- all of this together weighs nothing.</p> <p>Such weightlessness -- although a kind of emptiness -- may seem unrelated to the empty set -- another kind of emptiness, but consider: if we imagined for a moment that numbers or anything in mathematics weighed anything, a curious paradox about&nbsp;<em>something resting upon nothing</em>&nbsp;would immediately appear to imagination in stark relief to the elegance of the well-laid foundation.</p> <p>Therefore we can be sure all of mathematics weighs nothing, even though it's not the way we normally think about math.</p> <p>By means of this small thought experiment, the overlapping relationship between various forms of emptiness becomes a little more obvious; emptiness is emptiness, whether we call it zero, empty set, weightlessness, null, or any other name. It is the lack of properties which blends all these nomenclatures into one.</p> <p>Much of this point is simply taken for granted. For example, no one talks about mathematics as having no weight; there is not even a need for an axiom; it is simply not questioned, being of such little consequence that it is easily ignored. 
"Everyone knows" mathematics has no weight.</p> <p>Although emptiness is a relatively new idea within mathematics, such assumptions about the non-physical nature of mathematics are ancient; they can easily be traced back through Descartes' separation between mind and matter, to Plato's perfect ideals, or Euclid's extensionless points, and further.</p> <h4>A paradox at the root of set theory</h4> <p>Although the non-physical nature of mathematics is ancient, recent studies of the cognitive roots of mathematical concepts prove that mathematical intuition and mathematical structures originate in -- and cannot be separated from -- the same perceptual awareness we use to comprehend the physical world. There are other, similar studies, but Giuseppe Longo and Arnaud Viarouge pose a striking paradox regarding how we accept the formless empty set as the cornerstone of mathematics:</p> <blockquote>"...there is no Mathematics without structure; its constitutive analysis must be the opposite of the unstructured assembly which is the primary foundation and the conceptual origin of Set Theory." --&nbsp;<a href="https://www.researchgate.net/publication/224000251_Mathematical_Intuition_and_the_Cognitive_Roots_of_Mathematical_Concepts" hreflang="en" title= "Mathematical intuition and the cognitive roots of mathematical concepts">Mathematical Intuition and the Cognitive Roots of Mathematical Concepts</a></blockquote> <p>In other words, the structureless form of the empty set <strong>does not correspond with the cognitive roots of mathematical concepts</strong>. It is an artificial construction. The authors go into a fair amount of detail on this matter, saying: "In Set Theory, elements or points precede structures; the latter are conceptually secondary. 
In our views, gestalts, as structures, precede points, they are our primary, proto-mathematical relations to the world."</p> <p>This "Foundation Paradox" is strengthened as the authors go on to discuss the remarkable invariance and high degree of certainty which are defining features of Mathematics. This is a significant point, not to be glossed over as we try to resolve the dilemma -- in fact, it points in the direction we should go with our answer: toward certainty, away from artificial structures.</p> <p>This invariant nature embedded within how we understand and process mathematics can be said to shine a spotlight onto the importance of getting the foundation correct: If mathematics is the science of structures and mathematics exhibits the highest degree of invariance (indeed, the highest certainty in all of scientific inquiry) then it follows that the most stable portion of mathematics -- its foundation -- should reveal structures and parallels to origins in our perceptions and cognitions rooted in billions of years of evolutionary progress. However, we find the opposite; a complete lack of structure, an emptiness, at the root of mathematics.</p> <p>How did we get here?</p> <h4>Infinity and zero historically separated</h4> <p>Without the existence of such a paradox, the novel structure proposed by this present essay would be easily disregarded, because on the surface "infinity at the beginning instead of the ends" is as absurd as claiming that anything mathematical has physical properties -- like weight, or mass, etc.</p> <p>But the paradox clearly does exist. And as long as it does, we must look more closely at all possible resolutions; even those which may seem absurd at first glance.</p> <p>As we shall see, the absurdity of our proposition is <em>not</em> an essential feature. 
In fact, it is more an artifact of how the history of mathematics unfolded than it is an <em>actual mathematical impossibility</em>.</p> <p>The initial conceptual difficulty with fundamentally changing how we look at infinity is rooted in an idea which previously dominated mathematics for centuries -- but is no longer considered important, being now a footnote in the history books. This idea is known as&nbsp;<em>horror vacui</em>, or, an extremely strong prejudice against the idea of emptiness. Aristotle was the first to write of this, and his influence on this matter reached centuries into the future, until the&nbsp;<em>horror</em>&nbsp;began to dissipate with the arrival of the zero from India in about 600 A.D.</p> <p>That dissipation was slow. For example, when the zero first came into being, nobody quite realized that it would grow from its role as a convenient, semantically meaningless, placeholder used within large numbers, into the embodiment of something substantial: the founding "nothing" or "emptiness" by which we know it today. The evolution of zero was necessarily slow, for if its emptiness as we know it had been known early, the&nbsp;<em>horror</em>&nbsp;would have prevented its arrival altogether.</p> <p>Although avoiding emptiness, Aristotle did not avoid that which we today consider its opposite: infinity. Indeed, he had a fairly sophisticated concept of infinity, even separating infinity into two different kinds, "actual" and "potential." Thus the idea of infinity as being larger than&nbsp;<strong>the end</strong>&nbsp;of countable numbers was known and studied long before zero was placed where we have it today, at&nbsp;<strong>the beginning</strong>&nbsp;of countable numbers.<img src="https://www.clearhat.org/public/infinities.png" style="float: right; margin: 10px; width: 50%;" /></p> <p>The separation in time of these two insights is important.
After zero was finally accepted as a placeholder and then later as a numerical symbol for nothing, eventually negative numbers were discovered. Along with them came negative infinity, a mirror image of positive infinity. In this way, emptiness was incrementally but firmly established at the center of the wide spectrum we now know as the real number line, extending from negative infinity to positive infinity.</p> <p>To summarize: The&nbsp;<em>horror vacui</em>, a fear of emptiness, was fading from its dominant place in our minds at the same time zero, or emptiness, was evolving into its now-central place within our mathematical understanding. This gradual transformation is the key to understanding why no one ever seriously considered the possibility that zero and infinity could be joined, in the manner that we will now explore.</p> <p>Thus the seeming absurdity of our position is more an artifact of how the history of mathematics unfolded, than it is an actual mathematical impossibility as it appears on the surface. Now that zero and emptiness have been firmly embedded in their places within mathematics for the past century, we have the liberty to consider things which previously no one, or few, previously considered.</p> <h4>When infinities merge and move to the center</h4> <p>Let us consider, then, what happens when the two endpoints of positive and negative infinity are brought to the center of the number line, where zero firmly exists. This turns mathematics inside out. It's like starting with an ordinary 1992 Ford Taurus, dividing its engine into four pieces, and putting one fourth of the engine on each wheel.</p> <p>In the process of imagining such a thing, how quickly does such an idea become incoherent? For example, can such a car even function? One answer is no, the most essential part of a car is broken. But another answer is, yes, a small motor at each wheel is how many electric cars operate. 
Sometimes what seems absurd can turn out to be sensible, if we give the idea a little patience.</p> <p>Patience is required here. Immediately after embarking into the thought experiment about bringing two opposite infinities together, it will feel like the vastness of infinity cannot fit into the emptiness of zero. Nor can the emptiness of zero contain anything within it and still retain the essential simplicity of emptiness. On the surface, it appears one extreme will cancel the other out, either way we start. The inevitable collision feels like a zero-sum-game with only a single winner possible.</p> <p><img src="https://www.clearhat.org/public/superposition_waves.png" style="float: left; margin: 10px; width: 30%;" />Upon further consideration however, it turns out that quantum mechanics provides a ready intuition for how to do this, by lending its concept of superposition. Superposition is where two or more separate particles inhabit the same space simultaneously. So what happens if we superpose both infinities into a single infinity, and then superpose that singularity of infinities with zero?</p> <p>Before answering that, let's take a moment before going too deeply into the magical melting-pot of superposition.</p> <p>Consider briefly what happens at the positive and negative ends of the number line, now that the two infinities are removed from consideration. Without infinities, both number lines simply stop when the countable numbers end. After that, nothing. Surprisingly after such a dramatic removal, the remaining structure is already well-known within mathematics, especially by computer scientists who figure out how computers should process mathematics:</p> <p>By eliminating infinities from the ends of the number lines, we have not created a Frankenstein, but simply confined ourselves to computable mathematics.</p> <p>Constructivists will appreciate this move immediately, as they've been saying this is the correct way for well over a century. 
We may quibble with finitists on the details of how this happens, but it is enough, for the moment, to realize that we need not worry too much about the ends of the number lines once we remove positive and negative infinity.</p> <p>No big disaster has happened. We can set aside major structural worries for now, and safely come back to the ends later, once we've decided how to resolve what's happening in the center.</p> <p>Back to superposing infinity and zero. How do we <em>do</em> this? Can we actually merge both infinities into a single super-infinity? Or, perhaps we place emptiness in the center of two infinities? Note, as soon as we put anything into emptiness it is no longer empty, so, should we include reference to time or do we do this instantly?</p> <p>How do we get visuals on this? Beginning with what we have in the way of symbols, what happens if we place a single, merged, infinity&nbsp;<em>inside the little emptiness</em>&nbsp;carried visually within the number zero? That approach seems like as good a place to start as any, let's try it.</p> <p>Emptiness we can imagine -- the shape of the zero comes from the shape a pebble would leave in sand after it was removed; a zero makes a good visual for emptiness -- but how do we visualize infinity? Is it like a sun, a sphere flowing outward with light? Or is it like a supernova, exploding infinitely outward, bursting all bonds, even the ability to imagine? Or somewhere between these extremes; what about a fountain... a mountain spring flowing with water into a pool...</p> <p>Yes! That's it!</p> <h4>A mountain spring flowing outward</h4> <p><img src="https://www.clearhat.org/public/havasu_falls.png" style="width: 40%; margin: 10px; float: right;" />Imagine a mountain spring flowing outward from within a circle (say, of small stones laid around the spring in the shape of a zero, which defines the border of emptiness). The spring is overflowing the stone boundary, and a river of... numbers...
are flowing outward in all directions... from emptiness?</p> <p>So there's a raw visual for how to do this. It contains a paradox, of "something arriving from nothing," but we at least have a starting point. It's not too bad, as visuals go. Maybe we could use fire instead of water, an iron band instead of a few stones... but infinity is so immense, compressing it into a too-small space might cause it to explode, so let's remain with flowing water til the idea is more stable -- experiment with fires and supernovas later if the visual is moving too slow.</p> <p>Is this a reasonable way to think about placing infinity within emptiness?</p> <p>(Long pause).</p> <p>No, not exactly. The outflowing water representing infinity has completely replaced any hope of the previously motionless, simple, emptiness that makes such a solid foundation for set theory. And all of this outward flowing is&nbsp;<em>too physical</em>. We've spent many centuries thinking of mathematics and numbers as having no physical properties and here we clearly have introduced motion, not as something studied by math, but as something mathematical; if we keep this up, we'll have to re-invent calculus, which slices physical motion into infinitesimally small, non-moving pieces. We'll need a calculus for calculus. Oh dear.</p> <p>And what does&nbsp;<em>a flowing number</em>&nbsp;even look like?</p> <p>These are problems. But then again, it's... somewhat workable. It's stable enough to hold its own while we consider some of the consequences of placing infinity at the center. It hasn't collapsed yet. It may not be perfect, but let's keep going for now.</p> <p>With infinity safely flowing outward from the center, we see that the number line (going off to the right toward the greatest positive countable number, and off to the left toward the least negative countable number) can be visualized as coming out of the fountain. 
Little has changed for the countable integers.</p> <p>The numbers start small: one, two, three, and "flow" to larger and larger... wait, larger? Larger is what happens as we get closer to infinity, and here we are going away from infinity... but getting larger...</p> <p>Okay, that's a problem.</p> <p>A quick fix for this is to temporarily invert the two number lines. In other words, instead of starting with the smallest countable numbers, 1,2,3,... let's imagine that the numbers flowing from the fountain at the center begin with the&nbsp;<strong>largest countable</strong>&nbsp;(whatever that is. Let's use "999" to represent it temporarily) and descend, one by one, down to a single&nbsp;<strong>one</strong>&nbsp;at the outer edge, where infinity used to be.</p> <p>What happens if we do this, completing the original turn-everything-inside-out movement of infinit(ies) which began our journey? Is this even coherent?</p> <p>Barely. It's getting awkward... but it's still possible to imagine, even to draw a simple line-drawing representation to help visualize what is being described here:</p> <p align="center"><img src="https://www.clearhat.org/public/infinity_at_the_center_and_counting_toward_one_on_the_edges734x100px.png" /></p> <p>Infinity is at the center, and "the largest countable number" is nearest it, with the counting descending as the number line goes outward. This means that 1 and -1 are at the outermost edges. Since we know we're working in the computable realm, let's give realistic boundaries and say that "the largest countable number" is at the limit of&nbsp;<em>whatever memory space we have available within a given computer</em>.</p> <p>Is there anything useful with this visualization, or did that last number-line-inversion cause it to lose all coherence? Can we even do something as basic as counting with this new inside-out structure?
What do we do with 1 and -1 dangling in their vulnerable smallness at the outer edges?</p> <p>Maybe we should undo that last temporary inversion and try another path?</p> <p>Lastly, is there anything in Nature that looks like this structure, maybe something we can use as an analogy to help think about things? Maybe a solar system, with a sun at the center, and smaller planets off toward the outer edges?</p> <p style="text-align: center;"><img src="https://www.clearhat.org/public/fancy_squiggle.png" /></p> <p>(That was an even longer pause.)</p> <p>Okay, so a few days have passed, and I've been thinking about this often, trying to solve these riddles that I've created for myself. I've almost given up on the ridiculousness of where I've gotten myself in this thought experiment several times, but intuition keeps pulling me back to the image that I created above and finally, moments ago (early a.m. May 3, 2021) I just realized&nbsp;<em>yes, there is something in Nature that works very much like what I just described in the illustration above!</em></p> <p>The solution is in, quite literally, the last place I'd normally ever look: my own imagination!</p> <h4>Is imagination like a fountain of information out of nowhere?</h4> <p>Look at that illustration above and you should be able to see that it is an accurate structural representation of what your own imagination does when it is counting.</p> <p>We tend to think that counting is a linear thing, but take a few minutes to think about what happens when you count. Is it linear?
When you iterate a new number at the end of all that you've counted -- say you have carefully, one by one, reached the number 999, and you're about to count the next number, 1,000 --&nbsp;<em>where does that new number come from?</em>&nbsp;Does it not come from within the center of your imagination, or,&nbsp;<strong>out of nowhere</strong>, out of "an empty set", and suddenly it exists, matching all the patterned rules that define what number it should be (i.e. Peano's axioms)? A new number is effectively a new creation, pushing all the numbers that have come before it out to the edges, like water flowing from a spring.</p> <p>Now that is a dumbstrucking realization.</p> <p>I've never seen the structure of my own thoughts in that way before. This is a big enough insight that I'm ready to draw this little essay to a conclusion and consider it complete enough for now, because it turned up something potentially useful.</p> <p>As a conclusion, this is not where I was headed, but... I did find something coherent and insightful right when I thought I had created a mess, and this gem, of understanding what counting looks like -- what it really looks like -- is absolutely worth the journey.</p> <p>Also, I have the rather pleasant confirmation that intuition was correct in drawing me back to this paradox over and over, not giving up, until it got solved. Good job, intuition.</p> <p>Now I'm pretty certain I can take the ideas I was writing in the first half of this essay and drive them to a better conclusion.</p> <p>Good Lord, this adventure (over the past few months contemplating these kinds of ideas and continuing to find gems like this) is truly off the rails and I can't wait to see what happens next.</p>
The C1 Algebra topics all form part of the GCSE syllabus but are covered in more depth in C1.

Key facts to remember

Ways of simplifying algebraic expressions:
o Collect up like terms
o Use the rules of indices to combine terms
o Expand expressions by multiplying out brackets, then collect up like terms
[…]
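The index rules mentioned above can be shown with a small worked example (the expression here is my own illustration, not one from the original notes):

```latex
% Laws of indices: a^m \cdot a^n = a^{m+n}, \quad a^m / a^n = a^{m-n}, \quad (a^m)^n = a^{mn}
\[
\frac{2x^{3}\cdot 3x^{4}}{x^{2}}
\;=\; \frac{6x^{3+4}}{x^{2}}
\;=\; 6x^{7-2}
\;=\; 6x^{5}
\]
```

The multiplication rule combines the powers in the numerator, then the division rule subtracts the power in the denominator.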
Quantitative Reasoning - MTH 154

Quantitative Reasoning - MTH 154 at Eastern Shore Community College
Effective: 2023-05-01

Course Description

Presents topics in proportional reasoning, modeling, financial literacy and validity studies (logic and set theory). Focuses on the process of taking a real-world situation, identifying the mathematical foundation needed to address the problem, solving the problem and applying what is learned to the original situation. This is a Passport and UCGS transfer course. Lecture 3 hours. Total 3 hours per week. 3 credits.

The course outline below was developed as part of a statewide standardization process.

General Course Purpose

The Quantitative Reasoning course is organized around big mathematical concepts. The course's nontraditional treatment of content will help students develop conceptual understanding by supporting them in making connections between concepts and applying previously learned material to new contexts. The course will help to prepare students for success in future courses, gain skills for the workplace, and participate as productive citizens in our society.

* Encourage students to do mathematics with real data. This includes recognizing the real world often has less than perfect data, ambiguities and multiple possible solutions. It also means equipping students to be intelligent consumers of quantitative data and reports.
* Encourage students to engage in productive struggle to learn mathematics and make connections to the world in which they live.

Course Objectives

• Communication
  □ Interpret and communicate quantitative information and mathematical and statistical concepts using language appropriate to the context and intended audience.
    ☆ Use appropriate mathematical language in oral, written and graphical forms.
    ☆ Read and interpret real world advertisements, consumer information, government forms and news articles containing quantitative information.
    ☆ Use quantitative information from multiple sources to make or critique an argument.
• Problem Solving
  □ Share strategies to find solutions to life application problems to make sense of the mathematical content and persevere in solving them.
    ☆ Apply strategies for solving open-ended questions requiring analysis and synthesis of multiple calculations, data summaries, and/or models.
    ☆ Apply problem solving strategies to applications requiring multiple levels of engagement.
• Reasoning
  □ Reason, model, and draw conclusions or make decisions with quantitative information.
    ☆ Draw conclusions or make decisions in quantitatively based situations that are dependent upon multiple factors. Students will analyze how different situations would affect the decisions.
    ☆ Present written or verbal justifications of decisions that include appropriate discussion of the mathematics involved.
    ☆ Recognize when additional information is needed.
    ☆ Recognize the appropriate ways to simplify a problem or its assumptions.
• Evaluation
  □ Critique and evaluate quantitative arguments that utilize mathematical, statistical, and quantitative information.
    ☆ Evaluate the validity and possible biases in arguments presented in real world contexts based on multiple sources of quantitative information - for example: advertising, internet postings, consumer information, political arguments.
• Technology
  □ Use appropriate technology in a given context.
    ☆ Use a spreadsheet to organize quantitative information and make repeated calculations using simple formulas.
    ☆ Search for and apply internet-based tools appropriate for a given context - for example, an online tool to calculate credit card interest or a scheduling software package.
• Financial Literacy
  □ Simple Interest
    ☆ Define interest and its related terminology.
    ☆ Develop the simple interest formula.
    ☆ Use simple interest formulas to analyze financial issues.
  □ Compound Interest
    ☆ Compare and contrast compound interest and simple interest.
    ☆ Explore the mechanics of the compound interest formula, addressing items such as why the exponent and (1+r/n) are used, by building the concept of compounding interest through manual computation of a savings or credit account.
    ☆ Apply compound interest formulas to analyze financial issues.
    ☆ Create a table or graph to show the difference between compound interest and simple interest.
  □ Borrowing
    ☆ Compute payments and charges associated with loans.
    ☆ Identify the true cost of a loan by computing APR.
    ☆ Evaluate the costs of buying items on credit.
    ☆ Compare total loan cost using varying lengths and interest rates.
  □ Investing
    ☆ Calculate the future value of an investment and analyze future value and present value of annuities. (Take into consideration possible changes in rate, time, and money.)
    ☆ Compare two stocks and justify your desire to buy, sell, or hold a stock investment.
    ☆ Explore different types of investment options and how choices may impact one's future, such as in retirement.
• Perspective Matters - Number, Ratio, and Proportional Reasoning
  □ Solve real-life problems that include interpretation and comparison of summaries which extend beyond simple measures, such as weighted averages, indices, or ranking, and evaluate claims based on them.
  □ Solve real-life problems requiring interpretation and comparison of various representations of ratios (i.e., fractions, decimals, rates, and percentages including part to part and part to whole, per capita data, growth and decay via absolute and relative change).
  □ Distinguish between proportional and non-proportional situations and, when appropriate, apply proportional reasoning leading to symbolic representation of the relationship. Recognize when proportional techniques do not apply.
  □ Solve real-life problems requiring conversion of units using dimensional analysis.
  □ Apply scale factors to perform indirect measurements (e.g., maps, blueprints, concentrations, dosages, and densities).
  □ Order real-life data written in scientific notation. The data should include different significant digits and different magnitudes.
• Modeling
  □ Observation
    ☆ Through an examination of examples, develop an ability to study physical systems in the real world by using abstract mathematical equations or computer programs.
    ☆ Collect measurements of physical systems and relate them to the input values for functions or programs.
    ☆ Compare the predictions of a mathematical model with actual measurements obtained.
    ☆ Quantitatively compare linear and exponential growth.
    ☆ Explore behind the scenes of familiar models encountered in daily life (such as weather models, simple physical models, population models, etc.).
  □ Mathematical Modeling and Analysis
    ☆ Collect measurements and data gathered (possibly through surveys, internet, etc.) into tables, displays, charts, and simple graphs.
    ☆ Create graphs and charts that are well-labeled and convey the appropriate information based upon chart type.
    ☆ Explore interpolation and extrapolation of linear and non-linear data. Determine the appropriateness of interpolation and/or extrapolation.
    ☆ Identify and distinguish linear and non-linear data sets arrayed in graphs, identifying when a linear or non-linear model or trend is reasonable for given data or context.
    ☆ Correctly associate a linear equation in two variables with its graph on a numerically accurate set of axes.
    ☆ Numerically distinguish which one of a set of linear equations is modeled by a given set of (x,y) data points.
    ☆ Identify a mathematical model's boundary values and limitations (and related values and regions where the model is undefined). Identify this as the domain of an algebraic model.
    ☆ Using measurements (or other data) gathered, and a computer program (spreadsheet or GDC), create different regressions (linear and non-linear), determine the best model, and use the model to estimate future values.
  □ Application
    ☆ Starting with a verbally described requirement, generate an appropriate mathematical approach to creating a useful mathematical model for analysis.
    ☆ Explore the graphical solutions to systems of simultaneous linear equations, and their real world applications.
    ☆ Numerically analyze and mathematically critique the utility of specific mathematical models: instructor-provided, classmate-generated, and self-generated.
• Validity Studies
  □ Identify logical fallacies in popular culture: political speeches, advertisements, and other attempts to persuade.
  □ Analyze arguments or statements from all forms of media to identify misleading information, biases, and statements of fact.
  □ Develop and apply a variety of strategies for verifying numerical and statistical information found through web searches.
  □ Apply the use of basic symbolic logic, truth values, and set theories to justify decisions made in real-life applications, such as if-then-else statements in spreadsheets, Venn diagrams to organize options, truth values as related to spreadsheet and flow-chart output. (Students must have experience with both symbolic logic and basic truth tables to meet this standard.)

Major Topics to be Included

• Financial Literacy (Interest, Borrowing, and Investing)
• Perspective (Complex Numeric Summaries, Ratios, Proportions, Conversions, Scaling, Scientific Notation)
• Modeling (Observation, Mathematical Modeling and Analysis, Application)
• Validity Studies (Statements, Conclusions, Validity, Bias, Logic, Set Theory)
CIVL4017-Surface Water Hydrology Report Writing - Engineering Assignment Help - Masters Achiever

The project has been formulated to allow you to determine the differences in catchment responses before and after development. The catchment is currently undeveloped, and one of the three sub-catchments is proposed for development. It is up to your team to decide which sub-catchment you want developed. You will need to identify the sub-catchment for development in your report. Please note that while the project has been formulated step by step (each task identified), the final report will need to be in a technical report format.

Task 1. Estimation of physical parameters

Estimate the area of each sub-catchment. Compute the total catchment area. Estimate the length of channels; this will be different for different catchments. For catchment A, you will need to estimate the lengths of two channels (1 to 2 and 2 to outlet), whereas for catchments B & C you only require one channel length (1 to outlet).

Task 2. Construction of rainfall hyetographs

Use the Bureau of Meteorology web site to generate the family of IFD curves for your catchment. Use the 2016 IFD and the latitude and longitude stated in the figures (for your catchment) to generate IFD information for your site. Use the 1% AEP 24-hr storm for further analysis. You will need to include both the IFD table and the IFD curves in your report.

Use the ARR Data Hub web site (https://data.arr-software.org/) to generate the total rainfall hyetograph for the storm generated in the above step. You will need to include the total rainfall hyetograph, both table and histogram, in your report.

Extract the initial and constant loss values from the ARR Data Hub web site. Make the necessary adjustments and construct the rainfall excess hyetograph.
You will need to explain how you achieved your result and include this in your report. Graphical representation suffices here. For the post-development condition, assume that both the initial loss and the continuing loss will be reduced by 50% for the sub-catchment you're proposing to develop. Remember you're developing only one of the three sub-catchments; therefore loss values and rainfall excess will change only for one sub-catchment.

Task 3. Construction of unit hydrographs of desired duration

You will be generating 15-minute unit hydrographs for your sub-catchments using the 15-minute unit hydrograph for a 6.15 km2 catchment given in the table below and making reasonable assumptions (explained below). You may then be required to convert the generated 15-min unit hydrograph to a unit hydrograph of another duration to match the time interval of the rainfall excess hyetograph. You may need to follow the S-hydrograph method to achieve this outcome. Use of a spreadsheet will save considerable time, as the process involves repetitive computations. You will need to include your spreadsheet.

Construct the Δt-hr unit hydrograph (where Δt-hr is the time step of the rainfall excess hyetograph) from the 15-minute unit hydrographs generated for your sub-catchments. You will need to use the S-hydrograph method to achieve this, and you will need to explain the process you followed in your report. Make sure to verify your results by checking volumes after each computation.

Task 4. Construction of storm hydrographs

Construct the storm hydrograph for each sub-catchment using the rainfall excess (Task 2) and the respective Δt-hr unit hydrographs generated above (Task 3). The 3 storm hydrographs you've generated are the responses at the outlet of the respective sub-catchments for the catchment in its natural condition. Let us call these the pre-development storm hydrographs.

Task 5. Construction of network diagram, routing through channels and hydrograph computation at the catchment outlet
Your team will need to construct the hydrologic network to show the connectivity of sub-catchments, channels and reservoirs (if any). Your team will need to extract channel properties for routing the hydrographs through channels using the Muskingum routing method. For this, assume the following: use an average channel velocity of 1.2 m/sec to estimate the average flow velocity in the channel, and use this average flow velocity to estimate the travel time constant, K. Use these values of x & K to route the relevant hydrographs through the respective channel(s) and compute the hydrograph at the outlet of the catchment. Your team will need to discuss how you obtained these, include sample calculations and show the final result (both table and figure). All computations will be performed using a spreadsheet, and the spreadsheet will also need to be submitted. This completes generation of the pre-development hydrograph at the catchment outlet.

Task 6. Post-development hydrographs

Your team will need to repeat the process, taking into account development of one of the sub-catchments. For the sub-catchment being developed, use the following: both initial loss and continuing loss will be reduced by 50%, and the 15-minute unit hydrograph will have the following characteristics.
• Peak discharge will increase by 25%
• Time to peak will decrease by 10%
• Time base will decrease by 20%
Use the above adjustments to scale and generate a meaningful post-development 15-min unit hydrograph. Your team will have to ensure that the volume balance works out. This may require a few iterations. Your team will then need to construct the Δt-hr unit hydrograph (post-development) for the sub-catchment being developed and use this to generate the post-development catchment response.

Task 7. Comparison of pre- and post-development hydrographs

You will need to compare the pre-development and post-development hydrographs. Present your results in graphical form and present salient values in tabular form. Analyse and discuss your results.
You will then need to propose a solution that will ensure that the peak of the post-development hydrograph at the outlet does not exceed the peak of the pre-development hydrograph at the outlet. You may have to design a reservoir incorporating outlet structures to achieve this. If you use this approach, you will need to route the post-development hydrograph through the reservoir. You will need to discuss your strategy, and provide the size of the reservoir and the details of the outlet structures in your report.

Task 8. Use HEC-HMS to verify your results

This is the last step. You will use HEC-HMS (latest version) to verify your results. You will need to include all HEC-HMS files in your submission.

This CIVL4017 Engineering Assignment has been solved by our Engineering Expert at TV Assignment Help.
Linear Vs Binary Search + Code in C Language (With Notes) - RD2SUCCESS

Linear Vs Binary Search + Code in C Language (With Notes)

In this course on Data Structures and Algorithms, we have seen a lot so far. In today's video we are going to talk about linear search and binary search: what linear search is, and what binary search is. Today's video will go into a little more detail. I gave you a rough idea back in the third or fourth video of this series, the one on best case, worst case and average case analysis. But today we are going to code them, because this topic is asked about a lot and I can't ignore it here.

So first let me grab a pen and show you with an array. Let's say you have an array containing the elements 4, 8, 10, 12, 15, and so on, with a 2 somewhere near the end. An array can contain many elements, and for now I don't care whether it is sorted or unsorted. Say the element I have to search for is 2. To search for this element in this array, I check the indices one by one: is there a 2 at index 0? No. Is there a 2 at the next index? No. And so on, index by index, until I finally reach the 2 and can say, "Found it: 2 is here." But if I traverse to the end of the array without finding 2, then I say that 2 could not be found in this array. So the searching we just did is our linear search.
It is done through array traversal. The meaning of traversal, as I told you before, is that we visit all the elements one by one. But we stop the array traversal as soon as we find the element: whenever we get our element, the traversal stops.

Linear search is a very simple search. In real life, if I give you a shuffled deck of cards and say, "Take the 6 of hearts out of this," all you can do is a linear search. You will keep turning the cards over one by one, and as soon as you get the 6 of hearts, you say you have got it. That is a linear search.

In a moment I will give you an equally good example for binary search. But to summarize, linear search is a very simple, quite straightforward algorithm: check all the elements of the array one by one; where the element is found, say that the element is found, and the search is over. Or, if you reach the end of the array, then you have to say that the search is over and the element could not be found. That's all there is to it.

So that was our linear search. Now let me draw a line here and, below it, tell you about binary search. So what is binary search? Binary search is a slightly smarter algorithm. How smart?
A little smarter. But first, note that linear search works for both sorted and unsorted arrays — and the reason is clear: with the deck of cards it makes no difference to you whether the cards are in order, you just keep flipping one by one until you find the 6 of hearts.

Before explaining what binary search does, here is a real-life example. Suppose I hand you a book with a thousand pages and ask you to open it at page number 238. Will you turn every page of the book? Of course not. The first thing you do is open a random page, most probably somewhere near the middle — say you land on page 500. You have now divided the book into two parts, and you know 238 is less than 500, so you take the first part and halve it again, landing around page 250. You know 238 is less than 250, so you come back a little and converge on page 238. If you turned the pages one by one you would have to turn 238 pages; here you reach it in a handful of moves, using your intuition as well — 238 is only a little less than 250, so you turn four pages, then eight, and you converge. (Do comment below and tell me how you like this example — I will give more like it in future.)

The binary search algorithm works in a very similar way, and this is how a computer will solve the problem. But notice: could you have found page 238 this way if the pages had been stitched in random order instead of 1 to 1000? No — then you could not do it at all. That gives us the first condition of binary search: the array must be sorted.

So now let me make a sorted array and put some elements in it: 2, 8, 14, 32, 66, then something a little bigger —
— say 100, then 104, and so on; you can add any number of elements. Now suppose I say: search for 8 in this array. You can do a linear search or a binary search, and here linear search gets lucky — you would find 8 on the second check. But suppose I make the array a bit bigger — add 200, then 400 — and ask you to search for 200. Now you have to traverse almost the whole array to reach it. Some elements, like 2 and 8, are found very easily by linear search, but as the size of the array increases, linear search becomes difficult. That is why, when the array is sorted, we take advantage of the sorting and do a binary search.

It works just like the book example, but since we are about to code it, let me describe it a bit more precisely. We keep track of three things: Low, High, and Mid. We take the first index as Low and the last as High, and Mid is the greatest integer of (Low + High)/2 — here (0 + 8)/2 = 4, so Mid is 4. Let me make a table of Low, High, and Mid and search for 200.

Right now Low = 0, High = 8, Mid = 4. Where will 200 be — between Low and Mid, or between Mid and High? Obviously between Mid and High, because arr[Mid] = 66 is smaller than 200. (If arr[Mid] had been equal to 200, my search would be over — but it isn't yet.) So I make Mid the new Low, keep High as it is, and recompute: Low = 4, High = 8, Mid = (4 + 8)/2 = 6. Again arr[Mid] = 104 is smaller than 200, so once more I make Mid the new Low. Why? Because I now know my element has to be in that half — that is my search space, and at every step the search space is getting smaller: first it was the whole array, then half of it, then half of that.

So now Low = 6, High = 8, Mid = (6 + 8)/2 = 7. At every step we check: element found? When Mid was 4 — no. When Mid was 6 — no. When Mid is 7 — yes! arr[7] = 200. Our search is over, and we found the element in just 3 steps, where linear search would have needed 8 checks.

So I hope linear search and binary search are clear to you now. I know I have repeated myself a little in this course, but this is not useless repetition — these things are so important that if they sit firmly in your mind it will be very beneficial. Earlier I showed you a little theory on the board; here I walked through it step by step with an actual array. Now let's code this thing in Visual Studio Code.

By the way, I have also made some hand-written notes summarizing Linear vs Binary Search for you. Everyone should definitely read them — you will find the download link in the description.
When I upload a video it takes me some time to put the notes on the website, so if you watch right when it goes up, allow me 5 to 10 minutes — sometimes half an hour. I do everything here myself: record the video, upload it, then post the notes, so I need my time. If you open the DS Algo section on the website, you will find linear and binary search summarized on a single page, just the way I told it here, so I won't read it all out.

One thing I did not talk about yet is time complexity. In the worst case, linear search is O(n) — for an array of n elements you may end up traversing all n of them. Binary search is O(log n), because I keep halving the array until it converges. So you should face no issue there.

Now let me come to VS Code. I have created a separate folder for the code, and in it I create a new file named after the video number — this is video number 12 — so: 12_linearbinarysearch.c.
So I created the file and put my boilerplate code in. Now let's write the linear search code first, which is very easy. I write a function int linearSearch that takes an integer array, the size of the array (int size), and the element to search for. Inside it I run a for loop over the array — the loop runs until it finds the element — and as soon as (arr[i] == element) I return i, the index where the element was found. (I could just return 1 to mean "found", but returning the index is more useful.) Otherwise, after the loop, I return -1.

Let's see whether it works. In main I create int arr[] with some elements, including 56, and call int searchIndex = linearSearch(...), passing it the array, the size of the array, and the element. To get the size of the array without counting the elements by hand, you can do int size = sizeof(arr)/sizeof(int). Then I print with printf("The element %d was found at index %d\n", element, searchIndex), set int element = 4, and run it. It prints: The element 4 was found at index 4 — and counting 0, 1, 2, 3, 4, that is true.

Can I search for element 54, which is not there? It should give -1, but it says element 54 was found at index 4, which is wrong. Ah, I see it: in the call I was passing the literal 4 instead of the variable element. After fixing that, it gives -1, as you can see.

So let's review how we coded linear search one more time. I made a function called linearSearch, passed it the array, the size of the array, and the element I want to search. I ran the for loop over the entire array and said: as soon as arr[i] == element while traversing, return the index where the element was found; otherwise, if we reach the end, return -1. You should know that the moment return i; executes, the function terminates — its activation record on the stack is gone — and whatever value it returns, -1 or an index of the array, resumes main and lands in searchIndex. And with that, our linear search works.
Take a look: the array here is not sorted, and linear search runs fine on it — it runs whether the array is sorted or unsorted. But if the array is sorted, I will not do a linear search; I will do a binary search and take advantage of the sorting to save time. With 10 or 15 elements you won't notice the difference, but with 10 or 20 thousand elements you will find linear search taking a lot of time where binary search takes much less. Linear search was very straightforward — go one by one, return the index as soon as the element is found, return -1 if it is not found, end of story. So let's code binary search now.

I write int binarySearch — and let me tell you one thing: the responsibility of getting the array sorted is yours. If you run binary search on an unsorted array, you will get ridiculous results. The array must be sorted.

As I told you, in binary search you have to maintain three things: low, mid, and high. So in the function I declare int low, mid, high. First I compute mid = (low + high)/2 — and don't forget to put the brackets, even by mistake. You don't need to take the greatest integer explicitly, because in C an operation between two integers is an integer: (5 + 6)/2 is 11/2, which C automatically gives as 5, not 5-point-something. (If you still have problems with C, please revise my C course in parallel — keep watching this course too, I'm not saying leave it — and at least take the notes; I made them with great effort, they are the notes I once dreamed of having.)

After updating mid, I check: if (arr[mid] == element), return mid — that is exactly the index I want. If not, I check whether arr[mid] is smaller or greater than the element. If (arr[mid] < element), I update low — and I set low = mid + 1, not low = mid. Why mid + 1? Because the element at mid is not mine; I have already checked it, so there is no point including it in the next search. In our example arr[mid] was 66, smaller than 200, so I know the element has to be in the half from 100 onwards — I take the part from 100 to 400 into consideration and repeat the same thing with it.

And what if arr[mid] had been greater than the element — say we were searching for 8 instead of 200? Look carefully here, because this trips people up. Mid would be at 66, which is greater than 8, so now I know my search has to be in the left half. So in the else branch I bring high down — and again not to mid but to mid - 1, because 66 itself is not what I am looking for. So else { high = mid - 1; }. Try to understand these things with a dry run: make a table of low, high, and mid like I did. I know it takes some hard work to make such a table, but if you make it, you will understand clearly.

Now, how long do we keep doing this? We keep searching while (low <= high). If low and high cross — either they become equal and that last element gets checked, or low becomes greater than high — it means the element is simply not available in the array. So as long as low is less than or equal to high, I keep searching; otherwise I stop. If the element is found, the function returns from inside the while loop and the code below never executes; if control comes out of the loop, it means nothing returned inside it — the element was never found — and that is why I return -1 after the loop.

So let's run this for an array, the same way as before. Above the first array I write the comment "Unsorted array for linear search"; I replicate it below and write "Sorted array for binary search": 1, 3, 5, 56, 64, 73, 123, 225, 444. I get the size of the array the same way, call binarySearch instead of linearSearch, and search for 56. I run this code and — nothing printed? Let me see what the problem is. Ah: I have to initialize low and high first. I need low = 0 and high = size - 1. (At first I wrote high = arr[size - 1], which is a little mistake — low and high are my indexes, not elements.)
So I fix it to size - 1, run again, and look: element 56 was found at index 3. A small mistake like that happens sometimes — remember, low is the index 0 and high is the index 8, not the element 400. Let's test some more. Search for 64 — it should come at index 4 — and yes, index 4 comes out; binary search did the right thing. Can I search for 1? Yes — found at index 0. Can I search for 444? It comes at index 8 — count 0, 1, 2, 3, 4, 5, 6, 7, 8 — absolutely perfect. Brilliant. So we have coded linear search, and binary search too, in very simple language. One last touch: above the while loop I replace my rough comment with "Keep searching until low and high converge".

So now you have understood linear search, you have understood binary search, and we have done it with code — because this is asked a lot. You will be asked to write exactly this code in interviews — linear search O(n), binary search O(log n) — and I'm telling you the probability of this question is very, very high. Even the big companies start with it. Why? Because first of all they check whether you know the basics: do you actually know a little about algorithms, or did you just collect marks somehow, or land the interview through a referral? So they will ask something like: "You have a sorted array — which algorithm would you use to search it, linear or binary?" Or they won't even say linear or binary; they will just say "you have to search — tell me which one you will apply." Whether there is a second round of the interview can depend on your answer. And if you answer such simple questions well, the interviewer will take you one step ahead.

That is why I have given you all of this along with the code. You will get the PDF on the website, where I keep updating the whole course content. Many people requested distraction-free reading, so I have brought that too: you can hide the player and just read, or show the player and watch the video, even on your phone — the website is responsive. I will also upload the code for this video there, along with the notes — I write them by hand, but I am typing them up as well, so everything is available.

A quick note for those saying the notes download as a zip file that they cannot extract on their phone: there are apps that will extract zip files for you, so you can use those. Where the material is not much, I have tried to put up the PDF directly, so it opens straight away; otherwise, download on a computer and transfer to your phone — you can adjust a little.

I hope this course is helpful for you. Please share it with your friends — that makes it much easier for me to keep bringing you videos like this. I just want everyone to know that this course is running here, because whoever finds it says "oh my god, this course is going so well — I wish I had found this playlist earlier." To be very transparent with you: in the past I made courses that got four views, and I kept refreshing the page wondering, if nobody is watching, why am I making this? So if you think the content will be of benefit, do share it. That's all for this video — don't forget to like it. Thank you so much for watching, and I'll see you next time.
Appendix B | Core White Paper v1.0.7 The reward mechanism in Core isn’t terribly complicated, but it does have a lot of moving parts. This section presents two simple examples to elucidate its inner workings. Part 1 Let's assume there are two validators, both of which have been elected: Validator A, which has: two units of delegated hash power one unit of non-custodial BTC stake. Validator B, which has three units of CORE stake two units of non-custodial BTC stake Let’s assume that there are 10 total units of Bitcoin hash power on the Core network. This would mean validator A has 20% of the hash power (2/10) and validator B has 10% of the hash power (1/10). Let’s also assume there are 20 total units of CORE staked on the network. This would mean validator A has 5% of the CORE staked (1/20) and validator B has 15% of the CORE staked (3/20). Let’s further assume there are 10 total units of non-custodial BTC staked on the network. This would mean validator A has 10% of the BTC staked (1/10) and validator B has 20% of the BTC staked (2/ For this example, m is set to 1/3 and n is set to 1/5. To simplify the calculations, the number of earned rewards distributed is set to one for both validators. And to make things easier, the equations for the hybrid score and rewards are reproduced Hybrid Score: The rewards per unit: Here are the hybrid scores for validator A (designated as “SA”) and validator B (designated as “SB”): Here is the distribution of the respective hash power rewards and staking rewards for the two validators: And here is the rewards per unit for the two validators: Part 2 Here, we'll work through an identical example, except we'll make a few different assumptions about the relationships between different quantities. If you carefully study both examples, you should have a firm intuitive grasp on how Core's reward mechanics function. 
Let's assume there are two validators, both of which have been elected: Validator A, which has: 60 units of delegated hash power, 5,000,000 units of CORE stake, and 400 units of non-custodial BTC stake. Validator B, which has: 30 units of delegated hash power, 15,000,000 units of CORE stake, and 200 units of non-custodial BTC stake. Let's assume that there are 300 total units of Bitcoin hash power on the Core network. This would mean validator A has 20% of the hash power (60/300) and validator B has 10% of the hash power (30/300). Let's also assume there are 100M total units of CORE staked on the network. This would mean validator A has 5% of the CORE staked (5,000,000/100,000,000) and validator B has 15% of the CORE staked (15,000,000/100,000,000). Let's further assume there are 4,000 total units of non-custodial BTC staked on the network. This would mean validator A has 10% of the BTC staked (400/4,000) and validator B has 5% of the BTC staked (200/4,000). For this example, m is set to 1/3 and n is set to 10000. Here are the hybrid scores for validator A (designated as "SA") and validator B (designated as "SB"): Here is the distribution of the respective hash power rewards and staking rewards for the two validators: And here are the rewards per unit for the two validators:

rHu_A = 1.0145% of (validator A reward - commission) per delegated hash
rSu_A = 0.043478% of (validator A reward - commission) per 10,000 CORE
rBu_A = 0.043478% of (validator A reward - commission) per BTC
rHu_B = 0.9722% of (validator B reward - commission) per delegated hash
rSu_B = 0.041667% of (validator B reward - commission) per 10,000 CORE
rBu_B = 0.041667% of (validator B reward - commission) per BTC
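The percentage shares quoted in both parts are simple ratios of a validator's units to the network-wide totals. The sketch below reproduces them; the `share` helper is purely illustrative, and the hybrid-score and reward equations themselves (shown as images in the original) are not reproduced here.

```python
# Reproducing the percentage shares quoted in Appendix B.
# All totals and per-validator amounts are taken directly from the text.

def share(amount, total):
    """Fraction of a network-wide total held by one validator."""
    return amount / total

# Part 1: 10 units of hash power, 20 CORE, 10 BTC network-wide.
hash_a = share(2, 10)   # 20%
hash_b = share(1, 10)   # 10%
core_a = share(1, 20)   # 5%
core_b = share(3, 20)   # 15%
btc_a  = share(1, 10)   # 10%
btc_b  = share(2, 10)   # 20%

# Part 2: 300 units of hash power, 100M CORE, 4,000 BTC network-wide.
hash_a2 = share(60, 300)                 # 20%
core_a2 = share(5_000_000, 100_000_000)  # 5%
btc_b2  = share(200, 4_000)              # 5%
```

These fractions are the only inputs the hybrid-score formula takes besides the weights m and n.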
{"url":"https://whitepaper.coredao.org/core-white-paper-v1.0.7/appendices/appendix-b","timestamp":"2024-11-02T11:03:41Z","content_type":"text/html","content_length":"648077","record_id":"<urn:uuid:46348042-e82f-435f-a112-0e28b948a037>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00004.warc.gz"}
Calculation Boogey Solve the math problems before the sun sets or the boogeyman will get you. Addition, Subtraction, Multiplication, Division. Nine levels for each category. Each level has five rounds. Solve the required number of equations in each round. Beat 5 rounds to advance to the next level.
{"url":"https://madsirstudio.com/calculationboogey/","timestamp":"2024-11-08T11:17:15Z","content_type":"text/html","content_length":"9589","record_id":"<urn:uuid:a017dc33-982e-4621-ad90-0d150ae59fb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00011.warc.gz"}
Astrometric Calibration

Field Distortion

Optical systems, particularly those involving refractive elements, do not have a uniform plate scale over the field and generally have a radial distortion term which takes the form

r_true = k1 * r + k3 * r**3 + k5 * r**5

where r_true is an idealised angular distance from the optical axis and r is the measured distance. Prime focus at the INT is no exception, and the ING observer handbook, page 83, lists the distortion terms for INT PF as k1 = 24.7 arcsec/mm and k3 = -9.202E-05 arcsec/mm**3. These convert measured distance in mm to true angular distance in arcsec. The term due to k5 is negligible. Rearranging the above equation to a more convenient form gives

r_true = r' * (1 + k3/k1**3 * r'**2)

where r' is the measured distance from the optical axis in arcsec using the k1 scale. The default coefficient k3/k1**3 in these units is -6.11E-09 arcsec**-2. If we convert all units from arcsec to radians, the default coefficient becomes 259.8, though more recent measurements of the INT PF distortion using a mixture of photographic plates and the INT WFC show that a value of 220.0 is more appropriate.

The non-linear term in the above introduces a (pincushion/barrel) distortion amounting to a differential effect of up to 10 arcsec from one corner of a CCD to the other near the field edges (no vignetting to r=40 arcmin, 50% vignetting at r=52 arcmin) as illustrated in the above figure.

Astrometric Accuracy

The end product of the full pipeline currently has an astrometric precision better than 100 mas over the whole array (ie. across CCDs), as determined by analysis of independently calibrated adjacent overlapping pointings. Note, however, that this is not the same as the external accuracy of the FK5 reference frame we are generating, which is wholly determined by the external accuracy of the reference frame of the Schmidt plate-based astrometric catalogues. This is currently limited to about 0.25 arcsec.
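The field-distortion correction above can be sketched numerically. This uses only the handbook values quoted in the text; `COEFF` and `true_radius` are illustrative names, not part of any pipeline software.

```python
# Sketch of the INT prime-focus radial distortion correction.
# k1, k3 are the ING observer handbook values; r_prime is the measured
# distance from the optical axis in arcsec on the k1 scale.

K1 = 24.7        # arcsec/mm
K3 = -9.202e-05  # arcsec/mm**3

# Coefficient of the cubic term when working in arcsec: k3 / k1**3
COEFF = K3 / K1**3          # ~ -6.11e-9 arcsec**-2

# The same coefficient in radians picks up a factor (arcsec per radian)**2
ARCSEC_PER_RAD = 206264.8
COEFF_RAD = COEFF * ARCSEC_PER_RAD**2   # ~ -259.8, magnitude as quoted

def true_radius(r_prime_arcsec):
    """Convert measured radius r' (arcsec, k1 scale) to true angular radius."""
    return r_prime_arcsec * (1.0 + COEFF * r_prime_arcsec**2)
```

The sign is negative because the cubic term shrinks the true radius relative to the k1-scaled measurement; the quoted 220.0 (vs. the default 259.8) is simply a re-measured magnitude for the same coefficient.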
By stacking the astrometric residuals from a series of independent pointings and CCD WCS solutions it is possible to assess the accuracy of the simple INT distortion model. This is illustrated in the diagram below using the average residuals from a stack of a one-week WFS run. To generate this figure the independent CCD frames were analysed using a standard linear 6-plate-constant model, assuming a fixed optical axis and the r**3 term outlined above. This pattern appears to be fairly stable over long periods of time and will form the basis for further astrometric performance improvements.

Virtual Pixel Mosaic

The individual CCDs making up the mosaic are mechanically rigid, since their CCD carriers are bolted to the baseplate. Therefore, unless the individual CCDs are physically disturbed (unlikely, barring severe problems), their active surfaces should retain a fixed geometric relationship, as shown in the diagram below. The dot at (0,0) denotes the position of the rotator centre. Apart from translating the entire system to be centred on the optical axis, all CCDs have been mapped onto the CCD#4 coordinate system. The blobs in the CCD corners are the pixels (1,1) for the active part of each CCD, ie. with overscan and underscan trimmed off. The following relations transform all the CCDs to the CCD#4 pixel system.
Virtual transform constants (from a set of 30 pointings in the ELAIS region):

CCD 1:   0.10000E+01  -0.10013E-02   2113.94
         0.58901E-03   0.10001E+01    -12.67
         Rotator centre in CCD-space:  -332.881  3041.61

CCD 2:  -0.10272E-01   0.99992E+00     78.84
        -0.10003E+01  -0.10663E-01   6226.05
         Rotator centre in CCD-space:  3177.58  1731.94

CCD 3:   0.10003E+01  -0.23903E-02  -2096.52
         0.24865E-02   0.10003E+01     21.93
         Rotator centre in CCD-space:  3880.40  2996.45

CCD 4:   0.10000E+01   0.00000E+00      0.00
         0.00000E+00   0.10000E+01      0.00
         Rotator centre in CCD-space:  1778.00  3029.00

The transforms are in the form

    a b c
    d e f

and are based on the CCD#4 pixel system. So, to convert a CCD to the CCD#4 system, take the pixel location (x,y) on the CCD and apply the following transformation to it:

    x' = a*x + b*y + c
    y' = d*x + e*y + f

To get to the rotator centre, replace c -> c-1778 and f -> f-3029. Note that the astrometric solution errors for a single 4-chip solution are at the level of a few parts in 10,000, and for 30 pointings at the level of several parts in 100,000, hence the slightly different scales etc. These will be updated and improved as we fold in more WCS solutions.

World Coordinate Systems and FITS Headers

Unfortunately there are several conflicting ways of defining a World Coordinate System (WCS) for telescopes with focal stations requiring a general radial distortion model. As yet the FITS community has not adopted an agreed final standard. However, much of the transformation representation has been agreed, and there are a couple of popular, but mutually incompatible, interim solutions outlined below (see also the WFS cookbook here).

IRAF uses the following header items to define a WCS transformation including radial distortion terms:

CTYPE1  = 'RA---ZPX'           / Type of coordinate on axis 1
CTYPE2  = 'DEC--ZPX'           / Type of coordinate on axis 2
CRPIX1  = 3914.68291287068     / Reference pixel on axis 1
CRPIX2  = 2958.23149413302     / Reference pixel on axis 2
CRVAL1  = 325.5563             / Value at ref. pixel on axis 1
CRVAL2  = 0.2084674            / Value at ref. pixel on axis 2
CD1_1   = -1.4011089128826E-6  / Transformation matrix
CD1_2   = -9.2625825594913E-5  / Transformation matrix
CD2_1   = -9.2750922381494E-5  / Transformation matrix
CD2_2   = 1.31362241659925E-6  / Transformation matrix
WAT1_001= 'wtype=zpx axtype=ra projp1=1.0 projp3=220.0'
WAT2_001= 'wtype=zpx axtype=dec projp1=1.0 projp3=220.0'

Several other image processing packages or display programs (eg. SAOTNG, GAIA) use a similar version, incompatible with IRAF and vice-versa, with the following changes:

CTYPE1  = 'RA---ZPN'  / Zenithal polynomial projection
CTYPE2  = 'DEC--ZPN'  / Zenithal polynomial projection
PROJP1  = 1.0         / coefficient for r term
PROJP3  = 220.0       / coefficient for r**3 term

Note that the ZPX and ZPN projections can be set to give the same results. A possible future WCS style for this type of projection in FITS is:

CTYPE1  = 'RA---TAN'  / Change projection to TAN
CTYPE2  = 'DEC--TAN'  / Change projection to TAN
PV1_11  = 220.0       / split PROJP3 into x and y (PROJP1 assumed
PV2_11  = 220.0       / to be 1.0) note new improved radial terms
                      / also allow for generalised polynomial
CRDER1  = 0.4733928   / random error in axis1
CRDER2  = 1.0012080   / random error in axis2
CSYER1  = 0.5         / systematic error in axis1
CSYER2  = 0.5         / systematic error in axis2

But we were wrong, and in fact, after much debate, an agreed form for the WCS for assorted projections of interest was mooted last Autumn. Here's an example of what the INT WFC WCS will look like shortly (as soon as the display and analysis software catches up). For more details see:

Calabretta & Greisen 2002 A&A 395 1077
Greisen & Calabretta 2002 A&A 395 1061

CTYPE1  = 'RA---ZPN'           / Zenithal polynomial projection
CTYPE2  = 'DEC--ZPN'           / Zenithal polynomial projection
CRPIX1  = 3914.68291287068     / Reference pixel on axis 1
CRPIX2  = 2958.23149413302     / Reference pixel on axis 2
CRVAL1  = 325.5563             / Value at ref. pixel on axis 1
CRVAL2  = 0.2084674            / Value at ref. pixel on axis 2
CD1_1   = -1.4011089128826E-6  / Transformation matrix
CD1_2   = -9.2625825594913E-5  / Transformation matrix
CD2_1   = -9.2750922381494E-5  / Transformation matrix
CD2_2   = 1.31362241659925E-6  / Transformation matrix
PV2_1   = 1.0                  / coefficient for r term
PV2_3   = 220.0                / coefficient for r**3 term

PS. If you are having problems with DS9 version 2.2 or 2.3b reading ZPN projections, don't blame us. Get the latest wcstools patch and rebuild DS9 sans PV matrix bug. Then make the headers look like the above instead of the currently supplied WFC WCS style. DS9 version 2.1 reads the older, currently supplied style OK.
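The per-CCD affine transform and rotator-centre shift described above can be applied as in this short sketch. The (a, b, c, d, e, f) constants are transcribed from the table; `to_ccd4` is an illustrative helper, not part of any pipeline software.

```python
# Applying the CCD -> CCD#4 affine transforms from the table above.
# The rotator-centre shift replaces c -> c - 1778 and f -> f - 3029,
# as described in the text.

TRANSFORMS = {
    1: (1.0000, -1.0013e-3, 2113.94, 5.8901e-4, 1.0001, -12.67),
    2: (-1.0272e-2, 0.99992, 78.84, -1.0003, -1.0663e-2, 6226.05),
    3: (1.0003, -2.3903e-3, -2096.52, 2.4865e-3, 1.0003, 21.93),
    4: (1.0, 0.0, 0.0, 0.0, 1.0, 0.0),  # CCD#4 is the reference system
}

def to_ccd4(ccd, x, y, about_rotator=False):
    """Map an active-area pixel (x, y) on the given CCD into CCD#4 pixels."""
    a, b, c, d, e, f = TRANSFORMS[ccd]
    if about_rotator:
        c, f = c - 1778.0, f - 3029.0
    return a * x + b * y + c, d * x + e * y + f
```

CCD#4 maps onto itself, and its rotator centre (1778, 3029) maps to the origin of the rotator-centred system, which is a quick consistency check on the constants.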
{"url":"https://people.ast.cam.ac.uk/~mike/wfcsur/technical/astrometry/","timestamp":"2024-11-10T22:02:27Z","content_type":"text/html","content_length":"14695","record_id":"<urn:uuid:ac1ac90a-f532-459e-b5a2-7ad62cbdc9d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00532.warc.gz"}
Project Using - "Current Period Time Elapsed" In the Last Value Projection properties, there is a "Project Using" option to select various methods for the projection. It is common in Finance and other applications to project an unfinished period by "annualizing" it, that is, by dividing the current amount in the unfinished period by the percentage of time that has elapsed in that period. For example, if you have selected the card's time bucket to be this year by month and you wanted to project August on 8/15, you would take the current amount (let's say 50) and divide it by .483 to get 104. The .483 is obtained by taking the elapsed days (15) divided by the total days in the period (31). So the full calculation would be 50/(15/31): the projected remainder would be 54, and the final amount shown would be 104 in the last period.
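The calculation described above is simple enough to sketch directly; `project_period` is an illustrative name, not a Domo function.

```python
# Sketch of the "current period time elapsed" projection: divide the
# period-to-date amount by the fraction of the period that has elapsed.

def project_period(amount_to_date, days_elapsed, days_in_period):
    """Project an unfinished period's total from its elapsed-time fraction."""
    fraction_elapsed = days_elapsed / days_in_period
    return amount_to_date / fraction_elapsed

# The example from the post: 50 to date on Aug 15 (15 of 31 days elapsed).
projected_total = project_period(50, 15, 31)
# ~103.3 exactly; the post gets 104 by rounding via the truncated
# fraction 0.483 (50 / 0.483 = 103.5, rounded up).
```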
{"url":"https://community-forums.domo.com/main/discussion/60903/project-using-current-period-time-elapsed","timestamp":"2024-11-14T17:44:02Z","content_type":"text/html","content_length":"376154","record_id":"<urn:uuid:566ab810-2c4f-450a-9cd6-8e3859b1b56b>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00762.warc.gz"}
George Wilson (nLab)

On integrable hierarchies in terms of the infinite-dimensional Sato-Segal-Wilson Grassmannian

On Calogero-Moser integrable systems

• George Wilson, Collisions of Calogero-Moser particles and an adelic Grassmannian (with an appendix by I. G. Macdonald), Invent. Math. 133:1 (1998) 1–41, MR99f:58107, doi
• Yuri Berest, G. Wilson, Ideal classes of the Weyl algebra and noncommutative projective geometry (with an appendix by M. Van den Bergh), Internat. Math. Res. Notices 26 (2002) 1347–1396
• Yu. Berest, G. Wilson, Mad subalgebras of rings of differential operators on curves, Advances in Math. 212 no. 1 (2007) 163–190

Created on September 22, 2022 at 10:08:09. See the history of this page for a list of all contributions to it.
{"url":"https://ncatlab.org/nlab/show/George%20Wilson","timestamp":"2024-11-12T00:30:48Z","content_type":"application/xhtml+xml","content_length":"13910","record_id":"<urn:uuid:d8e2cc6a-7e23-405e-87ed-a9df5e8c759f>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00118.warc.gz"}
How do you convert SI units? | Socratic

1 Answer

Multiply or divide by powers of 10. SI units are easy to convert because you multiply or divide by 10, sometimes more than once. Think of a staircase: every time you step up a stair, you divide by 10; every time you step down a stair, you multiply by 10. Another way to think of it is: as you move down the staircase, the decimal moves to the right; as you climb up the staircase, the decimal moves to the left. If you need to memorize the order of the prefixes, there are a number of mnemonics that can help you. My personal favorite is "King Henry Dropped over Dead Converting Metrics."

Videos from: Noel Pauller
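The staircase rule amounts to adding or subtracting prefix exponents, which can be sketched as follows (the `PREFIX_EXP` table and `convert` helper are illustrative, covering only the prefixes the mnemonic names):

```python
# Metric-prefix conversion: each staircase step is a factor of 10.
# Exponents are relative to the base unit (kilo=3 ... milli=-3).

PREFIX_EXP = {"k": 3, "h": 2, "da": 1, "": 0, "d": -1, "c": -2, "m": -3}

def convert(value, from_prefix, to_prefix):
    """Convert a value between metric prefixes of the same unit."""
    return value * 10 ** (PREFIX_EXP[from_prefix] - PREFIX_EXP[to_prefix])

# 1 km = 1000 m; 250 cm = 2500 mm; 5 mm = 0.000005 km
```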
{"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-convert-si-units","timestamp":"2024-11-03T22:36:23Z","content_type":"text/html","content_length":"34191","record_id":"<urn:uuid:81cea8c4-c79c-4d14-bd7c-bd931272b361>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00711.warc.gz"}
Multiobjective optimization using Kriging models

In this section, multiobjective optimization using Kriging models will be described. The idea is to replace the true function with a sufficiently accurate Kriging surrogate model and perform multiobjective optimization using that model. Here, a sufficiently accurate model is one that correctly describes the Pareto front of the problem in conjunction with an optimizer such as NSDE. The Kriging models used in this section are generated in much the same way as in previous sections. The block of code below imports the required packages for this section.

import numpy as np
import matplotlib.pyplot as plt
from smt.sampling_methods import LHS
from smt.surrogate_models import KRG
from pymoo.core.problem import Problem
from pymoo.optimize import minimize
from pymoode.algorithms import NSDE
from pymoode.survival import RankAndCrowding
from pymoo.util.nds.non_dominated_sorting import NonDominatedSorting

Branin-Currin optimization problem

The Branin-Currin optimization problem has been described in the previous section. The block of code below defines the two functions.

# Defining the objective functions
def branin(x):
    dim = x.ndim
    if dim == 1:
        x = x.reshape(1,-1)
    x1 = 15*x[:,0] - 5
    x2 = 15*x[:,1]
    b = 5.1 / (4*np.pi**2)
    c = 5 / np.pi
    t = 1 / (8*np.pi)
    y = (1/51.95)*((x2 - b*x1**2 + c*x1 - 6)**2 + 10*(1-t)*np.cos(x1) + 10 - 44.81)
    if dim == 1:
        y = y.reshape(-1)
    return y

def currin(x):
    dim = x.ndim
    if dim == 1:
        x = x.reshape(1,-1)
    x1 = x[:,0]
    x2 = x[:,1]
    factor = 1 - np.exp(-1/(2*x2))
    num = 2300*x1**3 + 1900*x1**2 + 2092*x1 + 60
    den = 100*x1**3 + 500*x1**2 + 4*x1 + 20
    y = factor*num/den
    if dim == 1:
        y = y.reshape(-1)
    return y

The block of code below solves the optimization problem directly using the true functions and NSDE. This will be used as a point of comparison when solving the problem using Kriging models.
# Defining the problem class for pymoo - we are evaluating two objective functions in this case
class BraninCurrin(Problem):
    def __init__(self):
        super().__init__(n_var=2, n_obj=2, n_constr=0,
                         xl=np.array([1e-6, 1e-6]), xu=np.array([1, 1]))

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.column_stack((branin(x), currin(x)))

problem = BraninCurrin()
algorithm = NSDE(pop_size=100, CR=0.9,
                 survival=RankAndCrowding(crowding_func="pcd"),
                 save_history=True)
res_true = minimize(problem, algorithm, verbose=False)

The next block of code defines a new problem class for solving the optimization problem using Kriging models. The problem class accepts two models, one for each function, as input arguments when initializing the class. After the definition of the problem class, a loop is used to iteratively create Kriging models for a varying number of samples. Separate Kriging models are created for each of the objective functions. The optimization problem is solved using each of the Kriging models created and NSDE. Plots are created to compare the Pareto front obtained using the Kriging model with that obtained from the true function at each number of samples used.
# Defining problem for kriging model based optimization
class KRGBraninCurrin(Problem):
    def __init__(self, sm_branin, sm_currin):
        super().__init__(n_var=2, n_obj=2, n_constr=0,
                         xl=np.array([1e-6, 1e-6]), xu=np.array([1, 1]))
        self.sm_branin = sm_branin
        self.sm_currin = sm_currin

    def _evaluate(self, x, out, *args, **kwargs):
        # Use the models stored on the instance, not module-level globals
        out["F"] = np.column_stack((self.sm_branin.predict_values(x),
                                    self.sm_currin.predict_values(x)))

# Defining sample sizes
samples = np.arange(20,140,20)
xlimits = np.array([[1e-6,1.0],[1e-6,1.0]])

for size in samples:
    # Generate training samples using LHS
    sampling = LHS(xlimits=xlimits, criterion="ese")
    xtrain = sampling(size)
    ybranin = branin(xtrain)
    ycurrin = currin(xtrain)

    # Create and train the kriging models
    corr = 'squar_exp'
    sm_branin = KRG(theta0=[1e-2], corr=corr, theta_bounds=[1e-6, 1e2], print_global=False)
    sm_branin.set_training_values(xtrain, ybranin)
    sm_branin.train()
    sm_currin = KRG(theta0=[1e-2], corr=corr, theta_bounds=[1e-6, 1e2], print_global=False)
    sm_currin.set_training_values(xtrain, ycurrin)
    sm_currin.train()

    problem = KRGBraninCurrin(sm_branin, sm_currin)
    algorithm = NSDE(pop_size=100, CR=0.9,
                     survival=RankAndCrowding(crowding_func="pcd"),
                     save_history=True)
    res_krg = minimize(problem, algorithm, verbose=False)
    F_krg = np.column_stack((branin(res_krg.X), currin(res_krg.X)))

    # Plotting final Pareto frontier obtained
    fig, ax = plt.subplots(figsize=(8, 5))
    ax.scatter(res_true.F[:, 0], res_true.F[:, 1], color="blue", label="True function")
    ax.scatter(F_krg[::2, 0], F_krg[::2, 1], color="red", label="Kriging model")
    ax.set_ylabel("$f_2$", fontsize = 14)
    ax.set_xlabel("$f_1$", fontsize = 14)
    ax.legend(fontsize = 14)
    fig.suptitle("Number of samples: {}".format(size))

The plots obtained from creating the various Kriging models show that approximately 80-100 samples are required to obtain an accurate enough representation of the Pareto front.
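When the surrogate-derived designs are re-evaluated with the true functions, some of them can turn out to be dominated. A minimal pure-Python filter (for minimization) can drop such points before plotting; it mirrors what pymoo's NonDominatedSorting utility (imported above) does, but is spelled out here for clarity.

```python
# Minimal non-dominated filter for minimization problems.

def dominates(p, q):
    """True if objective vector p dominates q (<= everywhere, < somewhere)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = non_dominated([(1.0, 4.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0)])
# (3.0, 3.0) is dominated by (2.0, 3.0); the other three points are kept.
```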
Even though the Kriging models can be very accurate with a large number of samples, they can still produce certain solutions that lie outside the Pareto front found using the true function. In most cases, these are dominated solutions and should be ignored when plotting the Pareto front. This shows that it is indeed difficult to obtain an accurate Pareto front using surrogates when the problem involves complex multimodal functions.

Constrained multiobjective optimization problem

The constrained multiobjective optimization problem has been described previously. The block of code below defines the objective and constraint functions for the problem. After the definition of the functions, a Problem class is defined and the optimization problem is solved using NSDE and the true function. This solution will be used as a point of comparison for the surrogate-based methods.

# Defining the objective functions
def f1(x):
    dim = x.ndim
    if dim == 1:
        x = x.reshape(1,-1)
    y = 4*x[:,0]**2 + 4*x[:,1]**2
    return y

def f2(x):
    dim = x.ndim
    if dim == 1:
        x = x.reshape(1,-1)
    y = (x[:,0]-5)**2 + (x[:,1]-5)**2
    return y

# Defining the constraint functions
def g1(x):
    dim = x.ndim
    if dim == 1:
        x = x.reshape(1,-1)
    g = (x[:,0]-5)**2 + x[:,1]**2 - 25
    return g

def g2(x):
    dim = x.ndim
    if dim == 1:
        x = x.reshape(1,-1)
    g = 7.7 - ((x[:,0]-8)**2 + (x[:,1]+3)**2)
    return g

# Defining the problem class for pymoo - we are evaluating two objective and two constraint functions in this case
class ConstrainedProblem(Problem):
    def __init__(self):
        super().__init__(n_var=2, n_obj=2, n_ieq_constr=2, vtype=float)
        self.xl = np.array([-20.0, -20.0])
        self.xu = np.array([20.0, 20.0])

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.column_stack([f1(x), f2(x)])
        out["G"] = np.column_stack([g1(x), g2(x)])

problem = ConstrainedProblem()
nsde = NSDE(pop_size=100, CR=0.8,
            survival=RankAndCrowding(crowding_func="pcd"),
            save_history=True)
res_true = minimize(problem, nsde, verbose=False)

In the block of code below, a loop is utilized to
iteratively create Kriging models for a varying number of samples. The optimization problem is solved using each of the Kriging models created and NSDE. Separate Kriging models are created for each of the objective and constraint functions. This means that four Kriging models in total are created and used as input arguments for the new Problem class. Plots are created to compare the Pareto front obtained using the Kriging model with that obtained from the true function at each number of samples used.

# Defining problem for kriging model based optimization
class KRGProb(Problem):
    def __init__(self, sm_f1, sm_f2, sm_g1, sm_g2):
        super().__init__(n_var=2, n_obj=2, n_ieq_constr=2, vtype=float)
        self.xl = np.array([-20.0, -20.0])
        self.xu = np.array([20.0, 20.0])
        self.sm_f1 = sm_f1
        self.sm_f2 = sm_f2
        self.sm_g1 = sm_g1
        self.sm_g2 = sm_g2

    def _evaluate(self, x, out, *args, **kwargs):
        F1 = self.sm_f1.predict_values(x)
        F2 = self.sm_f2.predict_values(x)
        G1 = self.sm_g1.predict_values(x)
        G2 = self.sm_g2.predict_values(x)
        out["F"] = np.column_stack((F1, F2))
        out["G"] = np.column_stack((G1, G2))

# Defining sample sizes
samples = [6,8,9,10,12,14,15]
xlimits = np.array([[-20.0,20.0],[-20.0,20.0]])

for size in samples:
    # Generate training samples using LHS
    sampling = LHS(xlimits=xlimits, criterion="ese")
    xtrain = sampling(size)
    yf1 = f1(xtrain)
    yf2 = f2(xtrain)
    yg1 = g1(xtrain)
    yg2 = g2(xtrain)

    # Create and train the kriging models
    corr = 'squar_exp'
    sm_f1 = KRG(theta0=[1e-2], corr=corr, theta_bounds=[1e-6, 1e2], print_global=False)
    sm_f1.set_training_values(xtrain, yf1)
    sm_f1.train()
    sm_f2 = KRG(theta0=[1e-2], corr=corr, theta_bounds=[1e-6, 1e2], print_global=False)
    sm_f2.set_training_values(xtrain, yf2)
    sm_f2.train()
    sm_g1 = KRG(theta0=[1e-2], corr=corr, theta_bounds=[1e-6, 1e2], print_global=False)
    sm_g1.set_training_values(xtrain, yg1)
    sm_g1.train()
    sm_g2 = KRG(theta0=[1e-2], corr=corr, theta_bounds=[1e-6, 1e2], print_global=False)
    sm_g2.set_training_values(xtrain, yg2)
    sm_g2.train()

    problem = KRGProb(sm_f1, sm_f2, sm_g1, sm_g2)
    algorithm =
NSDE(pop_size=100, CR=0.8, survival=RankAndCrowding(crowding_func="pcd"), save_history=True)
    res_krg = minimize(problem, algorithm, verbose=False)
    F_krg = np.column_stack((f1(res_krg.X), f2(res_krg.X)))

    # Plotting final Pareto frontier obtained
    fig, ax = plt.subplots(figsize=(8, 5))
    ax.scatter(res_true.F[:, 0], res_true.F[:, 1], color="blue", label="True function")
    ax.scatter(F_krg[::2, 0], F_krg[::2, 1], color="red", label="Kriging model")
    ax.set_ylabel("$f_2$", fontsize = 14)
    ax.set_xlabel("$f_1$", fontsize = 14)
    ax.legend(fontsize = 14)
    fig.suptitle("Number of samples: {}".format(size))

Owing to the simple nature of the functions used in this problem, only a few samples are required for the Kriging models to accurately obtain the Pareto front of the problem in conjunction with the NSDE algorithm. In such cases, surrogates prove to be useful tools for performing multiobjective optimization and require drastically fewer evaluations of the true function to find the Pareto front.
{"url":"https://computationaldesignlab.github.io/surrogate-methods/multi_objective/krg_multi_objective.html","timestamp":"2024-11-14T00:27:00Z","content_type":"text/html","content_length":"78506","record_id":"<urn:uuid:34520edc-2f1c-43f8-8e57-30e875d702b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00894.warc.gz"}
Free Printable Math Worksheets For 4Th Grade Multiplication - Lexia's Blog

Free Printable Math Worksheets For 4Th Grade Multiplication can help a teacher or pupil understand a lesson plan much more quickly. These workbooks are suitable for both children and adults, and anyone can use them at home for teaching and learning purposes. Printing is easy today, and printable worksheets are well suited to learning math and science: students can work through a calculation or apply an equation directly on the sheet, and online worksheets can be used to teach all sorts of topics in a straightforward way. Several types of these worksheets are available on the web, from simple one-page sheets to multi-page sets; which to use depends on the person's needs. The main advantage of printable worksheets is that they provide a good learning environment for students and teachers, letting students study effectively and learn quickly. A school workbook is largely divided into chapters, sections and worksheets. The main function of a workbook is to collect a pupil's material for different subjects; for example, workbooks contain the students' class notes and test papers. Pupils can use the workbook as a reference while they are working on other topics.
A worksheet works well alongside a workbook. The Free Printable Math Worksheets For 4Th Grade Multiplication can be printed on ordinary paper and used to record additional information about the students, and pupils can produce different worksheets for different subjects. Using these worksheets, students can build the lesson plans to be used in the current semester, and teachers can use the printable worksheets for the current year, saving time and money in the process, for instance in periodical reports. The printable worksheets can be used for any kind of subject and can even be used to build computer activities for children. The worksheets can be easily modified, and lessons can be integrated directly into the printed sheets. It is important to realize that a workbook is part of the school's syllabus, and students should understand its importance before using it. Free Printable Math Worksheets For 4Th Grade Multiplication can be a great aid for students.
{"url":"https://lexuscarumors.com/free-printable-math-worksheets-for-4th-grade-multiplication/","timestamp":"2024-11-05T01:09:27Z","content_type":"text/html","content_length":"54470","record_id":"<urn:uuid:47fa7b18-a8f1-4a51-8119-f736e88bca5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00479.warc.gz"}
Attempt Data Interpretation Quiz 3 SBI PO Exam 2020 Data Interpretation Quiz 3 SBI PO 2020 Testbook | Updated: Jun 11, 2018 18:52 IST

Are you preparing for Banking, Insurance and other Competitive Recruitment or Entrance exams? You will likely need to solve a section on Quant. Data Interpretation Quiz 3 SBI PO 2018 will help you learn concepts on an important topic in Quant: Data Interpretation. This quiz is important for exams such as IBPS PO, IBPS Clerk, IBPS RRB Officer, IBPS RRB Office Assistant, IBPS SO, SBI PO, SBI Clerk, SBI SO, Indian Post Payment Bank (IPPB) Scale I Officer, LIC AAO, GIC AO, UIIC AO, NIACL AO, NICL AO.

Data Interpretation Quiz 3 SBI PO 2018

Que. 1 Directions: Study the information carefully to answer the questions that follow: In a college there are 900 students who are doing Post Graduation (PG) in any one of five different subjects, viz. Zoology, Botany, Mathematics, Physics and Statistics. The ratio between the boys and the girls among them is 5 : 4 respectively. 20% of the total girls are doing PG in Zoology and 25% of the total girls are doing PG in Statistics. The total number of students doing PG in Botany is 220. The total number of students doing PG in Mathematics is 150. The respective ratio between the number of girls and the number of boys doing PG in Statistics is 2 : 3. 20 per cent of the total number of boys are doing PG in Botany. The ratio between the number of girls and boys doing PG in Mathematics is 1 : 2 respectively. There are equal numbers of boys and girls doing PG in Physics. 180 students are doing PG in Zoology. The number of girls doing PG in Statistics is what per cent of the number of boys doing PG in Physics?

Que. 2 In which PG course is the number of girls the highest, and in which course is the number of boys the lowest (respectively)?

Que. 3 What is the difference between the number of boys doing PG in Zoology and the number of girls doing PG in Mathematics?

Que.
4 What is the respective ratio between the number of boys doing PG in Mathematics and the number of girls doing PG in Botany?

Que. 5 What is the total number of students doing PG in Physics and Statistics together?

Que. 6 Directions: The following radar graphs show the Trade Growth (in $ billion) of the world and China from the previous year for the years 1977 to 1985. Refer to the graphs to answer the questions that follow: What is the per cent increase in trade growth of China in the year 1980 over that of the same in 1979?

Que. 7 Average world trade growth is how much per cent more or less than the average trade growth of China during the entire shown period?

Que. 8 What is the ratio of the total world trade to the total trade of China in the year 1985, if the total trade of the world in 1976 is $5267 billion and the total trade of China in 1979 is $1200 billion?

Que. 9 If the total trade of China in the year 1979 is $1200 billion, what will it be in the year 1985?

Que. 10 If the total trade of the world in the year 1976 is $5267 billion, what will it be in the year 1985?

Did you like this Data Interpretation Quiz 3 SBI PO 2018? Let us know! You may also like – Get more quizzes here: Data Interpretation Quiz 2 SBI PO; Data Interpretation Quiz 1 for SBI PO; Permutation, Combination & Probability Quiz SBI PO
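The Que. 1 data set resolves to a unique distribution, and working it through is a useful check on the quiz's constraints. The sketch below follows the stated conditions step by step; the variable names are ours.

```python
# Resolving the Que. 1 distribution from the stated constraints.

total = 900
boys, girls = 500, 400             # boys : girls = 5 : 4

g_zoo  = round(0.20 * girls)       # 20% of girls in Zoology -> 80
g_stat = round(0.25 * girls)       # 25% of girls in Statistics -> 100
b_stat = g_stat * 3 // 2           # Statistics girls : boys = 2 : 3 -> 150
b_bot  = round(0.20 * boys)        # 20% of boys in Botany -> 100
g_bot  = 220 - b_bot               # Botany total is 220 -> 120
g_math = 150 // 3                  # Maths total 150, girls : boys = 1 : 2 -> 50
b_math = 2 * g_math                # -> 100
b_zoo  = 180 - g_zoo               # Zoology total is 180 -> 100
b_phys = boys - (b_zoo + b_bot + b_math + b_stat)    # remainder -> 50
g_phys = girls - (g_zoo + g_bot + g_math + g_stat)   # remainder -> 50 (equal, as stated)

# Que. 1: girls in Statistics as a percentage of boys in Physics
answer_q1 = 100 * g_stat / b_phys  # 200.0
```

The equal Physics counts (50 each) confirm the distribution is internally consistent with every stated condition.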
{"url":"https://testbook.com/blog/data-interpretation-quiz-3-sbi-po/","timestamp":"2024-11-11T12:09:01Z","content_type":"text/html","content_length":"261724","record_id":"<urn:uuid:0d2a418d-f34c-47bc-8243-d8c4880d6d18>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00445.warc.gz"}
Dolores Romero Morales

We study robust versions of the uncapacitated lot sizing problem, where the demand is subject to uncertainty. The robust models are guided by three parameters, namely, the total scaled uncertainty budget, the minimum number of periods in which one would like the demand to be protected against uncertainty, and the minimum scaled protection level per …

Visualizing proportions and dissimilarities by Space-filling maps: a Large Neighborhood Search approach

In this paper we address the problem of visualizing a set of individuals, which have attached a statistical value given as a proportion, and a dissimilarity measure. Each individual is represented as a region within the unit square, in such a way that the areas of the regions represent the proportions and the distances between …

A Multi-Objective approach to visualize proportions and similarities between individuals by rectangular maps

In this paper we address the problem of visualizing the proportions and the similarities attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual, their areas reflect the proportions, and the closeness between …

Visualizing data as objects by DC (difference of convex) optimization

In this paper we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective is the difference of two convex functions (DC). Suitable DC …

An SDP approach for multiperiod mixed 0–1 linear programming models with stochastic dominance constraints for risk management

In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty.
We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multistage case and includes both first-order and second-order constraints. We propose a stochastic … Read Clustering Categories in Support Vector Machines Support Vector Machines (SVM) is the state-of-the-art in Supervised Classification. In this paper the Cluster Support Vector Machines (CLSVM) methodology is proposed with the aim to reduce the complexity of the SVM classifier in the presence of categorical features. The CLSVM methodology lets categories cluster around their peers and builds an SVM classifier using the … Read more Polynomial time algorithms for the Minimax Regret Uncapacitated Lot Sizing Model We study the Minimax Regret Uncapacitated Lot Sizing (MRULS) model, where the production cost function and the demand are subject to uncertainty. We propose a polynomial time algorithm which solves the MRULS model in O(n^6) time. We improve this running time to O(n^5) when only the demand is uncertain, and to O(n^4) when only the … Read more Strongly Agree or Strongly Disagree?: Rating Features in Support Vector Machines In linear classifiers, such as the Support Vector Machine (SVM), a score is associated with each feature and objects are assigned to classes based on the linear combination of the scores and the values of the features. Inspired by discrete psychometric scales, which measure the extent to which a factor is in agreement with a … Read more Variable Neighborhood Search for parameter tuning in Support Vector Machines As in most Data Mining procedures, how to tune the parameters of a Support Vector Machine (SVM) is a critical, though not sufficiently explored, issue. The default approach is a grid search in the parameter space, which becomes prohibitively time-consuming even when just a few parameters are to be tuned. 
For this reason, for models … Read more Matheuristics for $\PsihBcLearning Recently, the so-called $\psi$-learning approach, the Support Vector Machine (SVM) classifier obtained with the ramp loss, has attracted attention from the computational point of view. A Mixed Integer Nonlinear Programming (MINLP) formulation has been proposed for $\psi$-learning, but solving this MINLP formulation to optimality is only possible for datasets of small size. For datasets of … Read more
{"url":"https://optimization-online.org/author/drm-eco/","timestamp":"2024-11-11T10:08:07Z","content_type":"text/html","content_length":"108145","record_id":"<urn:uuid:7145626f-70bd-4247-b38c-e1bc2fc2dfb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00090.warc.gz"}
Natural Latents Are Not Robust To Tiny Mixtures — LessWrong

In our previous natural latent posts, our core theorem typically says something like: Assume two agents have the same predictive distribution over variables X, but model that distribution using potentially-different latent variables. If the latents both satisfy some simple “naturality” conditions (mediation and redundancy) then the two agents’ latents contain approximately the same information about X. So, insofar as the two agents both use natural latents internally, we have reason to expect that the internal latents of one can be faithfully translated into the internal latents of the other.

This post is about one potential weakness in that claim: what happens when the two agents’ predictive distributions are only approximately the same?

Following the pattern of our previous theorems, we’d ideally say something like

If the two agents’ distributions are within ε of each other (as measured by some KL-divergences), then their natural latents contain approximately the same information about X, to within some bound.

But that turns out to be false.

The Tiny Mixtures Counterexample

Let’s start with two distributions, P and Q, over two variables X1, X2. These won’t be our two agents’ distributions - we’re going to construct our two agents’ distributions by mixing these two together, as the name “tiny mixtures” suggests.

P and Q will have extremely different natural latents. Specifically:
• X1 consists of 1 million bits, X2 consists of another 1 million bits
• Under P, X1 is uniform, and X2 = X1. So, there is an exact natural latent Λ = X1 under P.
• Under Q, X1 and X2 are independent and uniform. So, the empty latent is exactly natural under Q.

Mental picture: we have a million-bit channel, under P the output (X2) is equal to the input (X1), while under Q the channel hardware is maintained by Comcast so they’re independent.

Now for our two agents’ distributions, P' and Q'.
P' will be almost P, and Q' will be almost Q, but each agent puts a probability of 2^-50 on the other distribution:

P' = (1 - 2^-50) P + 2^-50 Q
Q' = 2^-50 P + (1 - 2^-50) Q

First key observation: D_KL(P'||Q') and D_KL(Q'||P') are both roughly 50 bits. Calculation: on typical samples from P', the density ratio P'/Q' is roughly (1 - 2^-50)/2^-50 ≈ 2^50, so D_KL(P'||Q') ≈ 50 bits, and symmetrically in the other direction. Intuitively: since each distribution puts roughly 2^-50 on the other, it takes about 50 bits of evidence to update from either one to the other.

Second key observation: the empty latent is approximately natural under Q', and the latent Λ = X1 is approximately natural under P'. Epsilons:
• Under Q', the empty latent satisfies mediation to within about 2^-50 · 1,000,000 bits (this is just the mutual information of X1 and X2 under Q'), and redundancy exactly (since the empty latent can always be exactly computed from any input).
• Under P', Λ = X1 satisfies mediation exactly (since X1 mediates between X2 and anything else), redundancy with respect to X1 exactly (Λ can be exactly computed from just X1 without X2), and redundancy with respect to X2 to within about 2^-50 · 1,000,000 bits (since there’s a 2^-50 chance that X2 doesn’t tell us the relevant 1,000,000 bits).

… and of course the information those two latents tell us about X differs by 1 million bits: one of them is empty, and the other directly tells us 1 million bits about X.

Now, let’s revisit the claim we would’ve liked to make:

If the two agents’ distributions are within ε of each other (as measured by some KL-divergences), then their natural latents contain approximately the same information about X, to within some bound.

Tiny mixtures rule out any claim along those lines. Generalizing the counterexample to an N-bit channel (where N = 1,000,000 above) and a mixin probability of 2^-K (where K = 50 above), we generally see that the two latents are natural over their respective distributions to about 2^-K · N bits, the D_KL between the distributions is about K bits in either direction, yet one latent contains N bits of information about X while the other contains zero. By choosing K and N such that 2^-K · N is small while both K and N are large, we can get arbitrarily precise natural latents over the two distributions, with the difference in the latents exponentially large with respect to the D_KL’s between the distributions.

What To Do Instead?
So the bound we’d ideally like is ruled out. What alternatives might we aim for?

Different Kind of Approximation

Looking at the counterexample, one thing which stands out is that P and Q are, intuitively, very different distributions. Arguably, the problem is that a “small” D_KL just doesn’t imply that the distributions are all that close together; really we should use some other kind of approximation.

On the other hand, D_KL is a pretty nice principled error-measure with nice properties, and in particular it naturally plugs into information-theoretic or thermodynamic machinery. And indeed, we are hoping to plug all this theory into thermodynamic-style machinery down the road. For that, we need global bounds, and they need to be information-theoretic.

Additional Requirements for Natural Latents

Coming from another direction: a 50-bit update can turn P' into Q', or vice versa. So one thing this example shows is that natural latents, as they’re currently formulated, are not necessarily robust to even relatively small updates, since 50 bits can quite dramatically change a distribution.

Interestingly, there do exist other natural latents over these two distributions which are approximately the same (under their respective distributions) as the two natural latents we used above, but more robust (in some ways) to turning one distribution into the other. In particular: we can always construct a natural latent with competitively optimal approximation via resampling. Applying that construction to Q', we get a latent which is usually independent random noise (which gives the same information about X as the empty latent), but there’s a 2^-50 chance that it contains the value of X1 and another 2^-50 chance that it contains the value of X2. Similarly, we can use the resampling construction to find a natural latent for P', and it will have a 2^-50 chance of containing random noise instead of X1, and an independent 2^-50 chance of containing random noise instead of X2.
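This resampling construction can be checked concretely at a toy scale. The sketch below (plain Python) uses a hypothetical 4-bit channel and a 1/16 mixin probability as stand-ins for the post's 1,000,000 bits and roughly 2^-50, and computes exactly the probability that the resampled first component of the latent happens to contain the value of the second variable under the mostly-independent mixture:

```python
# Toy-scale check of the resampling construction.  N and EPS are
# hypothetical stand-ins for the post's 1,000,000-bit channel and
# ~2^-50 mixin probability; everything else follows the setup above.
N = 4                 # channel width in bits
EPS = 1.0 / 16        # mixin probability
SIZE = 2 ** N

def q_prime(x1, x2):
    """Mixture Q' = EPS*P + (1-EPS)*Q: mostly the independent-uniform
    distribution Q, with a tiny admixture of the copy distribution P."""
    p_component = (1.0 / SIZE) if x1 == x2 else 0.0   # P: x1 uniform, x2 = x1
    q_component = 1.0 / SIZE ** 2                     # Q: independent uniform
    return EPS * p_component + (1 - EPS) * q_component

def p_x1_given_x2(x1, x2):
    """Exact conditional used by the resampling latent: resample the
    first component conditional on the second."""
    marginal = sum(q_prime(a, x2) for a in range(SIZE))
    return q_prime(x1, x2) / marginal

# Probability that the resampled first component happens to equal the
# observed x2 -- i.e. the latent "contains the value of X2":
match_prob = p_x1_given_x2(0, 0)
print(match_prob)   # analytically: EPS + (1 - EPS)/SIZE
```

With these toy numbers the printed probability is EPS + (1 − EPS)/SIZE: the EPS term is the genuine copy and the (1 − EPS)/SIZE term is a chance collision of fresh noise, mirroring the 2^-50 chances described above.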
Those two latents still differ in their information content about X by roughly 1 million bits, but the distribution of X given each latent differs by only about 100 bits in expectation. Intuitively: while the agents still strongly disagree about the distribution of their respective latents, they agree (to within ~100 bits) on what each value of the latent says about X. Does that generalize beyond this one example? We don’t know yet. But if it turns out that the competitively optimal natural latent is generally robust to updates, in some sense, then it might make sense to add a robustness-to-updates requirement for natural latents - require that we use the “right” natural latent, in order to handle this sort of problem.

Same Distribution

A third possible approach is to formulate the theory around a single distribution P. For instance, we could assume that the environment follows some “true distribution”, and both agents look for latents which are approximately natural over the “true distribution” (as far as they can tell, since the agents can’t observe the whole environment distribution directly). This would probably end up with a Fristonian flavor.

ADDED July 9: The Competitively Optimal Natural Latent from Resampling Always Works (At Least Mediocrely)

Recall that, for a distribution P, we can always construct a competitively optimal natural latent (under strong redundancy) by resampling each component conditional on the others, i.e. Λ = (X′1, …, X′n) with X′i sampled from P(Xi | X−i). We argued above that this specific natural latent works just fine in the tiny mixtures counterexample: roughly speaking, the resampling natural latent constructed for P' approximates the resampling natural latent constructed for Q' (to within an error comparable to how well P' approximates Q'). Now we'll show that that generalizes. Our bound will be mediocre, but it's any bound at all, so that's progress. Specifically: suppose we have two distributions over the same variables, P and Q.
We construct a competitively optimal natural latent via resampling for each distribution: one latent with components sampled from P(Xi | X−i), and another with components sampled from Q(Xi | X−i). Then, we'll use an expected KL-divergence (with expectation taken over X under distribution P) as a measure of how well P's latent matches Q's latent.

Core result:

So we have a bound. Unfortunately, the factor of n (number of variables) makes the bound kinda mediocre. We could sidestep that problem in practice by just using natural latents over a small number of variables at any given time (which is actually fine for many and arguably most use cases). But based on the proof, it seems like we should be able to improve a lot on that factor of n; we outright add a term which should typically be much larger than the quantity we're trying to bound.

Which brings to mind How Many Bits Of Optimization Can One Bit Of Observation Unlock?, and the counter-example there...

We actually started from that counterexample, and the tiny mixtures example grew out of it.

Sure, but what I question is whether the OP shows that the type signature wouldn't be enough for realistic scenarios where we have two agents trained on somewhat different datasets. It's not clear that their datasets would be different the same way P' and Q' are different here.
We don't derive some sort of "weighted-average" ontology for the system, we derive two separate ontologies and then try to distinguish between them. This post comes to mind: If you only care about betting odds, then feel free to average together mutually incompatible distributions reflecting mutually exclusive world-models. If you care about planning then you actually have to decide which model is right or else plan carefully for either outcome. Like, "just blindly derive the natural latent" is clearly not the whole story about how world-models work. Maybe realistic agents have some way of spotting setups structured the way the OP is structured, and then they do something more than just deriving the latent.

Coming from another direction: a 50-bit update can turn P' into Q', or vice versa. So one thing this example shows is that natural latents, as they're currently formulated, are not necessarily robust to even relatively small updates, since 50 bits can quite dramatically change a distribution.

Are you sure this is undesired behavior? Intuitively, small updates (relative to the information-content size of the system regarding which we're updating) can drastically change how we're modeling a particular system, into what abstractions we decompose it. E. g., suppose we have two competing theories regarding how to predict the neural activity in the human brain, and a new paper comes out with some clever (but informationally compact) experiment that yields decisive evidence in favour of one of those theories. That's pretty similar to the setup in the post here, no? And reading this paper would lead to significant ontology shifts in the minds of the researchers who read it. Which brings to mind How Many Bits Of Optimization Can One Bit Of Observation Unlock?, and the counter-example there... Indeed, now that I'm thinking about it, I'm not sure the D_KL quantity is in any way interesting at all?
Consider that the researchers' minds could be updated either from reading the paper and examining the experimental procedure in detail (a "medium" number of bits), or by looking at the raw output data and then doing a replication of the paper (a "large" number of bits), or just by reading the names of the authors and skimming the abstract (a "small" number of bits). There doesn't seem to be a direct causal connection between the system's size and the amount of bits needed to drastically update on its structure at all? You seem to expect some sort of proportionality between the two, but I think the size of one is straight-up independent of the size of the other if you let the nature of the communication channel between the system and the agent-doing-the-updating vary freely (i. e., if you're uncertain regarding whether it's "direct observation of the system" OR "trust in science" OR "trust in the paper's authors" OR ...).[1] Indeed, merely describing how you need to update using high-level symbolic languages, rather than by throwing raw data about the system at you, already shaves off a ton of bits, decoupling "the size of the system" from "the size of the update". Perhaps D_KL really isn't the right metric to use, here? The motivation for having natural abstractions in your world-model is that they make the world easier to predict for the purposes of controlling said world. So similar-enough natural abstractions would recommend the same policies for navigating that world. Back-tracking further, the distributions that would give rise to similar-enough natural abstractions would be distributions that correspond to worlds the policies for navigating which are similar-enough... I. e., the distance metric would need to take interventions/the operator into account. Something like SID comes to mind (but not literally SID, I expect).

[1] Though there may be some more interesting claim regarding that entire channel? E. g., that if the agent can update drastically just based on a few bits output by this channel, we have to assume that the channel contains "information funnels" which compress/summarize the raw state of the system down? That these updates have to be entangled with at least however-many-bits describing the ground-truth state of the system, for them to be valid?

In the context of alignment, we want to be able to pin down which concepts we are referring to, and natural latents were (as I understand it) partly meant to be a solution to that. However if there are multiple different concepts that fit the same natural latent but function very differently then that doesn't seem to solve the alignment aspect. I do see the intuitive angle of "two agents exposed to mostly-similar training sets should be expected to develop the same natural abstractions, which would allow us to translate between the ontologies of different ML models and between ML models and humans", and that this post illustrated how one operationalization of this idea failed.

However if there are multiple different concepts that fit the same natural latent but function very differently

That's not quite what this post shows, I think? It's not that there are multiple concepts that fit the same natural latent, it's that if we have two distributions that are judged very close by the KL divergence, and we derive the natural latents for them, they may turn out drastically different. The P' agent and the Q' agent legitimately live in very epistemically different worlds! Which is likely not actually the case for slightly different training sets, or LLMs' training sets vs. humans' life experiences. Those are very close on some metric, and now it seems that that metric isn't (just) D_KL.
Maybe one way to phrase it is that the X's represent the "type signature" of the latent, and the type signature is the thing we can most easily hope is shared between the agents, since it's "out there in the world" as it represents the outwards interaction with things. We'd hope to be able to share the latent simply by sharing the type signature, because the other thing that determines the latent is the agents' distribution, but this distribution is more an "internal" thing that might be too complicated to work with. But the proof in the OP shows that the type signature is not enough to pin it down, even for agents whose models are highly compatible with each other as-measured-by-KL-in-type-signature.
{"url":"https://www.lesswrong.com/posts/xDsbqxeCQWe4BiYFX/natural-latents-are-not-robust-to-tiny-mixtures","timestamp":"2024-11-10T12:42:44Z","content_type":"text/html","content_length":"1049104","record_id":"<urn:uuid:c2a2bda4-4fd3-43e5-ac00-f93a6f0f496e>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00422.warc.gz"}
Numerical modeling of EDA - Improving Surface Characteristics of Mold Steel using Electric Disc

CHAPTER 1 Introduction

B. Numerical modeling of EDA

In order to understand the numerical modeling of EDA, there is a need to investigate the development of numerical modeling of the electrical discharge phenomenon. In this regard, researchers have attempted different approaches to the numerical modeling of EDM. The finite difference method (FDM) and the finite element method (FEM) are the two numerical methods widely used in modeling EDM. In both methods, a thermal model consisting of a transient nonlinear partial differential equation is solved. The FDM solution approach is based on the Taylor series expansion, while FEM is based on integral minimization. FDM uses pointwise approximations to the governing equations, while FEM uses piecewise or regional approximations. Comparing the two approaches, FEM is more flexible, as it can handle highly nonlinear equations and complex geometries. Thermal numerical models of EDM are used to determine the temperature distribution, maximum temperature, surface roughness, and material removal rate. Salah et al. (2006) used FDM to predict the surface roughness and removal rate of the workpiece and compared the predictions with experimental data obtained during electrical discharge machining of SS316L. Two cases were considered: constant thermal conductivity and temperature-dependent thermal conductivity. The results indicated that temperature-dependent thermal conductivity gives better agreement than constant thermal conductivity. Izquierdo et al. (2009) modeled multiple discharges in EDM by employing FDM to predict the surface roughness and material removal rate from temperature distribution data.
The model showed a 6 % error in the predicted results compared with the experimental results. Work has also been reported using the finite element method to model the electric discharge phenomenon. Shankar et al. (1997) studied the profile of the spark generated during the discharge. They concluded that the middle section of the spark has a smaller cross-section and is non-cylindrical, and that the spark radius at the anode surface is smaller than at the cathode surface. The authors also analyzed the total energy distributed to the cathode, anode, and inter-electrode gap for different values of current, pulse duration, and inter-electrode distance. The predicted material removal rate and relative electrode wear rate agreed well with the experimental results. In the work of Das et al. (2003), an FEM-based model was developed to predict the phase transformation and residual stresses developed due to the electric discharge phenomenon by studying the transient temperature distribution at all nodes of the work domain. Kansal et al. (2008) also used FEM to develop a model of powder-mixed electric discharge machining using an axisymmetric two-dimensional work domain. In their model, the heat source was taken to be a Gaussian distribution, and the effects of input process parameters, such as current, pulse on-time, pulse off-time, and the fraction of energy distributed to the workpiece, on the material removal rate were studied. The literature reports various works determining the fraction of energy transferred to the work domain using both FDM and FEM; this factor is one of the important parameters in modeling the electric discharge phenomenon. Gostimirovic et al. (2012) reported that discharge energy plays a significant role in the machining characteristics of the EDM process: with increasing discharge energy, the material removal rate increases up to an optimal value.
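The class of thermal models described above can be illustrated with a minimal explicit finite-difference sketch: 2D transient heat conduction driven by a Gaussian surface heat source. All material and discharge values below are illustrative assumptions (roughly steel-like properties), not parameters taken from the cited studies:

```python
import numpy as np

# Explicit finite-difference sketch of the transient thermal model used in
# EDM simulations: 2D heat conduction with a Gaussian heat source on the
# top surface.  Every material and discharge value here is an illustrative
# assumption, not a parameter from the cited studies.
k, rho, cp = 20.0, 7800.0, 500.0          # W/m.K, kg/m^3, J/kg.K (steel-like)
alpha = k / (rho * cp)                     # thermal diffusivity, m^2/s
dx = 1e-6                                  # 1 micron grid spacing
dt = 0.4 * dx ** 2 / (4.0 * alpha)         # inside the 2D stability limit dx^2/(4*alpha)
nx = ny = 61
q0, r_spark = 2e9, 10e-6                   # peak surface flux (W/m^2), spark radius (m)

T = np.full((ny, nx), 300.0)               # ambient initial temperature, K
x = (np.arange(nx) - nx // 2) * dx
flux = q0 * np.exp(-4.5 * (x / r_spark) ** 2)   # Gaussian heat-flux profile

def laplacian(T):
    Tp = np.pad(T, 1, mode="edge")         # zero-gradient ghost cells
    return (Tp[2:, 1:-1] + Tp[:-2, 1:-1] +
            Tp[1:-1, 2:] + Tp[1:-1, :-2] - 4.0 * T) / dx ** 2

for _ in range(200):                       # march the discharge transient
    T = T + dt * alpha * laplacian(T)
    T[0, :] += dt * flux / (rho * cp * dx)       # deposit surface flux in the top cells
    T[-1, :] = 300.0; T[:, 0] = 300.0; T[:, -1] = 300.0   # far field held at ambient

print(T.max())   # peak temperature under the spark centre
```

The explicit scheme is only stable for dt ≤ dx²/(4α) in 2D, which is why the time step is chosen inside that limit; FEM solvers relax the geometric restrictions, as noted above.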
The surface roughness and the white layer thickness also depend on the discharge energy. This discharge energy is basically a function of the fraction of energy distributed to the workpiece. Numerous works have been carried out to determine the FA (fraction of energy transferred to the anode) or FC (fraction of energy transferred to the cathode) value for the EDM process by inverse computation of the heat conduction problem. Chiou et al. (2011) evaluated the input power and thermal conductivity of the workpiece by inverse estimation: temperature measurements at various discharge durations and locations were recorded, and the results were compared with the numerical solutions. On a similar front, Zhang et al. (2014b) worked on determining the FA and FC values and the plasma diameter during the EDM process by comparing the experimentally determined crater diameter with the numerical result. The work was carried out for both positive and negative polarity and with different dielectric media, viz. deionized water, kerosene, oil, and water-in-oil emulsion. The results indicated that the fraction of energy distributed is higher in positive polarity regardless of the dielectric medium used. Further, in the work of Ming et al. (2017), the fraction of energy distributed was compared for different workpiece materials; it was reported that the value varied from material to material, i.e., 0.079 to 0.12 for Al 6061, 0.028 to 0.034 for Inconel 718, and 0.029 to 0.037 for SKD 11. In the case of the electric discharge alloying process, reverse polarity is generally preferred (Gangadhar et al. 1991), and hence it becomes important to determine the fraction of energy (FA) transferred to the workpiece, which is made the anode (positive polarity). The fraction of energy transferred to the anode as determined by Patel et al. (1989) is a fixed value of 0.08. Shabgard et al.
(2013) found that the fraction of energy transferred to the anode ranged from 0.0413 to 0.364 and depended on the pulse duration and input discharge current. Algodi et al. (2018) computed the fraction of energy going to the workpiece during electrical discharge coating by comparing experimentally determined crater radii with numerically simulated results, and concluded that FA varies from 0.07 to 0.53.

2.3.3 Soft computing based process modeling

The artificial neural network (ANN) is a soft computing technique used to develop a network that establishes a nonlinear relationship between the input process parameters and the desired outputs. It is capable of functional mapping even from incomplete and noisy data. Researchers have employed ANNs in EDA because it is very difficult to develop an analytical model, owing to the stochastic nature of the electric discharge phenomenon. Tsai and Wang compared various types of neural network models for predicting the surface finish (Tsai and Wang 2001b) and material removal rate (Tsai and Wang 2001a) in electric discharge machining, and found that the adaptive network-based fuzzy inference system (ANFIS) is best suited for both. Panda and Bhoi (2005) used a feed-forward backpropagation neural network with the Levenberg-Marquardt technique to predict the material removal rate. The use of hybrid artificial neural network (ANN) and genetic algorithm (GA) models for optimizing EDM process parameters has also been reported in numerous works. Mohana et al. (2009) used a hybrid ANN-GA model to optimize the surface roughness; the model took average current, average voltage, and machining time as inputs to the ANN, with surface roughness as the output, and the developed model was optimized using GA by adjusting the weights of the network. In a similar manner, Ming et al.
(2016) developed a backpropagation neural network (BPNN) and a radial basis neural network (RBNN) to separately predict the material removal rate and surface roughness in the machining of SiC/Al composite by EDM. The mean prediction error of the optimal BPNN was reported to be 10.61 %, while that of the RBNN was 12.77 %; the network was further optimized using GA. Going beyond single-output networks, Joshi and Pande (2011) developed an integrated FEM-ANN-GA model to train a network with multiple output parameters. The developed model was used to determine the optimum process conditions giving the best EDM performance in terms of material removal rate, tool wear rate, and crater depth; a backpropagation neural network with a scaled conjugate gradient algorithm was employed. Little work has been reported on developing neural networks in the field of electric discharge alloying. Patowari et al. (2010) developed a feed-forward backpropagation neural network to predict the material deposition rate and average alloyed layer thickness in the electric discharge alloying process. The input parameters considered were compaction pressure, sintering temperature, peak current, pulse on-time, and pulse off-time, and it was observed that, for both the material deposition rate and the alloyed layer thickness, the optimum network had five neurons in the hidden layer. Numerous mathematical models, both analytical and numerical, have been developed to study the spark phenomenon in EDM. Analytical methods give an exact solution, while numerical methods give an approximate solution to the mathematical problem based on an iterative, trial-and-error procedure. Analytical methods can be time-consuming due to the complex functions involved or the large data size.
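The inverse-computation idea used in the energy-fraction studies above — adjust FA until a forward simulation reproduces a measured crater — reduces to a one-dimensional root-finding loop. In this sketch the forward model is a deliberately crude monotone stand-in for a full FEM thermal simulation, and every number is an illustrative assumption:

```python
# Inverse estimation of the energy fraction FA by bisection: adjust FA
# until the forward model's predicted crater radius matches the measured
# one.  The forward model below is a toy monotone stand-in for a full
# FEM thermal simulation; all numbers are illustrative assumptions.
def simulated_crater_radius_um(fa, current=8.0, t_on_us=100.0):
    """Toy forward model: crater radius grows with the share of the
    discharge energy delivered to the workpiece."""
    energy = fa * 25.0 * current * t_on_us   # crude discharge-energy proxy
    return 2.0 * energy ** 0.5               # monotone in fa

def estimate_energy_fraction(r_measured_um, lo=0.0, hi=1.0, tol=1e-6):
    """Bisect on FA until the simulated radius matches the measurement."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulated_crater_radius_um(mid) < r_measured_um:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fa_est = estimate_energy_fraction(r_measured_um=180.0)
print(fa_est)   # the FA at which simulation and measurement agree
```

Because the simulated radius increases monotonically with FA, the bisection converges to the matching energy fraction; in practice each evaluation of the forward model would be a full transient FEM run.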
In such cases, numerical methods are used, since they are generally iterative techniques that use simple arithmetic operations to generate numerical solutions. Extensive work has been carried out to predict the material removal rate, surface roughness, plasma flushing efficiency, residual stresses, and white layer thickness by thermal analysis of the electric discharge machining process. Researchers noted that the fraction of energy distributed to the work domain is the most important factor, and some attempts have been made to use inverse estimation methods to compute input parameters. In spite of the extensive work on modeling EDM, little work has been reported in the field of alloying or coating by EDM. Algodi et al. (2018) modeled single-spark interactions during electrical discharge coating; the experimentally determined alloyed layer thickness was compared with the numerically determined crater depth of the melted region, and the results were found to be satisfactory. Work has also been reported on the use of soft computing techniques such as ANN to develop networks that predict the material removal rate, surface roughness, etc., and on hybrid models such as FEM-ANN-GA.

2.4 Research gaps

In the field of electric discharge alloying, various experimental works have been reported. Deliberate transfer of materials, or alloying, is quite possible with EDA. If the process of alloying by EDA could be well established, it would play a vital role in the manufacturing industry due to its flexibility and economy compared with other available coating techniques such as PVD, CVD, magnetron sputtering, etc. These techniques require a specific vacuum chamber for fabrication, thereby increasing the cost of production. The coating thickness is also limited to around 5 µm.
Though CVD produces a quality coating, it poses environmental hazards in terms of the residual gases released during the chemical reaction process. Therefore, there is a need for efficient techniques to replace these coating methods, and electrical discharge alloying is a highly promising candidate. However, its industrial application is limited by the lack of information about the characteristics of the alloyed layer in terms of hardness, wear resistance, corrosion resistance, and thickness. Hence, it has become an important area of research, to be explored through experimental investigations as well as the development of physics-based predictive models. Works have been reported using different tool materials, viz. solid tool electrodes of electrolytic copper (Yan et al. 2005), graphite electrodes (Chang-Bin et al. 2011), multilayer electrodes of graphite and titanium (Hwang et al. 2010), etc., for alloying the workpiece to enhance its surface properties. Apart from solid electrodes, powder metallurgy tools have also been used for alloying, as they offer the flexibility to control the binding energy of the molecules by varying the compaction pressure, elemental composition, and sintering temperature (Suzuki and Kobayashi 2013). Attempts have been made to improve functional surface characteristics such as wear and corrosion resistance using various PM tools such as Ti green compact, WC/Co, WC/Fe, TiC/WC/Co, Cr/Cu, WC/Cu, semi-sintered TiC, etc. However, scant work has been reported on the alloying of titanium, aluminium, and nitrogen with AISI P20 mold steel using EDA. Other than transferring tool material to the workpiece by varying the tool material, alloying by EDA has also been done using different dielectrics, such as urea-mixed dielectric (Santos et al.
2017), or by mixing different powders into the dielectric, such as silicon powder (Kansal and Kumar 2007), titanium powder (Janmanee and Muttamara 2012), aluminum powder (Syed and Palaniyandi 2012), etc., for different purposes. From the reported literature, it is also observed that the dielectric medium plays a vital role in EDA. In this field, little work has been reported on the surface alloying of mold steel with varying dielectric media. Therefore, investigations can be made into the effects of different dielectric media in EDA of mold steel. An extensive study can be made of the influence of the input process parameters on the alloyed layer thickness, material deposition rate, surface roughness, elemental transfer, hardness, and wear and corrosion resistance behavior. Apart from the experimental works reported to study the phenomenon of electric discharge alloying, researchers have also worked on modeling the EDA process. Although abundant literature is available on modeling the discharge phenomenon in EDM to predict the material removal rate, surface roughness, tool wear rate, etc., limited work has been reported on modeling the EDA phenomenon. Very scant work has been reported on the computation of the alloyed layer thickness on the workpiece employing accurate values of the energy distribution factor. Further, little work has been reported on the prediction of alloyed layer thickness using artificial neural networks and hard computing methods together. There is a need to develop a simple, efficient method to compute the alloyed layer thickness by inverse computation of the energy distribution among the electrodes.
2.5 Objectives of the present work
The main objective of the present work is to enhance the surface characteristics of AISI P20 mold steel, viz. hardness, wear resistance, and corrosion resistance, by using the electrical discharge alloying process.
It was envisaged to achieve this by alloying titanium, aluminium, and nitrogen over AISI P20 mold steel. The sub-objectives of the present work are listed below.
• To deposit a layer of titanium and aluminium over AISI P20 mold steel by using the EDA process and powder metallurgy technology-based green tool electrodes.
• To critically analyze the deposition of the desired elements with three types of dielectric media, viz. hydrocarbon oil, deionized water, and urea-mixed deionized water.
• To examine and measure the surface characteristics in terms of hardness, wear resistance, and corrosion resistance.
• To study the influence of the process parameters, viz. discharge current, discharge duration, and type of dielectric medium, on the alloyed layer thickness, material deposition rate, surface roughness, elemental distribution, hardness, wear resistance, and corrosion resistance.
• To develop an integrated FEM–ANN methodology to compute the alloyed layer thickness by inverse computation of the energy distribution among the electrodes.
To achieve these objectives, the present work has been planned in five stages, as shown in Figure 2.5.
Stage 1: In the first stage, a thorough literature survey was carried out on the relevant research works that have been reported. The works reported in the field of modeling were then studied in detail. Thereafter, the research gaps were identified, the objectives were derived, and the present work was planned. In the present work, the alloying phenomenon of AISI P20 mold steel has been studied experimentally and numerically.
Stage 2: Alloying of AISI P20 mold steel with the use of powder metallurgy electrodes
Getting into programming with Excel
If you want to get into programming, Excel is an easy way to do it. Everyone has it on their laptop or work computer, so you don't need to worry about installing software or getting permission to use it. That is important because to become an effective programmer you need to practice. It is as easy as opening Excel and typing a few key strokes. So, please have Excel open as you read this article and follow along. We are going to start with basic built-in Excel functions in the spreadsheet, then move on to Visual Basic to build our own functions. We will see how functions you write can be called from the spreadsheet, and see a way of testing the results using conditional formatting. That will give us all the tools we need to write a proper program, which we do in the next article with an extended step-by-step example. That will give you a feel for how real programs are written, tested and debugged. In the following article we will move out of Excel and Visual Basic and into Python and its spreadsheet-like helper, Pandas. A fundamental idea in programming is the function. Just like functions in maths, they take inputs (called "arguments") and return a result. Let's try one of Excel's built-in functions, MAX(). It takes two or more numbers as inputs, given in brackets after the function name, and returns the biggest, the MAXimum. Excel displays this result in the cell where the function is, so you see the result of the function, not the function itself. Start a fresh spreadsheet in Excel, click on an empty cell and type =MAX(5,7) then hit Enter. You should see the number 7. If you see MAX(5,7) then you missed the equals symbol at the start. Excel needs that to know you meant to call its MAX() function rather than just putting the word MAX in a cell. If you really do want to have the text "=MAX" show in a cell, you have to start with a single quote. MAX() can take more arguments.
Either double-click on the cell containing the function, or click in the box above the grid showing the function, and give it lots of arguments, such as =MAX(5,7,-1,234,-100,25) and hit Enter. Check that the result makes sense to you. As well as taking numbers as arguments, MAX() can get its arguments from other cells; it just needs the cell addresses. Let's try it. Type three different numbers in three different cells, say 2, 7 and 11. Then in a fourth cell, type =MAX( without pressing Enter, then click on the cell containing the number 2, then type a comma, then click on the cell containing 7, type another comma, click on the cell containing 11, type a closing bracket and hit Enter. You should see the number 11.
Arguments from other cells
If you need to pass lots of cells as arguments you can type in a very long list, but so long as the cells are all next to each other in a row or column, you can just tell it the first and last cell, separated by a colon, and Excel works out the rest. This is called a range. Let's use this to find the oldest patient admitted for an ultrasound:
Applying functions to ranges
Some functions operate on a whole dataset at once, such as MAX(), but others you want to apply to each row (or column) individually. In that case just put a copy of the function on every row. Excel is set up to help you do this. Write the function once on the first row, then drag the dot in the bottom right corner until all the rows are covered. Excel updates the cell address of the argument as it copies the function into each new cell so that it points to the correct row. It will do this for the row and column part of the address. This is sometimes called a "copy down". If you don't want that to happen you can fix, or anchor, the row or column or both by prefixing that part of the address with a dollar sign.
Chaining functions and loops
The idea of applying a function to every row in a dataset is important. It will form the basis of most of our programming.
Programs can be thought of as combinations of functions applied to rows of a spreadsheet. Let's have a look at how to combine functions. We have seen how functions can get their arguments from cells. If those cells in turn get their values from functions, then we are chaining functions together, with the outputs of one function being passed as the inputs to another. There are two ways to do this. Either put the result of the first function into a cell and use that cell address as the argument to the next function, or, if you don't need to see the intermediate result, write the first function directly as the argument of the second. A common use for this is with the IF function, which allows you to make a decision in your calculation. If the condition given as the first argument is true then return the second argument, else return the third one. In this example we change our calculation depending on whether the patient is male or female.
Chaining functions
Chaining functions is the start of building larger, more sophisticated programs. In theory you could keep piling up functions in a single cell, but in practice it becomes incomprehensible. Excel doesn't care, but from the human author's point of view things are more comprehensible and manageable if they are written out in plain text, with intermediate values being stored in what are called variables.
Writing your own functions
Although Excel has dozens of built-in functions, we soon need one that it doesn't have, or we want to combine them to form a program of our own. In Excel this means writing "Visual Basic for Applications" or VBA. To do this you need to open the Visual Basic editor. How to do this varies on different versions of Excel, so you may need to check Excel Help or Google around a bit. On my version it is under the Tools menu.
Opening the editor
You should see a new window with a tree-like navigator on the left.
This shows a “VBA Project” with a “Microsoft Excel Objects” below it, with “Sheet1” and “This Workbook” below that. The function we are going to write will live in what is called a “module” and you need to add one to start with. To do that either right-click anywhere on the tree to open a menu and find Insert > Module, or there is a button on the toolbar. Either way you should end up with a window called “Module 1”. Excel modules My first function Click in the module window and enter the text you see in the image below. This is Visual Basic for Applications, and it is creating a new function called twice. It takes one argument, which it expects to be a whole number, and it will be refered to in the code below by the name x. It returns another whole number to the cell in the sheet from where it is called. That returned value is just two times whatever x is. My first function To call the function, find an empty cell in the spreadsheet and type =twice(2) and hit enter. You should see 4. Twice 2 = 4 As with built-in functions you can call your function on every row of a sheet, by copying it down, to apply it throughout your dataset. A good habit to cultivate is checking that the code you just wrote does what you expected it to do. This is called testing. Even better is to write down what you expect the result of a function to be before you write it. In this way the test becomes the specification for the function. This is called Test Driven Development and leads to very high quality code. In Excel we can use conditional formating to give visual feedback on whether a test is passing. We colour a cell green if its value matches the expected result and red otherwise. Getting conditional formatting to work varies across different versions of Excel, but on my version I select Format > Conditional Formatting… to get to a “Manage rules” dialog. Mine shows the two rules I already set-up but yours will be empty to start. Manage rules. 
Click + at the bottom to add a rule and select a style of "Classic". You will need two of these rules: a rule to show green formatting when the result equals the expected outcome, and another to show red when the two are not equal.
Classic format.
With that conditional formatting applied to all our "Actual" cells, here are some tests for our =twice() function:
Green pass.
Red fail.
For a trivial function such as =twice(), tests do not contribute much. But for even slightly more complicated functions they really help in several ways:
1. Edge cases. Write some tests for what the function should return for out-of-range or unusual values.
2. Defending against future changes. Often we return to code in order to fix or improve it, but those changes can break previously made assumptions that you may have forgotten but that other functions depend on. By having tests with the code that check those assumptions, your code is protected against the unintended consequences of future changes.
3. Documentation. Simple tests explain to other users what the intention of the function is, especially for edge cases. That other user is often a future you, when you have forgotten the details of what you did months ago.
4. Start simple, then add complexity. For complex functions it can be impossible to just write down the whole thing in one go. Practically, it helps to start with some simple cases and then make adaptations for more complex ones. By building up a set of test cases you can ensure that your changes are not regressing and breaking what you have already achieved.
In the next article we will put everything we have learned together and write our first real program.
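Since the series moves to Python next, here is a sketch of the same test-first idea outside Excel. The twice name mirrors the VBA example above; the test table and its entries are illustrative:

```python
def twice(x: int) -> int:
    """Return two times x (the VBA example from this article, in Python)."""
    return 2 * x

# Test-first: write the expected results down, then check the function
# against them -- the same green/red idea as the conditional formatting.
tests = [
    (2, 4),      # simple case
    (0, 0),      # edge case: zero
    (-3, -6),    # edge case: negative input
]

for arg, expected in tests:
    actual = twice(arg)
    status = "pass" if actual == expected else "FAIL"
    print(f"twice({arg}) = {actual} (expected {expected}): {status}")
```

Just as in the spreadsheet, the table of expected results is written before the function body and doubles as its specification.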
Simulate received signal at sensor array
x = sensorsig(pos,ns,ang) simulates the received narrowband plane wave signals at a sensor array. pos represents the positions of the array elements, each of which is assumed to be isotropic. ns indicates the number of snapshots of the simulated signal. ang represents the incoming directions of each plane wave signal. The plane wave signals are assumed to be constant-modulus signals with random phases.
x = sensorsig(pos,ns,ang,ncov) describes the noise across all sensor elements. ncov specifies the noise power or covariance matrix. The noise is a Gaussian distributed signal.
x = sensorsig(pos,ns,ang,ncov,scov) specifies the power or covariance matrix for the incoming signals.
x = sensorsig(pos,ns,ang,ncov,scov,'Taper',taper) specifies the array taper as a comma-separated pair consisting of 'Taper' and a scalar or column vector.
[x,rt] = sensorsig(___) also returns the theoretical covariance matrix of the received signal, using any of the input arguments in the previous syntaxes.
[x,rt,r] = sensorsig(___) also returns the sample covariance matrix of the received signal.
Received Signal and Direction-of-Arrival Estimation
Simulate the received signal at an array, and use the data to estimate the arrival directions. Create an 8-element uniform linear array whose elements are spaced half a wavelength apart.
fc = 3e8; c = 3e8; lambda = c/fc;
array = phased.ULA(8,lambda/2);
Simulate 100 snapshots of the received signal at the array. Assume there are two signals, coming from azimuth 30° and 60°, respectively. The noise is white across all array elements, and the SNR is 10 dB.
x = sensorsig(getElementPosition(array)/lambda,...
    100,[30 60],db2pow(-10));
Use a beamscan spatial spectrum estimator to estimate the arrival directions, based on the simulated data.
estimator = phased.BeamscanEstimator('SensorArray',array,...
[~,ang_est] = estimator(x);
Plot the spatial spectrum resulting from the estimation process.
The plot shows peaks at 30° and 60°.
Signals With Different Power Levels
Simulate receiving two uncorrelated incoming signals that have different power levels. A vector named scov stores the power levels. Create an 8-element uniform linear array whose elements are spaced half a wavelength apart.
fc = 3e8; c = 3e8; lambda = c/fc;
ha = phased.ULA(8,lambda/2);
Simulate 100 snapshots of the received signal at the array. Assume that one incoming signal originates from 30 degrees azimuth and has a power of 3 W. A second incoming signal originates from 60 degrees azimuth and has a power of 1 W. The two signals are not correlated with each other. The noise is white across all array elements, and the SNR is 10 dB.
ang = [30 60]; scov = [3 1];
x = sensorsig(getElementPosition(ha)/lambda,...
Use a beamscan spatial spectrum estimator to estimate the arrival directions, based on the simulated data.
hdoa = phased.BeamscanEstimator('SensorArray',ha,...
[~,ang_est] = step(hdoa,x);
Plot the spatial spectrum resulting from the estimation process. The plot shows a high peak at 30 degrees and a lower peak at 60 degrees.
Reception of Correlated Signals
Simulate the reception of three signals, two of which are correlated. Create a signal covariance matrix in which the first and third of three signals are correlated with each other.
scov = [1 0 0.6;...
        0 2 0;...
        0.6 0 1];
Simulate receiving 100 snapshots of three incoming signals from 30°, 40°, and 60° azimuth, respectively. The array that receives the signals is an 8-element uniform linear array whose elements are spaced one-half wavelength apart. The noise is white across all array elements, and the SNR is 10 dB.
pos = (0:7)*0.5; ns = 100; ang = [30 40 60]; ncov = db2pow(-10);
x = sensorsig(pos,ns,ang,ncov,scov);
Theoretical and Empirical Covariance of Received Signal
Simulate receiving a signal at a URA. Compare the signal theoretical covariance with its sample covariance.
Create a 2-by-2 uniform rectangular array having elements spaced 1/4-wavelength apart.
pos = 0.25 * [0 0 0 0; -1 1 -1 1; -1 -1 1 1];
Define the noise power independently for each of the four array elements. Each entry in ncov is the noise power of an array element. This element position is the corresponding column in pos. Assume the noise is uncorrelated across elements.
ncov = db2pow([-9 -10 -10 -11]);
Simulate 100 snapshots of the received signal at the array, and store the theoretical and empirical covariance matrices. Assume that one incoming signal originates from 30° azimuth and 10° elevation. A second incoming signal originates from 50° azimuth and 0° elevation. The signals have a power of 1 W and are uncorrelated.
ns = 100;
ang1 = [30; 10]; ang2 = [50; 0]; ang = [ang1, ang2];
rng default
[x,rt,r] = sensorsig(pos,ns,ang,ncov);
View the magnitudes of the theoretical covariance and sample covariance.
ans = 4×4
    2.1259    1.8181    1.9261    1.9754
    1.8181    2.1000    1.5263    1.9261
    1.9261    1.5263    2.1000    1.8181
    1.9754    1.9261    1.8181    2.0794
ans = 4×4
    2.2107    1.7961    2.0205    1.9813
    1.7961    1.9858    1.5163    1.8384
    2.0205    1.5163    2.1762    1.8072
    1.9813    1.8384    1.8072    2.0000
Correlation of Noise Between Sensors
Simulate receiving a signal at a ULA, where the noise between different sensors is correlated. Create a 4-element uniform linear array whose elements are spaced one-half wavelength apart. Define the noise covariance matrix. The value in the (k, j) position of the ncov matrix is the covariance between the k and j array elements.
ncov = 0.1 * [1 0.1 0 0; 0.1 1 0.1 0; 0 0.1 1 0.1; 0 0 0.1 1];
Simulate 100 snapshots of the received signal at the array. Assume that one incoming signal originates from 60° azimuth.
ns = 100;
ang = 60;
[x,rt,r] = sensorsig(pos,ns,ang,ncov);
View the theoretical and sample covariance matrices for the received signal.
rt = 4×4 complex
   1.1000 + 0.0000i  -0.9027 - 0.4086i   0.6661 + 0.7458i  -0.3033 - 0.9529i
  -0.9027 + 0.4086i   1.1000 + 0.0000i  -0.9027 - 0.4086i   0.6661 + 0.7458i
   0.6661 - 0.7458i  -0.9027 + 0.4086i   1.1000 + 0.0000i  -0.9027 - 0.4086i
  -0.3033 + 0.9529i   0.6661 - 0.7458i  -0.9027 + 0.4086i   1.1000 + 0.0000i
r = 4×4 complex
   1.1059 + 0.0000i  -0.8681 - 0.4116i   0.6550 + 0.7017i  -0.3151 - 0.9363i
  -0.8681 + 0.4116i   1.0037 + 0.0000i  -0.8458 - 0.3456i   0.6578 + 0.6750i
   0.6550 - 0.7017i  -0.8458 + 0.3456i   1.0260 + 0.0000i  -0.8775 - 0.3753i
  -0.3151 + 0.9363i   0.6578 - 0.6750i  -0.8775 + 0.3753i   1.0606 + 0.0000i
Input Arguments
pos — Positions of elements in sensor array
1-by-N vector | 2-by-N matrix | 3-by-N matrix
Positions of elements in sensor array, specified as an N-column vector or matrix. The values in the matrix are in units of signal wavelength. For example, [0 1 2] describes three elements that are spaced one signal wavelength apart. N is the number of elements in the array. Dimensions of pos:
• For a linear array along the y axis, specify the y coordinates of the elements in a 1-by-N vector.
• For a planar array in the yz plane, specify the y and z coordinates of the elements in columns of a 2-by-N matrix.
• For an array of arbitrary shape, specify the x, y, and z coordinates of the elements in columns of a 3-by-N matrix.
Data Types: double
ns — Number of snapshots of simulated signal
positive integer scalar
Number of snapshots of simulated signal, specified as a positive integer scalar. The function returns this number of samples per array element.
Data Types: double
ang — Directions of incoming plane wave signals
1-by-M vector | 2-by-M matrix
Directions of incoming plane wave signals, specified as an M-column vector or matrix in degrees. M is the number of incoming signals. Dimensions of ang:
• If ang is a 2-by-M matrix, each column specifies a direction. Each column is in the form [azimuth; elevation]. The azimuth angle must be between –180 and 180 degrees, inclusive.
The elevation angle must be between –90 and 90 degrees, inclusive.
• If ang is a 1-by-M vector, each entry specifies an azimuth angle. In this case, the corresponding elevation angle is assumed to be 0.
Data Types: double
ncov — Noise characteristics
0 (default) | nonnegative scalar | 1-by-N vector of positive numbers | N-by-N positive definite matrix
Noise characteristics, specified as a nonnegative scalar, 1-by-N vector of positive numbers, or N-by-N positive definite matrix. Dimensions of ncov:
• If ncov is a scalar, it represents the noise power of the white noise across all receiving sensor elements, in watts. In particular, a value of 0 indicates that there is no noise.
• If ncov is a 1-by-N vector, each entry represents the noise power of one of the sensor elements, in watts. The noise is uncorrelated across sensors.
• If ncov is an N-by-N matrix, it represents the covariance matrix for the noise across all sensor elements.
Data Types: double
scov — Incoming signal characteristics
1 (default) | positive scalar | 1-by-M vector of positive numbers | M-by-M positive semidefinite matrix
Incoming signal characteristics, specified as a positive scalar, 1-by-M vector of positive numbers, or M-by-M positive semidefinite matrix. Dimensions of scov:
• If scov is a scalar, it represents the power of all incoming signals, in watts. In this case, all incoming signals are uncorrelated and share the same power level.
• If scov is a 1-by-M vector, each entry represents the power of one of the incoming signals, in watts. In this case, all incoming signals are uncorrelated with each other.
• If scov is an M-by-M matrix, it represents the covariance matrix for all incoming signals. The matrix describes the correlation among the incoming signals. In this case, scov can be real or complex.
Data Types: double
taper — Array element taper
1 (default) | scalar | N-by-1 column vector
Array element taper, specified as a scalar or complex-valued N-by-1 column vector.
The dimension N is the number of array elements. If taper is a scalar, all elements in the array use the same value. If taper is a vector, each entry specifies the taper applied to the corresponding array element.
Data Types: double
Complex Number Support: Yes
Output Arguments
x — Received signal
complex ns-by-N matrix
Received signal at sensor array, returned as a complex ns-by-N matrix. Each column represents the received signal at the corresponding element of the array. Each row represents a snapshot.
rt — Theoretical covariance matrix
complex N-by-N matrix
Theoretical covariance matrix of the received signal, returned as a complex N-by-N matrix.
r — Sample covariance matrix
complex N-by-N matrix
Sample covariance matrix of the received signal, returned as a complex N-by-N matrix. N is the number of array elements. The function derives this matrix from x. If you specify this output argument, consider making ns greater than or equal to N. Otherwise, r is rank deficient.
More About
Azimuth Angle, Elevation Angle
The azimuth angle of a vector is the angle between the x-axis and the orthogonal projection of the vector onto the xy plane. The angle is positive in going from the x axis toward the y axis. Azimuth angles lie between –180 and 180 degrees. The elevation angle is the angle between the vector and its orthogonal projection onto the xy plane. The angle is positive when going toward the positive z-axis from the xy plane. By default, the boresight direction of an element or array is aligned with the positive x-axis. The boresight direction is the direction of the main lobe of an element or array. The elevation angle is sometimes defined in the literature as the angle a vector makes with the positive z-axis. The MATLAB® and Phased Array System Toolbox™ products do not use this definition. This figure illustrates the azimuth angle and elevation angle for a vector shown as a green solid line.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations: Does not support variable-size inputs.
Version History
Introduced in R2012b
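The narrowband model behind sensorsig (constant-modulus signals with random phases impinging on isotropic elements, plus complex Gaussian noise) can be sketched in NumPy. The function name sensorsig_like, the unit signal powers (scov = identity), and the linear-array geometry are illustrative assumptions, not the toolbox implementation:

```python
import numpy as np

def sensorsig_like(pos, ns, az_deg, noise_power=0.0, rng=None):
    """Rough NumPy analogue of the narrowband model described above.

    pos     : 1-D element positions along y, in wavelengths
    ns      : number of snapshots
    az_deg  : azimuth angles (degrees) of the incoming plane waves
    """
    rng = np.random.default_rng() if rng is None else rng
    pos = np.asarray(pos, dtype=float)
    az = np.deg2rad(np.atleast_1d(az_deg))
    # Steering matrix (N elements x M signals): phase shift of each plane
    # wave at each element of a linear array along y.
    A = np.exp(1j * 2 * np.pi * np.outer(pos, np.sin(az)))
    M = A.shape[1]
    # Constant-modulus unit-power signals with random phases.
    s = np.exp(1j * 2 * np.pi * rng.random((ns, M)))
    # White complex Gaussian noise with the given per-element power.
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal((ns, pos.size))
        + 1j * rng.standard_normal((ns, pos.size)))
    x = s @ A.T + noise                                   # ns-by-N snapshots
    rt = A @ A.conj().T + noise_power * np.eye(pos.size)  # theoretical covariance
    r = x.conj().T @ x / ns                               # sample covariance
    return x, rt, r

# Half-wavelength ULA, two signals at 30 and 60 degrees azimuth, -10 dB noise.
x, rt, r = sensorsig_like(0.5 * np.arange(8), 100, [30, 60], 10 ** (-10 / 10))
```

As in the MATLAB examples above, each diagonal entry of rt equals the number of unit-power signals plus the per-element noise power, and r approaches rt as ns grows.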
Sign-and-Magnitude - Rust for Python Developers - Unsigned, Signed Integers and Casting
This story was originally published on Medium.
Signed, Ones' Complement and Two's Complement
In computing, signed number representations are required to encode negative numbers in binary number systems. Let's examine sign-and-magnitude, ones' complement, and two's complement. Sign-and-Magnitude is also called Signed Magnitude. The first bit (called the most significant bit, or MSB) tells whether the number is positive (0) or negative (1). The rest are called the magnitude bits. As I mentioned before, signed integer types have a min and a max of -(2ⁿ⁻¹) to 2ⁿ⁻¹-1, where n stands for the number of bits. Since we use the first bit for the sign, we have n-1 in the 2ⁿ⁻¹. For 4 bits the min and max are -(2³) to 2³-1, which is -8 to +7. (Strictly, that range is for two's complement; sign-and-magnitude itself can only represent -(2³-1) to 2³-1, i.e. -7 to +7, because one bit pattern is spent on a second zero.) As you see in the diagram above, the positive and the negative have the same digits except for the sign bit. The problem with signed magnitude is that there are two zeros, 0000 and 1000.
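A small sketch of the encoding (the helper names are mine, not from the article): encode a signed value into a 4-bit sign-and-magnitude string and show the two representations of zero.

```python
def to_sign_magnitude(value, bits=4):
    """Encode value as a sign-and-magnitude bit string of the given width."""
    max_mag = 2 ** (bits - 1) - 1          # 7 for 4 bits
    if abs(value) > max_mag:
        raise ValueError(f"|{value}| does not fit in {bits}-bit sign-magnitude")
    sign = "1" if value < 0 else "0"       # MSB is the sign bit
    return sign + format(abs(value), f"0{bits - 1}b")

def from_sign_magnitude(s):
    """Decode a sign-and-magnitude bit string back to an int."""
    magnitude = int(s[1:], 2)
    return -magnitude if s[0] == "1" else magnitude

print(to_sign_magnitude(7))    # 0111
print(to_sign_magnitude(-7))   # 1111
# The two zeros: "0000" and "1000" differ in bits but decode to the same value.
print(to_sign_magnitude(0))    # 0000
print(from_sign_magnitude("1000"))  # 0  (this is "-0")
```

Note that the positive and negative encodings of the same magnitude differ only in the sign bit, exactly as in the article's diagram.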
Let $f_n(x) = \frac{nx}{1+nx^2}$.
a. Find the pointwise limit of $(f_n)$ for all $x \in (0, +\infty)$.
b. Is the convergence uniform on $(0, +\infty)$?
c. Is the convergence uniform on $(1, +\infty)$?
Please answer a-c.
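A sketch of the standard computation (my working, not part of the original question):

```latex
% a. Pointwise limit on (0, +\infty): divide numerator and denominator by n.
f_n(x) = \frac{nx}{1+nx^2} = \frac{x}{\tfrac{1}{n}+x^2}
       \xrightarrow[n\to\infty]{} \frac{x}{x^2} = \frac{1}{x} =: f(x).

% Distance to the limit, for x > 0:
\left| f_n(x) - f(x) \right|
  = \left| \frac{nx}{1+nx^2} - \frac{1}{x} \right|
  = \frac{\left| nx^2 - (1+nx^2) \right|}{x(1+nx^2)}
  = \frac{1}{x(1+nx^2)}.

% b. On (0, +\infty): the bound blows up as x \to 0^+
% (e.g. at x = 1/n the error is n/(1+1/n) \to \infty),
% so \sup_{x>0} |f_n - f| = \infty and convergence is NOT uniform.

% c. On (1, +\infty): x > 1 gives
\frac{1}{x(1+nx^2)} < \frac{1}{1+n} \xrightarrow[n\to\infty]{} 0,
% a bound independent of x, so the convergence IS uniform.
```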
On Saaty's and Koczkodaj's inconsistencies of pairwise comparison matrices
Bozóki, Sándor and Rapcsák, Tamás (2008) On Saaty's and Koczkodaj's inconsistencies of pairwise comparison matrices. Journal of Global Optimization, 42 (2). pp. 157-175. ISSN 0925-5001
The aim of the paper is to obtain some theoretical and numerical properties of Saaty's and Koczkodaj's inconsistencies of pairwise comparison matrices (PRM). In the case of 3x3 PRM, a differentiable one-to-one correspondence is given between Saaty's inconsistency ratio and Koczkodaj's inconsistency index based on the elements of the PRM. In order to compare Saaty's and Koczkodaj's inconsistencies for 4x4 pairwise comparison matrices, the average value of the maximal eigenvalues of randomly generated nxn PRM is formulated, the elements aij (i<j) of which were randomly chosen from the ratio scale 1/M, 1/(M-1), ..., 1/2, 1, 2, ..., M-1, M with equal probability 1/(2M-1), and aji is defined as 1/aij. By statistical analysis, the empirical distributions of the maximal eigenvalues of the PRM depending on the dimension number are obtained. As the dimension number increases, the shape of the distributions gets similar to that of the normal ones. Finally, the inconsistency of asymmetry is dealt with, showing a different type of inconsistency.
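For concreteness, both indices can be computed directly for a small PRM. This is a sketch: the function names are mine, and the value RI = 0.58 used for Saaty's random index at n = 3 is an assumption taken from commonly cited tables, not from this paper.

```python
import itertools
import numpy as np

def saaty_cr(A, ri=0.58):
    """Saaty's consistency ratio CR = ((lambda_max - n)/(n - 1)) / RI."""
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)  # lambda_max >= n for a reciprocal PRM
    ci = (lam_max - n) / (n - 1)
    return ci / ri

def koczkodaj_index(A):
    """Koczkodaj's inconsistency: worst relative error over all triads (i,j,k)."""
    n = A.shape[0]
    worst = 0.0
    for i, j, k in itertools.combinations(range(n), 3):
        a, b, c = A[i, j], A[i, k], A[j, k]   # consistency requires b == a * c
        worst = max(worst, min(abs(1 - b / (a * c)), abs(1 - a * c / b)))
    return worst

# A perfectly consistent 3x3 matrix: a13 = a12 * a23, so both indices are 0.
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(saaty_cr(A), koczkodaj_index(A))
```

Perturbing a single entry (e.g. setting a13 = 5 above) makes both indices positive, which is the setting in which the paper's one-to-one correspondence between them holds.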
Multi-model approach in a variable spatial framework for streamflow simulation
Articles | Volume 28, issue 7
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
Accounting for the variability of hydrological processes and climate conditions between catchments and within catchments remains a challenge in rainfall–runoff modelling. Among the many approaches developed over the past decades, multi-model approaches provide a way to consider the uncertainty linked to the choice of model structure and its parameter estimates. Semi-distributed approaches make it possible to account explicitly for spatial variability while maintaining a limited level of complexity. However, these two approaches have rarely been used together. Such a combination would allow us to take advantage of both methods. The aim of this work is to answer the following question: what is the possible contribution of a multi-model approach within a variable spatial framework compared to lumped single models for streamflow simulation? To this end, a set of 121 catchments with limited anthropogenic influence in France was assembled, with precipitation, potential evapotranspiration, and streamflow data at the hourly time step over the period 1998–2018. The semi-distribution set-up was kept simple by considering a single downstream catchment defined by an outlet and one or more upstream sub-catchments. The multi-model approach was implemented with 13 rainfall–runoff model structures, three objective functions, and two spatial frameworks, for a total of 78 distinct modelling options. A simple averaging method was used to combine the various simulated streamflow at the outlet of the catchments and sub-catchments. The lumped model with the highest efficiency score over the whole catchment set was taken as the benchmark for model evaluation.
Overall, the semi-distributed multi-model approach yields better performance than the different lumped models considered individually. The gain is mainly brought about by the multi-model set-up, with the spatial framework providing a benefit on a more occasional basis. These results, based on a large catchment set, evince the benefits of using a multi-model approach in a variable spatial framework to simulate streamflow.
Received: 24 Mar 2023 – Discussion started: 13 Apr 2023 – Revised: 20 Jan 2024 – Accepted: 15 Feb 2024 – Published: 04 Apr 2024
1.1 Uncertainty in rainfall–runoff modelling
A rainfall–runoff model is a numerical tool based on a simplified representation of a real-world system, namely the catchment (Moradkhani and Sorooshian, 2008). It usually computes streamflow time series from climatic data, such as rainfall and potential evapotranspiration. Many rainfall–runoff models have been developed according to various assumptions in order to meet specific needs (e.g. water resources management, flood and low-flow forecasting, hydroelectricity), with choices and constraints concerning the following (Perrin, 2000):
• the temporal resolution, i.e. the way variables and processes are aggregated over time;
• the spatial resolution, i.e. the way spatial variability is taken into account more or less explicitly in the model;
• the description of dominant processes.
Different models will necessarily produce different streamflow simulations. Intuitively, one often expects that working at a finer spatio-temporal scale should allow for a better description of the processes (Atkinson et al., 2002). However, this generally leads to additional complexity, i.e. a larger number of parameters, which requires more information to be estimated and often yields more uncertain results (Her and Chaubey, 2015). Uncertainty in rainfall–runoff models depends on the assumptions made regarding the choice of the general structure and also on the parameter estimates.
The variety of model structures and equations results in a large variability of streamflow simulations (Ajami et al., 2007). The spatial and temporal resolutions also result in different streamflow simulations. Due to the complexity of the real system and the lack of information to parameterize the various equations over the whole catchment, parameter values must be estimated. Usually, these parameters are determined for each entity of interest by minimizing the error induced by the simulation compared to an observation. The choice of the optimization algorithm, the objective function, and the streamflow transformation is therefore also a source of uncertainty. Since input data are used to derive model structures and parameters, the uncertainty associated with these data also contributes to the overall model uncertainty (Beven, 1993; Liu and Gupta, 2007; Pechlivanidis et al., 2011; McMillan et al., 2012). Various approaches aim to improve models by taking uncertainties into account, among which are multi-model approaches, which are the main topic of our research.
1.2 Multi-model approach
The multi-model approach consists in using several models in order to take advantage of the strengths of each one. This concept has been gaining momentum in hydrology since the end of the 20th century for simulation (e.g. Shamseldin et al., 1997) and forecasting (e.g. Loumagne et al., 1995). In this section, we distinguish between probabilistic and deterministic approaches. A probabilistic multi-model approach seeks an explicit quantification of the uncertainty associated with simulations or forecasts through statistical methods. The ensemble concept has commonly been applied in meteorology for several decades and has subsequently been widely used in hydrology to improve prediction (i.e. simulation or forecast). The international Hydrologic Ensemble Prediction Experiment initiative (Schaake et al., 2007) fostered the work on this topic.
The ensemble concept has also been adapted to rainfall–runoff models in order to reduce modelling bias: Duan et al. (2007) used multiple predictions made by several rainfall–runoff models using the same hydroclimatic forcing variables. An ensemble consisting of nine different models (from three different structures and parameterizations) was constructed and applied to three catchments in the United States. The predictions were then combined through a statistical procedure (Bayesian model averaging or BMA), which assigns larger weights to the members with higher probabilistic likelihood measures. The authors showed that the probabilistic multi-model approach improves flow prediction and quantifies model uncertainty compared to using a single rainfall–runoff model. Block et al. (2009) coupled both multiple climate and multiple rainfall–runoff models, increasing the pool of streamflow forecast ensemble members and accounting for cumulative sources of uncertainty. In their study, 10 scenarios were built for each of the three climatic models and applied to two rainfall–runoff models, i.e. 60 different forecasts. This super-ensemble was applied to the Iguatu catchment in Brazil and showed better performance than the hydroclimatic or rainfall–runoff model ensembles studied separately. Note that the authors tested three different combination methods: pooling, linear regression weighting, and a kernel density estimator. They found that the last technique seems to perform best. Velázquez et al. (2011) showed that the combination of different climatic scenarios with several models in a forecasting context leads to a reduction in uncertainty, particularly when the forecast horizon increases. However, such methods generate a large number of scenarios and can therefore become time-consuming and difficult to analyse. The probabilistic combination of simulations remains a major topic in the scientific community (see Bogner et al., 2017).
A deterministic multi-model approach seeks to define a single best streamflow time series, which often consists in a combination of the simulations of individual models. Shamseldin et al. (1997) tested three methods in order to combine model outputs: a simple average, a weighted average, and a non-linear neural network procedure. Their study was conducted on a sample of 11 catchments mainly located in southeast Asia using five different lumped models operating at the daily time step and showed that multiple models perform better than models applied individually. Similar conclusions were reached in the Distributed Model Intercomparison Project (DMIP) (Smith et al., 2004) conducted by Georgakakos et al. (2004) in simulation or by Ajami et al. (2006) for forecasting. In both articles, 6 to 10 rainfall–runoff models were applied at the hourly time step over a few catchments in the United States. These studies showed that a model that performs poorly individually can contribute positively to the multi-model set-up. Winter and Nychka (2010) specify that the composition of the multi-model set-up is important. Indeed, using 19 global climate models, the authors have shown that simple – or weighted – average combinations are more efficient if the individual models used produce very different results. Studies combining rainfall–runoff models by machine learning techniques led to the same conclusions (see, for example, Zounemat-Kermani et al., 2021, for a review). All of the aforementioned multi-model approaches only focus on the structural aspect of rainfall–runoff models. Some authors have also combined streamflow generated from different parameterizations of the same rainfall–runoff model. Oudin et al. (2006) proposed combining two outputs obtained with a single model (GR4J) from two calibrations, one adapted to high flows and the other to low flows, by weighting each of the simulations on the basis of a seasonal index (filling rate of the production reservoir). 
Such a method makes it possible to provide good efficiency in both low and high flows, whereas usually an a priori modelling choice must be made to focus on a specific streamflow range. More recently, Wan et al. (2021) used a multi-model approach based on four rainfall–runoff models calibrated with four objective functions on a large set of 383 Chinese catchments. The authors showed that methods based on weighted averaging outperform the ensemble members, except in low-flow simulation. They also highlighted the benefit of using several structures with different objective functions. The size of the ensemble was also studied, and it was found that using more than nine ensemble members does not further improve performance. Note that different results for optimal size can be found in the literature (Arsenault et al., 2015; Kumar et al., 2015). The aforementioned studies were carried out within a fixed spatial framework (e.g. lumped, semi-distributed, distributed), i.e. considering that the model structures implemented are relevant over the whole modelling domain. Implicitly, the underlying assumption is that a fixed rainfall–runoff model can capture the main hydrological processes affecting streamflow in a catchment (and its sub-catchments). However, this may not be true. Introducing a variable spatial modelling framework into the multi-model approach could help to overcome this issue.
1.3 Scope of the paper
This study intends to test whether streamflow simulation can be improved through a multi-model approach. More precisely, we aim here to deal with the uncertainty stemming from (i) the spatial dimension (e.g. catchment division, aggregation of hydroclimatic forcing, boundary conditions), (ii) the general structure of the model (e.g. formulation of water storages, filling/draining equations), and (iii) the parameter estimation (e.g. calibration algorithm, objective function, calibration period).
However, we decided here not to focus on quantifying these uncertainties individually (as could be done with a probabilistic ensemble); instead, we focus on the aggregated impact of all uncertainties by comparing the deterministic averaging combination of several models with a single one. Ultimately, our aim is to answer the following question: what is the possible contribution of a multi-model approach within a variable spatial framework compared to lumped single models for streamflow simulation? This study follows on from the work of Squalli (2020), who carried out exploratory multi-model tests on lumped and semi-distributed configurations at a daily time step. The remainder of the paper is organized as follows: first, the catchment set, the hydroclimatic data, the spatial framework, and the rainfall–runoff models used for this work are presented. The multi-model methodology and the calibration/evaluation procedure are described. Then we present, analyse, and discuss the results. Last, we summarize the main conclusions of this work and discuss its perspectives.
2.1 Catchments and hydroclimatic data
This study was conducted at an hourly time step using precipitation, potential evapotranspiration, and streamflow time series over the period 1998–2018 (Delaigue et al., 2020). Precipitation (P) was extracted from the radar-based COMEPHORE re-analysis produced by Météo-France (Tabary et al., 2012), which provides information at a 1 km^2 resolution and which has already been extensively used in hydrological studies (Artigue et al., 2012; van Esse et al., 2013; Bourgin et al., 2014; Lobligeois et al., 2014; Saadi et al., 2021). Potential evapotranspiration (E_0) is calculated with the formula proposed by Oudin et al. (2005).
This equation was chosen for its simplicity, as the only inputs required are daily air temperature (from the SAFRAN re-analysis of Météo-France; see Vidal et al., 2010) and extraterrestrial radiation (which only depends on the Julian day and the latitude). Once calculated, the daily potential evapotranspiration was disaggregated to the hourly time step using a simple parabola (Lobligeois, 2014). These steps for converting daily temperature data into hourly potential evapotranspiration can be carried out directly in the airGR software (Coron et al., 2017, 2021; developed using the R programming language; R Core Team, 2020), which was used for this work. We did not use any gap-filling method since all climatic data were complete during the study period. Streamflow time series (Q) were extracted from the national streamflow archive Hydroportail (Dufeu et al., 2022), which makes available the data produced by hydrometric services in regional environmental agencies in charge of measuring flows in France, as well as by other data producers (e.g. hydropower companies and dam managers). Before being archived, flow data undergo quality control procedures applied by data producers, with corrections when necessary. Quality codes are also available, although this information is not uniformly provided for all stations. These data are freely available on the Hydroportail website and are widely used in France for hydraulic and hydrological studies. Here, we focus on simulating streamflow at the main catchment outlet, addressing the issue from a large-sample-hydrology (LSH) perspective (Andréassian et al., 2006; Gupta et al., 2014), in which many catchments are used. For this study, 121 catchments spread over mainland France with limited human influence were selected (Fig. 1). The first criterion used to select catchments is based on streamflow availability. Here, a threshold of 10% maximum gaps per year over the whole period (1999–2018) was considered.
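As an illustration, the two steps above (the Oudin et al., 2005, temperature-based formula, then the parabolic disaggregation to hourly values) can be sketched as follows. The constants in the Oudin formula follow its published form, but the exact parabola used by Lobligeois (2014) is not given in the text, so the symmetric shape below, as well as the function names, are only illustrative assumptions; the actual computations in this study were done with the airGR R package.

```python
import numpy as np

def oudin_pet_daily(temp_c, extrater_rad):
    """Daily potential evapotranspiration (mm/day) after Oudin et al. (2005).

    temp_c       : daily mean air temperature (deg C)
    extrater_rad : extraterrestrial radiation (MJ m-2 day-1), a function of
                   the Julian day and the latitude only.
    """
    lam, rho = 2.45, 1000.0  # latent heat of vaporization (MJ kg-1), water density (kg m-3)
    pet_m = extrater_rad / (lam * rho) * (temp_c + 5.0) / 100.0  # in m/day
    return max(pet_m * 1000.0, 0.0)  # mm/day; zero when T <= -5 deg C

def disaggregate_parabola(pet_daily):
    """Spread a daily PET total over 24 hourly values with a parabola
    peaking at midday (an assumed shape, zero at 0 h and 24 h)."""
    h = np.arange(24) + 0.5            # mid-points of the hourly intervals
    w = np.maximum(h * (24.0 - h), 0)  # parabolic weights
    w /= w.sum()                       # normalize so the daily total is preserved
    return pet_daily * w
```

By construction the 24 hourly values sum back to the daily total, which is the property any disaggregation scheme must preserve.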
However, this criterion may be slightly too restrictive (e.g. removal of a station installed in 2000 and presenting continuous data since then). In order to overcome this problem, we decided to allow this threshold to be exceeded for a maximum of 3 years over the whole period considered. It is therefore a compromise between having a large number of catchments for the study and having a long enough period for model calibration and evaluation. The catchment selection also considered the level of human influence. In France, the vast majority of catchments have human influence (e.g. dams, dikes, irrigation, or urbanization). Here, streamflow with limited human influence corresponds to gauged stations where the streamflow records have a hydrological behaviour considered close enough to a natural streamflow (e.g. low water withdrawals, influences far enough upstream to be sufficiently diluted downstream) not to strongly limit model performance. This selection was based on numerical indicators of the influence of dams and on local expertise. Although snow-dominant or glacial regimes were rejected (due to lack of data or anthropogenic influence), the various catchments selected offer a wide hydroclimatic variability (Table 1).
2.2 Principle of catchment spatial discretization
In this work, two spatial frameworks are used: lumped and semi-distributed. A lumped model considers the catchment as a single entity, while the semi-distribution seeks to divide this catchment into several sub-catchments in order to partly take into account the spatial variability of hydroclimatic forcing and physical characteristics within the catchment. Generally, the division of a catchment is defined on the basis of expertise and requires good knowledge of its characteristics (hydrological response units based on geology or land use). From a large-sample hydrology perspective, an automatic definition of semi-distribution was needed.
To this end, we simplified the problem by looking at a first-order distribution, i.e. a single downstream catchment defined by an outlet and one or more upstream sub-catchments. The underlying assumption is therefore that a second-order distribution (i.e. further dividing the upstream sub-catchments into a few smaller sub-catchments) will have a more limited impact on model behaviour than the first, when considering the main downstream outlet. This assumption is based on the work of Lobligeois et al. (2014), which showed that a multitude of sub-basins of approximately 4 km^2 provides limited gain compared to a few sub-catchments of 64 km^2. Under these hypotheses, we developed an automatic procedure to select semi-distributed configurations nested in each other, which we termed “Matryoshka doll”. This approach consists in creating different simple and distinct combinations of upstream–downstream gauged catchments starting from the main downstream station and progressively moving upstream. Specifically, the Matryoshka doll selection approach (Fig. 2) was implemented as follows:
1. Select a downstream station defining a catchment with one or more gauged internal points.
2. Restrict the upstream sub-catchment partitioning to a first-order split, i.e. going back only to the nearest upstream station(s) without going back to the stations further upstream and respecting a size criterion to avoid sensitivity issues which may result from a too-small or too-large downstream catchment (in this study, we limited the area of the upstream sub-catchments to a value between 10% and 70% of the area of the total catchment). This step creates a combination of stations defining a single downstream catchment (which receives the upstream contributions).
3. If the upstream catchments have one or more internal gauged points, repeat step 1 and consider them as a downstream catchment.
The Matryoshka doll approach allows us to create distinct configurations (i.e.
there cannot be two different semi-distributed configurations for the same downstream catchment) and therefore avoids over-sampling issues. The semi-distributed approach consists in performing lumped modelling in each sub-catchment by linking them through a hydraulic routing scheme. Thus, we need to distinguish between the first-order upstream catchment (Fig. 2, dark grey), where we applied a lumped rainfall–runoff model, and the downstream catchment (Fig. 2, light grey), where the rainfall–runoff model was applied after integrating the upstream inflows using a runoff–runoff model (i.e. a hydraulic routing scheme). It is therefore important to differentiate between the routing part of hydrological models (enabling us to distribute in time the quantity of water contributing to the streamflow in the sub-catchment of interest, i.e. the intra-sub-catchment propagation) and the hydraulic routing scheme (enabling us to propagate the streamflow simulated at one outlet to the downstream catchment, i.e. the inter-sub-basin propagation). For this study, a single hydraulic routing scheme was applied. It is a time lag between the upstream and downstream outlets, as done by Lobligeois et al. (2014). In order to reduce the computation time, the authors propose calculating a lumped parameter C_0 corresponding to the average flow velocity over the downstream catchment. Since the hydraulic lengths d_i (i.e. the distance between the downstream outlet and each upstream sub-catchment) are known, the transit time T_i can be calculated as follows:

$T_i = \frac{d_i}{C_0}. \qquad (1)$

This approach is fairly simple but offers performance comparable to that of more complex routing models such as lag-and-route schemes (with linear or quadratic reservoirs) that account for peak-shaving phenomena (results not shown for the sake of brevity).
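The pure time-lag scheme of Eq. (1) amounts to shifting each upstream hydrograph by its transit time before summing the contributions at the downstream outlet. A minimal sketch is given below; the function name and the rounding of the lag to whole hourly time steps are our own illustrative choices, not those of the study's implementation.

```python
import numpy as np

def lag_upstream_inflows(upstream_q, hydraulic_lengths_m, c0_m_per_s, dt_s=3600):
    """Propagate upstream simulated streamflow to the downstream outlet
    with a pure time lag, T_i = d_i / C0 (Eq. 1).

    upstream_q          : list of hourly streamflow arrays, one per upstream
                          sub-catchment (m3/s)
    hydraulic_lengths_m : distances d_i from each upstream outlet to the
                          downstream outlet (m)
    c0_m_per_s          : lumped average flow velocity C0 over the
                          downstream catchment (m/s)
    Returns the summed, lagged upstream contribution at the outlet.
    """
    n = len(upstream_q[0])
    total = np.zeros(n)
    for q, d in zip(upstream_q, hydraulic_lengths_m):
        lag_steps = int(round(d / c0_m_per_s / dt_s))  # T_i rounded to whole steps
        lagged = np.zeros(n)
        lagged[lag_steps:] = q[: n - lag_steps]        # shift by the transit time
        total += lagged
    return total
```

With a single upstream sub-catchment 7.2 km away and C0 = 1 m/s, the upstream hydrograph simply arrives two hourly steps later at the outlet.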
In the context of this study, a model is defined as a configuration composed of a model structure and an associated set of parameters (i.e. which may vary according to the objective function selected for calibration). These models will be applied independently in a lumped or a semi-distributed modelling framework. For this study, the airGRplus software (Coron et al., 2022), based on the works of Perrin (2000) and Mathevet (2005), was used. It includes various rainfall–runoff model structures running at the daily time step. airGRplus is an add-on to airGR (Coron et al., 2017, 2021). An adaptation of the work made by Perrin and Mathevet was carried out to use these structures at the hourly time step (mostly ensuring consistency of parameter ranges when changing simulation time steps and changing fixed time-dependent parameters). Finally, a set of 13 structures available in airGRplus, already widely tested in France and adapted to the hourly time step, was selected (Table 2). They are simplified versions of original rainfall–runoff models taken from the literature (except GR5H, which corresponds to the original version). To avoid confusion with the original models, a four-letter abbreviation was used here. Since the various catchments used for this study do not experience much snowfall, no snow module was implemented. The objective function used for parameter calibration is the Kling–Gupta efficiency (KGE) (Gupta et al., 2009), defined by

$\mathrm{KGE} = 1 - \sqrt{(r - 1)^2 + (\alpha - 1)^2 + (\beta - 1)^2}, \qquad (2)$

with r the correlation, α the ratio between standard deviations, and β the ratio between the means (i.e. the bias) of the observed and simulated streamflow. Thirel et al. (2023) showed that streamflow transformations are adapted to a specific modelling objective (e.g. low flows, floods).
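For readers more comfortable with code than notation, the KGE of Eq. (2) can be computed directly from its three components. This is a minimal NumPy sketch (the function name and implementation are ours, not airGR's):

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta efficiency (Gupta et al., 2009), Eq. (2)."""
    r = np.corrcoef(obs, sim)[0, 1]    # linear correlation
    alpha = np.std(sim) / np.std(obs)  # ratio of standard deviations (variability)
    beta = np.mean(sim) / np.mean(obs) # ratio of means (bias)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

A perfect simulation gives KGE = 1; doubling every simulated value keeps r = 1 but sets α = β = 2, giving KGE = 1 − √2 ≈ −0.41, which illustrates how variability and bias errors are penalized jointly.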
However, they highlighted that it is difficult to represent a wide range of streamflow with a single transformation. Following this study, we selected three transformations, two of which target high flows (Q^+0.5) and low flows (Q^−0.5), respectively, and one which is intermediate (Q^+0.1). The algorithm used for model calibration comes from Michel (1991) and is available in the airGR package (Coron et al., 2017, 2021). It combines a global and a local optimization approach. First, a coarse screening of the parameter space is performed using either a rough predefined grid or a list of parameter sets. Then a steepest-descent local search algorithm is performed, starting from the result of the screening procedure. Such a calibration (over 10 years of hourly data) takes about 0.5 to 6 min (depending mainly on the catchment considered and the number of free parameters) and gives a single parameter set for a chosen objective function. Thus, we did not focus explicitly here on parameter uncertainty; i.e. we did not use multiple parameter sets for a single objective function as can be done with Monte Carlo simulations, for example. Such an approach would be interesting to consider as a perspective for this work but is not covered here due to computation time constraints. In a semi-distributed context, the calibration is carried out sequentially, i.e. in each sub-catchment from upstream to downstream. Note that the calibration takes slightly more time in the downstream catchment due to the additional free parameter of the routing function. Overall, 13 structures and three objective functions were used, resulting in 39 models. Applied over two different spatial frameworks, a total of 78 distinct modelling options were available for this study.
2.4 Multi-model methodology
The multi-model approach consists in running various rainfall–runoff models. More specifically, here, we are interested in a deterministic combination of the different streamflow simulations.
Let us recall that for our study, a model corresponds to the association of a structure and an objective function. By definition, a model is imperfect. Indeed, the different structures have been designed to meet different objectives (e.g. water resources management, forecasting, and climate change) in different geographical or geological contexts (e.g. high mountains, karstic zones, and alluvial plains). The calibration choices made to estimate the parameters (e.g. optimization algorithm, objective function, streamflow transformation) will also eventually impact the simulation. The hypothesis made here is that the multi-model approach makes it possible to take advantage of the strengths of each model. In the lumped framework, we consider every model in each catchment. In the semi-distributed framework, we consider every model in each sub-catchment. As the calibration is sequential, the various models are first applied to each upstream sub-catchment, and then their simulated streamflow is propagated to the downstream catchment to be modelled. However, transferring every upstream possibility to the downstream catchment is excessively time-consuming. Therefore, the simulated streamflow in each upstream sub-catchment was first set with an a priori choice, whatever the model used, and then transferred to the downstream catchment (this choice is discussed in Sect. 4.4). The multi-model framework enables these different streamflow simulations to be combined in each catchment and sub-catchment in order to create multiple additional simulations. At the downstream outlet, we will consider mixed combinations, using streamflow simulations from lumped and semi-distributed modelling (Fig. 3). To this end, deterministic averaging methods were used. Here, we will focus on a simple average combination (SAC), i.e.
giving an equal weight to all models combined, defined by

$Q_{\mathrm{SAC}} = \frac{\sum_{i=1}^{n} Q_i}{n}, \qquad (3)$

with Q_SAC the streamflow from a simple average combination and Q_i the simulated streamflow with a model i selected among the n models. Note that a weighted average combination (WAC) was also tested but did not significantly change the mean results and was therefore not used further (discussed in Sect. 4.3). The number of possible combinations on a given outlet from the total number of available streamflow simulations increases exponentially and can be computed by

$n_{\mathrm{c}} = \sum_{i=2}^{n_{\mathrm{sim}}} \binom{n_{\mathrm{sim}}}{i}, \qquad (4)$

with i the number of streamflow simulations to choose from the total number of available streamflow simulations n_sim. As an indication, there are approximately 1000 combinations for a streamflow ensemble simulated by 10 models, but there are over 1 000 000 solutions for 20 models in a lumped framework. Although a single combination is quick to perform (between 0.1 and 0.2 s), the number of combinations quickly becomes a limiting factor in terms of computation time. For this study, combinations will be set to a maximum of four different streamflow time series among the total number of models available, i.e. approximately 1 500 000 different combinations (discussed in Sect. 4.2):

$n_{\mathrm{c}} = \sum_{i=2}^{4} \binom{78}{i} \approx 1\,500\,000. \qquad (5)$

The objective of these combinations is to create a large set of simulations from which the best multi-model approach will be selected. Here we aim to obtain simulations that can perform well over a wide range of streamflow and that can be applied to a large number of French catchments.
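Equations (3)–(5) are straightforward to verify numerically. The sketch below (function names are ours) counts the combinations of 2 to 4 members among 78 simulations and implements the equal-weight average of Eq. (3):

```python
import math
import numpy as np

def n_combinations(n_sim, k_max):
    """Number of multi-model combinations of 2 to k_max members among
    n_sim available streamflow simulations (Eqs. 4 and 5)."""
    return sum(math.comb(n_sim, i) for i in range(2, k_max + 1))

def simple_average_combination(simulations):
    """Equal-weight combination of streamflow time series (Eq. 3)."""
    return np.mean(np.asarray(simulations), axis=0)
```

For example, `n_combinations(78, 4)` returns 1 505 504, the "approximately 1 500 000" of Eq. (5), while `n_combinations(10, 10)` returns 1013, matching the "approximately 1000 combinations for 10 models" quoted above.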
Therefore, the best models (and multi-model approaches) correspond to those which achieve the highest performance in each catchment on average during the evaluation periods.
2.5 Testing methodology
A split-sample test (Klemeš, 1986), commonly used in hydrology, was implemented. This practice consists in separating a streamflow time series into two distinct periods, the first for calibration and the second for evaluation, and then exchanging these two periods. The two periods chosen are 1999–2008 and 2009–2018. An initialization period of at least 2 years was used before each test period to avoid errors attributable to the wrong estimation of initial conditions within the rainfall–runoff model. For this study, results will only be analysed for evaluation (i.e. over the two untrained periods). Model performance was evaluated on two levels.
• With a general criterion. Model performance was evaluated with a composite criterion focusing on a wide range of streamflow, defined as follows:

$\mathrm{KGE}_{\mathrm{comp}} = \frac{\mathrm{KGE}(Q^{+0.5}) + \mathrm{KGE}(Q^{+0.1}) + \mathrm{KGE}(Q^{-0.5})}{3}. \qquad (6)$

• With event-based criteria. Model performance was evaluated with several criteria characterizing floods and low flows. In a context of high flows (5447 events selected), the timing of the peak (i.e. the date at which the flood peak was reached), the flood peak (i.e. the maximum streamflow value observed during the flood), and the flood flow (i.e. the mean streamflow during the event) were analysed. In a context of low flows (1332 events selected), the annual low-flow duration (i.e. the number of low-flow days) and severity (i.e. the largest cumulative streamflow deficit) were studied.
Table 3 provides typical ranges of values of flood and low-flow characteristics over the catchment set. Please refer to Appendices A and B for more details on the event selection method. In a multi-model framework, the best (i.e.
giving the best performance over the evaluation periods) model or combination of models for each catchment can be determined. Therefore, this model or combination of models will differ from one catchment to another. For this work we chose as a benchmark a lumped one-size-fits-all model (i.e. the same model whatever the catchment), which is the hydrological modelling approach usually used. Results are presented from lumped (L) single models (SMs), i.e. run individually, to more complex semi-distributed (SD) multi-model (MM) approaches (see Fig. 3). The mixed (M) multi-model approach allows for a variable spatial framework combining both lumped and semi-distributed approaches. The aim of this section is to present the results obtained with each modelling framework and their respective contributions.
3.1 Lumped single models (LSMs)
In this part, each model was run individually in a lumped mode (see Fig. 3). Parameters of the 13 structures were calibrated successively with the three objective functions, resulting in 39 lumped models. Figure 4 shows the distribution of the performance of lumped single models over the 121 downstream outlets and over the evaluation periods. As a reminder, the KGE_comp used for the evaluation is a composite criterion which considers different transformations in order to provide an overall picture of model performance for a wide range of streamflow (Eq. 6). Overall, lumped single models give median KGE_comp values between 0.70 and 0.88. This upper value is reached with the GR5H structure calibrated with a generalist objective function (KGE applied to Q^+0.1), which will be used in the paper as a benchmark. Since efficiency criteria values depend on the variety of errors found in the evaluation period (see, for example, Berthet et al., 2010), this may impact the significance of performance differences between models and ultimately their comparison. Therefore, we tried to quantify the sampling uncertainty in KGE scores.
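One simple way to picture such sampling uncertainty is to resample the evaluation period in blocks and look at the spread of the recomputed score. The sketch below is only an illustration of that idea under our own assumptions (function name, yearly-block resampling, plain bootstrap); it is much cruder than the combined bootstrap–jackknife estimator actually applied in this study.

```python
import numpy as np

def bootstrap_score_uncertainty(obs, sim, block_len=8760, n_boot=1000, seed=0):
    """Rough sampling uncertainty of the KGE: resample yearly blocks of the
    evaluation period with replacement and return the spread of the
    recomputed scores (block_len = 8760 hourly steps ~ 1 year)."""
    rng = np.random.default_rng(seed)
    n_blocks = len(obs) // block_len
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_blocks, n_blocks)  # blocks drawn with replacement
        sel = np.concatenate(
            [np.arange(i * block_len, (i + 1) * block_len) for i in idx]
        )
        o, s = obs[sel], sim[sel]
        r = np.corrcoef(o, s)[0, 1]
        score = 1 - np.sqrt((r - 1) ** 2 + (np.std(s) / np.std(o) - 1) ** 2
                            + (np.mean(s) / np.mean(o) - 1) ** 2)
        scores.append(score)
    return np.std(scores)  # spread of the score due to period sampling
```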
The bootstrap–jackknife methodology proposed by Clark et al. (2021) was applied over our sample of 121 catchments for the 39 lumped models. It showed a median sampling uncertainty in KGE scores of 0.02 (Appendix C). The objective function applied during the calibration phase seems to have a variable impact on performance depending on the structure. For example, GR5H shows a similar performance regardless of the transformation applied, whereas TAN0 shows a large variation. The strong decrease in the 25% quantile of the latter is linked to the great difficulty for this structure to represent the low-flow component of KGE_comp when it is calibrated with more weight on high flows (KGE applied to Q^+0.5). The reverse is also true, since a structure optimized with more weight on low flows (KGE applied to Q^−0.5) will have more difficulty representing the high-flow component of KGE_comp (e.g. NAM0 or GARD). Although the differences remain limited, the highest KGE_comp scores are achieved with a more generalist objective function (KGE applied to Q^+0.1). These results confirm the conclusions reached by Thirel et al. (2023). The left part of Fig. 5 highlights the results obtained by selecting the best lumped single model in each catchment (LSM). In this modelling framework, the median KGE_comp is 0.91 (0.03 higher than that of the one-size-fits-all model used as a benchmark) with low variation (between 0.88 and 0.93 for the 25% and 75% quantiles). The right part of Fig. 5 indicates the number of catchments where each lumped single model is defined as the best. As expected, the models with high performance over the whole sample are selected more often than the others. However, two-thirds of the lumped single models have been selected at least once as the best in a catchment. Similar results can be found in Perrin et al. (2001) or Knoben et al. (2020).
3.2 Semi-distributed single models (SDSMs)
Remember that the semi-distribution with a single model (see Fig.
3) is done sequentially, i.e. from upstream to downstream. Thus, each upstream sub-catchment is first modelled in a lumped mode with a single structure–objective function pair. Then, the streamflow simulated upstream is propagated and the same model is calibrated and applied to the downstream catchment. This procedure is repeated for all 39 (13 structures and three objective functions) available models. Figure 6 shows the difference between KGE[comp] values obtained with lumped single models and semi-distributed single models. The semi-distributed approach seems to have a positive overall impact on the performance, although some deterioration can also be observed. Overall, the differences are limited (median of 0.02). However, the semi-distributed approach seems to have a variable impact on performance depending on the structure. For example, CRE0 shows a similar performance regardless of the spatial framework applied, whereas the performance of GARD improved with a spatial division. Although there is no clear trend in the impact of the semi-distribution in relation to the transformation applied during calibration, it seems that models calibrated on Q^+0.1 (i.e. giving “equal” weight on all flow ranges) show lower differences. The lumped models with the highest performance seem to benefit less (if at all) from semi-distribution. On the other hand, lumped models with lower performance seem to benefit from the spatial discretization. Nevertheless, Fig. 7 highlights that overall, if the focus is set on the best model in each catchment, the difference between the semi-distributed and lumped single model remains small (no deviation for the quartiles and only 0.005 for the median). Once again, two-thirds of the semi-distributed single models have been selected at least once as the best in a catchment.

3.3 Lumped multi-model (LMM) approach

Here, each model was run in a lumped mode, and model outputs were combined (see Fig. 3).
The multi-model approach used in this work is a deterministic combination with simple average (SAC) and will be limited to a combination of a maximum of four models among the available lumped models, i.e. approximately 92 000 different combinations. The left part of Fig. 8 shows the comparison between performance obtained with the benchmark and when the best lumped multi-model approach is selected in each catchment. The combination of lumped models enables an increase of 0.06 in the median KGE[comp] value compared with the benchmark and of 0.03 compared with the LSM approach. While this gain may seem small at first glance, it is quite substantial since the performance obtained with the benchmark was already very high (median of 0.88), which makes improvements increasingly difficult. The right part of Fig. 8 shows the number of times each model is selected within the best-performing multi-model approach. As expected, it highlights the benefits of a wide choice of models (similar results were found by Winter and Nychka, 2010). Indeed, even if some of the models had never been used for the benchmark simulation (in a lumped single-model framework), the multi-model approach shows that each of them can become a contributing factor to improve streamflow simulation in at least one catchment. Moreover, the models that are most often selected in the model combinations are not always the best models on their own. For example, TOPM, calibrated to favour high flows (Q^+0.5), was only used in the benchmark on 1.5% of the catchments, but it is selected in the multi-model approach on 24% of the catchments. However, the converse does not seem to be true since a model with good individual performance always seems to be a key element of the multi-model approach (e.g. GR5H, PDM0, MORD).

3.4 Semi-distributed multi-model (SDMM) approach

For this study, the semi-distributed multi-model approach (see Fig. 3) of the target catchment is performed in two steps.
First, the best multi-model combination (i.e. the combination of two to four models among the 39 available giving the highest performance over the evaluation periods) in each upstream sub-catchment is identified. In the second step, the simulated mean upstream streamflow is propagated downstream, and the different models are applied and then combined (by two, three, or four) on the downstream catchment. Thus, approximately 92 000 different possible combinations of simulated streamflow are obtained at the outlet of the total catchment. Figure 9 shows very similar results to the LMM (Sect. 3.3). Indeed, we find again an improvement in the median of 0.06 compared to the benchmark, and all the models are used on at least one downstream catchment. Moreover, the distribution of the model count on the downstream catchment seems to be more homogeneous between the different members.

3.5 Mixed multi-model (MMM) approach

Here, the mixed multi-model approach represents a combination of all the approaches tested above. This method allows a combination of models for a variable spatial framework (see Fig. 3) for each catchment. In this context, the 39 models applied to a lumped and a semi-distributed framework are used, resulting in 78 modelling options, each giving a different streamflow at the outlet (Fig. 10). These simulations can then be combined (by two, three, or four) downstream in order to define the best mixed multi-model approach in each catchment among more than 1 500 000 possibilities. The left part of Fig. 11 shows the performance obtained with the best mixed multi-model approach. The combination of lumped and semi-distributed models outperforms the benchmark, but results are still close to those obtained using the LMM or SDMM approaches. The right part of Fig. 11 highlights the benefits gained from the wide choice of models and a variable spatial framework.
Indeed, almost every lumped and semi-distributed model is used in order to improve the representation of streamflow with a multi-model approach in at least one catchment.

3.6 Modelling framework comparison

Figure 12 compares the performance obtained when the best (multi-)model is selected in each catchment depending on the modelling approach used. First, all approaches tested outperform the benchmark. Then, the best LSM and SDSM distributions are almost identical, and the same results are obtained with LMM and SDMM. This shows a limited gain of the semi-distribution approach compared to a lumped framework. However, the LMM and SDMM outperformed the LSM and SDSM. Therefore, the increase in performance is mainly due to the multi-model aspect. Finally, the highest performance is obtained with the MMM, but it remains close to the performance achieved by the LMM and SDMM. Figure 13 shows the best-performing modelling framework for each catchment. Multi-model approaches are considered to be better than single models for the vast majority of catchments. In general, the MMM approach seems to be the most suitable for most of the catchments. However, if we accept a deviation of 0.005 (epsilon value arbitrarily set) from the optimal value, we notice that the lumped multi-model approach is sufficient on a large part (about 60%) of the catchments. There are no clear regional trends on which catchments require a more complex modelling framework. Figure 14 shows the evaluation of the different modelling frameworks at the event scale. As a reminder, only the best (multi-)model in each catchment is analysed for each approach tested. Typical ranges of values of flood and low-flow characteristics over the catchment set are provided in Table 3. The flood peak is late by about 1 h for the single-model approaches, whereas with a multi-model framework, it comes 1 h too early. The most extreme values correspond to large catchments with slow responses and a strong base flow impact.
In addition, multi-model approaches seem to have a lower variability for this criterion. The peak flow is slightly underestimated with a median of −0.05 mm h^−1, like the flood flow, which shows a median deficit of 0.02 mm h^−1. There does not seem to be a clear trend in the contribution of a complex mixed multi-model approach compared to a lumped single model (benchmark).

Here, the first objective is to discuss the results and answer the initial question: what is the possible contribution of a multi-model approach within a variable spatial framework for the simulation of streamflow over a large set of catchments? The second objective is to discuss the methodological choices by analysing them independently to determine to what extent they impact the results.

4.1 What is the possible contribution of a multi-model approach within a variable spatial framework?

First, our results confirmed the findings previously reported in the literature. Indeed, the multi-model approach outperformed the single models for a large sample of catchments (Shamseldin et al., 1997; Georgakakos et al., 2004; Ajami et al., 2006; Winter and Nychka, 2010; Velázquez et al., 2011; Fenicia et al., 2011; Santos, 2018; Wan et al., 2021). Moreover, there is no clear benefit (on average) of using a semi-distributed framework because it degrades the streamflow simulation in some catchments and improves it in others (Khakbaz et al., 2012; Lobligeois et al., 2014; de Lavenne et al., 2016). The originality of our study is to combine these two approaches while providing a variable spatial framework. The mixed multi-model approach thus seems to benefit from the strengths of both methods. Most of the improvements compared to our benchmark come from the multi-model approach. On the other hand, although for a large part of the sample the differences are negligible, the variable spatial framework seems to generate an increase in the mean KGE values of up to 0.03 compared to a lumped multi-model approach (Fig. 15).
It should be noted that a similar difference is observed for the single models. By design, the MMM does not deteriorate the performance when compared to what can be initially obtained with the LMM and SDMM approaches. Generally, this study has shown that a large number of models enables a better performance regardless of the streamflow range over a large sample of catchments. However, this methodology can be computationally expensive (due to the exponential number of combinations). Winter and Nychka (2010) showed that in a multi-model framework, a key point is not only the number of models but also their differences. However, it is difficult to explicitly quantify this difference a priori. Various configurations of small pools of four models (i.e. structure–objective function pairs) were tested before selecting only the best of them (called “simplified MMM”; see Table 4 for more details). A mixed multi-model test was performed over this sample in order to reduce the complexity brought about by a large number of models. As a reminder, the procedure is the following:

• In a semi-distributed framework, each of the four lumped models was applied and then combined for each upstream sub-catchment of the sample. Then, the best combination (i.e. giving the highest KGE value over the evaluation periods) at each upstream outlet was propagated through the downstream catchment where the subsample of models was also used.

• In a lumped framework, the modelling of each total catchment was performed with the different members of the simplified MMM.

• Downstream simulations (four from the lumped approach and four from the semi-distributed approach) were then combined (resulting in 162 combinations), and the best multi-model combination at the outlet was selected.
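The combination counts quoted in this section (162 for the simplified MMM, roughly 92 000 for the LMM, over 1.5 million for the MMM) follow directly from binomial coefficients; counting combinations of one up to four members (i.e. treating single models as size-one combinations, which reproduces the figures in the text) gives:

```python
from math import comb

def n_combinations(n_members, max_size=4, min_size=1):
    """Number of distinct combinations of min_size to max_size
    members drawn from n_members available simulations."""
    return sum(comb(n_members, k) for k in range(min_size, max_size + 1))

print(n_combinations(39))  # 92170  -> the "approximately 92 000" LMM combinations
print(n_combinations(78))  # 1505582 -> the "more than 1 500 000" MMM possibilities
print(n_combinations(8))   # 162    -> the simplified MMM combinations at the outlet
```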
Figure 16 shows that with the simplified MMM, the multi-model approach in a variable framework gives better KGE values than the LSM approach (which uses each lumped model independently and then selects the best one in each catchment). However, the performance obtained with the MMM approach is not reached, which again shows the added value of a wide choice of models.

4.2 What is the optimal number of models to combine in a multi-model framework?

The optimal number of models to combine in a multi-model framework varies between past studies. For example, Wan et al. (2021) found that a limited improvement is achieved when more than nine models are combined, Arsenault et al. (2015) concluded that seven models were sufficient, and Kumar et al. (2015) highlighted a combination of five members. This optimal number therefore seems to vary with the catchment sample but also according to the number of models used and their qualities. Figure 17 shows the results obtained by the best lumped (multi-)model in each catchment according to the number of members combined. The largest improvement comes from a simple combination of two models, and the performance gain becomes limited from a combination of four different models (at least in our study).

4.3 Is a weighted average combination always better than a simple average approach?

The weighted average combination (WAC) consists of assigning a weight that can be different for each model (Eq. 4), as opposed to the simple average combination (SAC), which considers each model in an identical manner (Eq. 3).

$$Q_{\mathrm{WAC}}=\frac{\sum_{i=1}^{n}\alpha_{i}\,Q_{i}}{\sum_{i=1}^{n}\alpha_{i}},\qquad(7)$$

with Q[WAC] the streamflow from a weighted average combination, Q[i] the simulated streamflow with a model i selected among the n models, and α[i] its attributed weight (between 0 and 1).
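Equation (7) maps directly to a few lines of code; a minimal sketch, assuming the n simulated series are stored as equal-length arrays (SAC is the special case with identical weights):

```python
import numpy as np

def weighted_average_combination(flows, weights):
    """Eq. (7): weighted average of n simulated streamflow series.
    flows: array of shape (n_models, n_timesteps);
    weights: one coefficient alpha_i in [0, 1] per model."""
    flows = np.asarray(flows, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * flows).sum(axis=0) / w.sum()

def simple_average_combination(flows):
    """SAC: Eq. (7) with every model weighted identically."""
    return weighted_average_combination(flows, np.ones(len(flows)))
```

Setting a model's weight to zero removes it from the combination, which is why the weight grid search described below doubles as a member-selection procedure.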
The complexity of the weighted average procedure lies in the estimation of the weights. Here, the weights have been optimized according to the capacity of the combination to represent the observed streamflow by maximizing the KGE on the transformations chosen during the calibration of the models. Thus, we obtain a set of weights for each calibration criterion used by testing all possible values in steps of 0.1 between 0 and 1. The average of these different weight sets for the three objective functions will then be taken as the final weighting. This method becomes very expensive in terms of calculation time when the number of models increases. Thus, to answer this question, a subsample of 13 models (corresponding to the 13 structures calculated on the square root of streamflow in a lumped framework) was selected, and the tests were limited to a combination of three models (representing 364 distinct combinations in total). Figure 18 shows the comparison of the SAC and WAC methods. Each point represents the mean KGE obtained over the evaluation periods by each combination of models over the whole sample of catchments and over all the transformations evaluated. It highlights that the WAC and SAC methods provide similar mean results when dealing with a wide range of streamflow. Another limit of the WAC procedure lies in the variability of the coefficients according to the calibration period. Moreover, this instability seems to increase with the number of models used in the combination.

4.4 Is the a priori choice to use the best upstream multi-model approach always justified?

As a reminder, semi-distribution consists of dividing a catchment into several sub-catchments which can then be modelled individually with their own climate forcing and parameters and then linked together by a propagation function.
Generally, the number of possible streamflow simulations in a catchment is set as

$$n=n_{\mathrm{sim}}+n_{\mathrm{c}}\left(n_{\mathrm{sim}}\right),\qquad(8)$$

with n[sim] the number of simulations available and n[c] the number of combinations from n[sim]. However, the number of simulations in a semi-distributed framework depends on the number of models available in this catchment and increases rapidly with the streamflow simulations injected from upstream catchments.

$$n_{\mathrm{sim}}=\begin{cases}n_{\mathrm{mod}}\times\prod_{i=1}^{x}n_{\mathrm{up}_{i}} & \text{if } x>0\\ n_{\mathrm{mod}} & \text{if } x=0,\end{cases}\qquad(9)$$

with n[mod] the number of available models on the sub-catchment considered, x the number of direct upstream sub-catchments, and n[up_i] the number of possible streamflow simulations available at the outlet of the upstream sub-catchment i. It is therefore necessary to make an a priori choice on the different upstream sub-catchments in order to reduce the number of possibilities downstream. The assumption made in this study is to propagate a single simulation, resulting from the best combination of models, for each sub-catchment. Equation (8) then becomes

$$n=n_{\mathrm{mod}}+n_{\mathrm{c}}\left(n_{\mathrm{mod}}\right).\qquad(10)$$

This hypothesis ensures the same number of downstream streamflow simulations in a lumped or semi-distributed modelling framework (as complex as it can be). Simplified tests (semi-distributed configurations with one upstream catchment – i.e. 70 catchments – and only four distinct models – see Table 4 – used without combination) were conducted in order to check the impact of this simplification on downstream performance.
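Equation (9) can be evaluated with a small helper to illustrate how quickly the count grows down the network without the a priori choice (the example below assumes the full pool of 39 models at every sub-catchment of a simple three-level chain):

```python
def n_sim(n_mod, upstream=()):
    """Eq. (9): number of streamflow simulations available at a
    sub-catchment outlet. `upstream` holds the simulation counts at the
    direct upstream outlets; a headwater (x = 0) offers n_mod simulations."""
    n = n_mod
    for n_up in upstream:
        n *= n_up
    return n

head = n_sim(39)                   # headwater: 39 simulations
mid = n_sim(39, upstream=[head])   # 39 * 39 = 1521
out = n_sim(39, upstream=[mid])    # 39 * 1521 = 59319
# With the a priori choice of Eq. (10), a single simulation is propagated
# from each upstream outlet, so every sub-catchment offers only n_mod = 39.
```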
Figure 19 shows that for approximately 85% of the catchments, the use of an a priori choice on the injected upstream simulations has a limited impact (<0.02 difference in KGE values) on the downstream performance. However, a decrease of up to 0.05 can be observed. Although the assumption made here (i.e. to propagate the best upstream simulations) may occasionally lead to significant performance losses, we remain convinced that an a priori choice is necessary for a large gain in computation time, even in simple semi-distributed configurations.

The main conclusions of this work are detailed in the following.

The mixed multi-model approach outperforms the benchmark (one-size-fits-all model) and provides higher KGE values than approaches based on single models (LSMs or SDSMs). The gain is mainly due to the multi-model aspect, while the spatial framework brings a more limited added value. At the event scale, the mixed multi-model approach does not show a large improvement on average but seems to reduce the variability (i.e. inter-quantile deviation).

Although some models are more often selected in multi-model combinations, almost all models have proven useful in at least one catchment. Moreover, the models that are most often selected in the model combinations are not always the best models on their own. However, the converse does not seem to be true since a model with good individual performance always seems to be a key element of the multi-model approach.

The largest improvement of a multi-model approach over the single models comes from the simple average combination of two models among a large ensemble. The performance gain when increasing the number of models in the multi-model combination becomes limited when more than four different models are combined.
The simplified mixed multi-model approach (based on a subsample of only four models applied to a lumped and a semi-distributed framework) outperforms the benchmark (one-size-fits-all model) but does not reach the performance obtained with a full mixed multi-model approach (based on the 39 available models applied to a lumped and a semi-distributed framework).

These conclusions are valid in the modelling framework used. As a reminder, in this work we aimed to obtain simulations that represent a wide range of streamflow and that can be applied to a large number of French catchments with limited human influence. It would be interesting to test other deterministic methods to combine models such as random forest, artificial neural network, or long short-term memory network algorithms, which are increasingly being applied in hydrology (Kratzert et al., 2018; Li et al., 2022). These machine learning methods could also be used as hydrological models in their own right. By also including physically based models, it would be possible to extend the range of models even further. Another perspective of this work would be to test the semi-distributed multi-model approach in a probabilistic framework by considering the different models as a hydrological ensemble in order to quantify the uncertainties related to the models. It would also be relevant to conduct this study in a forecasting framework by combining a hydrological ensemble with a meteorological ensemble. Although we have worked in the context of a large hydrological sample, the catchments are exclusively located in continental France. Testing the semi-distributed multi-model approach in catchments under other hydroclimatic conditions may therefore be useful. For example, applying the multi-model approach to different snow modules to account for snowmelt could be worth exploring for high mountain catchments.
It should be noted that the Matryoshka doll approach developed in this study allows for only a simple division of the catchments. A more complex semi-distribution may be more relevant, especially in places where the spatial variability of rainfall is high. The catchments with human influence were removed from our sample because they do not show natural hydrological behaviour. However, semi-distribution often enables a better representation of streamflow in these areas, which are difficult to model.

Appendix A: Event selection methodology: flood events

The selection procedure of flood events was based on the methodology developed and used by Astagneau et al. (2022). It is an automated procedure, selecting peak flows exceeding the 95% quantile and setting the beginning and the end of the flood event to 20% and 25%, respectively, of the flood peak. The starting window has been slightly extended by a few hours in the case of flash floods, characterized by a rapid rise in water levels (Fig. A1). Each selected event was visually inspected to mitigate the errors associated with automatic selection. This step is particularly important for large catchments with inter-annual processes. In order to obtain consistent statistics between the catchments, a maximum of 50 events (25 in calibration and 25 in evaluation) was set. In the end, 5447 events were selected. Figure A2 shows the distribution of the number of flood events in each catchment.

Appendix B: Event selection methodology: low-flow events

The selection procedure of low-flow events was based on the methodology developed and used by Caillouet et al. (2017). It is an automated procedure selecting periods under a threshold (fixed here at the 10% quantile) and aggregating the intervals corresponding to the same event thanks to the severity index (Fig. B1). Each selected event was visually inspected to mitigate the errors associated with automatic selection.
This step is rather difficult because the quality of the low-flow data is quite heterogeneous (e.g. influenced by noise) from one catchment to another. In the end, 1332 events were selected. Figure B2 shows the distribution of the number of low-flow events in each catchment.

Appendix C: Uncertainty in KGE scores

Since efficiency criteria values depend on the variety of errors found in the evaluation period (see, for example, Berthet et al., 2010), this may impact the significance of performance differences between models and ultimately their comparison. Therefore, we tried to quantify the sampling uncertainty in KGE scores. The bootstrap–jackknife methodology proposed by Clark et al. (2021) was applied over our set of 121 catchments for the 39 lumped models (Fig. C1). The results show that for 90% of the cases, the KGEs have uncertainties lower than 0.06 with a median of 0.02. However, differences can be noted according to the catchments; the structures; the calibration period; the period used to apply the bootstrap–jackknife; and, especially, the transformations used during the model calibration. Indeed, KGE values from simulations optimized for low flows are more uncertain than those optimized for medium or high flows. Figure C2 compares the KGE uncertainty between the benchmark (all catchments are modelled with GR5H calibrated with the KGE calculated on Q^+0.1 in a lumped spatial framework) and the mixed multi-model approach (a combination of models with a variable spatial framework is chosen for each catchment). It shows that the MMM approach reduces uncertainty about the value of the performance score. We therefore consider an improvement to be significant as soon as a gain greater than 0.02 is achieved.

Summary sheets containing the structural scheme of the different models, the pseudo-code, and the table of free parameters are available on request or can be found in the PhD thesis of the first author (Thébault, 2023).
Streamflow data are freely available on the Hydroportail website (https://hydro.eaufrance.fr/) (Dufeu et al., 2022). Climatic data are freely available for academic research purposes in France but cannot be deposited publicly because of commercial constraints. To access COMEPHORE data (Tabary et al., 2012), please refer to https://doi.org/10.25326/360 (Caillaud, 2019). SAFRAN data (Vidal et al., 2010) can be found in the spatialized data rubric, in the product catalogue, at https://publitheque.meteo.fr/ (Météo-France, 2023). Data were processed by INRAE, and summary sheets of the outputs are available (https://webgr.inrae.fr/webgr-eng/tools/database, Brigode et al., 2020). CP, CT, GT, SL, and VA conceptualized the study. CP and CT developed the methodology. CT and OD developed the model code. CT performed the simulations and analyses. CT prepared the manuscript with contributions from all co-authors. The contact author has declared that none of the authors has any competing interests. Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. The authors wish to thank Météo-France and SCHAPI for making the climate and hydrological data used in this study available. CNR and INRAE are thanked for co-funding the PhD grant of the first author. Paul Astagneau and Laurent Strohmenger are also thanked for their advice on the manuscript. The authors thank Isabella Athanassiou for copy-editing an earlier draft of this paper. The editor, Hilary McMillan, and the two reviewers, Trine Jahr Hegdahl and Wouter Knoben, are also thanked for their feedback and comments on the manuscript, which helped to improve its overall quality. 
This paper was edited by Hilary McMillan and reviewed by Trine Jahr Hegdahl and Wouter Knoben.

Ajami, N. K., Duan, Q., Gao, X., and Sorooshian, S.: Multimodel Combination Techniques for Analysis of Hydrological Simulations: Application to Distributed Model Intercomparison Project Results, J. Hydrometeorol., 7, 755–768, https://doi.org/10.1175/JHM519.1, 2006.

Ajami, N. K., Duan, Q., and Sorooshian, S.: An integrated hydrologic Bayesian multimodel combination framework: Confronting input, parameter, and model structural uncertainty in hydrologic prediction, Water Resour. Res., 43, W01403, https://doi.org/10.1029/2005WR004745, 2007.

Andréassian, V., Hall, A., Chahinian, N., and Schaake, J.: Introduction and synthesis: Why should hydrologists work on a large number of basin data sets?, in: Large sample basin experiments for hydrological parametrization: results of the models parameter experiment-MOPEX, IAHS Red Books Series no. 307, AISH, 1–5, https://iahs.info/uploads/dms/13599.02-1-6-INTRODUCTION.pdf (last access: 23 March 2023), 2006.

Arsenault, R., Gatien, P., Renaud, B., Brissette, F., and Martel, J.-L.: A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation, J. Hydrol., 529, 754–767, https://doi.org/10.1016/j.jhydrol.2015.09.001, 2015.

Artigue, G., Johannet, A., Borrell, V., and Pistre, S.: Flash flood forecasting in poorly gauged basins using neural networks: case study of the Gardon de Mialet basin (southern France), Nat. Hazards Earth Syst. Sci., 12, 3307–3324, https://doi.org/10.5194/nhess-12-3307-2012, 2012.

Astagneau, P. C., Bourgin, F., Andréassian, V., and Perrin, C.: Catchment response to intense rainfall: Evaluating modelling hypotheses, Hydrol. Process., 36, e14676, https://doi.org/10.1002/hyp.14676, 2022.

Atkinson, S. E., Woods, R. A., and Sivapalan, M.: Climate and landscape controls on water balance model complexity over changing timescales, Water Resour.
Res., 38, 50-1–50-17, https://doi.org/10.1029/2002WR001487, 2002.

Bergström, S. and Forsman, A.: Development of a conceptual deterministic rainfall-runoff model, Nord. Hydrol., 4, 147–170, https://doi.org/10.2166/nh.1973.0012, 1973.

Berthet, L., Andréassian, V., Perrin, C., and Loumagne, C.: How significant are quadratic criteria? Part 2. On the relative contribution of large flood events to the value of a quadratic criterion, Hydrolog. Sci. J., 55, 1063–1073, https://doi.org/10.1080/02626667.2010.505891, 2010.

Beven, K.: Prophecy, reality and uncertainty in distributed hydrological modelling, Adv. Water Resour., 16, 41–51, https://doi.org/10.1016/0309-1708(93)90028-E, 1993.

Beven, K. and Kirkby, M. J.: A physically based, variable contributing area model of basin hydrology/Un modèle à base physique de zone d'appel variable de l'hydrologie du bassin versant, Hydrol. Sci. B., 24, 43–69, https://doi.org/10.1080/02626667909491834, 1979.

Block, P. J., Souza Filho, F. A., Sun, L., and Kwon, H.-H.: A Streamflow Forecasting Framework using Multiple Climate and Hydrological Models, J. Am. Water Resour. As., 45, 828–843, https://doi.org/10.1111/j.1752-1688.2009.00327.x, 2009.

Bogner, K., Liechti, K., and Zappa, M.: Technical note: Combining quantile forecasts and predictive distributions of streamflows, Hydrol. Earth Syst. Sci., 21, 5493–5502, https://doi.org/10.5194/hess-21-5493-2017, 2017.

Bourgin, F., Ramos, M. H., Thirel, G., and Andréassian, V.: Investigating the interactions between data assimilation and post-processing in hydrological ensemble forecasting, J. Hydrol., 519, 2775–2784, https://doi.org/10.1016/j.jhydrol.2014.07.054, 2014.

Brigode, P., Génot, B., Lobligeois, F., and Delaigue, O.: Summary sheets of watershed-scale hydroclimatic observed data for France, Recherche Data Gouv. [data set], V1, https://doi.org/10.15454/UV01P1, 2020 (data available at: https://webgr.inrae.fr/webgr-eng/tools/database, last access: 23 March 2023).
Caillaud, C.: Météo-France radar COMEPHORE Hourly Precipitation Amount Composite, Aeris [data set], https://doi.org/10.25326/360, 2019.

Caillouet, L., Vidal, J.-P., Sauquet, E., Devers, A., and Graff, B.: Ensemble reconstruction of spatio-temporal extreme low-flow events in France since 1871, Hydrol. Earth Syst. Sci., 21, 2923–2951, https://doi.org/10.5194/hess-21-2923-2017, 2017.

Clark, M. P., Vogel, R. M., Lamontagne, J. R., Mizukami, N., Knoben, W. J. M., Tang, G., Gharari, S., Freer, J. E., Whitfield, P. H., Shook, K. R., and Papalexiou, S. M.: The Abuse of Popular Performance Metrics in Hydrologic Modeling, Water Resour. Res., 57, e2020WR029001, https://doi.org/10.1029/2020WR029001, 2021.

Cormary, Y. and Guilbot, A.: Etude des relations pluie-débit sur trois bassins versants d'investigation, IAHS Madrid Symposium, IAHS Publication no. 108, 265–279, https://iahs.info/uploads/dms/4246.265-279-108-Cormary-opt.pdf (last access: 23 March 2023), 1973.

Coron, L., Thirel, G., Delaigue, O., Perrin, C., and Andréassian, V.: The suite of lumped GR hydrological models in an R package, Environ. Model. Softw., 94, 166–171, https://doi.org/10.1016/j.envsoft.2017.05.002, 2017.

Coron, L., Delaigue, O., Thirel, G., Dorchies, D., Perrin, C., and Michel, C.: airGR: Suite of GR Hydrological Models for Precipitation-Runoff Modelling, R package version 1.6.12, Recherche Data Gouv [code], V1, https://doi.org/10.15454/EX11NA, 2021.

Coron, L., Perrin, C., Delaigue, O., and Thirel, G.: airGRplus: Additional Hydrological Models to the “airGR” Package, R package version 0.9.14.7.9001, INRAE, Antony, 2022.

Delaigue, O., Génot, B., Lebecherel, L., Brigode, P., and Bourgin, P.-Y.: Database of watershed-scale hydroclimatic observations in France, INRAE, HYCAR Research Unit, Catchment Hydrology group, Antony, https://webgr.inrae.fr/webgr-eng/tools/database (last access: 23 March 2023), 2020.
de Lavenne, A., Thirel, G., Andréassian, V., Perrin, C., and Ramos, M.-H.: Spatial variability of the parameters of a semi-distributed hydrological model, Proc. IAHS, 373, 87–94, https://doi.org/ 10.5194/piahs-373-87-2016, 2016. Duan, Q., Ajami, N. K., Gao, X., and Sorooshian, S.: Multi-model ensemble hydrologic prediction using Bayesian model averaging, Adv. Water Resour., 30, 1371–1386, https://doi.org/10.1016/ j.advwatres.2006.11.014, 2007. Dufeu, E., Mougin, F., Foray, A., Baillon, M., Lamblin, R., Hebrard, F., Chaleon, C., Romon, S., Cobos, L., Gouin, P., Audouy, J.-N., Martin, R., and Poligot-Pitsch, S.: Finalisation of the French national hydrometric data information system modernisation operation (Hydro3), Houille Blanche, 108, 2099317, https://doi.org/10.1080/27678490.2022.2099317, 2022 (data available at: https:// hydro.eaufrance.fr/, last access: 23 March 2023). Fenicia, F., Kavetski, D., and Savenije, H. H. G.: Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resour. Res., 47, W11510, https://doi.org/10.1029/2010WR010174, 2011. Ficchì, A., Perrin, C., and Andréassian, V.: Hydrological modelling at multiple sub-daily time steps: Model improvement via flux-matching, J. Hydrol., 575, 1308–1327, https://doi.org/10.1016/ j.jhydrol.2019.05.084, 2019. Garçon, R.: Modèle global pluie-débit pour la prévision et la prédétermination des crues, Houille Blanche, 7/8, 88–95, https://doi.org/10.1051/lhb/1999088, 1999. Georgakakos, K. P., Seo, D. J., Gupta, H., Schaake, J., and Butts, M. B.: Towards the characterization of streamflow simulation uncertainty through multimodel ensembles, J. Hydrol., 298, 222–241, https://doi.org/10.1016/j.jhydrol.2004.03.037, 2004. Gupta, H. V., Kling, H., Yilmaz, K. K., and Martinez, G. F.: Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling, J. 
Hydrol., 377, 80–91, https://doi.org/10.1016/j.jhydrol.2009.08.003, 2009. Gupta, H. V., Perrin, C., Blöschl, G., Montanari, A., Kumar, R., Clark, M., and Andréassian, V.: Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., 18, 463–477, https://doi.org/10.5194/hess-18-463-2014, 2014. Her, Y. and Chaubey, I.: Impact of the numbers of observations and calibration parameters on equifinality, model performance, and output and parameter uncertainty, Hydrol. Process., 29, 4220–4237, https://doi.org/10.1002/hyp.10487, 2015. Jakeman, A. J., Littlewood, I. G., and Whitehead, P. G.: Computation of the instantaneous unit hydrograph and identifiable component flows with application to two small upland catchments, J. Hydrol., 117, 275–300, https://doi.org/10.1016/0022-1694(90)90097-H, 1990. Khakbaz, B., Imam, B., Hsu, K., and Sorooshian, S.: From lumped to distributed via semi-distributed: Calibration strategies for semi-distributed hydrologic models, J. Hydrol., 418–419, 61–77, https:/ /doi.org/10.1016/j.jhydrol.2009.02.021, 2012. Klemeš, V.: Operational testing of hydrological simulation models, Hydrolog. Sci. J., 31, 13–24, https://doi.org/10.1080/02626668609491024, 1986. Knoben, W. J. M., Freer, J. E., Peel, M. C., Fowler, K. J. A., and Woods, R. A.: A Brief Analysis of Conceptual Model Structure Uncertainty Using 36 Models and 559 Catchments, Water Resour. Res., 56, e2019WR025975, https://doi.org/10.1029/2019WR025975, 2020. Kratzert, F., Klotz, D., Brenner, C., Schulz, K., and Herrnegger, M.: Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks, Hydrol. Earth Syst. Sci., 22, 6005–6022, https://doi.org/ 10.5194/hess-22-6005-2018, 2018. Kumar, A., Singh, R., Jena, P. P., Chatterjee, C., and Mishra, A.: Identification of the best multi-model combination for simulating river discharge, J. Hydrol., 525, 313–325, https://doi.org/10.1016 /j.jhydrol.2015.03.060, 2015. 
Li, D., Marshall, L., Liang, Z., and Sharma, A.: Hydrologic multi-model ensemble predictions using variational Bayesian deep learning, J. Hydrol., 604, 127221, https://doi.org/10.1016/ j.jhydrol.2021.127221, 2022. Liu, Y. and Gupta, H. V.: Uncertainty in hydrologic modeling: Toward an integrated data assimilation framework, Water Resour. Res., 43, W07401, https://doi.org/10.1029/2006WR005756, 2007. Lobligeois, F.: Mieux connaître la distribution spatiale des pluies améliore-t-il la modélisation des crues? Diagnostic sur 181 bassins versants français, PhD thesis, AgroParisTech, https:// hal.inrae.fr/tel-02600722v1 (last access: 23 March 2023), 2014. Lobligeois, F., Andréassian, V., Perrin, C., Tabary, P., and Loumagne, C.: When does higher spatial resolution rainfall information improve streamflow simulation? An evaluation using 3620 flood events, Hydrol. Earth Syst. Sci., 18, 575–594, https://doi.org/10.5194/hess-18-575-2014, 2014. Loumagne, C., Vidal, J., Feliu, C., Torterotot, J., and Roche, P.: Procédures de décision multimodèle pour une prévision des crues en temps réel: Application au bassin supérieur de la Garonne, Rev. Sci. Eau J. Water Sci., 8, 539–561, https://doi.org/10.7202/705237ar, 1995. Mathevet, T.: Quels modèles pluie-débit globaux au pas de temps horaire? Développements empiriques et comparaison de modèles sur un large échantillon de bassins versants, PhD thesis, Doctorat spécialité Sciences de l'eau, ENGREF Paris, https://hal.inrae.fr/tel-02587642v1 (last access: 23 March 2023), 2005. McMillan, H., Krueger, T., and Freer, J.: Benchmarking observational uncertainties for hydrology: rainfall, river discharge and water quality, Hydrol. Process., 26, 4078–4111, https://doi.org/10.1002 /hyp.9384, 2012. Météo-France: Publithèque, espace de commande de données publiques, https://publitheque.meteo.fr/ (last access: 23 March 2023), 2023. 
Michel, C.: Hydrologie appliquée aux petits bassins ruraux, Cemagref, Antony, France, https://belinra.inrae.fr/index.php?lvl=notice_display&id=225112 (last access: 23 March 2023), 1991. Moore, R. J. and Clarke, R. T.: A distribution function approach to rainfall runoff modeling, Water Resour. Res., 17, 1367–1382, https://doi.org/10.1029/WR017i005p01367, 1981. Moradkhani, H. and Sorooshian, S.: General Review of Rainfall-Runoff Modeling: Model Calibration, Data Assimilation, and Uncertainty Analysis, in: Hydrological Modelling and the Water Cycle: Coupling the Atmospheric and Hydrological Models, edited by: Sorooshian, S., Hsu, K.-L., Coppola, E., Tomassetti, B., Verdecchia, M., and Visconti, G., Springer, Berlin, Heidelberg, 1–24, https://doi.org/ 10.1007/978-3-540-77843-1_1, 2008. Nielsen, S. A. and Hansen, E.: Numerical simulation of the rainfall-runoff process on a daily basis, Nord. Hydrol., 4, 171–190, https://doi.org/10.2166/NH.1973.0013, 1973. O'Connell, P. E., Nash, J. E., and Farrell, J. P.: River flow forecasting through conceptual models part II – The Brosna catchment at Ferbane, J. Hydrol., 10, 317–329, https://doi.org/10.1016/ 0022-1694(70)90221-0, 1970. Oudin, L., Hervieu, F., Michel, C., Perrin, C., Andréassian, V., Anctil, F., and Loumagne, C.: Which potential evapotranspiration input for a lumped rainfall–runoff model?: Part 2 – Towards a simple and efficient potential evapotranspiration model for rainfall–runoff modelling, J. Hydrol., 303, 290–306, https://doi.org/10.1016/j.jhydrol.2004.08.026, 2005. Oudin, L., Andréassian, V., Mathevet, T., Perrin, C., and Michel, C.: Dynamic Averaging of Rainfall-Runoff Model Simulations from Complementary Model Parameterizations, Water Resour. Res., 42, W07410, https://doi.org/10.1029/2005WR004636, 2006. 
Pechlivanidis, I., Jackson, B., Mcintyre, N., and Wheater, H.: Catchment scale hydrological modelling: A review of model types, calibration approaches and uncertainty analysis methods in the context of recent developments in technology and applications, Glob. Int. J., 13, 193–214, 2011. Perrin, C.: Vers une amélioration d'un modèle global pluie-débit, PhD thesis, Institut National Polytechnique de Grenoble – INPG, https://hal.inrae.fr/tel-00006216v1 (last access: 23 March 2023), Perrin, C., Michel, C., and Andréassian, V.: Does a large number of parameters enhance model performance? Comparative assessment of common catchment model structures on 429 catchments, J. Hydrol., 242, 275–301, https://doi.org/10.1016/S0022-1694(00)00393-0, 2001. R Core Team: R: A language and environment for statistical computing, https://www.r-project.org/ (last access: 23 March 2023), 2020. Saadi, M., Oudin, L., and Ribstein, P.: Physically consistent conceptual rainfall–runoff model for urbanized catchments, J. Hydrol., 599, 126394, https://doi.org/10.1016/j.jhydrol.2021.126394, 2021. Santos, L.: Que peut-on attendre des Super Modèles en hydrologie? Évaluation d'une approche de combinaison dynamique de modèles pluie-débit, PhD thesis, Doctorat en Hydrologie, AgroParisTech, https:/ /hal.inrae.fr/tel-02609262v1 (last access: 23 March 2023), 2018. Schaake, J. C., Hamill, T. M., Buizza, R., and Clark, M.: HEPEX: The Hydrological Ensemble Prediction Experiment, B. Am. Meteorol. Soc., 88, 1541–1548, https://doi.org/10.1175/BAMS-88-10-1541, 2007. Shamseldin, A. Y., O'Connor, K. M., and Liang, G. C.: Methods for combining the outputs of different rainfall runoff models, J. Hydrol., 197, 203–229, https://doi.org/10.1016/S0022-1694(96)03259-3, Smith, M. B., Seo, D.-J., Koren, V. I., Reed, S. M., Zhang, Z., Duan, Q., Moreda, F., and Cong, S.: The distributed model intercomparison project (DMIP): motivation and experiment design, J. 
Hydrol., 298, 4–26, https://doi.org/10.1016/j.jhydrol.2004.03.040, 2004. Squalli, E. M.: Quelle plus-value de l'approche multi-modèle dans le cas d'un modèle hydrologique semi-distribué?, Master thesis, internal report, 2020. Sugawara, M.: Automatic calibration of the tank model/L'étalonnage automatique d'un modèle à cisterne, Hydrolog. Sci. Bull., 24, 375–388, https://doi.org/10.1080/02626667909491876, 1979. Tabary, P., Dupuy, P., L'Henaff, G., Gueguen, C., Moulin, L., Laurantin, O., Merlier, C., and Soubeyroux, J.-M.: A 10-year (1997–2006) reanalysis of quantitative precipitation estimation over France: Methodology and first results, IAHS-AISH Publ., 351, 255–260, 2012. Thébault, C.: Quels apports d'une approche multi-modèle semi-distribuée pour la prévision des débits?, PhD thesis, Sorbonne université, https://theses.hal.science/tel-04519745 (last access: 23 March 2023), 2023. Thiéry, D.: Utilisation d'un modèle global pour identifier sur un niveau piézométrique des influences multiples dues à diverses activités humaines, Hydrolog. Sci. J., 27, 216–229, https://doi.org/ 10.1080/02626668209491102, 1982. Thirel, G., Santos, L., Delaigue, O., and Perrin, C.: On the use of streamflow transformations for hydrological model calibration, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2023-775, Turcotte, R., Fortier Filion, T.-C., Lacombe, P., Fortin, V., Roy, A., and Royer, A.: Simulation hydrologique des derniers jours de la crue de printemps: le problème de la neige manquante, Hydrolog. Sci. J., 55, 872–882, https://doi.org/10.1080/02626667.2010.503933, 2010. van Esse, W. R., Perrin, C., Booij, M. J., Augustijn, D. C. M., Fenicia, F., Kavetski, D., and Lobligeois, F.: The influence of conceptual model structure on model performance: a comparative study for 237 French catchments, Hydrol. Earth Syst. Sci., 17, 4227–4239, https://doi.org/10.5194/hess-17-4227-2013, 2013. Vaze, J., Chiew, F. H. S., Perraud, J. 
M., Viney, N., Post, D., Teng, J., Wang, B., Lerat, J., and Goswami, M.: Rainfall-Runoff Modelling Across Southeast Australia: Datasets, Models and Results, Australas. J. Water Resour., 14, 101–116, https://doi.org/10.1080/13241583.2011.11465379, 2011. Velázquez, J. A., Anctil, F., Ramos, M. H., and Perrin, C.: Can a multi-model approach improve hydrological ensemble forecasting? A study on 29 French catchments using 16 hydrological model structures, Adv. Geosci., 29, 33–42, https://doi.org/10.5194/adgeo-29-33-2011, 2011. Vidal, J.-P., Martin, E., Franchistéguy, L., Baillon, M., and Soubeyroux, J.-M.: A 50-year high-resolution atmospheric reanalysis over France with the Safran system, Int. J. Climatol., 30, 1627–1644, https://doi.org/10.1002/joc.2003, 2010. Wan, Y., Chen, J., Xu, C.-Y., Xie, P., Qi, W., Li, D., and Zhang, S.: Performance dependence of multi-model combination methods on hydrological model calibration strategy and ensemble size, J. Hydrol., 603, 127065, https://doi.org/10.1016/j.jhydrol.2021.127065, 2021. Winter, C. L. and Nychka, D.: Forecasting skill of model averages, Stoch. Env. Res. Risk A., 24, 633–638, https://doi.org/10.1007/s00477-009-0350-y, 2010. Zounemat-Kermani, M., Batelaan, O., Fadaee, M., and Hinkelmann, R.: Ensemble machine learning paradigms in hydrology: A review, J. Hydrol., 598, 126266, https://doi.org/10.1016/j.jhydrol.2021.126266,
{"url":"https://hess.copernicus.org/articles/28/1539/2024/","timestamp":"2024-11-05T23:16:57Z","content_type":"text/html","content_length":"330994","record_id":"<urn:uuid:c16e8993-8602-49b5-b980-e4c3a1d40752>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00500.warc.gz"}
Mathematical and statistics anxiety: Educational, social, developmental and cognitive perspectives

About this Research Topic

Mathematical anxiety is a feeling of tension, apprehension or fear which arises when a person is faced with mathematical content. The negative consequences of mathematical anxiety are well-documented. Students with high levels of mathematical anxiety might underperform in important test situations, they tend to hold negative attitudes towards mathematics, and they are likely to opt out of elective mathematics courses, which also affects their career opportunities. Although at the university level many students do not continue to study mathematics, social science students are confronted with the fact that their disciplines involve learning about statistics - another potential source of anxiety for students who are uncomfortable with dealing with numerical content.

Research on mathematical anxiety is a truly interdisciplinary field with contributions from educational, developmental, cognitive, social and neuroscience researchers. While authors must ensure that papers fall within the scope of the section, as expressed in its mission statement, with a primary focus on psychology theory, they are encouraged to draw from these fields as well, where relevant, so as to enrich their papers. The aim of this Research Topic is to facilitate the interaction between researchers from different backgrounds. Topics of potential interest include:

1. the development/origins of mathematical and statistics anxiety;
2. individual differences in mathematical/statistics anxiety, and how these constructs are linked to/interact with other individual differences variables, including working memory capacity, self-efficacy, attitudes towards mathematics/statistics, etc.;
3. the social determinants of mathematical/statistics anxiety, including implicit and explicit gender stereotypes, stereotype threat, and the attitudes of parents, teachers and peers;
4. the psychophysiology of mathematical/statistics anxiety;
5. mathematical/statistics anxiety and career choices;
6. methods to alleviate mathematical/statistics anxiety;
7. the non-academic consequences of mathematical/statistics anxiety (for example, how they affect some important real-life decisions);
8. the construction or validation of psychometric instruments which are aimed at measuring mathematical/statistics anxiety.

Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.
Do some book categories receive, on average, better ratings than others? Do fantasy books on average get better ratings than sci-fi books? The answer seems to be yes, according to this little data science project I've built from Goodreads data. In this project I've grouped all titles in the data set by category, and then calculated the mean and standard deviation for each one of them. Then I've followed this good tutorial to compute the confidence interval for each category, and finally I've plotted the mean and confidence interval (with 99% confidence) in a chart, which you can see here below. As you can see, even accounting for confidence intervals there is definitely a huge gap between the means of some of these categories.
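The per-category computation the post describes — group the ratings, take the mean and standard deviation, wrap a confidence interval around each mean — is small enough to sketch. The data below is made up for illustration, and the function name is mine; with a normal approximation, z ≈ 2.576 gives the 99% level used in the chart.

```python
import math
from statistics import mean, stdev

def mean_ci99(ratings):
    """Mean of one category's ratings with a ~99% confidence interval
    (normal approximation, z = 2.576)."""
    m = mean(ratings)
    half = 2.576 * stdev(ratings) / math.sqrt(len(ratings))
    return m - half, m, m + half

# Hypothetical ratings grouped by category, standing in for the
# Goodreads data set used in the post.
by_category = {
    "fantasy": [4.1, 4.3, 3.9, 4.2, 4.0, 4.4],
    "sci-fi":  [3.8, 3.9, 4.0, 3.7, 3.6, 4.1],
}
for cat, ratings in by_category.items():
    lo, m, hi = mean_ci99(ratings)
    print(f"{cat}: mean={m:.2f}, 99% CI ({lo:.2f}, {hi:.2f})")
```

With real data you would compare the intervals pairwise: two categories whose intervals do not overlap differ at well beyond the 99% level.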
October 14, 2021 · 12:00 am My Knuth Check Twenty years ago today, I got a Knuth Check. Knuth dated it October 14, 2001, and I received it in the mail in early November that year, I think. The Check. Details blurred for security, of course. I’d heard, of course, about Knuth Checks, but I didn’t really know as much about them then as I do today. I was a kid a few years out of college, just trying to eke out a living building software; and when I found a mistake in an algorithm in his book, I intended to write a note in a short e-mail to Dr. Knuth, but I found out he was (wisely) avoiding the medium, so I sent him a letter. And a few months later, I got back a check. It’s sat in a frame on my desk for twenty years, singularly my proudest professional accomplishment. I’ve shipped tons of software, I’ve made and discovered amazing things, I’ve won other awards — but nothing holds a candle to having a Knuth Check with your name on it. (More accurately, the original is in my safe-deposit box at the bank, and a copy has sat in a frame for twenty years, because I’m not leaving something this valuable out on my desk! 😃) So here’s the story: The Art of Computer Programming, Volume 1, has an algorithm on page 26 that describes an efficient solution for calculating a fractional logarithm in binary, using pure integer math. The algorithm in question derives from one first published in 1624 by Henry Briggs. At the bottom of page 26 in the 3^rd Edition, the book mentions that Richard Feynman (yes, that Feynman) determined an efficient inverse algorithm for calculating a fractional exponent in binary, and it proposes to the reader to solve it as problem #28. As is typical of all of Knuth’s “problems for the reader,” he includes the answer in the rather large appendix that comprises the last 150 pages of the book. 
The solution to problem #28 can be found at the bottom of page 470, and, unfortunately, it is not the same algorithm that Feynman discovered: It has a critical error in the first step: E1. [Initialize.] If 1 – ϵ is the largest possible value of x, set y ← (nearest approximation to b^1-ϵ), x ← 1 – ϵ, k ← 1. (The quantity yb^-x will remain approximately constant in the following TAoCP, V1, 3rd ed., pp. 470 That has a rather unfortunate error in it: x ← 1 – ϵ is potentially a meaningful expression in that context, but it’s wrong, and it will calculate not the correct exponent but garbage data, with, as I recall, mostly zero bits and a few semi-randomly-distributed one bits. The correct expression is x ← 1 – x. It’s possible the other form was a typographical error on Knuth’s part, but it’s not in the same category as a simple omission or spelling error, since if you follow the mathematics as written, you will simply get garbage, and not get a valid fractional exponent. (I struggled for a few days with the original: It was Knuth; surely I’d been the one to make the mistake in my code, not the good Doctor in his! But ultimately when I began to experiment with the broken algorithm, the answer became obvious.) So I wrote Knuth a letter in the spring of 2001 saying that I thought that was wrong as published, since it generated garbage output, and I included that I thought that x ← 1 – x was actually the correct form, since it appeared to be correct. And he sent me back a rather nice letter and a check for $2.56 for having found an error. (I had also suggested that the inherent parallel execution implied by the commas in step E2 appeared to be wrong, that sequential execution appeared to be needed there; Knuth chastised me in the letter that no, parallel execution was in fact intended. But I still got an error check for a broken algorithm!) 
Interestingly, I didn’t get my check by just searching for errors, like many people have: I was trying to build something; I encountered that error merely by remembering that the books contained an answer to the problem I was facing. But it seems that not many people build things by literally reading the textbook and doing exactly what it says; and apparently, nobody had implemented Feynman’s exponentiation algorithm from the book after it appeared in the 3^rd Edition in 1997. [ed. note: I checked, and Knuth added it to the 3^rd Ed. in April 1995; my error can be seen in his addenda.] That said, an entertaining question to posit: Why the heck was I trying to implement fractional logarithms and exponents in the first place? The answer was that it was 2001, and Windows 98 and ’486 PCs were still pretty common, and you couldn’t assume a computer supported floating-point math. And I needed to calculate powers: As simple as x^y, just like on your calculator, and it was even constrained: In my case, x was always an integer, and y was always a number between 1 and 2. But I didn’t have guaranteed floating-point hardware available so I could just write pow(x, y). So I improvised: I used the Briggs and Feynman methods to calculate the logarithm and the exponent in pure integer math, so that I could implement calculation of a power as exp(y * log x) instead, and effectively use multiplication — which the hardware supported — to produce a power (which it definitely didn’t). Today, of course, I’d write pow(x, y), just like you would. But when you’re stuck on super-limited hardware, you have to be a bit more clever. (For what it’s worth, you can see the code live, in action, if you have a copy of SpaceMonger 2 or 3; just go into the Settings, and turn up the “Exaggeration” slider: Every file’s size will be raised to the power you choose, using exactly this math!) 
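For the curious, the Briggs half of that pair — extracting the fractional bits of a binary logarithm using nothing fancier than multiplication and comparison — can be sketched in a few lines. This is a simplified floating-point illustration of the squaring method, not the fixed-point integer code that actually shipped in SpaceMonger, and the function name is mine.

```python
import math

def frac_log2(x, bits=32):
    """Fractional binary digits of log2(x) for 1 <= x < 2.

    Squaring x doubles its logarithm: if x*x >= 2, the next binary
    digit of log2(x) is 1, and halving x*x strips that digit off.
    (Float sketch only; the real trick is doing this in pure
    fixed-point integer arithmetic.)
    """
    assert 1.0 <= x < 2.0
    result, bit = 0.0, 0.5
    for _ in range(bits):
        x *= x
        if x >= 2.0:
            result += bit
            x /= 2.0
        bit /= 2.0
    return result

print(frac_log2(1.5))  # close to math.log2(1.5), about 0.5849625
```

Feynman's exponent algorithm runs the same idea in reverse, which is what makes the exp(y * log x) construction work with only adds, compares, and multiplies.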
Feynman, of course, derived that algorithm for exactly the same reason I needed it: They were using mechanical adding machines during WWII as part of the Manhattan Project, and those machines didn’t do powers. But the relativistic math for the atomic bomb needed lots of powers; so he derived the exponent algorithm as the converse to the logarithm algorithm, so that they were able to just use addition, subtraction, and multiplication, which the adding machines could do, to still efficiently calculate powers, which the machines couldn’t. So that’s my Knuth Check story. It’s unlikely I’ll ever earn another; I certainly haven’t in the last twenty years! I still haven’t even finished reading Volume 4A yet, and I’m far too busy to randomly pore over the other tomes again just in search of a mistake. But I still managed to earn a prize that’s among the most coveted in this field, and twenty years later, I’m still happy to have done it even once. Comments Off on My Knuth Check Filed under Programming, Technology Tagged as knuth
[GAP Forum] does a multivariate polynomial know its ring (or at least variables)?

dmitrii.pasechnik at cs.ox.ac.uk
Thu Sep 21 10:51:24 BST 2017

Dear all,
I'd like to write a function that takes a multivariate polynomial p and produces another polynomial in the same polynomial ring R, by doing a variable substitution. Is there a way to ask p about its ring, or at least about the family of the indeterminates of the ring?

Namely, I'd like to call Value( p, vars, vals ), something like the following:

# compute the image of q under a linear transform g
img := function(q, vars, g)
  local n;
  n := Length(vars);
  return Value(q, vars,
               List([1..n], i -> Sum([1..n], j -> g[i][j]*vars[j])));
end;

but I would rather get vars from q. Have I missed something in the docs?
Two variances confidence interval calculator

When using the sample data, we know the sample ratio between the variances, but we don't know the ratio's true value. We may treat the ratio of the population variances as an unknown quantity and calculate a confidence interval for it. First, we need to define the confidence level, which is the required certainty level that the true value will be in the confidence interval. Researchers commonly use a confidence level of 0.95. We use the F distribution.

Confidence interval for two variances formula

[ F_{α/2}(n₂−1, n₁−1) · S²₁/S²₂ ,  F_{1−α/2}(n₂−1, n₁−1) · S²₁/S²₂ ]

How to use the confidence interval calculator for two variances?

1. Data is:
   S₁, S₂, n₁, n₂ - enter the sample standard deviations (S₁, S₂), and the sample sizes (n₁, n₂).
   S²₁, S²₂, n₁, n₂ - enter the sample variances (S²₁, S²₂), and the sample sizes.
   Raw data - enter the delimited data, separated by a comma, space, or enter. In this case, the tool will calculate the variances and the sample sizes.
2. Confidence level - the certainty level that the true value of the estimated parameter will be in the confidence interval, usually 0.95.
3. Rounding - how to round the results? When a resulting value is larger than one, the tool rounds it, but when a resulting value is less than one the tool displays the significant figures.
4. Sample sizes (n₁, n₂) - the number of subjects.
5. Sample variances (S²₁, S²₂) or sample standard deviations (S₁, S₂).
6. Bounds:
   Two-sided: the confidence interval is between the lower bound and the upper bound.
   Upper bound: the confidence interval is between zero and the upper bound.
   Lower bound: the confidence interval is between the lower bound and infinity.

The confidence interval is also equivalent to the F test for equality of two variances:
A confidence interval with an upper bound is equivalent to a left-tailed test.
A confidence interval with a lower bound is equivalent to a right-tailed test.
The directions of the one-sided test and one-sided confidence interval are opposite, and the reason is that you check different directions: In a confidence interval, you may check if the expected value falls inside the confidence interval that is built around the estimated value. In a statistical test, you check if the estimated value falls inside the region of acceptance that is built around the expected value.
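The two-sided formula is straightforward to compute with SciPy's F distribution; the sketch below is mine (function name and example values included), not code from the calculator itself.

```python
from scipy.stats import f

def variance_ratio_ci(s1_sq, s2_sq, n1, n2, confidence=0.95):
    """Two-sided confidence interval for sigma1^2 / sigma2^2.

    Implements [F_{a/2}(n2-1, n1-1) * S1^2/S2^2,
                F_{1-a/2}(n2-1, n1-1) * S1^2/S2^2], a = 1 - confidence.
    """
    alpha = 1.0 - confidence
    ratio = s1_sq / s2_sq
    lower = f.ppf(alpha / 2, n2 - 1, n1 - 1) * ratio
    upper = f.ppf(1 - alpha / 2, n2 - 1, n1 - 1) * ratio
    return lower, upper

# Equal sample variances: the interval should straddle 1.
print(variance_ratio_ci(4.0, 4.0, 30, 30))
```

Note that `f.ppf(q, dfn, dfd)` is the quantile function, so the lower endpoint uses the small quantile α/2 and the upper endpoint the large quantile 1 − α/2, matching the formula above.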
Overlaps Node Attached is a demo of a new node: overlaps Overlaps returns all possible intersections of up to ten overlapping shapes. One obvious application is Venn diagrams. The attached demo shows a five-way Venn diagram. Overlaps takes a single input: a list of shapes. It returns a list of all shapes formed by every possible combination of overlaps. In the demo you can see a total of 31 shapes (2^n -1) produced by an input list of 5 ellipses. The fragment labeled ABE, for example, is the region formed by the overlaps of ovals A, B, and E with no participation by ovals C and D. If the input shapes are colored, the overlaps will be as well, with colors similar to what you would see if the input shapes were translucent. The resulting amalgam looks translucent but is not; all fragment shapes are opaque (full alpha) even if the inputs were translucent. Overlaps will always return 2^n -1 shapes corresponding to the binary representation of every possible combination even if some of the input shapes do not overlap. For combinations that do not occur, those shapes will be null. You can automatically discard these null shapes by leaving the Remove nulls checkbox checked. But if you want to provide a color key or label each combination in cases where some combos are missing, you will need to uncheck that box and use the position number in the resulting list to figure out which fragment belongs to which combo. After this is done, you can then remove the nulls by culling all shapes with zero area. Normally each shape returned will be a single path. But if a particular combination of overlaps produces disjoint fragments, those fragments will be grouped into a geometry. This is convenient for labeling purposes. But if you want to work with the individual fragments separately, just ungroup the geometries. Although overlaps is perfect for making Venn diagrams, it can be used for many other purposes. 
It works as a kind of X-ray to see the structure behind overlapping opaque shapes. Once formed, the individual fragments can be exploded or highlighted or rearranged to create generative art and other interesting effects. You can use it as a filter to recover only certain combinations of overlapping shapes. For visualizations in which the areas of the input shapes represent some value, the areas of the overlaps will then become meaningful as well and can be measured and used. Because the calculations are computationally expensive and the number of combinations increases exponentially, the node can only handle a limited number of shapes. Calculating the 1023 fragments for ten overlapping shapes takes over 30 seconds, so I have placed a limit of ten so as not to inadvertently freeze your computer. If more than ten shapes are submitted, overlaps will only look at the first ten. The five-way Venn diagram in the demo takes about 5 seconds to render, but most of this time is due to the place_label node. The demo requires the concat_list.py external module (included), but this is only needed to generate the ABCDE labels. The overlaps node does not require any external code modules. (Aside: this was a tricky node to make. The algorithm requires recursion - which Nodebox cannot do - but obviously I found a way around this. If anyone out there is curious how I did it, feel free to peek under the hood.) The overlaps node will be included in the next rev of my Cartan Node Library. I will be curious to hear back from anyone using it.

1. I love this node! Even though I haven't yet figured out what to do with it, it still works _smooth_. Thank you!
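The node itself operates on geometric paths, but the bookkeeping — one fragment per nonzero bitmask over n inputs — is easy to illustrate with plain Python sets standing in for shapes. This sketch is mine, not the node's implementation: set intersection and difference play the role of the boolean path operations.

```python
def overlaps(sets):
    """Map each nonzero bitmask (2^n - 1 of them) to the elements lying
    in exactly the input sets whose bits are set in the mask."""
    n = len(sets)
    universe = set().union(*sets)
    fragments = {}
    for mask in range(1, 2 ** n):
        region = set(universe)
        for i, s in enumerate(sets):
            # Bit i set: keep only what is inside set i; otherwise
            # subtract set i, so the fragment excludes it entirely.
            region = region & s if mask & (1 << i) else region - s
        fragments[mask] = region
    return fragments

# Three "shapes": A, B, C.
frags = overlaps([{1, 2, 3}, {2, 3, 4}, {3, 5}])
print(len(frags))    # 7 == 2**3 - 1
print(frags[0b111])  # the region inside A, B and C at once: {3}
```

Combinations that don't occur simply come back empty here, which corresponds to the node's null shapes before the "Remove nulls" step.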
This page describes the legacy workflow. New features might not be compatible with the legacy workflow. For the corresponding step in the recommended workflow, see solvepde and solvepdeeig.

The original (R2015b) version of createPDEResults had only one syntax, and created a PDEResults object. Beginning with R2016a, you generally do not need to use createPDEResults, because the solvepde and solvepdeeig functions return solution objects. Furthermore, createPDEResults returns an object of a newer type than PDEResults. If you open an existing PDEResults object, it is converted to a StationaryResults object. If you use one of the older solvers such as adaptmesh, then you can use createPDEResults to obtain a solution object. Stationary and time-dependent solution objects have gradients available, whereas PDEResults did not include gradients.

Results From an Elliptic Problem

Create a StationaryResults object from the solution to an elliptic system. Create a PDE model for a system of three equations. Import the geometry of a bracket and plot the face labels.

model = createpde(3);
title("Bracket with Face Labels")
title("Bracket with Face Labels, Rear View")

Set boundary conditions: face 3 is immobile, and there is a force in the negative z direction on face 6. Set coefficients that represent the equations of linear elasticity.

E = 200e9;
nu = 0.3;
c = elasticityC3D(E,nu);
a = 0;
f = [0;0;0];

Create a mesh and solve the problem.

u = assempde(model,c,a,f);

Create a StationaryResults object from the solution.

results = createPDEResults(model,u)

results =
  StationaryResults with properties:

    NodalSolution: [14093x3 double]
       XGradients: [14093x3 double]
       YGradients: [14093x3 double]
       ZGradients: [14093x3 double]
             Mesh: [1x1 FEMesh]

Plot the solution for the z-component, which is component 3.

Results from a Time-Dependent Problem

Obtain a solution from a parabolic problem. The problem models heat flow in a solid.

model = createpde();

Set the temperature on face 2 to 100.
Leave the other boundary conditions at their default values (insulating). Set the coefficients to model a parabolic problem with 0 initial temperature.

d = 1;
c = 1;
a = 0;
f = 0;
u0 = 0;

Create a mesh and solve the PDE for times from 0 through 200 in steps of 10.

tlist = 0:10:200;
u = parabolic(u0,tlist,model,c,a,f,d);

171 successful steps
0 failed attempts
325 function evaluations
1 partial derivatives
29 LU decompositions
324 solutions of linear systems

Create a TimeDependentResults object from the solution.

results = createPDEResults(model,u,tlist,"time-dependent");

Plot the solution on the surface of the geometry at time 100.

Results from an Eigenvalue Problem

Create an EigenResults object from the solution to an eigenvalue problem. Create the geometry and mesh for the L-shaped membrane. Apply Dirichlet boundary conditions to all edges.

model = createpde;
applyBoundaryCondition(model,"dirichlet", ...
    "Edge",1:model.Geometry.NumEdges, ...

Solve the eigenvalue problem for coefficients c = 1, a = 0, and d = 1. Obtain solutions for eigenvalues from 0 through 100.

c = 1;
a = 0;
d = 1;
r = [0,100];
[eigenvectors,eigenvalues] = pdeeig(model,c,a,d,r);

Create an EigenResults object from the solution.

results = createPDEResults(model,eigenvectors,eigenvalues,"eigen")

results =
  EigenResults with properties:

    Eigenvectors: [1458x12 double]
     Eigenvalues: [12x1 double]
            Mesh: [1x1 FEMesh]

Plot the solution for mode 10.

Input Arguments

u — PDE solution
vector | matrix

PDE solution, specified as a vector or matrix.

Example: u = assempde(model,c,a,f);

utimes — Times for a PDE solution
monotone vector

Times for a PDE solution, specified as a monotone vector. These times should be the same as the tlist times that you specified for the solution by the hyperbolic or parabolic solvers.

Example: utimes = 0:0.2:5;

eigenvectors — Eigenvector solution

Eigenvector solution, specified as a matrix.
Suppose

• Np is the number of mesh nodes
• N is the number of equations
• ev is the number of eigenvalues specified in eigenvalues

Then eigenvectors has size Np-by-N-by-ev. Each column of eigenvectors corresponds to the eigenvectors of one eigenvalue. In each column, the first Np elements correspond to the eigenvector of equation 1 evaluated at the mesh nodes, the next Np elements correspond to equation 2, and so on.

eigenvalues — Eigenvalue solution

Eigenvalue solution, specified as a vector.

Output Arguments

The procedure for evaluating gradients at nodal locations is as follows:

1. Calculate the gradients at the Gauss points located inside each element.
2. Extrapolate the gradients at the nodal locations.
3. Average the value of the gradient from all elements that meet at the nodal point. This step is needed because of the inter-element discontinuity of gradients. The elements that connect at the same nodal point give different extrapolated values of the gradient for the point. createPDEResults performs area-weighted averaging for 2-D meshes and volume-weighted averaging for 3-D meshes.

Version History

Introduced in R2015b

R2016a: No longer creates an object of type PDEResults

createPDEResults no longer creates an object of type PDEResults. The syntax of createPDEResults has changed to accommodate creating the new result types for time-dependent and eigenvalue problems.

• To create the TimeDependentResults object for a time-dependent problem, use the syntax createPDEResults(pdem,u,utimes,'time-dependent'), where utimes is a vector of solution times.
• To create the EigenResults object for an eigenvalue problem, use the syntax createPDEResults(pdem,eigenvectors,eigenvalues,'eigen'). EigenResults has different property names than PDEResults. Update any eigenvalue scripts that use PDEResults property names.
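The three-step gradient-averaging procedure can be sketched in a few lines of Python. This is a schematic with a made-up flat data layout, not MathWorks code; it shows only the weighted nodal averaging of step 3, assuming the per-element gradients have already been extrapolated to the nodes in steps 1 and 2:

```python
def nodal_gradients(elem_nodes, elem_grads, elem_areas, num_nodes):
    """Weighted-average per-element gradient values at shared mesh nodes.

    elem_nodes[e] lists the node indices of element e, elem_grads[e] is the
    gradient value that element contributes, and elem_areas[e] is its area
    (2-D) or volume (3-D), used as the averaging weight."""
    weighted_sum = [0.0] * num_nodes
    weight = [0.0] * num_nodes
    for nodes, grad, area in zip(elem_nodes, elem_grads, elem_areas):
        for node in nodes:
            weighted_sum[node] += area * grad
            weight[node] += area
    return [s / w if w else 0.0 for s, w in zip(weighted_sum, weight)]

# Two triangles sharing the edge (1, 2); the larger element dominates
# the average at the shared nodes.
grads = nodal_gradients([(0, 1, 2), (1, 2, 3)], [2.0, 4.0], [1.0, 3.0], 4)
print(grads)  # [2.0, 3.5, 3.5, 4.0]
```

The area weighting is what resolves the inter-element discontinuity: at the shared nodes the result (1·2 + 3·4)/(1 + 3) = 3.5 lies between the two elements' values, biased toward the larger element.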
How to speed-up a code containing several symbolic integrations and derivatives?

After days of confusion I now have a Sage code ready to be used. It consists of a set of iterative equations in a loop; at each step the equations -- containing a number of symbolic integrations and differentiations -- act on a previous estimate of the unknowns to update it to a newer estimate. Sounds good up to this point. However, depending on the initial estimate that I introduce to the iterative loop, the code may run fast (as when I give a constant or simply x as the initial estimate) or far too slowly (as when I introduce sin(x) or other even simple polynomials). I understand that such integrations and differentiations make the expressions grow rapidly, so the computations will be time-consuming, but this is extremely slow: not one minute or ten. I have waited for some hours and Sage has not answered anything new, and I doubt whether the 100% CPU usage shown by the system's monitoring app is really being spent on the calculations and whether Sage can give the answer in the end.
From what I have read in the documentation and elsewhere, adding a %cython command at the beginning of the code can speed it up, but it gives me back an error of the form "undeclared name not builtin: sin". Does the %cython command also work with symbolic calculations? I also saw something about the fast_callable function and sympyx; can they help speed up such calculations, and where can I find a good intro to them? Best regards

Here is the code:

var('x,y,z,t, x1,y1,z1,t1, x2,y2,z2,t2, x3,y3,z3,t3')
var('N, Re, x_B1,x_B2, y_B1,y_B2, z_B1,z_B2, T, p0,v0')
N=5   # N: number of iterations
# or N=int(input("How many iteration do you need for your Adomian Decomposition code to run? "))
Re=1000000   # Re: global Reynolds number
x_B1=-1; x_B2=1; y_B1=-1; y_B2=1; z_B1=-1; z_B2=1   # the non-dimensional boundaries of the computational region!
T=10   # T: the ending non-dimensional time of computations
# p0, v0: used in definition of initial condition for the unknown functions
assume(x_B1<=x, 0<t)   # needed for our integrations from x_B1 to x and from 0 to t !
# the following lists are defined for, 1st, easier addressing and, 2nd, more compact importing
# of terms like "sum_j u_j du_i/dx_j" as otherwise x_j was not known to the code
var('q')   # q is a very dummy variable to fill the zeroth place in all my lists, to avoid
           # confusion while coding, instead of always using i-1 or j+1 as indices!
R0=[q,x,y,z,t]   # Only R0 needs a zeroth element of q
R1=[x1,y1,z1,t1]; R2=[x2,y2,z2,t2]   # (restored: referenced below but apparently lost from the post)
# empty lists that will be filled with functions defined in the loop below
phi0=[q]+[[] for _ in range(4)]   # (list initialisations restored for the same reason)
phi1=[q]+[[] for _ in range(4)]
phi2=[q]+[[] for _ in range(4)]
for i in range(1,5):   # i = 1,2,3,4 / indices 1,2,3 are reserved for velocity components and 4 for pressure
    for n in range(N+1):   # N+1 since range(N+1)=(0,..,N); here n=0 is required so I don't increase it to 1 !
        phi0[i].append(function('phi0_%s_%s' %(i,n), x,y,z,t, latex_name='\phi_%s^{0\,(%s)}' %(i,n)))
        phi1[i].append(function('phi1_%s_%s' %(i,n), x,y,z,t,*R1, latex_name='\phi_%s^{1\,(%s)}' %(i,n)))
        phi2[i].append(function('phi2_%s_%s' %(i,n), x,y,z,t,*R1+R2, latex_name='\phi_%s^{2\,(%s)}' %(i,n)))

####################\the initial estimations/####################
# let's define the nonzero values for initial estimates of phi1's components
p0=1; v0=1   # v0 stands for vx0=vy0=vz0
# specifying phi0[i][0], phi1[i][0], phi2[i][0] as the initial estimates (guesses), order 0 approximation
for i in range(1,4):   # which means i=1,2,3, for the velocity components
    phi0[i][0]=0*x               #0
    phi1[i][0]=0*x+v0*sin(x)     #v0
    phi2[i][0]=0*x               #0
phi0[4][0]=0*x                   #0
phi1[4][0]=0*x+p0+(x-x_B1)       #p0
phi2[4][0]=0*x                   #0

#######\defining the differential and integral operators in the equations/#######
var('td,xd1,xd2')   # dummy variables
g   = lambda i,f: diff(f,R0[i])
It  = lambda f: integral(f(t=td),td,0,t)   # or use: ,algorithm='mathematica_free' or ,algorithm='sympy'
Ixx = lambda f: integral(integral(f(x=xd1),xd1,x_B1,xd2),xd2,x_B1,x)
DR1 = lambda f: -1/Re*(diff(f,x,2)+diff(f,y,2)+diff(f,z,2))
DR2 = lambda f: diff(f,y,2)+diff(f,z,2)
IntR1 = lambda f: integral(integral(integral(integral(f,x1,x_B1,x_B2),y1,y_B1,y_B2),z1,z_B1,z_B2),t1,0,T)
IntR2 = lambda f: integral(integral(integral(integral(f,x2,x_B1,x_B2),y2,y_B1,y_B2),z2,z_B1,z_B2),t2,0,T)
IntR3 = lambda f: integral(integral(integral(integral(f,x3,x_B1,x_B2),y3,y_B1,y_B2),z3,z_B1,z_B2),t3,0,T)

###########\the iterative equations being imported/###########
for n in range(N):   # note that in equations I have n+1 , so it couldn't be N+1 !
    for i in range(1,4):   # pressure (i=4) has different equations, so would be denoted individually
        phi0[i][n+1] = phi0[i][0] - It(DR1(phi0[i][n])) - It(g(i,phi0[4][n])) \
            - It( sum( phi0[j][n]*g(j,phi0[i][n]) + IntR1(phi1[j][n]*g(j,phi1[i][n])) + \
                2*IntR2(IntR1(phi2[j][n]*g(j,phi2[i][n]))) for j in range(1,4)) )
        phi1[i][n+1] = phi1[i][0] - It(DR1(phi1[i][n])) - It(g(i,phi1[4][n])) \
            - It( sum( phi0[j][n]*g(j,phi1[i][n]) + phi1[j][n]*g(j,phi0[i][n]) + \
                2*IntR2(phi1[j][n](x1=x2,y1=y2,z1=z2,t1=t2)*g(j,phi2[i][n]) + \
                phi2[j][n]*g(j,phi1[i][n](x1=x2,y1=y2,z1=z2,t1=t2))) for j in range(1,4)) )
        phi2[i][n+1] = phi2[i][0] - It(DR1(phi2[i][n])) - It(g(i,phi2[4][n])) \
            - It( sum( phi0[j][n]*g(j,phi2[i][n]) + phi2[j][n]*g(j,phi0[i][n]) \
                + 0.5*( phi1[j][n]*g(j,phi1[i][n](x1=x2,y1=y2,z1=z2,t1=t2)) + \
                    phi1[j][n](x1=x2,y1=y2,z1=z2,t1=t2)*g(j,phi1[i][n]) ) \
                + 2*IntR3( phi2[j][n](x2=x3,y2=y3,z2=z3,t2=t3) \
                    *g(j,phi2[i][n](x1=x2,y1=y2,z1=z2,t1=t2, x2=x3,y2=y3,z2=z3,t2=t3)) \
                    + phi2[j][n](x1=x2,y1=y2,z1=z2,t1=t2, x2=x3,y2=y3,z2=z3,t2=t3) \
                    *g(j,phi2[i][n](x2=x3,y2=y3,z2=z3,t2=t3)) ) for j in range(1,4)) )
    phi0[4][n+1] = phi0[4][0] - Ixx(DR2(phi0[4][n])) \
        - Ixx( sum( g(i,phi0[j][n+1])*g(j,phi0[i][n+1]) \
            + IntR1(g(i,phi1[j][n+1])*g(j,phi1[i][n+1])) \
            + 2*IntR2(IntR1(g(i,phi2[j][n+1])*g(j,phi2[i][n+1]))) \
            for i in range(1,4) for j in range(1,4) ) )
    phi1[4][n+1] = phi1[4][0] - Ixx(DR2(phi1[4][n])) \
        - Ixx( sum( 2*g(j,phi0[i][n+1])*g(i,phi1[j][n+1]) \
            + 4*IntR2(g(j,phi1[i][n+1](x1=x2,y1=y2,z1=z2,t1=t2))*g(i,phi2[j][n+1])) \
            for i in range(1,4) for j in range(1,4) ) )
    phi2[4][n+1] = phi2[4][0] - Ixx(DR2(phi2[4][n])) \
        - Ixx( sum( 2*g(j,phi0[i][n+1])*g(i,phi2[j][n+1]) \
            + g(j,phi1[i][n+1])*g(i,phi1[j][n+1](x1=x2,y1=y2,z1=z2,t1=t2)) \
            + 4*IntR3( g(j,phi2[i][n+1](x2=x3,y2=y3,z2=z3,t2=t3)) \
                *g(i,phi2[j][n+1](x1=x2,y1=y2,z1=z2,t1=t2, x2=x3,y2=y3,z2=z3,t2=t3)) ) \
            for i in range(1,4) for j in range(1,4) ) )
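One Sage-independent remedy for a loop like the one above is to cache the results of the expensive symbolic operations, since iterative schemes tend to request the same integrals and derivatives many times. A minimal Python sketch of the idea follows; slow_integral is a hypothetical stand-in for a call such as integral(), and in Sage you would key the cache on the expression itself (fast_callable and %cython mainly help once you switch from symbolic manipulation to repeated numerical evaluation):

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def slow_integral(expr_key):
    """Hypothetical stand-in for an expensive symbolic integration.
    The argument must be hashable, e.g. the string form of the expression."""
    calls["count"] += 1
    return "Integral(%s)" % expr_key

# An iterative scheme often rebuilds the same subexpression at every step:
for _ in range(100):
    result = slow_integral("phi1*sin(x)")

print(calls["count"])  # the costly routine actually ran only once
```

Caching does not stop the expressions themselves from growing; calling simplify_full() (or expanding and collecting terms) between iterations is the usual companion measure.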
Compensating errors in inversions for subglacial bed roughness: same steady state, different dynamic response

Articles | Volume 17, issue 4
© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.

Subglacial bed roughness is one of the main factors controlling the rate of future Antarctic ice-sheet retreat and also one of the most uncertain. A common technique to constrain the bed roughness using ice-sheet models is basal inversion, tuning the roughness to reproduce the observed present-day ice-sheet geometry and/or surface velocity. However, many other factors affecting ice-sheet evolution, such as the englacial temperature and viscosity, the surface and basal mass balance, and the subglacial topography, also contain substantial uncertainties. Using a basal inversion technique intrinsically causes any errors in these other quantities to lead to compensating errors in the inverted bed roughness. Using a set of idealised-geometry experiments, we quantify these compensating errors and investigate their effect on the dynamic response of the ice sheet to a prescribed forcing. We find that relatively small errors in ice viscosity and subglacial topography require substantial compensating errors in the bed roughness in order to produce the same steady-state ice sheet, obscuring the realistic spatial variability in the bed roughness. When subjected to a retreat-inducing forcing, we find that these different parameter combinations, which per definition of the inversion procedure result in the same steady-state geometry, lead to a rate of ice volume loss that can differ by as much as a factor of 2. This implies that ice-sheet models that use basal inversion to initialise their model state can still display a substantial model bias despite having an initial state which is close to the observations.
Received: 20 May 2022 – Discussion started: 31 May 2022 – Revised: 20 Dec 2022 – Accepted: 03 Mar 2023 – Published: 12 Apr 2023

One of the most worrying long-term consequences of anthropogenic climate change is sea-level rise due to mass loss of the Greenland and Antarctic ice sheets (Oppenheimer et al., 2019; Fox-Kemper et al., 2021). It is also one of the most uncertain consequences, with the projected sea-level contribution from the Antarctic ice sheet in 2100 under high-warming scenarios ranging from −2.5 cm (the minus sign indicating a sea-level drop) to 17 cm (Seroussi et al., 2020). Ice-dynamical processes are the main contributors to this uncertainty, which is demonstrated in the idealised (though extreme) ABUMIP experiment (Sun et al., 2020), which concerns instantaneous ice-shelf collapse under zero atmospheric or oceanic forcing, thereby eliminating uncertainties in the forcing. In this experiment, modelled sea-level rise differs by a factor of 10 among models, on timescales of a few centuries. One of the main contributing factors to this ice-dynamical uncertainty is basal sliding, which is controlled by the conditions of the subglacial bed. Sun et al. (2020) showed that a substantial amount of the variance in the ABUMIP model ensemble could be explained by different assumptions about the relation between bed roughness, sliding velocity, and basal friction (the "sliding law"). These processes are difficult to constrain based on observational evidence; observations of the Antarctic subglacial substrate are virtually non-existent, and direct observations of ice velocity are typically limited to the ice-sheet surface, which contains contributions from both basal sliding and vertical shearing. Since the latter is controlled by the ice viscosity, which too is very uncertain, disentangling the two terms is problematic.
An often-used approach for solving this problem is applying inversion techniques to estimate either the bed roughness or the basal drag, by matching the observed ice thickness and/or surface velocity. Generally speaking, an inversion is a way to calculate the cause of an observed effect; since most physical problems instead consist of calculating the effect of an observed or postulated cause, this is called the “inverse problem”. In the case of basal sliding, the forward problem consists of providing an ice-sheet model with a (spatially variable) value for bed roughness and calculating the resulting ice-sheet geometry and/or velocity. The inverse problem consists of taking the (observed) geometry and/or velocity and using that to invert for the bed roughness. Different formulations of this approach exist, which differ in the observations the inversion aims to reproduce (e.g. ice-sheet geometry and/or velocity), in the quantity that is inverted for (bed roughness or basal drag), and in the mathematical techniques used to perform the inversion. A geometry-based approach was introduced by Pollard and DeConto (2012) and adapts the bed roughness during a forward simulation until the model reaches a steady-state ice geometry that matches the observations. The bed roughness is changed based on the local difference between the modelled and the observed ice thickness; if the ice is too thick (thin), the bed roughness is decreased (increased), based on the idea that a lower (higher) bed roughness leads to increased (decreased) ice flow and therefore thinning (thickening). This approach has since been adopted, with minor variations, in several ice-sheet models, for example, f.ETISh (Pattyn, 2017), PISM (Albrecht et al., 2020), and CISM (Lipscomb et al., 2021). The velocity-based approach is used in, for example, Elmer/Ice (Gagliardini et al., 2013) and ISSM (Larour et al., 2012) and often inverts directly for basal drag, without making any assumptions about the sliding law. 
In this approach, the model is not run forward in time; instead, the basal drag field is iteratively adapted until the modelled velocity field for the observed geometry matches the observed velocity. Typically, more elaborate mathematical techniques are used to update the inverted field than in the geometry-based approach. For example, the drag may be computed by defining and iteratively minimising a cost function that represents the mismatch between the modelled and observed velocity (e.g. Arthern and Gudmundsson, 2010; Gagliardini et al., 2013; Arthern et al., 2015). The cost function typically includes a term quantifying unwanted small-wavelength terms in the solution, which can arise as a result of overfitting. Since the velocity-based approach does not make any assumptions about the dynamic (steady) state of the geometry, it generally leads to a more pronounced model drift compared to the geometry-based approach in forward experiments (Seroussi et al., 2019). These inversion approaches share the underlying assumption that all ice-sheet properties other than the bed roughness are known accurately enough for such an inversion to be meaningful, i.e. that any differences between the modelled and the observed ice-sheet state are mostly due to errors in the modelled bed roughness and that those errors can be corrected by applying an inversion. This means that, due to the nature of the inversion procedure, any modelled errors in the other ice-sheet properties will lead to compensating errors in the inverted bed roughness. For example, if the modelled ice viscosity overestimates the real value, then the modelled ice velocities due to viscous deformation will be too low, and the modelled steady-state ice sheet will be too thick. 
The inversion procedure will compensate for this mismatch by lowering the bed roughness, increasing the sliding velocities (and thinning the ice, in the case of geometry-based inversion methods) until the modelled ice sheet once again matches the observed state. This implies that the result of a basal inversion will contain not just (an approximation of) the realistic bed roughness but also the sum of compensating errors that arise from modelled errors in other ice-sheet quantities. Several studies have already investigated these compensating errors in different settings. Seroussi et al. (2013) studied the effect of uncertainties in the thermal regime of the Greenland ice sheet on the inverted bed roughness and on future projections of ice-sheet volume. They found that, while the effect on the inverted bed roughness was substantial, the differences in projected ice volume change were minimal. Perego et al. (2014) studied the effect of uncertainties in surface mass balance and ice thickness on inversions of bed roughness for the Greenland ice sheet. They presented a method that could simultaneously invert for surface mass balance, basal topography, and basal roughness, thus providing a better fit to the observed velocity and a more stable ice sheet. Babaniyi et al. (2021) studied the effect of errors in the modelled ice rheology on the inverted bed roughness in an idealised setting. They found that uncertainties in the rheology and viscosity of the ice could lead to significant biases in the inverted roughness. Arthern et al. (2015) and Ranganathan et al. (2021) presented methods for simultaneously inverting for both viscosity and basal slipperiness. 
These methods provide accurate estimates of both velocity and ice thickness, as long as uncertainties in the observed ice thickness and bed topography are small (Ranganathan et al., 2021).

In this study, we investigate the compensating errors in a geometry- and velocity-based inversion approach and how they affect the uncertainty in projections of ice-sheet retreat. As a modelling tool we use the vertically integrated ice-sheet model IMAU-ICE (Berends et al., 2022), which we describe briefly in Sect. 2.1. In Sect. 2.2 we present a novel variation on the geometry-based inversion approach, which uses a flowline-averaged anomaly method to adapt the bed roughness field. We apply this model set-up to two idealised-geometry ice sheets, which we describe in Sect. 3. In Sect. 4.1 we demonstrate that our novel inversion procedure can reproduce the known bed roughness in settings with freely moving ice margins and/or grounding lines. In Sect. 4.2 we present a series of experiments where we introduce errors in other ice-sheet model components before performing the inversion, which results in an erroneous inverted bed roughness, even though, by construction of the inversion procedure, the resulting steady-state ice sheet is similar. In Sect. 4.3 we investigate the effect of these compensating errors on the dynamic response of the ice sheet to a schematic retreat-inducing forcing. We show that, even though the respective errors in the bed roughness and the other model components compensate for each other in terms of steady-state ice-sheet geometry, this is not necessarily the case for the dynamic response. We quantify the difference in the rate of sea-level contribution under a forced retreat between ice-sheet models with nearly identical steady-state geometries, which results from the compensating errors. We discuss the implications of these findings in Sect. 5.
2.1 Ice-sheet model

IMAU-ICE is a vertically integrated ice-sheet model, which has been specifically designed for large-scale, long-term simulations of ice-sheet evolution (Berends et al., 2022). It solves the depth-integrated viscosity approximation (DIVA; Goldberg, 2011; Lipscomb et al., 2019) to the stress balance, which is similar to the hybrid SIA/SSA but which remains close to the full-Stokes solution at significantly higher aspect ratios (Berends et al., 2022). Proper grounding-line migration is achieved by using a sub-grid friction-scaling scheme, based on the approaches used in PISM (Feldmann et al., 2014) and CISM (Leguy et al., 2021). For this study, a new sliding law was added to IMAU-ICE, based on the work of Zoet and Iverson (2020). This recent work presents a sliding law based on laboratory experiments, contrasting with previous sliding laws, which were based chiefly on theoretical considerations. Here, the basal shear stress $\boldsymbol{\tau}_\mathrm{b}$ depends on the basal velocity $\boldsymbol{u}_\mathrm{b}$ as follows:

$$\boldsymbol{\tau}_\mathrm{b} = \hat{\boldsymbol{u}}_\mathrm{b}\, N \tan\phi \left(\frac{|\boldsymbol{u}_\mathrm{b}|}{|\boldsymbol{u}_\mathrm{b}| + u_0}\right)^{1/p}. \tag{1}$$

Here, N is the (effective) overburden pressure, which we assume to be identical to the ice overburden pressure (i.e. no subglacial water); $\hat{\boldsymbol{u}}_\mathrm{b}$ is the unit vector parallel to the basal velocity; and φ is the bed roughness, expressed as a till friction angle. By default, the exponent p has a value of p = 3, and the transition velocity u_0 has a value of u_0 = 200 m yr^−1. At low sliding velocities, this sliding law behaves like a Weertman-type power law (Weertman, 1957), with the basal shear stress approaching zero as the basal velocity approaches zero. At high sliding velocities, the basal shear stress asymptotes to the Coulomb friction limit (Iverson et al., 1998).
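As a quick numerical illustration of the two regimes of Eq. (1), the sliding law can be evaluated as below; the values chosen for N and φ are made up for illustration, not taken from the paper:

```python
import math

def zoet_iverson_tau_b(u_b, N, phi_deg, u_0=200.0, p=3.0):
    """Magnitude of the basal shear stress in Eq. (1), for basal speed
    u_b [m/yr], effective pressure N [Pa], and till friction angle phi [deg]."""
    return N * math.tan(math.radians(phi_deg)) * (u_b / (u_b + u_0)) ** (1.0 / p)

# Illustrative values (not from the paper):
N, phi = 1.0e5, 20.0
coulomb_limit = N * math.tan(math.radians(phi))

# Low speeds: Weertman-like power law, tau_b -> 0 as u_b -> 0.
print(zoet_iverson_tau_b(1.0, N, phi) / coulomb_limit)    # well below 1
# High speeds: asymptotes to the Coulomb limit N * tan(phi).
print(zoet_iverson_tau_b(1.0e6, N, phi) / coulomb_limit)  # close to 1
```

The transition between the two regimes is controlled by the transition velocity u_0 and sharpened by larger values of the exponent p.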
This two-regime behaviour agrees with the theoretical considerations underlying previous sliding laws (e.g. Schoof, 2005; Tsai et al., 2015).

2.2 Inversion procedure

For this study, we developed a novel inversion procedure. It is based on the procedure used in CISM (Lipscomb et al., 2021), which in turn is a variation on the geometry-based approach from Pollard and DeConto (2012). In the CISM procedure, as in the Pollard and DeConto approach, the ice-sheet model is run forward in time, and the bed roughness field is adapted based on the difference between the modelled and the target ice sheet. However, whereas the Pollard and DeConto approach only considers the mismatch in ice thickness, a newer, unpublished approach in CISM additionally includes the mismatch in surface velocity, leading to faster convergence (since the velocity responds more quickly to changes in bed roughness than the geometry). We extend this approach by adopting a flowline-averaged rather than a purely local scheme to calculate the mismatch in terms of ice thickness and velocity. The rationale behind this is that changing the bed roughness at any location will affect the ice geometry and velocity not just at that location but also upstream and downstream. Reducing the basal roughness at one location will increase the ice velocity along the entire flowline, causing the ice both locally and upstream to become thinner. By including these effects in the inversion procedure, numerical stability is improved, and artefacts arising from differences in the flotation mask between the modelled and the target state are reduced. The bed roughness produced by the inversion is not affected by these changes, as the inclusion of a regularisation term usually ensures that the bed roughness converges to the same solution. The approach outlined here mainly improves the numerical stability and robustness of the inversion under changing ice sheet/ice shelf/ocean masks.
This is shown in Appendix A, where we compare the convergence behaviour of our new inversion procedure to a method currently used in CISM, which also uses both the geometry and velocity mismatch but without the flowline-averaging approach.

Let $\boldsymbol{p} = [x, y]$ be a point on the ice sheet. We divide the flowline passing through $\boldsymbol{p}$ into an upstream part $\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s)$ and a downstream part $\boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, s)$, which can be found by integrating the ice surface velocity field $\hat{\boldsymbol{u}} = \boldsymbol{u}/|\boldsymbol{u}|$:

$$\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s + \mathrm{d}s) = \boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s) - \hat{\boldsymbol{u}}\left(\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s)\right)\mathrm{d}s, \tag{2a}$$
$$\boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, s + \mathrm{d}s) = \boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, s) + \hat{\boldsymbol{u}}\left(\boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, s)\right)\mathrm{d}s, \tag{2b}$$
$$\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, 0) = \boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, 0) = \boldsymbol{p}. \tag{2c}$$

Here, s is the distance along the flowline. In the upstream (downstream) direction, the integral is terminated at $s_\mathrm{u}$ ($s_\mathrm{d}$) at the ice divide (ice margin), i.e. when $\boldsymbol{u} = \mathbf{0}$ ($H = 0$), so that

$$\boldsymbol{u}\left(\boldsymbol{L}_\mathrm{u}\left(\boldsymbol{p}, s_\mathrm{u}(\boldsymbol{p})\right)\right) = \mathbf{0}, \tag{3a}$$
$$H\left(\boldsymbol{L}_\mathrm{d}\left(\boldsymbol{p}, s_\mathrm{d}(\boldsymbol{p})\right)\right) = 0. \tag{3b}$$

In order to calculate the rate of change $\mathrm{d}\phi/\mathrm{d}t$ of the till friction angle φ, the velocity mismatch (defined as the difference between the modelled absolute surface velocity $|\boldsymbol{u}_\mathrm{m}|$ and the target absolute surface velocity $|\boldsymbol{u}_\mathrm{t}|$) is averaged over both the upstream (Eq. 4a) and downstream (Eq.
4b) part of the flowline, whereas the ice thickness mismatch is evaluated only in the upstream direction (Eq. 4c; preliminary experiments showed that including a downstream ice thickness term was detrimental to the stability of the inversion):

$$I_1(\boldsymbol{p}) = \int_{s=0}^{s_\mathrm{u}(\boldsymbol{p})} \left(\frac{|\boldsymbol{u}_\mathrm{m}\left(\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s)\right)| - |\boldsymbol{u}_\mathrm{t}\left(\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s)\right)|}{u_0}\right) w_\mathrm{u}\left(s, s_\mathrm{u}(\boldsymbol{p})\right)\mathrm{d}s, \tag{4a}$$
$$I_2(\boldsymbol{p}) = \int_{s=0}^{s_\mathrm{d}(\boldsymbol{p})} \left(\frac{|\boldsymbol{u}_\mathrm{m}\left(\boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, s)\right)| - |\boldsymbol{u}_\mathrm{t}\left(\boldsymbol{L}_\mathrm{d}(\boldsymbol{p}, s)\right)|}{u_0}\right) w_\mathrm{d}\left(s, s_\mathrm{d}(\boldsymbol{p})\right)\mathrm{d}s, \tag{4b}$$
$$I_3(\boldsymbol{p}) = \int_{s=0}^{s_\mathrm{u}(\boldsymbol{p})} \left(\frac{H_\mathrm{m}\left(\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s)\right) - H_\mathrm{t}\left(\boldsymbol{L}_\mathrm{u}(\boldsymbol{p}, s)\right)}{H_0}\right) w_\mathrm{u}\left(s, s_\mathrm{u}(\boldsymbol{p})\right)\mathrm{d}s. \tag{4c}$$

Here, I_1 represents the distance-weighted average of the velocity anomaly over the half-flowline upstream of p, I_2 represents the distance-weighted average of the velocity anomaly over the half-flowline downstream of p, and I_3 represents the distance-weighted average of the geometry anomaly over the half-flowline upstream of p. The default values for the scaling parameters are u_0 = 250 m yr^−1 and H_0 = 100 m.
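A minimal sketch of the flowline tracing in Eqs. (2a–c) and (3a–b), assuming a regular grid, nearest-neighbour sampling, and simple Euler steps; the grid layout and all names here are illustrative and not the IMAU-ICE implementation:

```python
import math

def trace_flowline(p, u, v, H, dx, ds, direction=+1, max_steps=10000):
    """Trace the flowline through p = (x, y) by Euler integration of Eq. (2):
    direction=-1 integrates upstream (against the unit velocity field),
    direction=+1 downstream (along it). u, v, H are 2-D grids with spacing dx.
    Tracing stops when the speed vanishes (ice divide, Eq. 3a), the thickness
    vanishes (ice margin, Eq. 3b), or the path leaves the grid."""
    x, y = p
    path = [(x, y)]
    ny, nx = len(H), len(H[0])
    for _ in range(max_steps):
        i, j = int(round(y / dx)), int(round(x / dx))
        if not (0 <= i < ny and 0 <= j < nx):
            break
        speed = math.hypot(u[i][j], v[i][j])
        if speed == 0.0 or H[i][j] <= 0.0:
            break
        x += direction * ds * u[i][j] / speed
        y += direction * ds * v[i][j] / speed
        path.append((x, y))
    return path
```

For a uniform eastward velocity field, the downstream trace marches in +x until it leaves the grid, and the upstream trace in −x, as expected from Eqs. (2a) and (2b).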
The linear scaling functions w_u and w_d serve to assign more weight to anomalies close to p, decreasing to zero at the ends of the flowline, as well as to normalise the integrals:

$$w_\mathrm{u}\left(s, s_\mathrm{u}(\boldsymbol{p})\right) = \frac{2}{s_\mathrm{u}(\boldsymbol{p})}\left(1 - \frac{s}{s_\mathrm{u}(\boldsymbol{p})}\right), \tag{5a}$$
$$w_\mathrm{d}\left(s, s_\mathrm{d}(\boldsymbol{p})\right) = \frac{2}{s_\mathrm{d}(\boldsymbol{p})}\left(1 - \frac{s}{s_\mathrm{d}(\boldsymbol{p})}\right). \tag{5b}$$

The scaling functions are constructed such that $\int_{s=0}^{s_\mathrm{u}(\boldsymbol{p})} w_\mathrm{u}\,\mathrm{d}s = \int_{s=0}^{s_\mathrm{d}(\boldsymbol{p})} w_\mathrm{d}\,\mathrm{d}s = 1$. It is possible that integrating over a finite distance from p, rather than over the entire flowline, might improve the rate of convergence; we did not perform any preliminary experiments to test this. The three line integrals from Eq. (4a–c) are then added together and scaled with the local ice thickness H(p) and velocity $|\boldsymbol{u}(\boldsymbol{p})|$. This reflects the fact that bed roughness underneath slow-moving and/or thin ice has less effect on the large-scale ice-sheet geometry than the roughness underneath fast-flowing and/or thick ice:

$$I_\mathrm{tot}(\boldsymbol{p}) = \left(I_1(\boldsymbol{p}) + I_2(\boldsymbol{p}) + I_3(\boldsymbol{p})\right) R(\boldsymbol{p}), \tag{6}$$
$$R(\boldsymbol{p}) = \frac{|\boldsymbol{u}(\boldsymbol{p})|\, H(\boldsymbol{p})}{u_\mathrm{s} H_\mathrm{s}}, \qquad 0 \le R(\boldsymbol{p}) \le 1. \tag{7}$$

By default, the scaling parameters are u_s = 3000 m yr^−1 and H_s = 300 m.
These values are based on preliminary experiments to attain fast convergence without creating numerical artefacts. Finally, the rate of change $\mathrm{d}\phi/\mathrm{d}t$ of the till friction angle φ can be calculated:

$$\frac{\mathrm{d}\phi(\boldsymbol{p})}{\mathrm{d}t} = -\frac{\phi(\boldsymbol{p})\, I_\mathrm{tot}(\boldsymbol{p})}{t_\mathrm{s}}. \tag{8}$$

The default value for the timescale is t_s = 10 years, again based on preliminary experiments to balance the convergence rate against the numerical stability of the procedure. While the flowline integrals in Eq. (4a–c) are calculated over the entire flowline (including floating ice), $\mathrm{d}\phi/\mathrm{d}t$ is calculated only for grounded ice; it is then extrapolated to fill the entire model domain using a simple Gaussian kernel. This approach helps to prevent artefacts in grid cells that switch over time between grounded and floating, or ice-covered and ice-free states, which typically present as individual or clustered grid cells where the iterative roughness adjustment overshoots, quickly diverging to extreme values. The routine performing these calculations is run asynchronously from the other components of the ice-sheet model, with a time step of Δt_φ = 5 years. The till friction angle is updated every time this routine is called:

$$\phi_{n+1} = F_2\left(\phi_n + \Delta t_\phi\, F_1\left(\frac{\mathrm{d}\phi}{\mathrm{d}t}\right)\right). \tag{9}$$

Here, F_1 and F_2 are Gaussian smoothing filters, with their respective radii defined relative to the grid resolution: $\sigma_1 = \Delta x/1.5$ and $\sigma_2 = \Delta x/4$. These filters serve as a regularisation of the bed roughness, to prevent overfitting.
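One asynchronous update step of Eqs. (8) and (9) can be sketched in 1-D as follows; the truncated-Gaussian convolution standing in for the filters F1 and F2, the edge clamping, and the 1-D arrays are illustrative assumptions rather than the model's actual 2-D implementation:

```python
import math

def gaussian_smooth(f, sigma, dx):
    """Truncated-Gaussian convolution with edge clamping, standing in for
    the smoothing filters F1 and F2 of Eq. (9)."""
    r = max(1, int(3 * sigma / dx))
    w = [math.exp(-0.5 * ((k * dx) / sigma) ** 2) for k in range(-r, r + 1)]
    wsum = sum(w)
    n = len(f)
    return [sum(w[k + r] * f[min(max(i + k, 0), n - 1)]
                for k in range(-r, r + 1)) / wsum
            for i in range(n)]

def update_till_friction_angle(phi, I_tot, dt_phi=5.0, t_s=10.0, dx=1.0):
    """One inversion step: Eq. (8) gives dphi/dt = -phi * I_tot / t_s, and
    Eq. (9) applies the smoothed update with sigma_1 = dx/1.5 (filter F1 on
    the rate) and sigma_2 = dx/4 (filter F2 on the updated field)."""
    dphi_dt = [-p * i / t_s for p, i in zip(phi, I_tot)]
    dphi_dt = gaussian_smooth(dphi_dt, dx / 1.5, dx)   # F1
    phi_new = [p + dt_phi * d for p, d in zip(phi, dphi_dt)]
    return gaussian_smooth(phi_new, dx / 4.0, dx)      # F2
```

With I_tot = 0 everywhere (modelled and target states match), φ is left unchanged; a positive I_tot drives φ downward at the rate set by t_s, with both filters suppressing grid-scale noise.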
Pattyn (2017) uses a similar regularisation approach, with a Savitzky–Golay filter instead of a Gaussian filter. Pollard and DeConto (2012) do not report any regularisation term in their inversion, while in CISM, the inclusion of a $\mathrm{d}H/\mathrm{d}t$ term likely results in some smoothing. The radii of the two Gaussian filters, which were determined during preliminary experiments, are the lowest values we found that effectively suppress small-wavelength terms in the inverted bed roughness, which are most likely a result of overfitting (Habermann et al., 2012). Increasing the radii of the filters does not significantly affect the inverted roughness until they are increased to several grid cells. Roughness variations of a small spatial scale could therefore potentially be obscured by the smoothing in our approach. However, such small variations would quickly approach the ice-dynamical limit of roughness variations that can be resolved by inverting from surface observations (about 50 ice thicknesses; Gudmundsson and Raymond, 2008), so this would likely not pose a serious problem in practical applications. The degree of overfitting in our approach is explored in more detail in Appendix A, where we demonstrate that it does not pose a significant problem. Our inversion method does not include weighting of the velocity–elevation mismatch based on uncertainty estimates in the observations. However, including these weights in the method would not be difficult and is worth considering when applying this method to the Greenland and/or Antarctic ice sheets. It might be possible to improve upon the inversion procedure presented here, achieving faster or more robust convergence or better computational performance. For example, our flowline-averaged approach might be difficult to implement in parallel models with a distributed-memory architecture (i.e. where a processor might not have access to all the data on a flowline), which is not the case in IMAU-ICE.
However, the aim of this paper is not to find the most efficient way to perform a basal inversion but rather to investigate the uncertainties that remain in the result of that inversion even when the procedure itself works perfectly.

2.3 Perfect-model approach

In order to quantify the compensating errors from one particular model component, we use what we call a perfect-model approach. We first use the ice-sheet model to calculate the steady-state ice-sheet geometry for a known bed roughness field in a simulation we call the “target run”. The known bed roughness will be called the target roughness and the resulting ice sheet the target geometry. If we then apply the inversion routine, with all model parameters set to the same values as were used to create the target geometry, then theoretically the resulting inverted bed roughness (which we call the unperturbed roughness) should be exactly the same as the target roughness. The difference between the unperturbed roughness and the target roughness is the model error of the inversion routine. If the inversion procedure works adequately, this error should be small. We then perform a “perturbed” inversion, where we change one or more of the model parameters/components (e.g. viscosity, surface mass balance (SMB), subglacial topography) with respect to the target run. As long as the change is small enough that its effect on the steady-state geometry can be compensated for by a change in bed roughness, the inversion will produce an ice sheet that still matches the target geometry and velocity but with a different bed roughness, which we call the perturbed roughness. The difference between the perturbed and unperturbed roughness is the compensating error in the bed roughness caused by the error in the model parameter that was changed in the perturbed run. This procedure is illustrated schematically in Fig. 1.
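In code form, the perfect-model bookkeeping reduces to two subtractions; the roughness profiles below are entirely hypothetical numbers, used only to show which differences define the model error and the compensating error:

```python
# Hypothetical 1-D till-friction-angle profiles [deg], standing in for the
# 2-D roughness fields of the paper.
phi_target      = [20.0, 15.0, 10.0, 15.0, 20.0]  # prescribed in the target run
phi_unperturbed = [19.8, 15.1, 10.2, 14.9, 20.1]  # inverted, no perturbation
phi_perturbed   = [17.0, 12.5,  8.0, 12.4, 17.2]  # inverted, e.g. perturbed viscosity

# Model error of the inversion routine itself: unperturbed minus target.
model_error = [a - b for a, b in zip(phi_unperturbed, phi_target)]

# Compensating error caused by the perturbed component: perturbed minus
# unperturbed.
compensating_error = [a - b for a, b in zip(phi_perturbed, phi_unperturbed)]
```

In a well-behaved inversion the model error is small, while the compensating error can be as large as the spatial variations in the target roughness itself.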
3 Idealised-geometry ice sheets

3.1 Experiment I: radially symmetrical ice sheet

The first of our two idealised-geometry ice sheets is based on the EISMINT-1 “moving margin” experiment (Huybrechts et al., 1996). It describes an ice sheet on an infinite, non-deformable flat bed, with a radially symmetrical surface mass balance which is independent of the ice-sheet geometry:

$$M(r) = \min\left(M_\mathrm{max}, S\,(E - r)\right). \tag{10}$$

The values of the parameters are listed in Table 1; the radial distance r from the grid centre is expressed in metres. The ice viscosity is described by a uniform value of Glen's flow law factor A (i.e. no thermomechanical coupling). Lastly, we introduce a non-uniform till friction angle:

$$\phi(x, y) = \phi_\mathrm{max} - \left(\phi_\mathrm{max} - \phi_\mathrm{min}\right) e^{-\frac{1}{2}\left[\left(\frac{x - x_\mathrm{c}}{\sigma_x}\right)^2 + \left(\frac{y - y_\mathrm{c}}{\sigma_y}\right)^2\right]}. \tag{11}$$

The values of the parameters are listed in Table 1. The equation thus describes a strip of reduced bed roughness running along the negative y axis of the domain, which results in the formation of an ice stream with higher ice velocities and a protruding ice lobe, as illustrated in Fig. 2. The ice sheet is initialised to a steady state by integrating the model through time for 50000 years.

3.2 Experiment II: laterally symmetrical ice stream with shelf

The second idealised-geometry ice sheet is based on the MISMIP+ geometry (Asay-Davis et al., 2016). This describes a laterally symmetric glacial valley, about 800km long and 80km wide, with a slightly overdeepened bed, followed by a sill, before dropping sharply into a deep ocean.
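The two prescribed fields above (Eqs. 10 and 11, the second of which is reused in experiment II) can be sketched as follows; since Table 1 is not reproduced here, the parameter values are illustrative EISMINT-like numbers, not the paper's:

```python
import math

def eismint_smb(r, M_max, S, E):
    """Surface mass balance of Eq. (10): linear in the radial distance r [m],
    capped at M_max, turning negative (ablation) beyond r = E."""
    return min(M_max, S * (E - r))

def till_friction_angle(x, y, phi_min, phi_max, x_c, y_c, sx, sy):
    """Gaussian dip in bed roughness, Eq. (11): phi_min at the centre
    (x_c, y_c), approaching phi_max far away."""
    g = math.exp(-0.5 * (((x - x_c) / sx) ** 2 + ((y - y_c) / sy) ** 2))
    return phi_max - (phi_max - phi_min) * g

# Illustrative EISMINT-like values (assumed, not from Table 1):
M_max, S, E = 0.5, 1.0e-5, 4.5e5
print(eismint_smb(0.0, M_max, S, E))    # -> 0.5 (capped at M_max at the centre)
print(eismint_smb(6.0e5, M_max, S, E))  # negative: ablation beyond r = E
```

An elongated strip of reduced roughness, as described in the text, corresponds to choosing σ_y much larger than σ_x (or centring the dip along the negative y axis).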
A uniform accumulation rate of 0.3myr^−1 leads to the formation of a fast-flowing ice stream feeding into a small embayed shelf. The grounding line rests on a retrograde slope, kept in place by buttressing forces. As in experiment I, we introduce a non-uniform bed roughness, which is again described by Eq. (11); the parameters for this experiment are listed in Table 2. Following the MISMIP+ protocol set out by Asay-Davis et al. (2016), the uniform value for Glen's flow law factor A = 1.13928×10^−17 Pa^−3 yr^−1 is tuned to achieve a steady-state geometry with a mid-stream grounding-line position at x=450km, in the middle of the retrograde-sloping part of the bed. The resulting ice-sheet geometry is illustrated in Fig. 3. The ice sheet is initialised to a steady state by integrating the model through time for 50000 years.

4.1 Unperturbed inversions

In order to verify that the inversion procedure is working properly, we first apply it to both idealised-geometry experiments with all model parameters unchanged. For experiment I, we perform these unperturbed inversions at resolutions of 40, 20, and 10km; for experiment II we use values of 5 and 2km. The 50000-year steady-state initialisation is performed separately at all resolutions. The till friction angle is initialised with a uniform value of φ = 5°, and the model is run forward in time for 100000 years. With this choice of initial value, the bed roughness typically converges to a stable solution within ∼30000 years (as demonstrated by the additional experiments in Appendix A). The resulting inverted bed roughness fields for both sets of simulations are shown in Figs. 4 and 5, respectively.
The errors in the inverted bed roughness, and the resulting ice-sheet geometry and velocity, are very small at all resolutions and in both experiments (typically <5% for the bed roughness, <5m for the surface elevation, and <5% for the surface velocity), indicating that the inversion procedure works well in the simple geometries of these two experiments.

4.2 Perturbed inversions

To quantify the compensating errors in the inverted bed roughness, we perform a number of perturbed inversions, where we introduce errors in several model components. First, we increase (decrease) the uniform value for Glen's flow law factor A by a factor of 1.25. We assume that, in reality, this factor depends on the englacial temperature through an Arrhenius relation. The uncertainty in the annual mean surface temperature during the last glacial cycle is about 1K for Antarctica (Jouzel et al., 2007) and 4K for Greenland (Alley, 2000; Kindler et al., 2014). In realistic applications, a flow enhancement factor is often applied to account for anisotropic rheology and damage. Since estimated values of this factor differ significantly (Ma et al., 2010), an uncertainty of an order of magnitude is plausible, but we chose a smaller range to ensure that the inversion procedure was still able to reproduce the target geometry. Second, we increase (decrease) the SMB by a factor of 1.05. This seemingly small range is motivated by the fact that, for simplicity's sake, we alter the SMB over the entire model domain. Whereas estimates of local mass balance contain significant uncertainties, ice-sheet-integrated values are additionally constrained by satellite gravimetry, so that an uncertainty of 5% seems plausible (Fettweis et al., 2020). Next, we increase (decrease) the transition velocity u_0 in the Zoet–Iverson sliding law by a factor of 2, and we increase (decrease) the exponent p in the sliding law by 2.
Zoet and Iverson (2020) report a range of transition velocities between 50 and 200myr^−1, whereas in CISM a default value of 200myr^−1 is used. For the exponent, Zoet and Iverson (2020) report a value of 5, CISM uses a value of 3, and a value of 1 yields a linear sliding law, which is still used in some ice-sheet models. We also perform two perturbed inversions where we add an error to the bed topography of ±10% of the ice thickness, resulting in a bump (depression) of just over 250m beneath the ice divide. The ice thickness is adjusted accordingly to keep the surface elevation unchanged. While the surface elevation of the Greenland and Antarctic ice sheets is generally known very accurately, estimates of ice thickness and bedrock elevation are based on interpolation of local radar measurements. In the BedMachine Greenland v4 dataset (Morlighem et al., 2017), the reported uncertainty in the bedrock elevation exceeds 10% of the ice thickness over about 30% of the ice sheet. Our choice of increasing/decreasing the estimated ice thickness by 10% everywhere therefore serves as an upper bound, as it is unlikely that all of the data and extrapolations are biased in the same direction. These five parameters (viscosity, SMB, transition velocity, exponent, topography), each with a high and a low value, result in 10 perturbed inversion simulations. The resulting errors in the inverted bed roughness, steady-state ice geometry, and surface velocity for experiment I are shown in Fig. 6. The top-leftmost panel in Fig. 6 shows the error in the inverted bed roughness for the high-viscosity perturbed inversion. In this experiment, the overestimated ice viscosity means that the ice flow due to vertical shearing is underestimated, which is compensated for by decreasing the bed roughness, leading to increased basal sliding. The leftmost panels in the third and fifth rows of Fig.
6 show the errors in the resulting steady-state ice geometry and surface velocity, which are negligibly small. For these two quantities, the errors in the viscosity and the bed roughness are indeed compensating errors. This is true for almost all perturbed inversions, except for the low-viscosity and high-topography runs (high-topography means an added depression in the bedrock, such that the target ice thickness is overestimated). In these two experiments, the added perturbations cause the deformational ice flow to be overestimated so much that even preventing all basal sliding cannot entirely compensate for this perturbation. Note that this results from perturbing Glen's flow law factor A by a factor of 1.25, which is rather conservative. In realistic applications, the uncertainty in this quantity is typically an order of magnitude. The underestimated value of the Zoet–Iverson sliding law exponent p=1 (Fig. 6, fourth column, lower set of rows), which implies a linear sliding law, yields negligible errors in the geometry and velocity but results in the inverted bed roughness being overestimated by a factor of 3 on average. The overestimated value of p=5 yields negligible differences, as do both over- and underestimated values of the transition velocity u_0. In the remaining four perturbed viscosity/mass balance/topography simulations, the errors in the inverted geometry are acceptably small, compared to the errors reported for initialised models in realistic intercomparison projects (e.g. initMIP-Greenland; Goelzer et al., 2018). The errors in the inverted bed roughness, however, are as large as or larger than the “signal” of the prescribed bed roughness pattern (i.e. ∼5° of till friction angle change in the ice-stream area). These errors show prominent spatial patterns, despite the fact that the perturbations are spatially uniform.
This implies that one should be cautious when interpreting the spatial patterns yielded by a basal inversion procedure, as they could reflect errors in some other physical quantity rather than realistic variations in bed roughness. For experiment II, we perform the same set of perturbed inversions as for experiment I, introducing the same perturbations to the ice viscosity, the surface mass balance, the subglacial topography, and the sliding law parameters. We additionally perturb the sub-shelf melt rate, applying values of ±1myr^−1 (in the target run, no basal melt is applied). The results of the perturbed inversions are shown in Fig. 7. The results for the perturbed Zoet–Iverson sliding law transition velocity u_0 are omitted, since it has only a small effect. Similar to experiment I, the relatively small errors introduced in the ice viscosity, mass balance, and subglacial topography lead to large errors in the inverted bed roughness but still produce a steady-state ice geometry that is close to the target geometry. The only exceptions are, again, the low-viscosity and high-topography runs, as well as the low-BMB (basal mass balance) run (i.e. too much sub-shelf melt), where the ice flow is increased more than can be compensated for by increasing the basal friction. However, even here the errors in the inverted geometry are relatively small. The errors in the inverted velocities are mostly small, except for the inversions with the perturbed sub-shelf melt rates. While these inversions produce relatively accurate geometries (about 120m of ice loss near the grounding line in the increased-melt simulations), they contain large errors in the shelf velocities (about −500myr^−1 in the increased-melt simulation, relative to a target value of about 1000myr^−1). As in experiment I, the introduced perturbations (which are spatially uniform) lead to prominent spatial patterns in the inverted bed roughness, with the errors being as large as the actual (prescribed) signal.
This underlines the conclusion that spatial patterns in inverted bed roughness do not necessarily correspond to spatial patterns in the true bed roughness. Finally, we perform a perturbed inversion for experiment II where we chose a non-equilibrated target geometry. We achieve this by terminating the initialisation after 10000 years, instead of the default of 50000 years, so that the ice has only reached about 90% of its steady-state thickness. This non-steady-state geometry serves as the target for the inversion. Since the present-day observed geometry of the Antarctic ice sheet likely does not represent a steady state but already displays sustained and accelerating thinning rates (Rignot et al., 2019), this experiment mimics the effects of erroneously assuming that the ice sheet is in equilibrium (a common assumption in modelling studies; Seroussi et al., 2019). The results of this experiment are shown in Fig. 8. Here too, the inversion procedure results in very small errors in the ice geometry and relatively small errors in the velocity (note that the high velocity ratios occur in the slow-moving interior; in the fast-moving part of the ice stream, the errors are around 25%) but substantial errors in the bed roughness.

4.3 Dynamic ice-sheet response

To investigate the effect of compensating errors in basal inversions on the dynamic response of the ice sheet, we perform a series of simulations based on experiment II, where we increase the basal melt, forcing the ice sheet to retreat. We use the schematic basal melt parameterisation from the MISMIP+ Ice1r experiment (Asay-Davis et al., 2016) and run the model for 500 years. We initialise our simulations with the perturbed parameters, inverted bed roughness, and steady-state ice geometry from the perturbed inversions presented in Sect. 4.2.
For the “non-equilibrated” experiment, note that the ice sheet at the end of the inversion is in a steady state; it has achieved this by lowering the bed roughness far enough to match the target geometry, which was not in a steady state. The resulting ice volume above flotation (relative to the steady state at t=0) and the mid-stream grounding-line position over time for all experiments are shown in Fig. 9. In the 500-year unperturbed simulation, the grounding line retreats by about 150km, causing the ice volume above flotation to decrease by about 1.7×10^13m^3. As a result of the introduced errors in the perturbed simulations, this mass loss is increased (decreased) by up to 30% (35%) relative to the unperturbed simulation. The errors in the subglacial topography have the strongest effect, with the high-perturbed run showing nearly twice as much ice loss as the low-perturbed run. This is followed by the sliding law exponent (−18% to +3%) and the ice viscosity (−14% to +11%). The effects of the errors in the SMB, the BMB, the sliding law transition velocity, and the non-equilibrated target geometry are small.

We investigated the effects of compensating errors in basal inversions. We presented a novel geometry- and velocity-based inversion procedure, which produces good results in schematic experiments with a moving ice margin and grounding line and which produces robust convergence behaviour under an evolving ice geometry. We applied this method to two different idealised-geometry experiments, where we quantified the errors in the inverted bed roughness that arise from perturbations in other model parameters, such as the ice viscosity, mass balance, sliding law, and subglacial topography. We find that relatively small perturbations in these parameters, which are generally within the uncertainty ranges for the Greenland and Antarctic ice sheets, can lead to substantial compensating errors in the bed roughness.
In our idealised experiments, these errors were often larger than the actual spatial variations in bed roughness. This implies that one should be cautious in interpreting the outcome of a basal inversion as an accurate physical representation of bed roughness underneath an ice sheet. We find that the dynamic response of the ice to a retreat forcing is most sensitive to errors in the subglacial topography, followed by the ice viscosity and the sliding law. Errors in the surface and basal mass balance appear to only have a small effect on the retreat, although this effect might become more pronounced when local instead of ice-sheet-wide errors are taken into account. The aim of basal inversion procedures in many ice-sheet models is not to provide an accurate approximation of the actual bed roughness but rather to produce an ice sheet that matches the observed state in terms of geometry and/or velocity. The underlying assumption is that any compensating errors in the inverted bed roughness and other model components in terms of the ice geometry will also compensate for each other in terms of their effect on the ice sheet's dynamic response. We tested this assumption by using a basal inversion to initialise a number of different simulated ice sheets, all with slightly different model parameters (viscosity, mass balance, etc.). We find that, even though the inversion results in all models have nearly identical steady-state geometries, their dynamic response (represented here by the ice volume loss after a short period of forced ice-sheet retreat) can differ by as much as a factor of 2. The strongest effect arises from the uncertainty in the subglacial topography, followed by the sliding law exponent and the ice viscosity. Uncertainties in the surface and basal mass balance lead to considerable errors in the bed roughness but only have a small impact on the dynamic response, as does erroneously assuming that the target (i.e. 
observed) ice-sheet geometry represents a steady state. The geometry of the experiment used to produce these findings describes a marine setting typical of West Antarctica, where the rate of mass loss under a forced retreat is mainly governed by ice-dynamical processes such as viscous flow and basal sliding (Seroussi et al., 2020). In a land-based setting more typical of the Greenland ice sheet, where most mass is lost through atmospheric processes (Goelzer et al., 2020), the effects of these ice-dynamical uncertainties will likely be smaller. However, as long-term projections of sea-level rise under strong warming scenarios are dominated by marine-grounded ice loss in West Antarctica (Seroussi et al., 2020), such projections will likely contain substantial uncertainties as a result of the processes we described, possibly as large as 35% of the projected ice loss. We have investigated the effect of compensating errors when deriving basal conditions underneath an ice sheet using inversion techniques. We find that errors in the modelled estimates of other physical quantities, such as the viscosity or subglacial topography of the ice, can substantially affect the estimated basal conditions. Our results imply that, even when basal inversion is used to achieve a stable ice sheet with the desired geometry, uncertainties in other model parameters can have a substantial effect on that ice sheet's dynamic response. Improving our knowledge of the ice-sheet interior (temperature, rheology, viscosity) and substrate (geometry, roughness) therefore should remain an important goal of the glaciological community. In order to illustrate the convergence of our flowline-based inversion procedure, we performed additional simulations of the unperturbed versions of experiments I and II, where the inversion was allowed to run for 200000 years. For comparison, we also ran the same simulations with the CISM-based inversion procedure. 
In this procedure, the rate of change $\mathrm{d}\phi/\mathrm{d}t$ of the bed roughness φ is calculated based only on the local mismatch in the ice thickness H and the surface velocity u:

$$\frac{\mathrm{d}\phi}{\mathrm{d}t} = \frac{-\phi}{\tau_{\mathrm{c}}}\left(\frac{H_{\mathrm{m}}-H_{\mathrm{t}}}{H_{0}}-\frac{|u|_{\mathrm{m}}-|u|_{\mathrm{t}}}{u_{0}}\right). \quad \text{(A1)}$$

The values of the scaling parameters are $H_{0}=100$ m and $u_{0}=10$ m yr$^{-1}$. The timescale of adjustment $\tau_{\mathrm{c}}$ is 10000 years in experiment I and 40000 years in experiment II. These values were determined experimentally as the lowest values (i.e. fastest convergence) that did not result in numerical instability. The results of experiment I are shown in Fig. A1. Panel (a) shows the time evolution of the root mean square (rms) of the relative surface elevation mismatch $(H_{\mathrm{m}}-H_{\mathrm{t}})/H_{\mathrm{t}}$, the relative surface velocity mismatch $(|u|_{\mathrm{m}}-|u|_{\mathrm{t}})/|u|_{\mathrm{t}}$, and the relative bed roughness mismatch $(\phi_{\mathrm{m}}-\phi_{\mathrm{t}})/\phi_{\mathrm{t}}$. These quantities converge to a stable solution that is typically within a few percent of the target, with the flowline-averaged approach presented in this study achieving smaller errors than the local-mismatch approach from CISM. The fact that there is no overfitting can be seen in panel (b), which shows the root mean square of the rate of change $\mathrm{d}\phi/\mathrm{d}t$ of the bed roughness φ, which decays exponentially. Without proper regularisation, small-wavelength terms in the bed roughness solution can continue to increase in amplitude as the model is run forward; the effect of these terms on the velocity solution displays diminishing returns, so that bigger and bigger changes to the solution are needed to reduce the velocity–geometry misfit.
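The local-mismatch nudging rule of Eq. (A1) can be sketched numerically as a single explicit Euler step. This is a minimal illustration only, not the actual CISM or IMAU-ICE code; the field values, time step, and array shapes below are invented for the example.

```python
import numpy as np

def nudge_bed_roughness(phi, H_mod, H_tgt, u_mod, u_tgt, dt,
                        tau_c=10_000.0, H0=100.0, u0=10.0):
    """One explicit Euler step of the local-mismatch nudging rule (Eq. A1).

    phi          : bed roughness field (positive)
    H_mod, H_tgt : modelled and target ice thickness [m]
    u_mod, u_tgt : modelled and target surface speed [m/yr]
    dt           : time step [yr]; tau_c, H0, u0 as in the text.
    """
    dphi_dt = (-phi / tau_c) * ((H_mod - H_tgt) / H0
                                - (np.abs(u_mod) - np.abs(u_tgt)) / u0)
    return phi + dt * dphi_dt

# Where the modelled ice is too thick and too slow, both mismatch terms
# push the roughness down, allowing faster sliding and thinning:
phi = np.array([1.0])
phi_new = nudge_bed_roughness(phi,
                              H_mod=np.array([1100.0]), H_tgt=np.array([1000.0]),
                              u_mod=np.array([50.0]), u_tgt=np.array([100.0]),
                              dt=10.0)
print(phi_new)  # [0.994] — roughness reduced
```

In the actual procedure this update runs continuously alongside the evolving ice-dynamics model until the mismatch terms, and with them dφ/dt, decay.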
Such overfitting shows up in the convergence plot as a bed roughness rate of change that soon starts to increase exponentially. The Gaussian-filter-based regularisation term in our approach prevents this type of overfitting from occurring. Figure A2 shows the same quantities for experiment II. The sudden jump in the CISM-method results around 95000 years is due to an advance of the grounding line by a single grid cell. We believe the wave-like features seen in the curve for the CISM-based approach in panel (b) arise from an under-damped, slow oscillation between the bed roughness and the ice geometry. In the upstream part of the ice stream, where velocities are very low, the ice thickness responds very slowly to a change in bed roughness. Since the initial guess for the roughness there is too high, the ice starts to slowly accumulate; the inversion responds by decreasing the roughness, but since the ice thickness changes very slowly, the roughness is reduced too much, causing the ice to eventually become too thin, and so on. With the current choice of timescale of 40000 years, these oscillations eventually dissipate. Including a $\mathrm{d}H/\mathrm{d}t$ term in the inversion removes this problem; the velocity term in our own approach has a similar effect, since velocities respond instantaneously to a change in bed roughness. The curve for our own inversion approach in Fig. A2b displays noise-like features. We believe these are caused by an interaction between the velocity term in the inversion, the iterative solvers used in the stress balance solver (both for the linearised problem, i.e. with fixed effective viscosity, and for the non-linear viscosity iteration; see Berends et al., 2022), and the dynamic time step used for the ice thickness equation. The combination of these iterative solvers with a dynamic time step causes (very) small errors to continuously appear in the velocity solution, only to be repressed by the subsequently reduced model time step.
For the fast-flowing ice of this particular geometry, these velocity errors start to affect the bed roughness inversion before they are repressed by the dynamic time step, which causes the “noise” that is visible in the curve of our approach in Fig. A2b. Using smaller tolerances in the stop criteria for the two iterative solvers in the stress balance solver reduces this problem, at the expense of increasing the model's computational cost. Since Fig. A2a shows that the resulting errors in the roughness solution do not accumulate, we deem this to be acceptable.

Code and data availability

The source code of IMAU-ICE, scripts for compiling and running the model on a variety of computer systems, and the configuration files for all simulations presented here are freely available on GitHub (https://github.com/IMAU-paleo/IMAU-ICE, last access: 4 April 2023) and Zenodo (https://doi.org/10.5281/zenodo.7797957; Berends et al., 2023).

CJB performed the experiments and analysed the data. CJB wrote the draft of the manuscript. All authors contributed to the final version. The contact author has declared that none of the authors has any competing interests.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

We would like to thank Jorge Bernales and Willem Jan van den Berg for providing helpful comments during the execution of this project, as well as two anonymous reviewers for their helpful comments on the manuscript. We would like to acknowledge SurfSARA Computing and Networking Services for their support. Constantijn J. Berends was supported by PROTECT. This publication was supported by PROTECT. This project has received funding from the European Union's Horizon 2020 research and innovation programme (grant no. 869304, PROTECT contribution number 62). Tim van den Akker was supported by the Netherlands Polar Program.
The use of supercomputer facilities was sponsored by NWO Exact and Natural Sciences. Model runs were performed on the Dutch National Supercomputer Snellius. William H. Lipscomb was supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under cooperative agreement no. 1852977. This paper was edited by Elisa Mantelli and reviewed by two anonymous referees.

Albrecht, T., Winkelmann, R., and Levermann, A.: Glacial-cycle simulations of the Antarctic Ice Sheet with the Parallel Ice Sheet Model (PISM) – Part 1: Boundary conditions and climatic forcing, The Cryosphere, 14, 599–632, https://doi.org/10.5194/tc-14-599-2020, 2020. Alley, R. B.: The Younger Dryas cold interval as viewed from central Greenland, Quaternary Sci. Rev., 19, 213–226, 2000. Arthern, R. J. and Gudmundsson, G. H.: Initialization of ice-sheet forecasts viewed as an inverse Robin problem, J. Glaciol., 56, 527–533, 2010. Arthern, R. J., Hindmarsh, R. C. A., and Williams, C. R.: Flow speed within the Antarctic ice sheet and its controls inferred from satellite observations, J. Geophys. Res.-Earth, 120, 1171–1188, 2015. Asay-Davis, X. S., Cornford, S. L., Durand, G., Galton-Fenzi, B. K., Gladstone, R. M., Gudmundsson, G. H., Hattermann, T., Holland, D. M., Holland, D., Holland, P. R., Martin, D. F., Mathiot, P., Pattyn, F., and Seroussi, H.: Experimental design for three interrelated marine ice sheet and ocean model intercomparison projects: MISMIP v. 3 (MISMIP+), ISOMIP v. 2 (ISOMIP+) and MISOMIP v. 1 (MISOMIP1), Geosci. Model Dev., 9, 2471–2497, https://doi.org/10.5194/gmd-9-2471-2016, 2016. Babaniyi, O., Nicholson, R., Villa, U., and Petra, N.: Inferring the basal sliding coefficient field for the Stokes ice sheet model under rheological uncertainty, The Cryosphere, 15, 1731–1750, https://doi.org/10.5194/tc-15-1731-2021, 2021. Berends, C. J., Goelzer, H., Reerink, T. J., Stap, L. B., and van de Wal, R. S.
W.: Benchmarking the vertically integrated ice-sheet model IMAU-ICE (version 2.0), Geosci. Model Dev., 15, 5667–5688, https://doi.org/10.5194/gmd-15-5667-2022, 2022. Berends, C. J., van de Wal, R. S. W., van den Akker, T., and Lipscomb, W. H.: IMAU-ICE v2.0 version used for Berends et al. 2023 basal inversion experiments, Zenodo [data set], https://doi.org/10.5281/zenodo.7797957, 2023. Feldmann, J., Albrecht, T., Khroulev, C., Pattyn, F., and Levermann, A.: Resolution-dependent performance of grounding line motion in a shallow model compared with a full-Stokes model according to the MISMIP3d intercomparison, J. Glaciol., 60, 353–360, 2014. Fettweis, X., Hofer, S., Krebs-Kanzow, U., Amory, C., Aoki, T., Berends, C. J., Born, A., Box, J. E., Delhasse, A., Fujita, K., Gierz, P., Goelzer, H., Hanna, E., Hashimoto, A., Huybrechts, P., Kapsch, M.-L., King, M. D., Kittel, C., Lang, C., Langen, P. L., Lenaerts, J. T. M., Liston, G. E., Lohmann, G., Mernild, S. H., Mikolajewicz, U., Modali, K., Mottram, R. H., Niwano, M., Noël, B., Ryan, J. C., Smith, A., Streffing, J., Tedesco, M., van de Berg, W. J., van den Broeke, M., van de Wal, R. S. W., van Kampenhout, L., Wilton, D., Wouters, B., Ziemen, F., and Zolles, T.: GrSMBMIP: intercomparison of the modelled 1980–2012 surface mass balance over the Greenland Ice Sheet, The Cryosphere, 14, 3935–3958, https://doi.org/10.5194/tc-14-3935-2020, 2020. Fox-Kemper, B., Hewitt, H. T., Xiao, C., Aðalgeirsdóttir, G., Drijfhout, S. S., Edwards, T. L., Golledge, N. R., Hemer, M., Kopp, R. E., Krinner, G., Mix, A., Notz, D., Nowicki, S., Nurhati, I. S., Ruiz, L., Sallée, J.-B., Slangen, A. B. A., and Yu, Y.: Ocean, Cryosphere and Sea Level Change, in: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Masson-Delmotte, V., Zhai, P., Pirani, A., Connors, S.
L., Péan, C., Berger, S., Caud, N., Chen, Y., Goldfarb, L., Gomis, M. I., Huang, M., Leitzell, K., Lonnoy, E., Matthews, J. B. R., Maycock, T. K., Waterfield, T., Yelekçi, O., Yu, R., and Zhou, B., Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1211–1362, https://doi.org/10.1017/9781009157896.011, 2021. Gagliardini, O., Zwinger, T., Gillet-Chaulet, F., Durand, G., Favier, L., de Fleurian, B., Greve, R., Malinen, M., Martín, C., Råback, P., Ruokolainen, J., Sacchettini, M., Schäfer, M., Seddik, H., and Thies, J.: Capabilities and performance of Elmer/Ice, a new-generation ice sheet model, Geosci. Model Dev., 6, 1299–1318, https://doi.org/10.5194/gmd-6-1299-2013, 2013. Goelzer, H., Nowicki, S., Edwards, T., Beckley, M., Abe-Ouchi, A., Aschwanden, A., Calov, R., Gagliardini, O., Gillet-Chaulet, F., Golledge, N. R., Gregory, J., Greve, R., Humbert, A., Huybrechts, P., Kennedy, J. H., Larour, E., Lipscomb, W. H., Le clec'h, S., Lee, V., Morlighem, M., Pattyn, F., Payne, A. J., Rodehacke, C., Rückamp, M., Saito, F., Schlegel, N., Seroussi, H., Shepherd, A., Sun, S., van de Wal, R., and Ziemen, F. A.: Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison, The Cryosphere, 12, 1433–1460, https://doi.org/10.5194/tc-12-1433-2018, 2018. Goelzer, H., Nowicki, S., Payne, A., Larour, E., Seroussi, H., Lipscomb, W. H., Gregory, J., Abe-Ouchi, A., Shepherd, A., Simon, E., Agosta, C., Alexander, P., Aschwanden, A., Barthel, A., Calov, R., Chambers, C., Choi, Y., Cuzzone, J., Dumas, C., Edwards, T., Felikson, D., Fettweis, X., Golledge, N. R., Greve, R., Humbert, A., Huybrechts, P., Le clec'h, S., Lee, V., Leguy, G., Little, C., Lowry, D. P., Morlighem, M., Nias, I., Quiquet, A., Rückamp, M., Schlegel, N.-J., Slater, D. A., Smith, R.
S., Straneo, F., Tarasov, L., van de Wal, R., and van den Broeke, M.: The future sea-level contribution of the Greenland ice sheet: a multi-model ensemble study of ISMIP6, The Cryosphere, 14, 3071–3096, https://doi.org/10.5194/tc-14-3071-2020, 2020. Goldberg, D. N.: A variationally derived, depth-integrated approximation to a higher-order glaciological flow model, J. Glaciol., 57, 157–170, 2011. Gudmundsson, G. H. and Raymond, M.: On the limit to resolution and information on basal properties obtainable from surface data on ice streams, The Cryosphere, 2, 167–178, https://doi.org/10.5194/tc-2-167-2008, 2008. Habermann, M., Maxwell, D., and Truffer, M.: Reconstruction of basal properties in ice sheets using iterative inverse methods, J. Glaciol., 58, 795–807, 2012. Huybrechts, P., Payne, T., Abe-Ouchi, A., Calov, R., Fabre, A., Fastook, J. L., Greve, R., Hindmarsh, R. C. A., Hoydal, O., Johannesson, T., MacAyeal, D. R., Marsiat, I., Ritz, C., Verbitsky, M. Y., Waddington, E. D., and Warner, R.: The EISMINT benchmarks for testing ice-sheet models, Ann. Glaciol., 23, 1–12, 1996. Iverson, N. R., Hoover, T. S., and Baker, R. W.: Ring-shear studies of till deformation: Coulomb-plastic behavior and distributed strain in glacier beds, J. Glaciol., 44, 634–642, 1998. Jouzel, J., Masson-Delmotte, V., Cattani, O., Dreyfus, G., Falourd, S., Hoffmann, G., Minster, B., Nouet, J., Barnola, J. M., Chappellaz, J., Fischer, H., Gallet, J. C., Johnsen, S., Leuenberger, M., Loulergue, L., Luethi, D., Oerter, H., Parrenin, F., Raisbeck, G., Raynaud, D., Schilt, A., Schwander, J., Selmo, E., Souchez, R., Spahni, R., Stauffer, B., Steffensen, J. P., Stenni, B., Stocker, T. F., Tison, J. L., Werner, M., and Wolff, E. W.: Orbital and Millennial Antarctic Climate Variability over the Past 800000 Years, Science, 317, 793–797, 2007.
Kindler, P., Guillevic, M., Baumgartner, M., Schwander, J., Landais, A., and Leuenberger, M.: Temperature reconstruction from 10 to 120kyr b2k from the NGRIP ice core, Clim. Past, 10, 887–902, https://doi.org/10.5194/cp-10-887-2014, 2014. Larour, E., Seroussi, H., Morlighem, M., and Rignot, E.: Continental scale, high order, high spatial resolution, ice sheet modeling using the Ice Sheet System Model (ISSM), J. Geophys. Res., 117, F01022, https://doi.org/10.1029/2011JF002140, 2012. Leguy, G. R., Lipscomb, W. H., and Asay-Davis, X. S.: Marine ice sheet experiments with the Community Ice Sheet Model, The Cryosphere, 15, 3229–3253, https://doi.org/10.5194/tc-15-3229-2021, 2021. Lipscomb, W. H., Price, S. F., Hoffman, M. J., Leguy, G. R., Bennett, A. R., Bradley, S. L., Evans, K. J., Fyke, J. G., Kennedy, J. H., Perego, M., Ranken, D. M., Sacks, W. J., Salinger, A. G., Vargo, L. J., and Worley, P. H.: Description and evaluation of the Community Ice Sheet Model (CISM) v2.1, Geosci. Model Dev., 12, 387–424, https://doi.org/10.5194/gmd-12-387-2019, 2019. Lipscomb, W. H., Leguy, G. R., Jourdain, N. C., Asay-Davis, X., Seroussi, H., and Nowicki, S.: ISMIP6-based projections of ocean-forced Antarctic Ice Sheet evolution using the Community Ice Sheet Model, The Cryosphere, 15, 633–661, https://doi.org/10.5194/tc-15-633-2021, 2021. Ma, Y., Gagliardini, O., Ritz, C., Gillet-Chaulet, F., Durand, G., and Montagnat, M.: Enhancement factors for grounded ice and ice shelves inferred from an anisotropic ice-flow model, J. Glaciol., 56, 805–812, 2010. Morlighem, M., Williams, C. N., Rignot, E., An, L., Arndt, J. E., Bamber, J. L., Catania, G., Chauché, N., Dowdeswell, J. A., Dorschel, B., Fenty, I., Hogan, K., Howat, I. M., Hubbard, A., Jakobsson, M., Jordan, T. M., Kjeldsen, K. K., Millan, R., Mayer, L., Mouginot, J., Noël, B. P. Y., O'Cofaigh, C., Palmer, S., Rysgaard, S., Seroussi, H., Siegert, M. J., Slabon, P., Straneo, F., van den Broeke, M. 
R., Weinrebe, W., Wood, M., and Zinglersen, K. B.: BedMachine v3: Complete Bed Topography and Ocean Bathymetry Mapping of Greenland From Multibeam Echo Sounding Combined With Mass Conservation, Geophys. Res. Lett., 44, 11051–11061, 2017. Oppenheimer, M., Glavovic, B. C., Hinkel, J., van de Wal, R., Magnan, A. K., Abd-Elgawad, A., Cai, R., Cifuentes-Jara, M., DeConto, R. M., Ghosh, T., Hay, J., Isla, F., Marzeion, B., Meyssignac, B., and Sebesvari, Z.: Sea Level Rise and Implications for Low-Lying Islands, Coasts and Communities, in: IPCC Special Report on the Ocean and Cryosphere in a Changing Climate, edited by: Pörtner, H.-O., Roberts, D. C., Masson-Delmotte, V., Zhai, P., Tignor, M., Poloczanska, E., Mintenbeck, K., Alegría, A., Nicolai, M., Okem, A., Petzold, J., Rama, B., and Weyer, N. M., Cambridge University Press, Cambridge, UK and New York, NY, USA, 321–445, https://doi.org/10.1017/9781009157964.006, 2019. Pattyn, F.: Sea-level response to melting of Antarctic ice shelves on multi-centennial timescales with the fast Elementary Thermomechanical Ice Sheet model (f.ETISh v1.0), The Cryosphere, 11, 1851–1878, https://doi.org/10.5194/tc-11-1851-2017, 2017. Perego, M., Price, S., and Stadler, G.: Optimal initial conditions for coupling ice sheet models to Earth system models, J. Geophys. Res.-Earth, 119, 1894–1917, 2014. Pollard, D. and DeConto, R. M.: A simple inverse method for the distribution of basal sliding coefficients under ice sheets, applied to Antarctica, The Cryosphere, 6, 953–971, https://doi.org/10.5194/tc-6-953-2012, 2012. Ranganathan, M., Minchew, B., Meyer, C. R., and Gudmundsson, G. H.: A new approach to inferring basal drag and ice rheology in ice streams, with applications to West Antarctic Ice Streams, J. Glaciol., 67, 229–242, 2021. Rignot, E., Mouginot, J., van den Broeke, M. R., van Wessem, J. M., and Morlighem, M.: Four decades of Antarctic Ice Sheet mass balance from 1979–2017, P. Natl. Acad. Sci. USA, 116, 1095–1103, 2019.
Schoof, C.: The effect of cavitation on glacier sliding, P. R. Soc. A, 461, 609–627, 2005. Seroussi, H., Morlighem, M., Rignot, E., Khazendar, A., Larour, E., and Mouginot, J.: Dependence of century-scale projections of the Greenland ice sheet on its thermal regime, J. Glaciol., 59, 1024–1034, 2013. Seroussi, H., Nowicki, S., Simon, E., Abe-Ouchi, A., Albrecht, T., Brondex, J., Cornford, S., Dumas, C., Gillet-Chaulet, F., Goelzer, H., Golledge, N. R., Gregory, J. M., Greve, R., Hoffman, M. J., Humbert, A., Huybrechts, P., Kleiner, T., Larour, E., Leguy, G., Lipscomb, W. H., Lowry, D., Mengel, M., Morlighem, M., Pattyn, F., Payne, A. J., Pollard, D., Price, S. F., Quiquet, A., Reerink, T. J., Reese, R., Rodehacke, C. B., Schlegel, N.-J., Shepherd, A., Sun, S., Sutter, J., Van Breedam, J., van de Wal, R. S. W., Winkelmann, R., and Zhang, T.: initMIP-Antarctica: an ice sheet model initialization experiment of ISMIP6, The Cryosphere, 13, 1441–1471, https://doi.org/10.5194/tc-13-1441-2019, 2019. Seroussi, H., Nowicki, S., Payne, A. J., Goelzer, H., Lipscomb, W. H., Abe-Ouchi, A., Agosta, C., Albrecht, T., Asay-Davis, X., Barthel, A., Calov, R., Cullather, R., Dumas, C., Galton-Fenzi, B. K., Gladstone, R., Golledge, N. R., Gregory, J. M., Greve, R., Hattermann, T., Hoffman, M. J., Humbert, A., Huybrechts, P., Jourdain, N. C., Kleiner, T., Larour, E., Leguy, G. R., Lowry, D. P., Little, C. M., Morlighem, M., Pattyn, F., Pelle, T., Price, S. F., Quiquet, A., Reese, R., Schlegel, N.-J., Shepherd, A., Simon, E., Smith, R. S., Straneo, F., Sun, S., Trusel, L. D., Van Breedam, J., van de Wal, R. S. W., Winkelmann, R., Zhao, C., Zhang, T., and Zwinger, T.: ISMIP6 Antarctica: a multi-model ensemble of the Antarctic ice sheet evolution over the 21st century, The Cryosphere, 14, 3033–3070, https://doi.org/10.5194/tc-14-3033-2020, 2020. Sun, S., Pattyn, F., Simon, E. G., Albrecht, T., Cornford, S. L., Calov, R., Dumas, C., Gillet-Chaulet, F., Goelzer, H., Golledge, N. 
R., Greve, R., Hoffman, M. J., Humbert, A., Kazmierczak, E., Kleiner, T., Leguy, G. R., Lipscomb, W. H., Martin, D., Morlighem, M., Nowicki, S., Pollard, D., Price, S. F., Quiquet, A., Seroussi, H., Schlemm, T., Sutter, J., van de Wal, R. S. W., Winkelmann, R., and Zhang, T.: Antarctic ice sheet response to sudden and sustained ice-shelf collapse (ABUMIP), J. Glaciol., 66, 891–904, 2020. Tsai, V. C., Stewart, A. L., and Thompson, A. F.: Marine ice-sheet profiles and stability under Coulomb basal conditions, J. Glaciol., 61, 205–215, 2015. Weertman, J.: On the sliding of glaciers, J. Glaciol., 3, 33–38, 1957. Zoet, L. K. and Iverson, N. R.: A slip law for glaciers on deformable beds, Science, 368, 76–78, 2020.
Consider how much power you actually use. - Texas Solar Power Company, LLC

Step #3: Consider how much power you actually use and how and when you use it.

Realistically, you will probably look to supplement your power needs via solar (“grid-tie” as previously described) rather than use a Battery Stand-Alone system. If you want more information about battery backup and true system sizing, please let us know – we have an overview you can use to determine your full power needs. However, if you simply want an understanding of how much electricity you use and how much a solar system will produce, follow the steps below.

One way to look at the math:
• Solar systems are generally sized in 1 kW – 6 kW (and larger) systems. A typical residential size is 3 kW.
• 3 kW, or 3 kilowatts = 3,000 watts
• A 3 kW system will generate around 3,000 DC watt-hours for each hour of full sun
• Multiply the per-hour generation by 5.4, the average number of sun hours in a day (3,000 x 5.4 = 16,200)
• Multiply the new total by the average number of days in a month (16,200 x 30.5 = 494,100)
• Multiply the new total by .77. This is the “derating” factor: the fraction of the energy that remains after DC current is turned into AC current and other system losses. (494,100 x .77 ≈ 380,457)
• So, a 3 kW system will generate about 380,457 watt-hours per month, or about 380 kWh.
• Now compare this number with the kWh usage noted in your electric bill. How many kWh do you use in a typical month? Twice this amount? Then you would save roughly ½ your electric bill if you installed a 3 kW system.
• Consider how much money you save per month to figure out how long it will take to pay off your system.

Another way to look at the math – in reverse:
If you want to get all of your energy needs met through solar power (and get a “0” bill from your electric company) calculate how large a system you will need by following the steps below. Before you start, choose an average electric bill.
Look for how many “kilowatt hours” you consumed. This is generally expressed as “kWh”.

│ Direction                                                            │ Example │ YOUR info │
│ Note the average number of kWh you use per month                     │ 550     │           │
│ kWh x 1,000 = total AC watt-hours used per month                     │ 550,000 │           │
│ Total AC watt-hours / 30.5 (days in a month) = AC watt-hours per day │ 18,033  │           │
│ AC watt-hours per day / sun hours per day (Central Texas = 5.4)      │ 3,339   │           │
│ That result x 1.29 (AC to DC conversion factor)                      │ 4,307   │           │
│ Solar array in DC watts to reach a zero electric bill                │ 4,307   │           │
│ Solar array in kilowatts, or kW                                      │ 4.3     │           │
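Both directions of this arithmetic can be sketched in a few lines of Python. The 5.4 sun hours, 30.5 days per month, and 0.77 derate factor are the article's Central Texas assumptions, not universal constants; adjust them for your own location.

```python
SUN_HOURS_PER_DAY = 5.4   # average full-sun hours per day (Central Texas)
DAYS_PER_MONTH = 30.5
DERATE = 0.77             # fraction of DC output that survives conversion to AC

def monthly_kwh_from_array(array_kw):
    """Forward direction: expected monthly AC output of an array of a given size."""
    wh = array_kw * 1000 * SUN_HOURS_PER_DAY * DAYS_PER_MONTH * DERATE
    return wh / 1000  # kWh

def array_kw_for_usage(monthly_kwh):
    """Reverse direction: DC array size needed to zero out a monthly bill."""
    wh_per_day = monthly_kwh * 1000 / DAYS_PER_MONTH
    dc_watts = wh_per_day / SUN_HOURS_PER_DAY / DERATE  # 1/0.77 ≈ the 1.29 factor
    return dc_watts / 1000  # kW

print(round(monthly_kwh_from_array(3.0)))  # ~380 kWh per month from a 3 kW array
print(round(array_kw_for_usage(550), 1))   # ~4.3 kW array for a 550 kWh bill
```

Note that dividing by the 0.77 derate is the same step the table expresses as multiplying by 1.29, since 1/0.77 ≈ 1.29.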
Correlation of currency pairs and how to read its table in Forex - xChief Academy

Correlation of currency pairs in forex

The forex market is known as one of the most profitable and liquid financial markets in the world. However, becoming a successful trader in this market requires sufficient knowledge of its basic concepts. One of these important concepts is the correlation of currency pairs. Understanding and monitoring correlation directly affects the amount of risk in your trades, so it is important for every trader to be aware of it. In this article, we will examine how to determine and calculate the correlation of forex currency pairs and its effect on transactions.

What is the correlation of currency pairs in forex trading?

The correlation of currency pairs in forex is the relationship between two currency pairs in terms of the value and direction of their price movements. Since currencies in forex are priced in pairs, no pair trades completely independently of the others. Correlation can be positive or negative. If the prices of two currency pairs tend to increase or decrease at the same time, the pairs move in the same direction, and we say their correlation is positive. If two currency pairs move in opposite directions, i.e. when the price of one increases, the price of the other usually decreases, the two have a negative correlation. Negative correlation is also called inverse correlation.

Important note: two currency pairs are correlated only when this relationship is observed between them most of the time; pairs that move together or apart only occasionally and otherwise move randomly are not correlated. If two pairs have no clear relationship with each other, we call them uncorrelated.

Why is it important to know the correlation of currency pairs?

Understanding the correlation of currency pairs can have a direct impact on the results of forex trading.
For example, suppose a trader buys two different currency pairs that are positively correlated. If the price of one of those pairs decreases, the price of the other pair will, due to the positive correlation between them, also tend to decrease, and the trader will lose on both. On the other hand, if one of them rises, the other will rise as well, and the trader's profit is doubled. Conversely, when two currency pairs are negatively correlated, a profit in one of them usually means a loss in the other. In this case, if you have chosen the right currency pairs, the profit on the pair whose price has increased may compensate for the loss on the other. Choosing two currency pairs with a negative correlation is often referred to as a hedging strategy.

What is the correlation coefficient?

Correlation on its own can only tell you that two currency pairs move with or against each other; it cannot show the strength of that relationship. The correlation coefficient quantifies it, showing exactly how strong or weak the correlation between two currency pairs is. Correlation coefficients are expressed as values ranging from -100 to 100 (or, equivalently, from -1 to 1). The closer the coefficient is to 100, the more nearly identically the two pairs move. Similarly, a coefficient close to -100 means the pairs move almost identically but in opposite directions. "Almost" is the key word: a coefficient near either extreme means the correlation, whether positive or negative, is strong. The closer the coefficient is to zero, the weaker the relationship; near zero, the pairs have no meaningful relationship at all.
Correlation coefficient formula

The formula looks a bit complicated, but the general idea is simple: it takes price data from two currency pairs, x and y, and compares each price to that pair's average price reading:

r = Σ(x − x̄)(y − ȳ) / √( Σ(x − x̄)² · Σ(y − ȳ)² )

The numerator of the fraction is the covariance of the two series, and the denominator is the product of their standard deviations. In this formula, r is the correlation coefficient, x and y are the closing prices of the two selected currency pairs, and x̄ and ȳ are the averages of several closing prices for those pairs.

Let's clarify with an example. Say the data are the closing prices for each day or hour. Each closing price of x (and y) is compared to the average closing price of x (and y), and these values are entered into the formula. To get the averages, we can track several prices over a certain time period in Microsoft Excel. Once the closing prices are recorded, an average can be maintained that is updated as new prices arrive.

Correlation table of currency pairs in Forex

The following table shows the correlation of some of the most heavily traded currency pairs in the world. You can compare each pair in the columns to the pairs in the rows to see how they correlate. For example, the correlation between EUR/USD and GBP/USD is 77, which is quite high. As another example, the correlation between GBP/USD and EUR/GBP is -90, indicating a very strong negative correlation: they move in opposite directions most of the time. The first two currency pairs in the example above don't always move in exactly the same direction, but they often do. In comparison, the second two currency pairs, with their strong negative correlation of -90, move in opposite directions most of the time (though not always). This is why it is important to monitor the correlation of currency pairs; even in a small table such as this one, several strong correlations appear.
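As a sketch of this calculation, here is the correlation coefficient computed in Python from two short, made-up closing-price series, scaled to the -100..100 range used in forex correlation tables. The pair names and prices are hypothetical; a real reading would use many more data points.

```python
import math

def correlation_coefficient(x, y):
    """Pearson correlation of two equally long price series,
    scaled to the -100..100 range used in forex correlation tables."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Numerator: covariance of the two series (up to a common factor of n)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Denominator: product of the standard deviations (same common factor)
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return 100 * cov / (sd_x * sd_y)

# Hypothetical daily closes: the second series mirrors the first exactly,
# so the correlation comes out strongly negative.
eurusd = [1.070, 1.072, 1.075, 1.071, 1.078]
eurgbp = [0.860, 0.858, 0.855, 0.859, 0.852]
print(round(correlation_coefficient(eurusd, eurgbp)))  # -100
```

The common factor of n cancels between numerator and denominator, which is why it can be dropped from both.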
If a trader buys the GBP/USD currency pair and sells the EUR/GBP currency pair without regard to the correlation of the pairs, it is true that he has opened two different positions (one buy and one sell), but with a correlation coefficient of -90 between the two, he is likely to win on both or lose on both. Why? If you buy the GBP/USD pair, you are buying pounds. If you sell the EUR/GBP pair, you are still buying pounds. Because of the strong negative correlation between these two pairs, when the price of one rises the price of the other falls; given the direction of each position and the pairs chosen, that means you either gain on both or lose on both.

Final word

As we have said, knowing the correlation of forex currency pairs is necessary for successful trading in this market. It directly affects the risk level of your trades, so don’t neglect to learn it.
How To Find The Center Of A Circle In 3 Seconds! - WoodworkingBaron

Have you ever wondered how to find the center of a circle? It’s a simple task, but it can be tricky if you don’t know the trick. In this blog post, we’ll show you how to find the center of a circle using a few different methods. So whether you’re a beginner or a seasoned woodworker, read on to learn how to find the center of a circle like a pro!

How To Find The Center Of A Circle Woodworking?
1. Draw a chord (a straight line between any two points on the circle).
2. Draw the perpendicular bisector of that chord, then repeat with a second chord.
3. The intersection of the two perpendicular bisectors is the center of the circle.

How to Find the Center of a Circle in Woodworking

Step 1: Draw a Circle
The first step is to draw a circle on your workpiece. You can do this using a compass, a trammel, or a circle template.

Step 2: Find the Center Point
Once you have drawn your circle, you need to find the center point. There are a few different ways to do this.

Method 1: Using a Straightedge and a Pencil
1. Place a straightedge across the circle so that it touches two points on the rim, and draw the chord between them.
2. Draw the perpendicular bisector of that chord; the perpendicular bisector of any chord passes through the center.
3. Repeat with a second chord. The two bisectors intersect at the center of the circle.

Method 2: Using a String and a Pin
1. Tie a string to a pin and place the pin at one point on the rim of the circle.
2. Stretch the string across the circle to find the farthest point on the rim; the line between those two points is a diameter.
3. Repeat from a second point on the rim. The two diameters intersect at the center.

Method 3: Using a Center Finder
A center finder is a tool that is specifically designed to find the center of a circle. Place it against the circle, draw a line along its blade, rotate the tool, and draw a second line. The point where the two lines intersect is the center point.

Step 3: Mark the Center Point
Once you have found the center point of your circle, mark it with a pencil or a permanent marker.
This will make it easier to find the center point when you are drilling or cutting your circle.

Tips for Finding the Center of a Circle
If you are having trouble finding the center point of your circle, try using a different method. Make sure that your circle is drawn accurately; if the circle is not drawn correctly, the center point will not be accurate. Be patient. Finding the center point of a circle can take some time. Just keep practicing and you will eventually get it.

FAQs on How to Find the Center of a Circle in Woodworking

What is the easiest way to find the center of a circle in woodworking?
The easiest way is to use a compass. Set the compass to roughly the circle’s radius and draw arcs from each end of a chord; the two points where the arcs cross define the perpendicular bisector of that chord. Repeat for a second chord, and the two bisectors will intersect at the center of the circle.

What if I don’t have a compass?
If you don’t have a compass, you can use a piece of string and a nail. Drive the nail at one point on the rim, stretch the string across the circle to the farthest point on the rim, and mark that line: it is a diameter. Repeat from a second point on the rim, and the two diameters will cross at the center.

How do I find the center of a large circle?
For a large circle, use a long straightedge or a chalk line to mark two chords, then draw the perpendicular bisector of each chord. The bisectors intersect at the center, no matter how large the circle is.

How do I find the center of a circle that is not drawn on a flat surface?
The chord-and-bisector method still works on a curved or vertical surface, as long as you can mark lines on it. If the circle is on a vertical face and you need to transfer its center down to the floor, hang a plumb bob from the marked center point and mark where the bob points.
What if I need to find the center of a circle that is not perfectly round?
If the circle is not perfectly round, there is no single exact center, but you can find a good working center with a Vernier caliper or a ruler. Measure across the shape at its widest point and mark the midpoint of that line, then do the same at a second angle; where the midpoint marks intersect is a practical center.
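As a numeric aside (my addition, not from the article), the perpendicular-bisector idea has a closed form: any three points on the rim determine the center, known as the circumcenter. A small Python sketch with illustrative coordinates:

```python
# Center of a circle from three points on its rim (the circumcenter).
# This is the numeric analogue of intersecting the perpendicular
# bisectors of two chords. Coordinates below are illustrative.

def circle_center(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # twice the signed area of the triangle; zero means collinear points
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy

# Three points on a circle of radius 5 centered at (2, 3):
print(circle_center((7, 3), (2, 8), (-3, 3)))  # -> (2.0, 3.0)
```

In the shop, the three points play the role of the two chord endpoints plus one more mark on the rim.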
Mathematics - Monsignor Doyle C.S.S. Mathematics, Grade 9 MTH 1W This course enables students to consolidate, and continue to develop, an understanding of mathematical concepts related to number sense and operations, algebra, measurement, geometry, data, probability, and financial literacy. Students will use mathematical processes, mathematical modelling, and coding to make sense of the mathematics they are learning and to apply their understanding to culturally responsive and relevant real-world situations. Students will continue to enhance their mathematical reasoning skills, including proportional reasoning, spatial reasoning, and algebraic reasoning, as they solve problems and communicate their thinking. CREDIT: 1 TYPE: De-Streamed GRADE: 9 Mathematics – Essential MAT1LI This course focuses on the knowledge and skills required to be well prepared for success in the Grade 10 Locally-developed Mathematics (MAT2L). It will support students in developing and enhancing strategies that they need to develop mathematical literacy skills and the confidence to use these skills in their day-to-day lives. The areas of Money Sense, Measurement and Proportional Reasoning form the basis of the course content. CREDIT: 1 TYPE: Essential Principles of Mathematics MPM2DI This course enables students to broaden their understanding of relationships and extend their problem-solving and algebraic skills through investigation, the effective use of technology, and abstract reasoning. Students will explore quadratic relations and their applications; solve and apply linear systems; verify properties of geometric figures using analytic geometry; and investigate the trigonometry of right and acute triangles. Students will reason mathematically and communicate their thinking as they solve multi-step problems. COURSE NOTE: Students must have developed strong math skills and continue to work well independently.
Daily homework completion is essential for success. CREDIT: 1 TYPE: Academic GRADE: 10 Before September 2022: MPM1DI – Principles of Mathematics or MFM2PI – Foundations of Mathematics – with minimum grade of 65% After September 2022: MTH 1W – Mathematics, Grade 9 – with recommended achievement of 75% or higher Foundations of Mathematics MFM2PI This course enables students to consolidate their understanding of linear relations and extend their problem-solving and algebraic skills through investigation, the effective use of technology, and hands-on activities. Students will develop and graph equations in analytic geometry; solve and apply linear systems, using real-life examples; and explore and interpret graphs of quadratic relations. Students will investigate similar triangles, the trigonometry of right triangles, and the measurement of three-dimensional figures. Students will consolidate their mathematical skills as they solve problems and communicate their thinking. COURSE NOTE: Recommended minimum grade 9 applied (MFM1PI) grade is 65%. CREDIT: 1 TYPE: Applied GRADE: 10 Before September 2022: MPM1DI – Principles of Mathematics or MFM1PI – Foundations of Mathematics with minimum grade of 65% After September 2022: MTH 1W – Mathematics, Grade 9 – with recommended achievement of 65% or greater Mathematics – Essential MAT2LI This Grade 10 course is designed to allow students to solidify and extend their understanding of, and confidence in using, the concepts developed in MAT1L so that they are well prepared for success in the Mathematics Grade 11 Workplace Preparation course (MEL3E). In the Grade 10 course, students are asked to demonstrate a greater depth of understanding and level of complexity, in contexts that move them from their immediate personal environment to the larger community. 
CREDIT: 1 TYPE: Essential GRADE: 10 PREREQUISITE: MFM1PI – Foundations of Mathematics (before Sept 2022), MTH 1W (after Sept 2022), or MAT1LI – Mathematics – Essential Functions, Grade 11, University Preparation MCR3UI This course introduces the mathematical concept of the function by extending students’ experiences with linear and quadratic relations. Students will investigate properties of discrete and continuous functions, including trigonometric and exponential functions; represent functions numerically, algebraically, and graphically; solve problems involving applications of functions; investigate inverse functions; and develop facility in determining equivalent algebraic expressions. Students will reason mathematically and communicate their thinking as they solve multi-step problems. COURSE NOTE: Students must have developed strong math skills and must work well independently. Daily homework completion is essential for success in this course. Recommended minimum mark in grade 10 Academic math (MPM2DI) is 75%. CREDIT: 1 TYPE: University GRADE: 11 PREREQUISITE: MPM2DI – Principles of Mathematics Functions and Applications, Grade 11, University/College Preparation MCF3MI This course introduces basic features of the function by extending students’ experiences with quadratic relations. It focuses on quadratic, trigonometric, and exponential functions and their use in modelling real-world situations. Students will represent functions numerically, graphically, and algebraically; simplify expressions; solve equations; and solve problems relating to applications. Students will reason mathematically and communicate their thinking as they solve multi-step problems. COURSE NOTE: Recommend 75% in grade 10 applied math (MFM2PI) or 65% in grade 10 academic math (MPM2DI) to ensure success in MCF3MI. 
CREDIT: 1 TYPE: College/University GRADE: 11 PREREQUISITE: MBF3CI – Foundations for College Mathematics or MFM2PI – Foundations of Mathematics or MPM2DI – Principles of Mathematics Foundations for College Mathematics MBF3CI This course enables students to broaden their understanding of mathematics as a problem solving tool in the real world. Students will extend their understanding of quadratic relations; investigate situations involving exponential growth; solve problems involving compound interest; solve financial problems connected with vehicle ownership; develop their ability to reason by collecting, analysing, and evaluating data involving one variable; connect probability and statistics; and solve problems in geometry and trigonometry. Students will consolidate their mathematical skills as they solve problems and communicate their thinking. COURSE NOTE: Recommended minimum grade in grade 10 applied (MFM2PI) is 65%. CREDIT: 1 TYPE: College GRADE: 11 PREREQUISITE: MPM2DI – Principles of Mathematics or MFM2PI – Foundations of Mathematics Mathematics for Work and Everyday Life, Grade 11, Workplace Preparation MEL3EI This course enables students to broaden their understanding of mathematics as it is applied in the workplace and daily life. Students will solve problems associated with earning money, paying taxes, and making purchases; apply calculations of simple and compound interest in saving, investing, and borrowing; and calculate the costs of transportation and travel in a variety of situations. Students will consolidate their mathematical skills as they solve problems and communicate their thinking. 
CREDIT: 1 TYPE: Workplace GRADE: 11 PREREQUISITE: MFM1PI – Foundations of Mathematics or MFM2PI – Foundations of Mathematics or MPM1DI – Principles of Mathematics or MPM2DI – Principles of Mathematics or MAT2LI – Mathematics – Essential Calculus and Vectors, Grade 12, University Preparation MCV4UI This course builds on students’ previous experience with functions and their developing understanding of rates of change. Students will solve problems involving geometric and algebraic representations of vectors and representations of lines and planes in three-dimensional space; broaden their understanding of rates of change to include the derivatives of polynomial, sinusoidal, exponential, rational, and radical functions; and apply these concepts and skills to the modelling of real-world relationships. Students will also refine their use of the mathematical processes necessary for success in senior mathematics. This course is intended for students who choose to pursue careers in fields such as science, engineering, economics, and some areas of business, including those students who will be required to take a university-level calculus, linear algebra, or physics course. COURSE NOTE: Recommend 70% in MHF4U. CREDIT: 1 TYPE: University GRADE: 12 PREREQUISITE: MHF4UI – Advanced Functions, Grade 12, University Preparation Mathematics of Data Management, Grade 12, University Preparation MDM4UI This course broadens students’ understanding of mathematics as it relates to managing data. Students will apply methods for organizing and analysing large amounts of information; solve problems involving probability and statistics; and carry out a culminating investigation that integrates statistical concepts and skills. Students will also refine their use of the mathematical processes necessary for success in senior mathematics. Students planning to enter university programs in business, the social sciences, and the humanities will find this course of particular interest.
COURSE NOTE: Recommend MCF3M 75%; MCR3U 65%; CREDIT: 1 TYPE: University GRADE: 12 PREREQUISITE: MCF3MI – Functions and Applications, Grade 11, University/College Preparation or MCR3UI – Functions, Grade 11, University Preparation Advanced Functions, Grade 12, University Preparation MHF4UI This course extends students’ experience with functions. Students will investigate the properties of polynomial, rational, logarithmic, and trigonometric functions; develop techniques for combining functions; broaden their understanding of rates of change; and develop facility in applying these concepts and skills. Students will also refine their use of the mathematical processes necessary for success in senior mathematics. This course is intended both for students taking the Calculus and Vectors course as a prerequisite for a university program and for those wishing to consolidate their understanding of mathematics before proceeding to any one of a variety of university programs. COURSE NOTE: Students must have developed strong math skills and must work well independently. Daily homework completion is essential for success in this course. Recommended minimum mark in grade 12 Math for College Technology (MCT4CI) is 75%; or grade 11 Functions, University level (MCR3UI) is 65%. CREDIT: 1 TYPE: University GRADE: 12 PREREQUISITE: MCR3UI – Functions, Grade 11, University Preparation or MCT4CI – Mathematics for College Technology, Grade 12, College Preparation Foundations for College Mathematics, Grade 12, College Preparation MAP4CI This course enables students to broaden their understanding of real-world applications of mathematics. Students will analyse data using statistical methods; solve problems involving applications of geometry and trigonometry; solve financial problems connected with annuities, budgets, and renting or owning accommodation; simplify expressions; and solve equations. 
Students will reason mathematically and communicate their thinking as they solve multi-step problems. This course prepares students for college programs in areas such as business, health sciences, and human services, and for certain skilled trades. COURSE NOTE: Recommended minimum grade in grade 11 college (MBF3CI) is 65%. CREDIT: 1 TYPE: College GRADE: 12 PREREQUISITE: MCF3MI – Functions and Applications, Grade 11, University/College Preparation or MCR3UI – Functions, Grade 11, University Preparation or MBF3CI – Foundations for College Mathematics Mathematics for College Technology, Grade 12, College Preparation MCT4CI This course enables students to extend their knowledge of functions. Students will investigate and apply properties of polynomial, exponential, and trigonometric functions; continue to represent functions numerically, graphically, and algebraically; develop facility in simplifying expressions and solving equations; and solve problems that address applications of algebra, trigonometry, vectors, and geometry. Students will reason mathematically and communicate their thinking as they solve multi-step problems. This course prepares students for a variety of college technology programs. COURSE NOTE: Recommended minimum grade in MCF3MI is 75%. CREDIT: 1 TYPE: College GRADE: 12 PREREQUISITE: MCR3UI – Functions, Grade 11, University Preparation or MCF3MI – Functions and Applications, Grade 11, University/College Preparation Mathematics for Work and Everyday Life, Grade 12, Workplace Preparation MEL4EI This course enables students to broaden their understanding of mathematics as it is applied in the workplace and daily life. Students will investigate questions involving the use of statistics; apply the concept of probability to solve problems involving familiar situations; investigate accommodation costs, create household budgets, and prepare a personal income tax return; use proportional reasoning; estimate and measure; and apply geometric concepts to create designs.
Students will consolidate their mathematical skills as they solve problems and communicate their thinking. CREDIT: 1 TYPE: Workplace GRADE: 12 PREREQUISITE: MBF3CI – Foundations for College Mathematics or MCF3MI – Functions and Applications, Grade 11, University/College Preparation or MCR3UI – Functions, Grade 11, University Preparation or MEL3EI – Mathematics for Work and Everyday Life, Grade 11, Workplace Preparation
Changing The Subject - Odd One Out This is level 3; Formulas which can be rearranged by adding, subtracting, multiplying or dividing both sides by a value. You can earn a trophy if you answer the questions correctly. A B C D E $$a=b+cd$$ $$b=a+cd$$ $$b=a-cd$$ $$c = \frac{a-b}{d}$$ $$d = \frac{a-b}{c}$$ A B C D E $$m=n+3p$$ $$n=m-3p$$ $$p = \frac {m-n}{3}$$ $$\frac{m-n}{p}=3$$ $$n=m+3p$$ A B C D E $$t=\frac{u-v}{a}$$ $$v=u+at$$ $$u=v-at$$ $$a=\frac{v-u}{t}$$ $$t= \frac{v-u}{a}$$ A B C D E $$t-g=2s$$ $$g=t-2s$$ $$g+2s=t$$ $$s = \frac{t-g}{2}$$ $$s = \frac{g-t}{2}$$ A B C D E $$3x+2y=7$$ $$3x=7-2y$$ $$x = \frac{7-2y}{3}$$ $$y = \frac{7-3x}{2}$$ $$7+2y=3x$$ A B C D E $$v=5u-\frac 13$$ $$3v=5u-1$$ $$5u-3v=1$$ $$v=\frac{5u-1}{3}$$ $$u = \frac{3v+1}{5}$$ A B C D E $$2a+3b=c$$ $$a=\frac{c-3b}{2}$$ $$b = \frac{2a-c}{3}$$ $$b = \frac{c-2a}{3}$$ $$c = 2a+3b$$ A B C D E $$n=\frac{360}{A}-1$$ $$nA=359$$ $$nA=360-A$$ $$nA+A=360$$ $$n+1=\frac{360}{A}$$ Description of Levels Level 1 - Formulas which can be rearranged by adding or subtracting terms from both sides Example: Make e the subject of the formula d = e - f Level 2 - Formulas which can be rearranged by multiplying or dividing both sides by a value Example: Rearrange the formula n = mp Level 3 - Formulas which can be rearranged by adding, subtracting, multiplying or dividing both sides by a value Example: Rearrange the formula b = a + cd Level 4 - Formulas including brackets or expressions in the numerator or denominator of a fraction Example: Rearrange the formula p = s(t + 2) Level 5 - Formulas including squares or square roots Example: Rearrange the formula d² = 2a + 1 Level 6 - Finding the unknown which is not the subject of a formula Example: If m = n² + 2p, find p when m=8 and n=10 Level 7 - Rearrange the formulae where the new subject appears twice; fill in the blanks Example: Rearrange the formula ax + b = cx + g to make x the subject Level 8 - Rearrange the formulae where the new subject appears twice; show your 
working Example: Rearrange the formula a(3-x)=5x to make x the subject Exam Style Questions - A collection of problems in the style of GCSE or IB/A-level exam paper questions (worked solutions are available for Transum subscribers). More Algebra including lesson Starters, visual aids, investigations and self-marking exercises. Answers to this exercise are available lower down this page when you are logged in to your Transum account. If you don’t yet have a Transum subscription one can be very quickly set up if you are a teacher, tutor or parent.
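A quick way to check your answer to an odd-one-out question like the first one above is to substitute random numbers: every correct rearrangement must hold for any values that satisfy the reference formula, while the odd one out fails. The Python sketch below is my illustration (not part of the Transum exercise); it treats option A, a = b + cd, as the reference relationship.

```python
# Numerically spot the "odd one out" among rearrangements of a = b + c*d.
# Pick random values satisfying the reference formula, then test each
# candidate: a correct rearrangement has residual ~0 for all of them.
import random

def check(candidate, trials=100):
    for _ in range(trials):
        b, c, d = (random.uniform(1, 9) for _ in range(3))
        a = b + c * d                      # option A taken as the reference
        if abs(candidate(a, b, c, d)) > 1e-6:
            return False
    return True

candidates = {
    "A: a = b + c*d": lambda a, b, c, d: a - (b + c * d),
    "B: b = a + c*d": lambda a, b, c, d: b - (a + c * d),
    "C: b = a - c*d": lambda a, b, c, d: b - (a - c * d),
    "D: c = (a-b)/d": lambda a, b, c, d: c - (a - b) / d,
    "E: d = (a-b)/c": lambda a, b, c, d: d - (a - b) / c,
}
for name, f in candidates.items():
    print(name, "holds" if check(f) else "fails")
```

Options A, C, D and E all hold, so B is the odd one out; this matches doing the rearrangement by hand (from a = b + cd, subtracting cd gives b = a − cd, not b = a + cd).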
20.1: Collecting a Sample (5 minutes) Optional activity In this activity, students review methods of obtaining samples that are fair and random (MP6). Arrange students in groups of 2. Each group gets both sets of data from the blackline master, one data set for each partner. Students will not need the spinners from the blackline master for this activity, but the spinners are included for use later in the lesson. Partners may work together to answer the questions, but should not share their data set with one another until told to do so in a later activity. Student Facing Your teacher will give you a paper that lists a data set with 100 numbers in it. Explain whether each method of obtaining a sample of size 20 would produce a random sample. Option 1: A spinner has 10 equal sections on it. Spin once to get the row number and again to get the column number for each member of your sample. Repeat this 20 times. Option 2: Since the data looks random already, use the first two rows. Option 3: Cut up the data and put them into a bag. Shake the bag to mix up the papers, and take out 20 values. Option 4: Close your eyes and point to one of the numbers to use as your first value in your sample. Then, keep moving one square from where your finger is to get a path of 20 values for your sample. Activity Synthesis The purpose of the discussion is to help students solidify their understanding of methods for selecting random samples. Consider these questions for discussion: • “Can you think of other methods for selecting a random sample that are not listed here?” (Roll a polyhedron with 10 equal faces showing the numbers 1 through 10 to get the row and again to get the column.) • “What do you need to look for when determining if a sample is random?” (Are all values equally likely to be included in the random sample?)
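The sampling options above can be mimicked in code. The Python sketch below is my illustration, not part of the lesson materials; the "data set" is 100 placeholder values arranged in a 10-by-10 grid, and it shows Option 1 (spinner) and Option 3 (bag).

```python
# Illustrative sketch of Option 1 and Option 3 from the activity.
import random

data = [[10 * r + c for c in range(10)] for r in range(10)]  # 10x10 grid

# Option 1: spin for a row and a column, 20 times. Each spin gives every
# value an equal chance, though the same value can be picked twice.
spinner_sample = [data[random.randrange(10)][random.randrange(10)]
                  for _ in range(20)]

# Option 3: put all 100 values into a "bag" and draw 20 without
# replacement, so the sample holds 20 distinct values.
flat = [x for row in data for x in row]
bag_sample = random.sample(flat, 20)

print(len(spinner_sample), len(bag_sample))  # 20 20
```

Both sketches give every value an equal chance of selection, which is the criterion the synthesis question asks students to look for.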
20.2: Sample Probabilities (10 minutes) Optional activity In this activity, students begin by practicing their understanding of proportions and probabilities by examining the data set they have available. In the fourth problem, students obtain a sample from the population using tools they choose (MP5) and examine the sample they selected to compare it to the expected proportions and probabilities calculated in the first 3 problems. The problems are intended for students to use their own data set to answer. Although they are kept in pairs for the entire lesson, this activity should be done individually. Keep students in the same groups of 2. Give students 5–7 minutes of quiet work time followed by a whole-class discussion. If possible, allow students to use their chosen method of random sampling to obtain a sample of 10 for this activity. Have items such as paper clips, scissors, 10-sided polyhedra, and other materials available for student use. The blackline master for the first activity in this lesson contains accurate spinners that could be used to select a random sample. Action and Expression: Internalize Executive Functions. Chunk this task into more manageable parts. After students have solved the first 2-3 problems, check in with either select groups of students or the whole class. Invite students to share the strategies they have used so far as well as any questions they have before continuing. Supports accessibility for: Organization; Attention Student Facing Continue working with the data set your teacher gave you in the previous activity. The data marked with a star all came from students at Springfield Middle School. 1. When you select the first value for your random sample, what is the probability that it will be a value that came from a student at Springfield Middle School? 2. What proportion of your entire sample would you expect to be from Springfield Middle School? 3.
If you take a random sample of size 10, how many scores would you expect to be from Springfield Middle School? 4. Select a random sample of size 10. 5. Did your random sample have the expected number of scores from Springfield Middle School? Activity Synthesis The purpose of this discussion is to connect the ideas of probability and random sampling from the unit. Consider these questions for discussion: • “How is selecting a sample at random connected to probability?” (A random sample should give each value an equal chance of being chosen. Therefore, each value has a \(\frac{1}{100}\) probability of being chosen.) • “How could we simulate the probability of getting at least 2 values in the sample of 10 from Springfield Middle School?” (Since 20% of the values come from Springfield Middle School, we could put 10 blocks in a bag with 2 colored red to represent Springfield Middle School. Draw a block from the bag, and if it is red, it represents a score from Springfield Middle School; replace the block and repeat. Get a sample of 10 and see if the sample has at least 2 red blocks. Repeat this process many times and use the fraction of times there are at least 2 red blocks as an estimate for the probability that a random sample will have at least 2 scores from Springfield Middle School.) Representing, Speaking: MLR7 Compare and Connect. Invite students to create a visual display of their random sample of size 10 and response to the question: “Did your random sample have the expected number of scores from Springfield Middle School?” Invite students to investigate each other’s work and compare their responses. Listen for the language students use to describe a random sample and assign a probability of each value being chosen. This will help students connect the ideas of probability and random sampling through discussion. 
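The simulation described in the synthesis (2 red blocks out of 10, drawn with replacement) can be sketched as follows. This code is my illustration, not part of the curriculum materials.

```python
# Estimate the probability that a random sample of 10 scores contains at
# least 2 scores from Springfield Middle School, given that 20% of all
# scores come from there. Drawing with replacement matches the block-bag
# simulation described in the discussion.
import random

def trial(sample_size=10, p=0.2):
    hits = sum(1 for _ in range(sample_size) if random.random() < p)
    return hits >= 2

runs = 10_000
estimate = sum(trial() for _ in range(runs)) / runs
print(round(estimate, 2))  # near the exact value 1 - 0.8**10 - 10*0.2*0.8**9
```

The exact binomial value is 1 − 0.8¹⁰ − 10·0.2·0.8⁹ ≈ 0.62, so the estimate should land close to that.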
Design Principle(s): Optimize output (for representation); Cultivate conversation 20.3: Estimating a Measure of Center for the Population (10 minutes) Optional activity In this activity, students practice estimating a measure of center for the population using the data from a sample. The variability is also calculated to be used in the following activity to determine if there is a meaningful difference between the measure of center for the population they used to select their sample and the measure of center for another population. Keep students in groups of 2. Students should work with their partner for the first question, then individually for the last 2 problems. Follow up with a whole-class discussion. Student Facing 1. Decide which measure of center makes the most sense to use based on the distribution of your sample. Discuss your thinking with your partner. If you disagree, work to reach an agreement. 2. Estimate this measure of center for your population based on your sample. 3. Calculate the measure of variability for your sample that goes with the measure of center that you found. Activity Synthesis The purpose of the discussion is for students to make clear their reasoning for choosing a particular measure of center and to reiterate the importance of variability when comparing groups from samples. Consider these questions for discussion: • “Which measure of center did your group choose? Explain your reasoning.” (A median should be used if there are a few values far from the center that overly influence the mean in that direction. If the data is not approximately symmetric, a median should be used as well. In other cases, the mean is probably a better choice for the measure of center.) • “Why is it important to measure variability in the data when estimating a measure of center for the population using the data from a sample?” (To use the general rule, the difference in means must be greater than 2 MADs to determine a meaningful difference.
If there is small variation, then the samples may have come from a population that also has a small variation, so differences among groups may be more clearly defined.) Speaking: MLR8 Discussion Supports. Use this routine to support whole-class discussion. For each response that is shared, ask students to restate and/or revoice what they heard using precise mathematical language. Consider providing students time to restate what they hear to a partner, before selecting one or two students to share with the class. Ask the original speaker whether their peer was accurately able to restate their thinking. Call students’ attention to any words or phrases that helped clarify the original statement. This will provide more students with an opportunity to produce language as they interpret the reasoning of others. Design Principle(s): Support sense-making 20.4: Comparing Populations (5 minutes) Optional activity In this activity, students use the values computed in the previous activity to determine if there is a meaningful difference between two populations (MP2). Following the comparison of the groups, students are told that the populations from which they selected a sample were identical, although shuffled. Keep students in the same groups of 2 established at the beginning of this lesson. Allow students 3 minutes of partner work time followed by a whole-class discussion. Student Facing Using only the values you computed in the previous two activities, compare your sample to your partner's. Is it reasonable to conclude that the measures of center for each of your populations are meaningfully different? Explain or show your reasoning. Activity Synthesis Ask each group to share whether they found a meaningful difference. Tell students, “With your partner, compare the starred data for the two groups. What do you notice?” Tell students that the two populations are actually identical, but rearranged. Ask, “Did any groups get different means for your samples? 
Explain why that might have happened, even though the populations are the same.” (Two random samples from the population will usually not contain the same values, so different means are probably expected.)

One thing to note: The general rule is designed to say whether the two populations have a meaningful difference or if there is not enough evidence to determine if there is a meaningful difference. On its own, the general rule cannot determine if two populations are identical from only a sample. If the means are less than 2 MADs apart, there is still a chance that there is a difference in the populations, but there is not enough evidence in the samples to be convinced that there is a difference.

Lesson Synthesis

Key learning points:

• Probability and random samples are connected through the equal likelihood of individuals from the population being selected.
• It is important to select samples through a random process in order to compare two populations.

Consider asking these discussion questions:

• “Why was it important to select a random sample from the population data you had?” (A random sample gives us the best chance of being representative of the population.)
• “A scientist has access to data for the high temperature in London for each day of every year since 1945. Describe a process the scientist could use to compare the temperatures from 1963 and 2003.” (Select a random sample of temperatures from each year. Determine the correct measure of center and variation. Use our general rule to compare the measure of center for each year based on the sample characteristics.)
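The "greater than 2 MADs" general rule used throughout this lesson can be sketched in code. This is a minimal illustration only; the sample data and the function names are invented for the example:

```python
from statistics import mean

def mad(data):
    """Mean absolute deviation: average distance of each value from the mean."""
    m = mean(data)
    return sum(abs(x - m) for x in data) / len(data)

def meaningful_difference(sample_a, sample_b):
    """Apply the general rule: the difference in means is 'meaningful'
    if it exceeds 2 times the larger of the two MADs."""
    gap = abs(mean(sample_a) - mean(sample_b))
    return gap > 2 * max(mad(sample_a), mad(sample_b))

a = [12, 14, 15, 15, 16, 18]   # hypothetical sample from population A
b = [21, 22, 24, 25, 26, 28]   # hypothetical sample from population B
print(meaningful_difference(a, b))  # True: the means differ by far more than 2 MADs
```

Here mean(a) = 15 and mean(b) ≈ 24.3, while both MADs are at most 2, so the gap of about 9.3 comfortably exceeds twice the larger MAD.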
Latin squares with transitive autotopism group

From this page you can download all Latin squares of orders up to 11 that have a transitive autotopism group. (The lists are the same if you ask instead that the paratopism group is transitive.) For orders 2, 3, 4, 5, 7, 11 the only examples are the group tables. If you want to know what format these files are in, it is my usual latin squares format.

Species (main class) representatives

There are only two examples in the above lists that have sharply transitive autotopism groups. Neither has a sharply transitive paratopism group. The examples are both of order 8. I've put the last in the file.
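The statement that group tables qualify can be illustrated: the Cayley table of any group is a Latin square, and the group's translation maps act transitively, so its autotopism group is transitive. A small sketch for the cyclic group Z_n; this illustrates the definitions only, not the download file format:

```python
def cyclic_group_table(n):
    # Cayley table of the cyclic group Z_n: entry (i, j) is (i + j) mod n
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin_square(sq):
    # Every symbol appears exactly once in each row and each column
    n = len(sq)
    syms = set(range(n))
    rows_ok = all(set(row) == syms for row in sq)
    cols_ok = all({sq[i][j] for i in range(n)} == syms for j in range(n))
    return rows_ok and cols_ok

print(is_latin_square(cyclic_group_table(7)))  # True
```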
Meaning of Confluence + General Questions (11)

Given a DPN that works on channels x1,...,xn, the semantics is defined as a state transition system whose states are labeled with n streams, one for each variable x1,...,xn. If a node fires, it consumes values from some of these streams and adds values to other streams. If in some state, more than one node can fire, the question of confluence arises as discussed below.

We may formally define a relation s1 -> s2 that means that some nodes of the DPN fire and turn state s1 into state s2 by consuming and producing values for the streams in these states. Then, s1 ->* s2 means that s2 can be reached from s1 by finitely many transitions, i.e., s1 -> s3 -> s4 -> ... -> s' -> s2, and ->ε denotes the reflexive closure of ->, i.e., x ->ε y means that either x=y or x->y holds.

A DPN is confluent iff for all x,y1,y2 with x ->* y1 and x ->* y2 there is a z with y1 ->* z and y2 ->* z. Hence, it does not really matter whether we choose the firings x ->* y1 or the firings x ->* y2, since at the end they can be joined into z. See also pages 227 and the following in
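For a finite transition system, the confluence condition above can be checked directly by computing reachable sets. A sketch under the assumption that the state space and successor relation are small and finite (the helper names are my own):

```python
from itertools import product

def reachable(state, step):
    """All states reachable from `state` via zero or more -> transitions.
    `step` maps a state to the set of its one-step successors."""
    seen, frontier = {state}, [state]
    while frontier:
        s = frontier.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def is_confluent(states, step):
    """For every x and every y1, y2 with x ->* y1 and x ->* y2,
    some z must satisfy y1 ->* z and y2 ->* z."""
    for x in states:
        rx = reachable(x, step)
        for y1, y2 in product(rx, rx):
            if not (reachable(y1, step) & reachable(y2, step)):
                return False
    return True

# A tiny diamond-shaped system: a -> b, a -> c, b -> d, c -> d (confluent)
succ = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}
print(is_confluent(succ.keys(), lambda s: succ[s]))  # True
```

Removing the joining state d (so that b and c are dead ends) makes the check fail, matching the intuition that the two firings could then never be joined.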
Changing The Subject - Odd One Out

This is level 3; formulas which can be rearranged by adding, subtracting, multiplying or dividing both sides by a value. You can earn a trophy if you answer the questions correctly.

1. A B C D E: $$a=b+cd$$ $$b=a+cd$$ $$b=a-cd$$ $$c = \frac{a-b}{d}$$ $$d = \frac{a-b}{c}$$
2. A B C D E: $$m=n+3p$$ $$n=m-3p$$ $$p = \frac {m-n}{3}$$ $$\frac{m-n}{p}=3$$ $$n=m+3p$$
3. A B C D E: $$t=\frac{u-v}{a}$$ $$v=u+at$$ $$u=v-at$$ $$a=\frac{v-u}{t}$$ $$t= \frac{v-u}{a}$$
4. A B C D E: $$t-g=2s$$ $$g=t-2s$$ $$g+2s=t$$ $$s = \frac{t-g}{2}$$ $$s = \frac{g-t}{2}$$
5. A B C D E: $$3x+2y=7$$ $$3x=7-2y$$ $$x = \frac{7-2y}{3}$$ $$y = \frac{7-3x}{2}$$ $$7+2y=3x$$
6. A B C D E: $$v=5u-\frac 13$$ $$3v=5u-1$$ $$5u-3v=1$$ $$v=\frac{5u-1}{3}$$ $$u = \frac{3v+1}{5}$$
7. A B C D E: $$2a+3b=c$$ $$a=\frac{c-3b}{2}$$ $$b = \frac{2a-c}{3}$$ $$b = \frac{c-2a}{3}$$ $$c = 2a+3b$$
8. A B C D E: $$n=\frac{360}{A}-1$$ $$nA=359$$ $$nA=360-A$$ $$nA+A=360$$ $$n+1=\frac{360}{A}$$

Description of Levels

Level 1 - Formulas which can be rearranged by adding or subtracting terms from both sides. Example: Make e the subject of the formula d = e - f
Level 2 - Formulas which can be rearranged by multiplying or dividing both sides by a value. Example: Rearrange the formula n = mp
Level 3 - Formulas which can be rearranged by adding, subtracting, multiplying or dividing both sides by a value. Example: Rearrange the formula b = a + cd
Level 4 - Formulas including brackets or expressions in the numerator or denominator of a fraction. Example: Rearrange the formula p = s(t + 2)
Level 5 - Formulas including squares or square roots. Example: Rearrange the formula d² = 2a + 1
Level 6 - Finding the unknown which is not the subject of a formula. Example: If m = n² + 2p, find p when m=8 and n=10
Level 7 - Rearrange the formulae where the new subject appears twice; fill in the blanks. Example: Rearrange the formula ax + b = cx + g to make x the subject
Level 8 - Rearrange the formulae where the new subject appears twice; show your working. Example: Rearrange the formula a(3-x)=5x to make x the subject

Exam Style Questions - A collection of problems in the style of GCSE or IB/A-level exam paper questions (worked solutions are available for Transum subscribers).

More Algebra including lesson Starters, visual aids, investigations and self-marking exercises.

Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly. You can double-click the 'Check' button to make it float at the bottom of your screen.

Answers to this exercise are available lower down this page when you are logged in to your Transum account. If you don’t yet have a Transum subscription one can be very quickly set up if you are a teacher, tutor or parent.
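A candidate rearrangement can be checked numerically: pick random values that satisfy the original formula and see which candidate fails. A sketch for the first question above (question 1, original formula $$a=b+cd$$); the function name and value ranges are my own:

```python
import random

# Question 1: the original formula is a = b + c*d.  Each candidate
# rearrangement is written as a predicate; a correct rearrangement must
# hold for every (b, c, d) once a is computed from the original formula.
candidates = {
    "A": lambda a, b, c, d: a == b + c * d,
    "B": lambda a, b, c, d: b == a + c * d,
    "C": lambda a, b, c, d: b == a - c * d,
    "D": lambda a, b, c, d: c == (a - b) / d,
    "E": lambda a, b, c, d: d == (a - b) / c,
}

def odd_one_out(candidates, trials=100):
    wrong = set()
    for _ in range(trials):
        b, c, d = (random.randint(1, 9) for _ in range(3))
        a = b + c * d                      # make the original formula true
        for name, pred in candidates.items():
            if not pred(a, b, c, d):
                wrong.add(name)
    return wrong

print(odd_one_out(candidates))  # {'B'}
```

The only candidate that ever fails is B ($$b=a+cd$$), which is indeed the odd one out.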
Assignment 1 - The C Preprocessor and Bit Operations

• The problems of this assignment must be solved in C.
• Your programs should have the input and output formatting according to the testcases listed after the problems.
• Your programs should consider the grading rules: https://grader.eecs.jacobs-university.de/courses/320112/2018 1gA/Grading-Criteria-C2.pdf

Problem 1.1 Circular permutation of three variables (2 points)

Presence assignment, due by 18:30 h today

Write a macro and a program for the circular permutation of the contents of three variables having the same data type (i.e., the content of the first is put into the second, the content of the second into the third and the content of the third into the first). The macro should have four parameters: the three variables and their corresponding data type.

Your program should read three integers and three doubles from the standard input. You should print on the standard output the contents of the six variables after the permutation (doubles with 5 after floating point precision). You can assume that the input will be valid.

Testcase 1.1: input
Testcase 1.1: output
After the permutation:

Problem 1.2 Determine the third least significant bit (1 point)

Presence assignment, due by 18:30 h today

Write a macro and a program for determining the third least significant bit (the third bit from the right in the binary representation) of an unsigned char read from the standard input. Your program should read an unsigned char from the standard input and print the decimal representation of the unsigned char as well as its third least significant bit (which is either 1 or 0) on the standard output, using only bitwise operators and without explicitly converting to binary. You can assume that the input will be valid.
Testcase 1.2: input
Testcase 1.2: output
The decimal representation is: 70
The third least significant bit is: 1

Problem 1.3 Determine the value of an expression (2 points)

Write multiple macros and a program for determining the value of the following expression depending on three variables a, b, and c:

expr(a, b, c) = (sum(a, b, c) + max(a, b, c)) / min(a, b, c)

For example, if 3, 10, 2 is the input, the value of the expression is expr(3, 10, 2) = (sum(3, 10, 2) + max(3, 10, 2)) / min(3, 10, 2) = (15 + 10) / 2 = 12.5.

Your program should read three integers from the standard input. For calculating the expression for these values only macros should be used. The result should be printed on the standard output with a floating point precision of 6. You can assume that the input will be valid.

Testcase 1.3: input
Testcase 1.3: output
The value of the expression is: 12.500000

Problem 1.4 Conditional compilation for showing intermediate results (2 points)

Write a program which computes the product of two n × n integer matrices and uses conditional compilation for showing/not showing intermediate results (products of the corresponding components). The product of two n × n matrices A = (Aij) and B = (Bjk) with i, j, k = 1, ..., n is calculated as Cik = Σj Aij · Bjk, with i = 1, ..., n and k = 1, ..., n.

For example, the product of the matrices

A = | 1 2 |   and   B = | 1 2 |
    | 3 4 |             | 3 4 |

is

C = | 1·1 + 2·3   1·2 + 2·4 |  =  | 1 + 6    2 + 8  |
    | 3·1 + 4·3   3·2 + 4·4 |     | 3 + 12   6 + 16 |

The intermediate results which are to be shown or not are 1, 6, 2, 8, 3, 12, 6, and 16.

Your program should read from the standard input the dimension of the matrix (in the previous example 2) along with the components of two integer matrices. The output consists of the intermediate results and the value of the product of the two matrices if the directive INTERMEDIATE is defined. If INTERMEDIATE is not defined then only the product of the two matrices should be printed on the standard output. You can assume that the input will be valid.
Testcase 1.4: input
Testcase 1.4: output
The intermediate product values are:
The product of the matrices is:

Problem 1.5 Binary representation backwards (1 point)

Write a program using bit masks and bitwise operators for printing the binary representation of an unsigned int backwards. For example the binary representation of the unsigned int 12345 on 16 bits is 0011000000111001. Therefore, the backwards binary representation is 10011100000011. Your program should read an unsigned int from the standard input and print on the standard output the backwards binary representation of the read integer without leading zeros. You should not store the bits in an array. You can assume that the input will be valid.

Testcase 1.5: input
Testcase 1.5: output
The backwards binary representation is: 10011100000011

Problem 1.6 Binary representation (2 points)

Write a program using bit masks and bitwise operators for printing the binary representation of an unsigned int on 16 bits without storing the bits in an array. For example the binary representation of the unsigned integer number 12345 on 16 bits is 0011000000111001. Your program should read an unsigned int from the standard input and print on the standard output the binary representation of the integer number with leading zeros. You can assume that the input will be valid.

Testcase 1.6: input
Testcase 1.6: output
The binary representation is: 0011000000111001

Problem 1.7 setswitchbits() (2 points)

Write a program for setting one bit to 1 and switching another bit of an unsigned int. The function setswitchbits should have three parameters: the unsigned int to be changed and the two bits. The first bit is to be set to 1 and the other bit is to be switched. For example the binary representation on 16 bits of the unsigned int 12345 is 0011000000111001. If setswitchbits() with bits 6 (the 7th bit from the right) and 12 (the 13th bit from the right) is called then the output on the standard output should be 0010000001111001.
You can assume that the input will be valid.

Testcase 1.7: input
Testcase 1.7: output
The binary representation is: 0011000000111001
After setting and switching: 0010000001111001

How to submit your solutions

• Your source code should be properly indented and compile with gcc without any warnings (you can use gcc -Wall -o program program.c). Insert suitable comments (not on every line...) to explain what your program does.
• Please name the programs according to the suggested filenames (they should match the description of the problem) in Grader. Otherwise you might have problems with the inclusion of header files. Each program must include a comment on the top like the following: a1 p1.c Firstname Lastname
• You have to submit your solutions via Grader at
• If there are problems (but only then) you can submit the programs by sending mail to k.lipskoch@jacobs-university.de with a subject line that begins with JTSK-320112. It is important that you do begin your subject with the coursenumber, otherwise I might have problems to identify your submission.
• Please note, that after the deadline it will not be possible to submit any solutions. It is useless to send late solutions by mail, because they will not be accepted.

This assignment is due by Tuesday, February 13th, 10:00 h.
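The assignment mandates C macros, but the underlying bit manipulations are language-independent. Purely as an illustration of the logic of problems 1.2, 1.5, and 1.7 (not a valid submission), here is a Python sketch using the testcase values from above:

```python
def third_least_significant_bit(x):
    # Problem 1.2: shift the target bit down to position 0 and mask it
    return (x >> 2) & 1

def backwards_binary(x):
    # Problem 1.5: emit bits starting from the least significant one,
    # stopping when no set bits remain (so no leading zeros in the result)
    out = ""
    while x:
        out += str(x & 1)
        x >>= 1
    return out

def setswitchbits(x, set_bit, switch_bit):
    # Problem 1.7: OR forces a bit to 1, XOR toggles (switches) a bit
    return (x | (1 << set_bit)) ^ (1 << switch_bit)

print(third_least_significant_bit(70))              # 1
print(backwards_binary(12345))                      # 10011100000011
print(format(setswitchbits(12345, 6, 12), "016b"))  # 0010000001111001
```

In the C versions, the same expressions would appear inside `#define` macros and the 16-bit printing would be done with a mask loop instead of `format`.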
Grading Standards in SFUSD Schools

Which schools are easy graders? Which are hard?

The purpose of grading is a hot button issue in education. A recent SF Chronicle article described Palo Alto Unified’s attempt to switch to “evidence-based grading, which means rewarding students for demonstrating they know the subject matter, even if they need more time or test retakes to do so, without behavior, participation or obedience reflected in the calculation.”

In a different universe, one could imagine conservatives being the ones pushing for evidence-based grading: “if you want an ‘A’, take this test and prove that you deserve it. There are no participation trophies: you don’t get an ‘A’ just for showing up.” In our universe, it’s usually progressives who are pushing the policy and they’re motivated by the beliefs that homework discriminates against children from unstable families, that tests are stressful, and that expecting students to show up and behave is racist. Conservatives, therefore, reflexively oppose them.

I thought it would be interesting to examine, as the article puts it: whether an A means students actually mastered the subject matter or simply showed up and didn’t make waves.

The California Department of Education publishes no data on grades, presumably because it knows that grades mean different things from school to school and district to district. If grades were consistent, we wouldn’t need the SBAC. SFUSD doesn’t usually publish its data either but it did make school-by-school grade reports public for the Fall Semester of 2022-23. Figure 1 is a screenshot from one of those grade reports.

There is a crude but effective way to assess grading standards: compare the percentage of students who receive an ‘A’ with the percentage who are proficient (i.e. score either Meets Standards or Exceeds Standards) in the corresponding SBAC test.
Such a comparison will reveal whether there are students who are proficient but don’t get an ‘A’ (demonstrating that the school is a tough grader) or whether there are students who get an ‘A’ despite not being proficient (demonstrating that the school is an easy grader).

Middle Schools

Figure 2 shows the ELA grading standards in SFUSD middle schools. The x-axis is the share of students at the school who are proficient (i.e. who meet or exceed standards). The y-axis is the share who meet or exceed standards minus the share of students who got an ‘A’. When this number is above zero, it means that there are students who were good enough to at least meet standards on the SBAC test who did not manage to get an ‘A’ in their class. When the number is below zero, it means that there are students who got an ‘A’ despite not meeting or exceeding standards on the SBAC test.

At Roosevelt, 75% of kids met or exceeded standards but only 44% received an ‘A’, leaving 31% who were good enough to meet standards but got a ‘B’ or lower grade. Meanwhile, at King, just 30% of kids met or exceeded standards but 53% got an ‘A’, meaning that 22% of the students received an ‘A’ despite not meeting standards. An ‘A’ clearly means very different things at each school.

There is an obvious trend whereby the more students a school has who are proficient the harder it is to get an ‘A’. Some schools are less prone to grade inflation than others. Paul Revere has the lowest overall proficiency rate in the city but it doesn’t sugarcoat the situation by throwing around ‘A’s. Similarly Willie Brown’s students are more likely to be proficient than Visitacion Valley’s (34% to 24%) but less likely to receive an ‘A’ (45% to 40%).

Middle School Math

SFUSD has lower standards for Math grading than ELA grading. More students earn an ‘A’ in Math than in ELA (53% to 51%) even though SFUSD’s students are not as good at Math as ELA.
Only 39% of 6-8 graders were proficient in Math compared to 52% who were proficient in ELA. As figure 3 shows, most schools fall below the zero line meaning they give ‘A’ grades to students who are not proficient. 49% of Visitacion Valley’s students received an ‘A’ even though only 13% of them were proficient. Meanwhile, only 47% of Rooftop’s students received an ‘A’ even though 50% of them were proficient. Alice Fong Yu is at the opposite extreme: only 33% of its 7th graders earned an ‘A’ even though 73% of them were proficient (and 52% EXCEEDED standards).

A few months ago, I wrote about SFUSD’s new Math vision which made heavy use of this report produced by TNTP, an education nonprofit formerly known as The New Teacher Project. This report stressed the importance of grade-level assignments. The authors found that schools had such low expectations for students of color that they were often not taught the grade-level material they were expected to learn. But the students were graded on what they were taught, not on what the standards expected them to have learned. One of the conclusions of the report was: “students of color received grades that less accurately reflected their mastery of rigorous content”

That is clearly what is happening in San Francisco too: most of the schools with easy grading standards have Latino/Black majorities.

High Schools

High schoolers only sit the SBAC in 11th grade so I’m comparing the SBAC results with the grades of juniors only. Figure 4 shows the results for ELA. There are fewer high schools so I’m able to show two years of data on one chart.

The 2023 SOTA figure looks so anomalous that I double-checked it. Although 84% of juniors met or exceeded the standards, only 33% of them got an ‘A’. The previous year’s juniors were marginally more likely to be proficient (88% vs 84%) but far more likely to get an ‘A’ (74% vs 33%).
Maybe there was one demanding teacher who transferred to SOTA from their neighbors at Academy because, in the previous year of 2021-22, only 25% of Academy’s juniors got an ‘A’ even though 58% of them were proficient.

For 11th grade Math, the magnitude of the difference between the proficiency rate and the ‘A’ rate is even greater, as figure 5 shows. There’s a partial explanation for this. By the time students reach 11th grade, they’re taking many different Math classes. Kids on the standard pathway are taking Algebra II; some others are taking the Algebra II + Precalculus compression course; some will be taking regular precalculus; others will be taking honors precalculus; finally, a few are already taking AP Calculus. At the other end of the spectrum, there are kids taking or retaking earlier Math classes. A kid who is good enough to earn an ‘A’ in one class (e.g. regular precalculus) might instead be earning a ‘B’ in a more advanced class (e.g. honors precalculus).

Over 80% of Lowell’s students are proficient in Math but only around half (49% in 2023 and 55% in 2022) receive an ‘A’, in part because they’re taking harder courses and thus being held to higher standards. More generally, there is one group of schools (Balboa, Galileo, Lincoln, Lowell, SOTA, Washington) where many students who are proficient in Math don’t receive an ‘A’ and another group of schools (Academy, Burton, Jordan, Marshall, Mission, O’Connell, SF International) where lots of students receive ‘A’s despite not being proficient. Only Wallenberg managed to be in one group one year and the other group the other year. The extreme examples came at Mission (47% received an ‘A’ even though only 17% were proficient) and Jordan (31% received an ‘A’ even though zero were proficient) in 2022.

In SFUSD, grading is relative. Grading standards vary enormously from school to school. Students are effectively being compared against their classmates, not against some objective standard.
An ‘A’ in one school is not worth the same as an ‘A’ in another.

Imagine that SFUSD were to switch to evidence-based grading in a consistent way so that grades actually tracked mastery of the material. The schools that I called hard graders would all see an increase in the number of ‘A’ grades and the schools that I called easy graders would all see a decrease in the number of ‘A’ grades. Some consequences of this would be:

• The average GPAs of Asian and White students would increase and the average GPAs of Black and Latino students would decrease.
• The number of Latino and Black students admitted to Lowell would fall (just one B in 8th grade is sufficient to exclude a student from Band One admissions).
• Students from the easy grading high schools would find it harder to get admitted to colleges because their GPAs would be lower.

While this analysis is very suggestive, it does suffer from a number of serious weaknesses:

• Students sit SBAC tests in the Spring so it would make sense to compare SBAC results with grades from the Spring semester, not the preceding Fall semester. Alas, Spring semester grades are not public so the analysis had to make do with Fall semester grades. This temporal mismatch makes interpretation of the results more difficult. Suppose a school grades its students accurately based on their mastery of math but offers spectacularly good instruction that dramatically increases the students’ actual knowledge of math during the school year. The students’ SBAC scores in the Spring will reflect their increased mastery of math but the analysis will compare these high SBAC scores to the lower grades they received in the Fall and conclude, inaccurately, that the school grades harshly.
• Some students don’t sit the SBAC tests but every student gets a class grade. If the students who don’t sit the SBAC tests are not representative of the class (and I would bet that they tend to be below average), this could bias the scores.
Ideally, we would only compare the grades of students for whom we have SBAC scores. Unfortunately, we only have class averages to work with.

• The grades data shows the “count of marks” in each subject which may be different from the number of students. In 2022-23, Aptos MS had 303, 273, and 272 students in grades 6-8 respectively but, as figure 1 above shows, the ELA marks for those grades numbered 334, 301, and 307. That there are around 30 more ELA marks than there are students indicates that there are some students taking two ELA classes. These are probably weaker students receiving intensive reading support. As weaker students they are less likely to receive ‘A’ grades. The grades in their two ELA courses are thus dragging the school’s average down even though the same students are only sitting the SBAC once (if at all).
• The data may be incomplete. The same Aptos report card showed that there were only 150 Math marks in grade 8 in 2022-23 even though all 272 students were presumably taking Math. It’s impossible to know whether the missing students’ marks were better or worse than the students whose grades we do know.

h/t to the reader who pointed these out to me.

The eagle-eyed may notice that Alice Fong Yu was missing from the middle school ELA chart. That’s because the published data contains no grades for ELA for Alice Fong Yu students.

Another excellent analysis!! The results show the importance of using standards based assessments. Also parents need to get the true information on how students are doing. Suggest SFUSD develop a consistent grading process for all schools.
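The gap metric used throughout the article (percent proficient minus percent receiving an ‘A’) is simple to compute. A sketch using the handful of school figures quoted in the text; the dictionary here is just those examples, not the full dataset:

```python
# "Grading gap": percent proficient on the SBAC minus percent receiving an 'A'.
# Positive = tough grader (proficient students who didn't get an 'A');
# negative = easy grader ('A' grades outnumber proficient students).
schools = {
    "Roosevelt":         {"proficient": 75, "a_rate": 44},
    "King":              {"proficient": 30, "a_rate": 53},
    "Visitacion Valley": {"proficient": 24, "a_rate": 40},
    "Willie Brown":      {"proficient": 34, "a_rate": 45},
}

for name, d in sorted(schools.items(),
                      key=lambda kv: kv[1]["proficient"] - kv[1]["a_rate"],
                      reverse=True):
    gap = d["proficient"] - d["a_rate"]
    label = "tough grader" if gap > 0 else "easy grader"
    print(f"{name}: gap = {gap:+d} ({label})")
```

With these numbers Roosevelt comes out as the tough grader (+31) and King as the easiest (-23), matching the discussion of figure 2.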
• Highlights 2024. Learning Aggregate Queries Defined by First-Order Logic with Counting. (slides, poster) • Colloquium at HU Berlin. Descriptive Complexity of Learning. (slides) • PhD Defence. Descriptive Complexity of Learning. (slides) • PODS 2022. On the Parameterized Complexity of Learning First-Order Logic. (video, slides) • CSL 2021. Learning Concepts Described By Weight Aggregation Logic. (video, slides) • Highlights 2019. Learning Concepts Definable in First-Order Logic with Counting. (slides) • LICS 2019. Learning Concepts Definable in First-Order Logic with Counting. (slides)
(PDF) Cases, clusters, densities: Modeling the nonlinear dynamics of complex health trajectories

First, while ABM is generally focused on simulating social processes for theory testing or applied scenario analysis, CBM focuses on pattern recognition in real data; hence they have developed along different intellectual trajectories (Haynes, 2017). Second, ABM requires a basic knowledge of programming, and is often employed by those grounded more squarely in the quantitative tradition; while those using CBM, particularly qualitative comparative analysis (QCA), tend to be qualitative researchers (Castellani, Rajaram, Gunn, & Griffiths, 2015a; Yang & Gilbert, 2008). Third, ABM and CBM have a different approach to modelling, which has sometimes been misconstrued as a difference between a restrictive versus generalist view of complexity, and which has incorrectly led CBM researchers to be somewhat dismissive of ABM and vice versa (Keuschnigg, Lovsjö, & Hedström, 2018).

Overall, then, ABM is a powerful computational modelling tool. And one, in particular, that offers much to CBM in terms of more effectively modelling issues of case-based agency, the interaction amongst cases, and the impact collective dynamics have on macroscopic patterns and trends (Castellani et al., 2015a).

Before we proceed, however, it needs to be stated up front that, despite Byrne's empirical insight, cases do not always have to be modelled as complex or agent-based, as the aims of a study might differ. Nonetheless, subsequent research by Haynes (2017) and others has strongly supported Byrne's complex systems view of cases (Castellani et al., 2015a, 2015b; Williams & Dyer, 2017).
ACOS logarithmic form

04-28-2016, 03:07 PM (This post was last modified: 04-28-2016 03:10 PM by Claudio L..)
Post: #1
Claudio L., Senior Member, Joined: Dec 2013

ACOS logarithmic form

Here's my dilemma: I looked at Wikipedia for the logarithmic forms of ACOS. The second row is the one that interests me. Now let's make Z=2 (real, but outside range so result will be complex).

2^2-1 = 3
ln(2+sqrt(3)) = 1.3169... (all real numbers so far)
Now -i*1.3169... = (0 -1.3169)

Now go to your 50g and do 2 ACOS, you'll get (0 1.3169...) (positive imaginary part). Wolfram Alpha agrees with the 50g, so that imaginary part has to be positive. Then a quick check: Doing that on the 50g we also get positive imaginary part.

Is the formula wrong in wikipedia? Perhaps it should be i*ln(...) instead of -i? I just need some help proving it, so it's not just me against the world.

EDIT: BTW, the ASIN() formula in Wikipedia works in agreement with the 50g.

04-28-2016, 03:29 PM
Post: #2
Claudio L.

RE: ACOS logarithmic form

(04-28-2016 03:07 PM)Claudio L. Wrote: Here's my dilemma:

I think I figured it out. I think the formula is right, but the only difference is which branch of the sqrt() you take. For whatever reason, for real numbers the 50g and Wolfram take the negative root, which provides the sign. Are there any conventions as far as which branch to take? If I take an arbitrary number like (2 3), I get the right value, but on reals it goes to the wrong solution (actually I shouldn't say the wrong solution, as both are correct solutions, just goes to "another" solution).

04-28-2016, 05:03 PM (This post was last modified: 04-28-2016 05:04 PM by Claudio L..)
Post: #3
Claudio L.

RE: ACOS logarithmic form

Sorry if I turned this into a monologue, but I'll leave it written for the curious reader. So how's the 50g and everybody else doing it?
It seems everybody has a slightly different formula for ACOS, but everybody agrees 100% identical for ASIN. ASIN doesn't have a problem: whether you use the calculator, or do it "by hand" following the formula, you get the same result.

, all the way to the bottom, we have a formula from Mathworks. is the implementation from Wolfram. Basically, Wolfram just does pi/2-asin(Z), so the branch chosen is consistent with the ASIN results. Mathworks uses a formula slightly different from Wikipedia: Wikipedia uses sqrt(Z^2-1), while the other formula has i*sqrt(1-Z^2).

Before somebody jumps and says "it's the same!", let's try a couple of cases:

For Z=2: i*sqrt(1-2^2) = i*sqrt(-3) = i*(i*1.73...) = -1.73...
For Z=(2 3): i*sqrt(1-Z^2) = i*sqrt(6-12*i) = i*(3.11..., -1.92...) = (1.92..., 3.11...)
For Z=(2 -3): i*sqrt(1-Z^2) = i*sqrt(6+12*i) = i*(3.11..., 1.92...) = (-1.92..., 3.11...)

Very subtle... the second form gives results consistent with the 50g for all values. It would never occur to me that such a trivial expression manipulation would push you through a different solution. Very sneaky.

04-28-2016, 06:42 PM (This post was last modified: 04-28-2016 06:43 PM by Ángel Martin.)
Post: #4
Ángel Martin, Senior Member, Joined: Dec 2013

RE: ACOS logarithmic form

It's commonly accepted to define as "principal" branch the solution with argument between -pi and pi. This is covered with good explanations in the 15C Special Functions Manual, I'm sure somebody here will be able to point at its URL...

"To live or die by your own sword one must first learn to wield it aptly."

04-28-2016, 06:44 PM (This post was last modified: 04-28-2016 07:02 PM by Dieter.)
Post: #5
Dieter, Senior Member, Joined: Dec 2013

RE: ACOS logarithmic form

(04-28-2016 05:03 PM)Claudio L. Wrote: Before somebody jumps and says "it's the same!", let's try a couple of cases: Very subtle...

I wouldn't call this subtle, but obvious.
At least as long as you do not forget that a square root has both a positive and a negative value:

For Z = 2:
±sqrt(Z^2-1) = ±sqrt(3) = 1.73 or -1.73
i·±sqrt(1-Z^2) = i·±sqrt(-3) = i·±1.73i = -1.73 or 1.73

For Z = 2+3i:
±sqrt(Z^2-1) = ±sqrt(-6+12i) = 1.92+3.11i or -1.92-3.11i
i·±sqrt(1-Z^2) = i·±sqrt(6-12i) = i·±(3.11-1.92i) = 1.92+3.11i or -1.92-3.11i

For Z = 2-3i:
±sqrt(Z^2-1) = ±sqrt(-6-12i) = 1.92-3.11i or -1.92+3.11i
i·±sqrt(1-Z^2) = i·±sqrt(6+12i) = i·±(3.11+1.92i) = -1.92+3.11i or 1.92-3.11i

So it's actually OK to jump in and say "it's the same!". ;-)

04-28-2016, 08:23 PM
Post: #6
Claudio L.

(04-28-2016 06:44 PM) Dieter Wrote: I wouldn't call this subtle, but obvious.

Well... it wasn't so obvious to me. Of course I'm well familiar with multivalued functions, but usually conventions dictate which value is the principal one, and that's the end of it: calculations are straightforward, and as long as you keep in the back of your mind that there are other solutions, you can proceed without changes. What got me is that using sqrt(z) vs. i*sqrt(-z) changes the selected branch only for some values, but not all.

(04-28-2016 06:44 PM) Dieter Wrote: So it's actually OK to jump in and say "it's the same!". ;-)

I knew somebody was going to... enjoy your moment of glory :-) Now if I leave newRPL returning the other solution for ACOS... would you be the first one telling me "it's not the same! the other solution is the right one!"? I guess we'll never know... I made sure newRPL returns the expected value :-)

04-28-2016, 08:29 PM (This post was last modified: 04-28-2016 08:32 PM by Claudio L..)
Post: #7
Claudio L.

(04-28-2016 06:42 PM) Ángel Martin Wrote: It's commonly accepted to define as "principal" branch the solution with argument between -pi and pi. This is covered with good explanations in the 15C Special Functions Manual; I'm sure somebody here will be able to point at its URL...
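Dieter's point, that the two radicals only ever differ in which branch the principal square root happens to pick, can be checked numerically. A small sketch in Python's `cmath` (my own illustration, not from the thread):

```python
import cmath

def same_pair(z, tol=1e-12):
    """±sqrt(z^2-1) and i*±sqrt(1-z^2) name the same *pair* of numbers."""
    s = cmath.sqrt(z * z - 1)        # one branch of sqrt(z^2 - 1)
    t = 1j * cmath.sqrt(1 - z * z)   # i times one branch of sqrt(1 - z^2)
    # Both square to z^2 - 1, so the principal branches can
    # disagree, but only ever by a sign.
    return min(abs(s - t), abs(s + t)) < tol

# Holds for real and complex arguments alike:
results = [same_pair(z) for z in (2, 2 + 3j, 2 - 3j, -0.5 + 0.1j)]

# ...yet at z = 2 the principal branches pick *opposite* signs,
# which is exactly what flips the sign of the ACOS result:
s1 = cmath.sqrt(2 * 2 - 1)       # +1.732...
s2 = 1j * cmath.sqrt(1 - 2 * 2)  # -1.732...
```

As two-valued expressions they are identical; as single-valued principal branches they are not, which is the whole discrepancy.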
I know it might seem trivial, but once you get into the subject in depth it's a mess. Each transcendental function has its own weird branch cuts, not so trivial as you can see EDIT: What you mentioned works for ln() and sqrt(), acos() is a different animal. 04-28-2016, 08:58 PM Post: #8 Valentin Albillo Posts: 1,100 Senior Member Joined: Feb 2015 RE: ACOS logarithmic form Hi, Ángel: (04-28-2016 06:42 PM)Ángel Martin Wrote: It's commonly accepted to define as "principal" branch the solution with argument between -pi and pi. This is covered with good explanations in the 15C Special Functions Manual, I'm sure somebody here will be able to point at its URL... Here you are: HP-15C Advanced Functions Handbook Best regards. All My Articles & other Materials here: Valentin Albillo's HP Collection 04-29-2016, 02:19 AM Post: #9 Claudio L. Posts: 1,885 Senior Member Joined: Dec 2013 RE: ACOS logarithmic form (04-28-2016 08:58 PM)Valentin Albillo Wrote: . Hi, Ángel: (04-28-2016 06:42 PM)Ángel Martin Wrote: It's commonly accepted to define as "principal" branch the solution with argument between -pi and pi. This is covered with good explanations in the 15C Special Functions Manual, I'm sure somebody here will be able to point at its URL... Here you are: HP-15C Advanced Functions Handbook Best regards. Thanks for the link! Page 61 lists for ACOS the same formula as Wikipedia. Does this mean ACOS on the 15C returns the other values? Can somebody run ACOS(2) on the 15C? 04-29-2016, 02:50 AM Post: #10 Sylvain Cote Posts: 2,159 Senior Member Joined: Dec 2013 RE: ACOS logarithmic form (04-29-2016 02:19 AM)Claudio L. Wrote: Thanks for the link! Page 61 lists for ACOS the same formula as Wikipedia. Does this mean ACOS on the 15C returns the other values? Can somebody run ACOS(2) on the 15C? HP-15C S/N:2435A04214 FIX 9 SF 8 ACOS [X: 0] Re<>Im [X: 1.316957897] 04-29-2016, 06:00 AM (This post was last modified: 04-29-2016 06:19 AM by Ángel Martin.) 
Post: #11 Ángel Martin Posts: 1,447 Senior Member Joined: Dec 2013 RE: ACOS logarithmic form (04-29-2016 02:50 AM)Sylvain Cote Wrote: (04-29-2016 02:19 AM)Claudio L. Wrote: Thanks for the link! Page 61 lists for ACOS the same formula as Wikipedia. Does this mean ACOS on the 15C returns the other values? Can somebody run ACOS(2) on the 15C? HP-15C S/N:2435A04214 FIX 9 SF 8 ACOS [X: 0] Re<>Im [X: 1.316957897] 41Z Module, any revision: Z, 2 = 2 + j0 ZACOS = 0 + 1.316957897E0 When I programmed the 41Z I had to delve into this one at length. I even found a bug in Free42 (that I was using to check the results) which Thomas duly corrected very promptly, interestingly in the inverse trigonometric functions. I don't remember how I came up with the right value selection, but it must have been by applying the branch definition criteria all across the chain of intermediate calculations; how else. "To live or die by your own sword one must first learn to wield it aptly." 04-29-2016, 06:06 AM (This post was last modified: 04-29-2016 06:19 AM by Ángel Martin.) Post: #12 Ángel Martin Posts: 1,447 Senior Member Joined: Dec 2013 RE: ACOS logarithmic form (04-28-2016 08:29 PM)Claudio L. Wrote: (04-28-2016 06:42 PM)Ángel Martin Wrote: It's commonly accepted to define as "principal" branch the solution with argument between -pi and pi. This is covered with good explanations in the 15C Special Functions Manual, I'm sure somebody here will be able to point at its URL... I know it might seem trivial, but once you get into the subject in depth it's a mess. Each transcendental function has its own weird branch cuts, not so trivial as you can see here. EDIT: What you mentioned works for ln() and sqrt(), acos() is a different animal. That's not logical; once a branch of the logarithm is used it should apply to all your functions and provide the same criteria across. 
The Ln is the root cause of every multi-value here, including the square root, which is nothing more than another logarithm if you use the expression SQRT(z) = exp[ln(z)/2]. The ACOS function is more of the same, in this instance with the rule applied twice, since the ln appears twice in its expression; or even three times if you'd use Z^2 = exp[2 ln(z)]... The branch selection is therefore critical. I remember in complex analysis classes we sometimes needed to change the branch to avoid function singularities during integration, as when the integration path crossed the cut where the function wasn't analytic. It's been a while, so I may remember wrong, though I suspect the basic concepts are still clear in my memory.

"To live or die by your own sword one must first learn to wield it aptly."

04-29-2016, 01:33 PM (This post was last modified: 04-29-2016 01:34 PM by Csaba Tizedes.)
Post: #13
Csaba Tizedes (Posts: 609, Senior Member, Joined: May 2014)

(04-28-2016 05:03 PM) Claudio L. Wrote: the other formula has i*sqrt(1-Z^2).

I asked my Maple: [Maple screenshot not preserved] Maybe it helps.

04-29-2016, 02:39 PM
Post: #14
Claudio L.

(04-29-2016 06:06 AM) Ángel Martin Wrote: That's not logical; once a branch of the logarithm is used, it should apply to all your functions and provide the same criteria across. The Ln is the root cause of every multi-value here, including the square root, which is nothing more than another logarithm if you use the expression SQRT(z) = exp[ln(z)/2]. The ACOS function is more of the same, in this instance with the rule applied twice, since the ln appears twice in its expression; or even three times if you'd use Z^2 = exp[2 ln(z)]... The branch selection is therefore critical.
I remember in complex analysis classes we sometimes needed to change the branch to avoid function singularities during integration, as when the integration path crossed the cut where the function wasn't analytic. It's been a while, so I may remember wrong, though I suspect the basic concepts are still clear in my memory.

You are right: once you select the right convention for which branch to take for sqrt() and ln(), acos() should be automatic... or that's what I thought; that's the whole point of this thread. The formula is supposed to take the branch cut of the sqrt(), then shift it, then ln() remaps it and adds its own branch cuts. The whole point is that, like you, I expected this to be taken care of automatically, but that's not the case with the formula from Wikipedia or the 15C manual. It does work great when you use i*sqrt(1-Z^2) rather than sqrt(Z^2-1). It seems the 15C and the 41 (thanks to all who provided the results) agree with the branch cuts of i*sqrt(1-Z^2), so why is the formula in the manual showing sqrt(Z^2-1)? Same question to the Wikipedia folks and a couple of other websites I found. There's very good agreement in the results between all calculators and major CAS systems (thanks to the Maple check in this thread); it's the docs that cause the discrepancy.

04-29-2016, 09:15 PM (This post was last modified: 04-29-2016 09:18 PM by ljubo.)
Post: #15
ljubo (Posts: 13, Junior Member, Joined: Apr 2016)

If I'm interpreting the ACOS illustration from the HP-15C Advanced Functions Handbook, page 61, correctly, then the positive real axis for x > 1 (blue line) is mapped to the positive imaginary axis, which means documentation and implementation are consistent. Note that the principal value is denoted with all-capital letters (on top of the right-hand illustration), where lowercase letters denote the multi-valued inverse; see the explanation on page 59.
04-30-2016, 07:01 AM (This post was last modified: 04-30-2016 07:10 AM by Ángel Martin.) Post: #16 Ángel Martin Posts: 1,447 Senior Member Joined: Dec 2013 RE: ACOS logarithmic form (04-29-2016 09:15 PM)ljubo Wrote: If I'm interpreting correctly the ACOS illustration from HP 15C Advanced Functions Handbook, page 61, then the positive real axis for x>1 (blue line) is mapped to the positive imaginary axis - means documentation and implementation are consistent. Note that the principal value is denoted with all cap letters (on top of right-hand illustration), where lowercase letters are denoting the multi-value inverse, see explanation on the page 59. Interesting, I never looked at those diagrams as showing mappings between domain and results regions but as defining the ranges of applicability, i.e. showing where will the results be; yet what you're saying appears to be accurate. What's intriguing is the sentence in pg. 62: "The principal branches in the last four graphs above are obtained from the equations shown, but don't necessarily use the principal branches of ln(z) and sqr(z)" So this leads us to believe that in some cases they had another criteria for choosing the principal branches, like keeping symmetries or other properties to make them more analogous to the behavior in the real domain. Oh and I remember better now: for contour integrals involving multi-valued (or other) functions the method wasn't to change the branch but to make the integration path "elude" the singularities - thus those funny "C" shapes and circles around the poles. Perhaps a mathematician in the audience could stop our poking the beast and provide a more rigorous clarification? "To live or die by your own sword one must first learn to wield it aptly." 04-30-2016, 08:57 AM (This post was last modified: 04-30-2016 09:19 AM by ljubo.) 
Post: #17 ljubo Posts: 13 Junior Member Joined: Apr 2016 RE: ACOS logarithmic form (04-30-2016 07:01 AM)Ángel Martin Wrote: Interesting, I never looked at those diagrams as showing mappings between domain and results regions but as defining the ranges of applicability, i.e. showing where will the results be; yet what you're saying appears to be accurate. Well, there is no other definition of single-valued inverse function (restricted to the principal branch). Without diagram they would need some curly brackets and different equations depending of Re (z) and Im(z) >1, <-1, etc. (04-30-2016 07:01 AM)Ángel Martin Wrote: What's intriguing is the sentence in pg. 62: "The principal branches in the last four graphs above are obtained from the equations shown, but don't necessarily use the principal branches of ln(z) and sqr(z)" They have introduced a clear notation and are sticking to it: "In the discussion that follows, the single-valued inverse function (restricted to the principal branch) is denoted by uppercase letters-such as COS−1(z)—to distinguish it from the multivalued (page 59). When reading one needs to differentiate between functions in uppercase and in lowercase in equations. (04-30-2016 07:01 AM)Ángel Martin Wrote: So this leads us to believe that in some cases they had another criteria for choosing the principal branches, like keeping symmetries or other properties to make them more analogous to the behavior in the real domain. Definitely, on the page 60 they are writing: "The principal branches used by the HP-15C were carefully chosen. First, they are analytic in the regions where the arguments of the real-valued inverse functions are defined. That is, the branch cut occurs where its corresponding real-valued inverse function is undefined. Second, most of the important symmetries are preserved. For example, SIN−1(−z) = -SIN−1(z) for all z." 
An interesting question is why they chose Im(ARCCOS(z)) > 0 for Re(z) > 1 and Im(z) = 0. Maybe it is related to ARCCOS(-x) = pi - ARCCOS(x), but I don't see it yet. It needs to be consistent with arccos(x) = pi/2 - arcsin(x), so in a way it is a consequence of the arcsin principal branch.

(04-30-2016 07:01 AM) Ángel Martin Wrote: Perhaps a mathematician in the audience could stop our poking the beast and provide a more rigorous clarification?

I'm a physicist, so almost a mathematician :-), but yes, it is an interesting question why they chose this exact principal branch; or, in other words, what would be broken or "ugly" if the principal branch were different, especially if they had taken Im(ARCCOS(z)) < 0 for Re(z) > 1 and Im(z) = 0.

HP-15C, DM15L, HP-35S, DM42

05-01-2016, 03:58 AM
Post: #18
Claudio L.

(04-30-2016 08:57 AM) ljubo Wrote: ...it is an interesting question why they chose this exact principal branch; or, in other words, what would be broken or "ugly" if the principal branch were different, especially if they had taken Im(ARCCOS(z)) < 0 for Re(z) > 1 and Im(z) = 0.

Thank you, now I don't feel so dumb. The first few posts seemed to dismiss this as something trivial that I should've known since third grade. I'm glad to see now that it wasn't so trivial. Back on topic, it seems you are on the right track: I think those branches were chosen to preserve the symmetries and the relationships between asin() and acos() that we know from the real realm. I guess all you have to do is get a list of all the symmetries and test them using the Wikipedia formula for acos(). I can easily see acos(Z) = pi/2 - asin(Z) failing if you take the other branch: since the sign of the imaginary part is opposite, you'd get acos(Z) = pi/2 - conj(asin(Z)).
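Claudio's closing identity can be verified numerically. A sketch in Python's `cmath` (illustrative, using explicit logarithmic forms with principal branches throughout, rather than any particular calculator's built-ins): at Z = 2 the i*sqrt(1-Z^2) form satisfies acos(Z) = pi/2 - asin(Z), while the sqrt(Z^2-1) form produces pi/2 - conj(asin(Z)), exactly as predicted.

```python
import cmath

def asin_log(z):
    # Standard logarithmic form: -i * ln(i*z + sqrt(1 - z^2))
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

def acos_wiki(z):
    # Wikipedia / 15C-manual form: -i * ln(z + sqrt(z^2 - 1))
    return -1j * cmath.log(z + cmath.sqrt(z * z - 1))

def acos_alt(z):
    # Mathworks-style form: -i * ln(z + i*sqrt(1 - z^2))
    return -1j * cmath.log(z + 1j * cmath.sqrt(1 - z * z))

z = 2
lhs_alt  = acos_alt(z)                               # +1.3169...i
rhs_sym  = cmath.pi / 2 - asin_log(z)                # pi/2 - asin(Z)
lhs_wiki = acos_wiki(z)                              # -1.3169...i
rhs_conj = cmath.pi / 2 - asin_log(z).conjugate()    # pi/2 - conj(asin(Z))
```

So preserving the real-domain relationship arccos(x) = pi/2 - arcsin(x) does single out the branch the calculators return, at least for real arguments greater than 1.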
Soft Computing, Machine Intelligence and Data Mining

Sankar K. Pal
Machine Intelligence Unit
Indian Statistical Institute, Calcutta
http://www.isical.ac.in/sankar

ISI, 1931, Mahalanobis
• Positions: Director, Prof-in-charge, Heads, Distinguished Scientist, Professor, Associate Professor, Lecturer
• Faculty: 250
• Courses: B.Stat, M.Stat, M.Tech (CS), M.Tech (SQC & OR), Ph.D.
• Locations: Calcutta (HQ), Delhi, Bangalore, Hyderabad, Madras, Giridih, Bombay

MIU Activities (formed in March 1993)
• Pattern Recognition and Image Processing: color image processing
• Data Mining: data condensation, feature selection, support vector machine, case generation
• Soft Computing: fuzzy logic, neural networks, genetic algorithms, rough sets; hybridization; case based reasoning
• Fractals/Wavelets: image compression, digital watermarking, wavelet ANN
• Bioinformatics
• Externally funded projects: INTEL, CSIR, Silicogene; Center for Excellence in Soft Computing Research
• Foreign collaborations: Japan, France, Poland, Hong Kong, Australia
• Editorial activities: journals, special issues, books
• Achievements/Recognitions
• Faculty: 10; Research Scholars/Associates: 8

Outline
• What is Soft Computing? Computational Theory of Perception
• Pattern Recognition and Machine Intelligence: relevance of soft computing tools; different integrations
• Emergence of Data Mining: need; KDD process; relevance of soft computing tools; rule generation/evaluation
• Modular Evolutionary Rough-Fuzzy MLP: modular network; rough sets, granules, rule generation; variable mutation operators; knowledge flow; example and merits
• Rough-Fuzzy Case Generation: granular computing; fuzzy granulation; mapping dependency rules to cases; case retrieval; examples and merits
• Conclusions

SOFT COMPUTING (L. A. Zadeh)

Aim:
• To exploit the tolerance for imprecision, uncertainty, approximate reasoning and partial truth to achieve tractability, robustness, low solution cost, and close resemblance with human-like decision making.
• To find an approximate solution to an imprecisely/precisely formulated problem.

Parking a Car: Generally, a car can be parked rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a fraction of a millimeter and a few seconds of arc, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem.
• High precision carries a high cost.
• The challenge is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. This, in essence, is the guiding principle of soft computing.

Soft Computing is a collection of methodologies (working synergistically, not competitively) which, in one form or another, reflect its guiding principle: exploit the tolerance for imprecision, uncertainty, approximate reasoning and partial truth to achieve tractability, robustness, and close resemblance with human-like decision making.
• Foundation for the conception and design of high-MIQ (Machine IQ) systems.
• Provides flexible information processing capability for the representation and evaluation of various real-life ambiguous and uncertain situations.
• Real World Computing.
• It may be argued that it is soft computing rather than hard computing that should be viewed as the foundation for Artificial Intelligence.
• At this juncture, the principal constituents of soft computing are Fuzzy Logic, Neurocomputing, Genetic Algorithms and Rough Sets.
• Within Soft Computing, FL, NC, GA and RS are complementary rather than competitive:
FL: the algorithms for dealing with imprecision and uncertainty
NC: the machinery for learning and curve fitting
GA: the algorithms for search and optimization
RS: handling uncertainty arising from the granularity in the domain of discourse

Referring back to the example of Parking a Car:
• Do we use any measurement and computation while performing the tasks?
• In Soft Computing we use the Computational Theory of Perceptions (CTP).

Computational Theory of Perceptions (CTP) (AI Magazine, 22(1), 73-84, 2001)
• Provides the capability to compute and reason with perception-based information.
• Examples: parking a car, driving in a city, cooking a meal, summarizing a story.
• Humans have a remarkable capability to perform a wide variety of physical and mental tasks without any measurements and computations. They use perceptions of time, direction, speed, shape, possibility, likelihood, truth, and other attributes of physical and mental objects.
• Reflecting the finite ability of the sensory organs (and finally the brain) to resolve details, perceptions are inherently imprecise.
• Perceptions are F-granular (both fuzzy and granular): boundaries of perceived classes are unsharp, and values of attributes are granulated (a granule is a clump of indistinguishable points/objects).
• Examples: granules in age: very young, young, not so old; granules in direction: slightly left, sharp.
• F-granularity of perceptions puts them well beyond the reach of traditional methods of analysis (based on predicate logic and probability theory).
• Main distinguishing feature: the assumption that perceptions are described by propositions drawn from a natural language.
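The fuzzy granules mentioned above ("very young", "young", "not so old", ...) are usually modeled as overlapping membership functions. A minimal sketch in Python; the triangular shapes and break-points are illustrative assumptions, not taken from the talk:

```python
def tri(x, a, b, c):
    """Triangular membership: 0 outside [a, c], rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzy granules over the attribute "age" (break-points assumed).
granules = {
    "young":  lambda x: tri(x, 0, 20, 40),
    "middle": lambda x: tri(x, 30, 45, 60),
    "old":    lambda x: tri(x, 50, 70, 120),
}

# Age 35 belongs partially to both "young" and "middle": unsharp boundaries.
memberships = {name: f(35) for name, f in granules.items()}
```

The overlap between adjacent granules is exactly the F-granularity the slides describe: an age of 35 is simultaneously somewhat "young" and somewhat "middle-aged".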
Hybrid Systems • Neuro-fuzzy • Genetic neural • Fuzzy genetic • Fuzzy neuro • genetic Knowledge-based Systems • Probabilistic reasoning • Approximate reasoning • Case based reasoning Data Driven Systems Machine Intelligence • Neural network • system • Evolutionary • computing Non-linear Dynamics • Chaos theory • Rescaled range • analysis (wavelet) • Fractal analysis • Pattern recognition • and learning Machine Intelligence A core concept for grouping various advanced technologies with Pattern Recognition and Learning Pattern Recognition System (PRS) • Measurement ? Feature ? Decision • Space Space Space • Uncertainties arise from deficiencies of information available from a situation • Deficiencies may result from incomplete, imprecise, ill-defined, not fully reliable, vague, contradictory information in various stages of a PRS Relevance of Fuzzy Sets in PR • Representing linguistically phrased input features for processing • Representing multi-class membership of ambiguous • Generating rules inferences in • linguistic form • Extracting ill-defined image regions, primitives, properties and describing relations among them as fuzzy subsets ANNs provide Natural Classifiers having • Resistance to Noise, • Tolerance to Distorted Patterns /Images (Ability to Generalize) • Superior Ability to Recognize Overlapping Pattern Classes or Classes with Highly Nonlinear Boundaries or Partially Occluded or Degraded • Potential for Parallel Processing • Non parametric Why GAs in PR ? • Methods developed for Pattern Recognition and Image Processing are usually problem dependent. 
• Many tasks involved in analyzing/identifying a pattern need Appropriate Parameter Selection and Efficient Search in complex spaces to obtain Optimal Solutions • Makes the processes • - Computationally Intensive • - Possibility of Losing the Exact Solution • GAs Efficient, Adaptive and robust Search Processes, Producing near optimal solutions and have a large amount of Implicit Parallelism • GAs are Appropriate and Natural Choice for problems which need Optimizing Computation Requirements, and Robust, Fast and Close Approximate Solutions Relevance of FL, ANN, GAs Individually to PR Problems is Established In late eighties scientists thought Why NOT Integrations ? Fuzzy Logic ANN ANN GA Fuzzy Logic ANN GA Fuzzy Logic ANN GA Rough Set Neuro-fuzzy hybridization is the most visible integration realized so far. Why Fusion Fuzzy Set theoretic models try to mimic human reasoning and the capability of handling uncertainty (SW) Neural Network models attempt to emulate architecture and information representation scheme of human brain (HW) NEURO-FUZZY Computing (for More Intelligent System) ANN used for learning and Adaptation Fuzzy Sets used to Augment its Application • GENERIC • APPLICATION SPECIFIC Rough-Fuzzy Hybridization • Fuzzy Set theory assigns to each object a degree • of belongingness (membership) to represent an • imprecise/vague concept. • The focus of rough set theory is on the • caused by limited discernibility of objects • and upper approximation of concept). Rough sets and Fuzzy sets can be integrated to develop a model of uncertainty stronger than Rough Fuzzy Hybridization A New Trend in Decision Making, S. K. Pal and A. Skowron (eds), Springer-Verlag, Singapore, 1999 Neuro-Rough Hybridization • Rough set models are used to generate network • parameters (weights). • Roughness is incorporated in inputs and output • networks for uncertainty handling, performance • enhancement and extended domain of application. • Networks consisting of rough neurons are used. 
Neurocomputing, Spl. Issue on Rough-Neuro Computing, S. K. Pal, W. Pedrycz, A. Skowron and R. Swiniarsky (eds), vol. 36 (1-4), 2001. • Neuro-Rough-Fuzzy-Genetic Hybridization • Rough sets are used to extract domain knowledge in the form of linguistic rules generates fuzzy Knowledge based networks evolved using Genetic algorithms. • Integration offers several advantages like fast training, compact network and performance IEEE TNN, .9, 1203-1216, 1998 Incorporate Domain Knowledge using Rough Sets • Before we describe • Modular Evolutionary Rough-fuzzy MLP • Rough-fuzzy Case Generation System • We explain Data Mining and the significance • of Pattern Recognition, Image Processing and • Machine Intelligence. One of the applications of Information Technology that has drawn the attention of researchers is DATA MINING Where Pattern Recognition/Image Processing/Machine Intelligence are directly Why Data Mining ? • Digital revolution has made digitized information easy to capture and fairly inexpensive to store. • With the development of computer hardware and software and the rapid computerization of business, huge amount of data have been collected and stored in centralized or distributed • Data is heterogeneous (mixture of text, symbolic, numeric, texture, image), huge (both in dimension and size) and scattered. • The rate at which such data is stored is growing at a phenomenal rate. • As a result, traditional ad hoc mixtures of statistical techniques and data management tools are no longer adequate for analyzing this vast collection of data. • Pattern Recognition and Machine Learning • principles applied to a very large (both in size • and dimension) heterogeneous database • ? Data Mining • Data Mining Knowledge Interpretation • ? 
Knowledge Discovery • Process of identifying valid, novel, potentially • useful, and ultimately understandable patterns • in data Pattern Recognition, World Scientific, 2001 Data Mining (DM) Machine Learning Knowledge Interpretation Mathe- matical Model of Huge Raw Data • Knowledge • Extraction • Knowledge • Evaluation • Classification • Clustering • Rule • Generation Data (Patterns) • Data • Wrapping/ • Description Knowledge Discovery in Database (KDD) Data Mining Algorithm Components • Model Function of the model (e.g., classification, clustering, rule generation) and its representational form (e.g., linear discriminants, neural networks, fuzzy logic, GAs, rough sets). • Preference criterion Basis for preference of one model or set of parameters over another. • Search algorithm Specification of an algorithm for finding particular patterns of interest (or models and parameters), given the data, family of models, and preference criterion. Why Growth of Interest ? • Falling cost of large storage devices and increasing ease of collecting data over networks. • Availability of Robust/Efficient machine learning algorithms to process data. • Falling cost of computational power ? enabling use of computationally intensive methods for data • Financial Investment Stock indices and prices, interest rates, credit card data, fraud detection • Health Care Various diagnostic information stored by hospital management systems. • Data is heterogeneous (mixture of text, symbolic, numeric, texture, image) and huge (both in dimension and size). Role of Fuzzy Sets • Modeling of imprecise/qualitative • Transmission and handling uncertainties at various stages • Supporting, to an extent, human type • reasoning in natural form • Classification/ Clustering • Discovering association rules (describing interesting association relationship among different attributes) • Inferencing • Data summarization/condensation (abstracting the essence from a large amount of information). 
Role of ANN • Adaptivity, robustness, parallelism, optimality • Machinery for learning and curve fitting (Learns from examples) • Initially, thought to be unsuitable for black box nature no information available in symbolic form (suitable for human interpretation) • Recently, embedded knowledge is extracted in the form of symbolic rules making it suitable for Rule generation. Role of GAs • Robust, parallel, adaptive search methods suitable when the search space is large. • Used more in Prediction (P) than Description(D) • D Finding human interpretable patterns describing the data • P Using some variables or attributes in the database to predict unknown/ future values of • other variables of interest. Example Medical Data • Numeric and textual information may be • Different symbols can be used with same meaning • Redundancy often exists • Erroneous/misspelled medical terms are common • Data is often sparsely distributed • Robust preprocessing system is required to extract any kind of knowledge from even medium-sized medical data sets • The data must not only be cleaned of errors and redundancy, but organized in a fashion that makes sense for the problem • So, We NEED • Efficient • Robust • Flexible • Machine Learning Algorithms • ? • NEED for Soft Computing Paradigm Without Soft Computing Machine Intelligence Research Remains Incomplete. Modular Neural Networks Task Split a learning task into several subtasks, train a Subnetwork for each subtask, integrate the subnetworks to generate the final solution. Strategy Divide and Conquer • The approach involves • Effective decomposition of the problems s.t. the • Subproblems could be solved with compact • networks. • Effective combination and training of the • subnetworks s.t. there is Gain in terms of both • total training time, network size and accuracy • solution. 
• Accelerated training.
• The final solution network has more structured components.
• Representation of individual clusters (of varying size/importance) is better preserved in the solution network.
• The catastrophic interference problem of neural network learning (in case of overlapped classes) is reduced.

Classification Problem
• Split a k-class problem into k 2-class problems.
• Train one (or multiple) subnetwork modules for each 2-class problem.
• Concatenate the subnetworks s.t. intra-module links that have already evolved are unchanged, while inter-module links are initialized to a low value.
• Train the concatenated network s.t. the intra-module links (already evolved) are perturbed less, while the inter-module links are perturbed more.

[Figure: a 3-class problem decomposed into three 2-class problems; Class 1/2/3 subnetworks are integrated (links with evolved values preserved, inter-module links to be grown), and a final training phase grows the inter-module links to yield the final network.]

Modular Rough Fuzzy MLP: a modular network designed using four different soft computing tools.
• Basic network model: Fuzzy MLP.
• Rough set theory is used to generate crude decision rules representing each of the classes from the discernibility matrix. (There may be multiple rules for each class, hence multiple subnetworks for each class.)
• The knowledge-based subnetworks are concatenated to form a population of initial solution networks.
• The final solution network is evolved using a GA with a variable mutation operator: the bits corresponding to the intra-module links (already evolved) have low mutation probability, while inter-module links have high mutation probability.

Rough Sets (Z. Pawlak, 1982, Int. J. Comp. Inf. Sci.)
• Offer mathematical tools to discover hidden patterns in data.
• Fundamental principle of a rough set-based learning system: discover redundancies and dependencies between the given features of the data to be classified.
• Approximate a given concept both from below and from above, using lower and upper approximations.
• Rough set learning algorithms can be used to obtain rules in IF-THEN form from a decision table.
• Extract knowledge from the database: decision table w.r.t. objects and attributes → remove undesirable attributes (knowledge discovery) → analyze data dependency → minimum subset of attributes (reducts).

Rough Sets: approximations
• [x]_B (a granule): the set of all points belonging to the same granule as the point x in the feature space W_B, i.e., the set of all points indiscernible from x in terms of the feature subset B.
• B-lower approximation B_X: granules definitely belonging to X.
• B-upper approximation B^X: granules definitely and possibly belonging to X.
• If B_X = B^X, X is B-exact (B-definable); otherwise it is roughly definable.

Rough Sets: uncertainty handling (using lower and upper approximations) and granular computing (using information granules).

Granular Computing
• Computation is performed using information granules and not the data points (objects).
• Information compression; computational gain.

Information granules and rough set theoretic rules: a rule provides a crude, granule-level description of the class.

Rough Set Rule Generation

Decision table:

Object | F1 F2 F3 F4 F5 | Decision
x1     |  1  0  1  0  1 | Class 1
x2     |  0  0  0  0  1 | Class 1
x3     |  1  1  1  1  1 | Class 1
x4     |  0  1  0  1  0 | Class 2
x5     |  1  1  1  0  0 | Class 2

Discernibility matrix (c) for the Class 1 objects:

    | x1 | x2       | x3
x1  | φ  | {F1, F3} | {F2, F4}
x2  |    | φ        | {F1, F2, F3, F4}
x3  |    |          | φ

Discernibility function for object x1 of Class 1: (discernibility of x1 w.r.t. x2) AND (discernibility of x1 w.r.t. x3), i.e., (F1 ∨ F3) ∧ (F2 ∨ F4). Similarly for the other objects. The resulting dependency rules are in AND-OR form.

[Figure: for 2 classes and 2 features, crude subnetworks are built from the rules, partially trained with GAs (Phase I), and concatenated with small random values on the inter-module links.]
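The discernibility entries in the table above can be reproduced mechanically; a small Python sketch (my own, mirroring the five-object decision table on the slide):

```python
# Decision table from the slide: five objects, features F1..F5, two classes.
table = {
    "x1": ([1, 0, 1, 0, 1], 1),
    "x2": ([0, 0, 0, 0, 1], 1),
    "x3": ([1, 1, 1, 1, 1], 1),
    "x4": ([0, 1, 0, 1, 0], 2),
    "x5": ([1, 1, 1, 0, 0], 2),
}

def discern(a, b):
    """Set of features on which two objects take different values."""
    fa, fb = table[a][0], table[b][0]
    return {f"F{i + 1}" for i, (u, v) in enumerate(zip(fa, fb)) if u != v}

# Entries of the Class 1 discernibility matrix shown on the slide:
d12 = discern("x1", "x2")   # {F1, F3}
d13 = discern("x1", "x3")   # {F2, F4}
d23 = discern("x2", "x3")   # {F1, F2, F3, F4}
```

ANDing the x1 entries together (with OR inside each entry) then gives the discernibility function (F1 ∨ F3) ∧ (F2 ∨ F4) stated above.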
Final Population Low mutation probability GA (Phase II) (with restricted mutation probability) High mutation probability Final Trained Network Knowledge Flow in Modular Rough Fuzzy MLP IEEE Trans. Knowledge Data Engg., 15(1), 14-25, Feature Space Rough Set Rules Network Mapping R1 (Subnet 1) R2 (Subnet 2) R3 (Subnet 3) Partial Training with Ordinary GA Feature Space Partially Refined Subnetworks Concatenation of Subnetworks high mutation prob. low mutation prob. Evolution of the Population of Concatenated networks with GA having variable mutation operator Feature Space Final Solution Network Speech Data 3 Features, 6 Classes Classification Accuracy Network Size (No. of Links) Training Time (hrs) DEC Alpha Workstation @400MHz 1. MLP 4. Rough Fuzzy MLP 2. Fuzzy MLP 5. Modular Rough Fuzzy MLP 3. Modular Fuzzy MLP Network Structure IEEE Trans. Knowledge Data Engg., 15(1), 14-25, 2003 Modular Rough Fuzzy MLP Structured (no. of links few) Fuzzy MLP Unstructured (no. of links more) Histogram of weight values Connectivity of the network obtained using Modular Rough Fuzzy MLP Sample rules extracted for Modular Rough Fuzzy Rule Evaluation • Accuracy • Fidelity (Number of times network and rule base output agree) • Confusion (should be restricted within minimum no. of classes) • Coverage (a rule base with smaller uncovered region, i.e., test set for which no rules are fired, is better) • Rule base size (smaller the no. of rules, more compact is the rule base) • Certainty (confidence of rules) Existing Rule Extraction Algorithms • Subset Searches over all possible combinations of input • weights to a node of trained networks. Rules are generated • from these Subsets of links, for which the sum of the weights • exceeds the bias for that node. • MofN Instead of AND-OR rules, the method extracts rules • of the Form IF M out of N inputs are high THEN Class I.
• X2R Unlike previous two methods which consider • of a network, X2R generates rule from • mapping implemented by the network. • C4.5 Rule generation algorithm based on decision trees. IEEE Trans. Knowledge Data Engg., 15(1), 14-25, Comparison of Rules obtained for Speech data Number of Rules CPU Time Case Based Reasoning (CBR) • Cases some typical situations, already experienced by the system. • conceptualized piece of knowledge representing an experience that teaches a lesson for achieving the goals of the system. • CBR involves • adapting old solutions to meet new demands • using old cases to explain new situations or to • justify new solutions • reasoning from precedents to interpret new • ? learns and becomes more efficient as a byproduct of its reasoning activity. • Example Medical diagnosis and Law interpretation where the knowledge available is incomplete and/or evidence is sparse. • Unlike traditional knowledge-based system, case based system operates through a process of • remembering one or a small set of concrete instances or cases and • basing decisions on comparisons between the new situation and the old ones. • Case Selection ? Cases belong to the set of examples encountered. • Case Generation ? Constructed Cases need not be any of the examples. Rough Sets Uncertainty Handling Granular Computing (Using information granules) (Using lower upper approximations) IEEE Trans. Knowledge Data Engg., to appear Granular Computing and Case Generation • Information Granules A group of similar objects clubbed together by an indiscernibility relation. • Granular Computing Computation is performed using information granules and not the data points (objects) • Information compression • Computational gain • Cases Informative patterns (prototypes) characterizing the problems. • In rough set theoretic framework • Cases ? Information Granules • In rough-fuzzy framework • Cases ? 
Fuzzy Information Granules Characteristics and Merits • Cases are cluster granules, not sample points • Involves only reduced number of relevant features with variable size • Less storage requirements • Fast retrieval • Suitable for mining data with large dimension and size • How to Achieve? • Fuzzy sets help in linguistic representation of patterns, providing a fuzzy granulation of the feature space • Rough sets help in generating dependency rules to model informative/representative regions in the granulated feature space. • Fuzzy membership functions corresponding to the representative regions are stored as Cases. Fuzzy (F)-Granulation Membership value Feature j Clow(Fj) mjl Cmedium(Fj) mj Chigh(Fj) mjh ?low(Fj) Cmedium(Fj) ? Clow(Fj) ?high(Fj) Chigh(Fj) ? Cmedium(Fj) ?medium(Fj) 0.5 (Chigh(Fj) ? Clow(Fj)) Mj mean of the pattern points along jth axis. Mjl mean of points in the range Fj min, mj) Mjh mean of points in the range (mj, Fj max Fj max, Fj min maximum and minimum values of feature Fj. • An n-dimensional pattern Fi Fi1, Fi2, , Fin is represented as a 3n-dimensional fuzzy linguistic pattern Pal Mitra 1992 • Fi ?low(Fi1) (Fi), , ?high(Fin) (Fi) • Set m value at 1 or 0, if it is higher or lower • than 0.5 • ? Binary 3n-dimensional patterns are obtained • (Compute the frequency nki of occurrence of binary patterns. Select those patterns having frequency above a threshold Tr (for noise • Generate a decision table consisting of the binary patterns. • Extract dependency rules corresponding to • informative regions (blocks). (e.g., class L1 ? 
M2) Rough Set Rule Generation Decision Table Object F1 F2 F3 F4 F5 Decision x1 1 0 1 0 1 Class 1 x2 0 0 0 0 1 Class 1 x3 1 1 1 1 1 Class 1 x4 0 1 0 1 0 Class 2 x5 1 1 1 0 0 Class 2 Discernibility Matrix (c) for Class 1 Objects x1 x2 x3 x1 f F1, F3 F2, F4 x2 f F1,F2,F3,F4 x3 f Discernibility function considering the object x1 belonging to Class 1 Discernibility of x1 w.r.t x2 (and) Discernibility of x1 w.r.t x3 Similarly, Discernibility function considering Dependency Rules (AND-OR form) Mapping Dependency Rules to Cases • Each conjunction e.g., L1 ? M2 represents a region (block) • For each conjunction, store as a case • Parameters of the fuzzy membership functions corresponding to linguistic variables that occur in the conjunction. • (thus, multiple cases may be generated from a • Note All features may not occur in a rule. • Cases may be represented by Different Reduced number of features. • Structure of a Case • Parameters of the membership functions (center, radii), Class information Example IEEE Trans. Knowledge Data Engg., to appear CASE 1 X X X X X X X X X CASE 2 Parameters of fuzzy linguistic sets low, medium, Dependency Rules and Cases Obtained Case 1 Feature No 1, fuzzset (L) c 0.1, ? 0.5 Feature No 2, fuzzset (H) c 0.9, ? 0.5 Class1 Case 2 Feature No 1, fuzzset (H) c 0.7, ? 0.4 Feature No 2, fuzzset (L) c 0.2, ? 0.5 Class2 Case Retrieval • Similarity (sim(x,c)) between a pattern x and a case c is defined as • n number of features present in case c • the degree of belongingness of pattern x to fuzzy linguistic set fuzzset for feature j. • For classifying an unknown pattern, the case closest to the pattern in terms of sim(x,c) is retrieved and its class is assigned to the Experimental Results and Comparisons 1. Forest Covertype Contains 10 dimensions, 7 classes and 586,012 samples. It is a Geographical Information System data representing forest cover types (pine/fir etc) of USA. The variables are cartographic and remote sensing measurements. 
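The case structure and 1-NN retrieval step described above can be sketched in code. The exact similarity measure sim(x, c) is elided in the slides, so this sketch ASSUMES a simple choice: the similarity of a pattern to a case is the average membership of the pattern in the fuzzy linguistic sets stored with that case, with a triangular function standing in for the π-type membership. The case parameters (center c, radius λ, class) are taken from the worked example above.

```python
def membership(value, center, radius):
    # Triangular stand-in for the pi-type fuzzy membership function
    # (ASSUMPTION: the slides do not specify the exact shape).
    return max(0.0, 1.0 - abs(value - center) / radius)

# Cases as in the example: each clause is (feature index, center c, radius lambda).
# Note that a case stores only the features appearing in its rule.
cases = [
    {"clauses": [(0, 0.1, 0.5), (1, 0.9, 0.5)], "cls": "Class1"},
    {"clauses": [(0, 0.7, 0.4), (1, 0.2, 0.5)], "cls": "Class2"},
]

def similarity(x, case):
    # ASSUMED similarity: average membership over the features stored in the case.
    ms = [membership(x[j], c, r) for j, c, r in case["clauses"]]
    return sum(ms) / len(ms)

def classify(x):
    # Case retrieval: return the class of the most similar case.
    return max(cases, key=lambda c: similarity(x, c))["cls"]

print(classify([0.15, 0.85]))  # closest to case 1 -> Class1
```

Because each case keeps only the features that occur in its rule, different cases may use different (reduced) feature subsets, which is the storage and retrieval saving claimed in the slides.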
All the variables are numeric. 1. Multiple features This dataset consists of features of handwritten numerals (0-9) extracted from a collection of Dutch utility maps. There are total 2000 patterns, 649 features (all numeric) and all classes. 2. Iris The dataset contains 150 instances, 4 features and 3 classes of Iris flowers. The features are numeric. • Some Existing Case Selection Methods • k-NN based • Condensed nearest neighbor (CNN), • Instance based learning (e.g., IB3), • Instance based learning with feature weighting (e.g., IB4). • Fuzzy logic based • Neuro-fuzzy based. Algorithms Compared 1. Instance based learning algorithm, IB3 Aha 2. Instance based learning algorithm, IB4 Aha 1992 (reduced feature). The feature weighting is learned by random hill climbing in IB4. A specified number of features having high weights is selected. 3. Random case selection. Evaluation in terms of 1. 1-NN classification accuracy using the cases. Training set 10 for case generation, and Test set 90 2. Number of cases stored in the case base. 3. Average number of features required to store a case (navg). 1. CPU time required for case generation (tgen). 2. Average CPU time required to retrieve a case (tret). (on a Sun UltraSparc _at_350 MHz Workstation) Iris Flowers 4 features, 3 classes, 150 samples Number of cases 3 (for all methods) Forest Cover Types 10 features, 7 classes, 5,86,012 samples Number of cases 545 (for all methods) Hand Written Numerals 649 features, 10 classes, 2000 samples Number of cases 50 (for all methods) For same number of cases Accuracy Proposed method much superior to random selection and IB4, close IB3. Average Number of Features Stored Proposed method stores much less than the original data dimension. Case Generation Time Proposed method requires much less compared to IB3 and IB4. Case Retrieval Time Several orders less for proposed method compared to IB3 and random selection. Also less than IB4. 
• Conclusions • Relation between Soft Computing, Machine Intelligence and Pattern Recognition is • Emergence of Data Mining and Knowledge Discovery from PR point of view is explained. • Significance of Hybridization in Soft Computing paradigm is illustrated. • Modular concept enhances performance, accelerates training and makes the network structured with less no. of links. • Rules generated are superior to other related methods in terms of accuracy, coverage, fidelity, confusion, size and certainty. • Rough sets used for generating information • Fuzzy sets provide efficient granulation of feature space (F -granulation). • Reduced and variable feature subset representation of cases is a unique feature of the scheme. • Rough-fuzzy case generation method is suitable for CBR systems involving datasets large both in dimension and size. • Unsupervised case generation, Rough-SOM • (Applied intelligence, to appear) • Application to multi-spectral image segmentation • (IEEE Trans. Geoscience and Remote Sensing, 40(11), 2495-2501, 2002) • Significance in Computational Theory of Perception (CTP) Thank You!!
Functional Analysis Notes for BS/MSc Functional analysis is a branch of mathematics that extends concepts from linear algebra and calculus to infinite-dimensional spaces, particularly vector spaces of functions. It provides a framework for studying spaces of functions, linear operators, and their properties. Functional Analysis is a branch of mathematics that extends and generalizes the concepts of vector spaces and linear algebra to infinite-dimensional spaces. At the undergraduate (BS) and graduate (MSc) levels, students delve into the intricate theory of functional analysis, which has profound applications in various fields, including pure mathematics, physics, and engineering. 1. Normed and Banach Spaces: • Normed Spaces: Introduction to spaces equipped with a norm, a function that measures the size of vectors. • Banach Spaces: Completing normed spaces, emphasizing completeness and convergence. 2. Inner Product Spaces: • Inner Products and Hilbert Spaces: Defining inner products and exploring Hilbert spaces. • Orthogonal Bases and Orthonormal Sets: Studying orthonormality and its applications. 3. Linear Operators: • Bounded and Unbounded Operators: Understanding operators between normed or Banach spaces. • Compact Operators: Exploring compactness and its role in linear operators. 4. Duality and Weak Topologies: • Dual Spaces: Introduction to the dual space of a normed or Banach space. • Weak Topologies: Examining topologies weaker than the norm topology and their significance. 5. Spectral Theory: • Spectrum of Operators: Understanding the spectrum of linear operators. • Functional Calculus: Extending calculus concepts to operators. 6. Distribution Theory: • Generalized Functions: Introducing distributions as a framework for generalized functions. • Fourier Transforms: Applying distribution theory to Fourier transforms. 7. Sobolev Spaces: • Function Spaces with Derivatives: Defining Sobolev spaces to study functions with weak derivatives. 
• Applications in PDEs: Using Sobolev spaces to address partial differential equations. 8. C*-Algebras and Operator Algebras: • Algebras of Operators: Exploring algebraic structures related to operators. • C*-Algebras: Defining C*-algebras and their properties. 9. Applications: • Quantum Mechanics: Applying functional analysis to quantum mechanics. • Functional Analytic Methods in PDEs: Employing functional analysis tools to solve partial differential equations. 10. Advanced Topics: • Non-commutative Geometry: Exploring connections between functional analysis and non-commutative geometry. • K-Theory and Index Theory: Introducing advanced topics in algebraic topology and operator theory. Functional Analysis provides a profound understanding of infinite-dimensional spaces and their structures. The applications of functional analysis extend beyond pure mathematics, impacting diverse areas such as quantum mechanics, signal processing, and the study of partial differential equations. The course equips students with advanced tools to analyze complex mathematical structures and solve problems in various scientific and engineering domains.
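As a concrete anchor for the first topic in the outline, recall the axioms a norm must satisfy; a Banach space is then a normed space that is complete with respect to the induced metric.

```latex
% Defining properties of a norm \|\cdot\| on a real or complex
% vector space X (topic 1 above):
\begin{align*}
  &\|x\| \ge 0, \qquad \|x\| = 0 \iff x = 0
      && \text{(positive definiteness)} \\
  &\|\lambda x\| = |\lambda|\,\|x\|
      && \text{(absolute homogeneity)} \\
  &\|x + y\| \le \|x\| + \|y\|
      && \text{(triangle inequality)}
\end{align*}
% A Banach space is a normed space in which every Cauchy sequence
% converges; completeness is what the outline's item on Banach
% spaces adds to the normed-space axioms.
```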
Master Algebra with the Interactive Graspable Math written by Miguel Guhlin Even though my mother was a middle school math teacher, I found my math skills lagging behind my peers. It’s because of that lack that I treasure online mathematics tutorials and resources. In this blog entry, I’ll share one new resource you may not be familiar with. Teaching Algebra My first introduction to algebra was in high school. To survive the experience, I had access to software to help me learn algebraic concepts. But the tutorial software was ineffective because it relied on a multiple choice, drill-and-practice approach. This approach, although common at the time, just didn’t work, which proved an almost insurmountable obstacle for me later. And that’s true for many other students. Learning algebra is important. Consider these reasons why: • Students take longer to complete college-level math courses. • Algebra is foundational for higher-level math. • Algebra is the gatekeeper class for Computer Science. For me, all three of these reasons were true. Online tutorial tools offer hope for teachers and parents. They ensure children don’t suffer a similar fate as me. Introducing the Tool Wish you had engaging math activities for grade four to twelfth students? Then Graspable Math may be a solution to explore. Learn a little more about Graspable Math here: Graspable Math, available for free for teachers, is an “algebraic notation system.” This means that you can interact on the screen with math symbols. The interactive nature changes how children interact with math, making it less abstract. Here’s a video overview of Graspable Math for making transformations for simplifying expressions: Learn Graspable Math You can gain a deeper understanding of Graspable Math concepts through their website where you’ll find tutorial videos organized into categories. 
The categories include: • Simplifying expressions • Commutative and distributive properties • Solving equations Teachers have access to a variety of materials online. You can access online professional development resources, too. Virtual sessions are available with certified tool trainers. Approach Algebra in a Whole New Way Graspable Math offers learners a different way to interact with algebra. You can find other math-related resources in this TCEA blog. Happy graspable mathing! 2 comments Kay February 15, 2022 - 1:06 pm I am not sure that demo sells this software, au contraire, it would discourage someone like me who has worked with students that struggle to understand the basic concepts of algebra for decades. The 1-minute video for example (and without looking at the rest of the software) somewhat suggests that a student can grab the number "22" and place it on another number to combine. If the student doesn't understand why that is the correct way from the onset, they will just place it on top of the "x" and see no change, then the "y" and see no change and probably even worse if there is an equal sign or an inequality sign. To summarize my comment, students from a young age need to understand the difference between a number, an "x", a "y", or any other term in an algebraic equation before they start combining and simplifying. This can be done with apples, oranges, rocks, and shoes better than what I saw. Miguel Guhlin February 15, 2022 - 1:12 pm Thanks for sharing your insights, Kay. While I must defer to your expertise, I hope others chime in and share their insights about teaching algebra as well. Perhaps the clashing of cymbals will yield a note of truth. With appreciation, Miguel Guhlin Transforming teaching, learning and leadership through the strategic application of technology has been Miguel Guhlin's motto.
Learn more about his work online at blog.tcea.org, mguhlin.org, and mglead.org/mglead2.org. Catch him on Mastodon @mguhlin@mastodon.education Areas of interest flow from his experiences as a district technology administrator, regional education specialist, and classroom educator in bilingual/ESL situations. Learn more about his credentials online at mguhlin.net.
Trigonometry worksheets for 10th Class Explore printable trigonometry worksheets for 10th Class Trigonometry worksheets for Class 10 are an essential resource for teachers looking to help their students master the fundamental concepts of trigonometry. These worksheets cover a wide range of topics, including angles, triangles, sine, cosine, and tangent functions, as well as their applications in real-world situations. By incorporating these worksheets into their lesson plans, teachers can provide their students with ample opportunities to practice and reinforce their understanding of trigonometry concepts. Furthermore, these worksheets can be easily adapted to suit the needs of individual students, making them an invaluable tool for differentiation in the classroom. In conclusion, trigonometry worksheets for Class 10 are a must-have for any math teacher looking to enhance their students' learning experience. Quizizz is an excellent platform for teachers to access a variety of resources, including trigonometry worksheets for Class 10, math quizzes, and other engaging learning materials. This platform allows teachers to create interactive quizzes and games that can be used to supplement their existing lesson plans, providing students with a fun and engaging way to learn and practice new concepts. Additionally, Quizizz offers a wealth of pre-made quizzes and worksheets that cover a wide range of topics, making it easy for teachers to find and implement resources that align with their curriculum. Teachers can also track their students' progress and performance on these quizzes and worksheets, allowing them to identify areas where students may need additional support or practice.
Overall, Quizizz is an invaluable tool for teachers looking to enhance their students' understanding of Class 10 math concepts, including trigonometry.
An arbelos is formed from three collinear points A, B, and C, by the three semicircles with diameters AB, AC, and BC. Let the two smaller circles have radii r_1 and r_2, from which it follows that the larger semicircle has radius r = r_1 + r_2. Let the points D and E be the center and midpoint, respectively, of the semicircle with the radius r_1. Let H be the midpoint of line AC. Then two of the four quadruplet circles are tangent to line HE at the point E, and are also tangent to the outer semicircle. The other two quadruplet circles are formed in a symmetric way from the semicircle with radius r_2.

Proof of congruency

According to Proposition 5 of Archimedes' Book of Lemmas, the common radius of Archimedes' twin circles is
\[ \frac{r_1 \cdot r_2}{r}. \]
By the Pythagorean theorem,
\[ (HE)^2 = r_1^2 + r_2^2. \]
Then, create two circles with centers J_i perpendicular to HE, tangent to the large semicircle at point L_i, tangent to point E, and with equal radii x. Using the Pythagorean theorem,
\[ (HJ_i)^2 = (HE)^2 + x^2 = r_1^2 + r_2^2 + x^2, \]
and, since J_i lies on the radius HL_i,
\[ HJ_i = HL_i - x = r - x = r_1 + r_2 - x. \]
Combining these gives
\[ r_1^2 + r_2^2 + x^2 = (r_1 + r_2 - x)^2. \]
Expanding, collecting to one side, and factoring,
\[ 2 r_1 r_2 - 2x(r_1 + r_2) = 0. \]
Solving for x,
\[ x = \frac{r_1 \cdot r_2}{r_1 + r_2} = \frac{r_1 \cdot r_2}{r}, \]
proving that each of the Archimedes' quadruplets' areas is equal to each of Archimedes' twin circles' areas.^[4]

More readings
• Arbelos: Book of Lemmas, Pappus Chain, Archimedean Circle, Archimedes' Quadruplets, Archimedes' Twin Circles, Bankoff Circle, S. ISBN 1156885493
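The derivation can be checked numerically. The sketch below (radii chosen arbitrarily for illustration) confirms that x = r1·r2/r makes the two expressions for HJ agree:

```python
import math

def quadruplet_radius(r1, r2):
    """Common radius of Archimedes' quadruplets for inner radii r1, r2."""
    return r1 * r2 / (r1 + r2)

# Arbitrary example radii (chosen only for illustration).
r1, r2 = 3.0, 5.0
x = quadruplet_radius(r1, r2)

# HJ computed two ways must agree: along the radius through the
# tangency point (r - x), and from the right triangle with legs
# HE = sqrt(r1^2 + r2^2) and x.
hj_from_tangency = (r1 + r2) - x
hj_from_pythagoras = math.sqrt(r1**2 + r2**2 + x**2)
assert math.isclose(hj_from_tangency, hj_from_pythagoras)
print(x)  # 3*5/8 = 1.875
```

For r1 = 3 and r2 = 5 both expressions for HJ evaluate to 6.125, as the algebra predicts.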
How do you find the slope for (2, -7) (0, -10)?

Answer 1

Subtract the first y-coordinate from the second y-coordinate and divide that by the first x-coordinate subtracted from the second x-coordinate. The slope is 3/2.

Answer 2

To find the slope for the points (2, -7) and (0, -10), use the formula \( m = \frac{y_2 - y_1}{x_2 - x_1} \). Substituting the coordinates gives \( m = \frac{-10 - (-7)}{0 - 2} = \frac{-3}{-2} = \frac{3}{2} \).
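The computation in Answer 2 can be written as a small helper function (the function name is ours, not part of any library):

```python
def slope(p1, p2):
    """Slope of the line through p1 = (x1, y1) and p2 = (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        # A vertical line has no defined slope.
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

print(slope((2, -7), (0, -10)))  # (-10 - (-7)) / (0 - 2) = -3 / -2 = 1.5
```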
DFPs & PhDs [DFP2024] N. Averna: Introduction to Morse theory. Abstract: The aim of this work is to introduce the fundamentals of Morse theory, from the definition of Morse functions, their abundance, and their local form, to the two essential theorems, which describe the topological type of a compact manifold between and through critical values of a Morse function. These tools describe the topological type of any compact manifold using Morse functions and, in particular, give the classification of orientable compact surfaces. Finally we deduce the Poincaré-Hopf theorem for Morse functions and use it to compute the Euler characteristic of some differentiable manifolds. Keywords: Morse function, critical value, index, Morse lemma, topological type, Poincaré-Hopf theorem, Euler characteristic. MSC: 57R19, 57R20, 55R25, 57R70. [DFP2024] S. Luque: Introduction to the Hopf invariant theory. Abstract: In this work, we study the rudiments of the Hopf invariant theory. First, we present several relevant results on the theory of smooth manifolds: regularity, approximation, homotopy, integration, and cohomology. Next, we define the Brouwer-Kronecker degree of a continuous map using the de Rham cohomology and approximation, and study some of its properties. Later, we do the same for the Hopf invariant, and prove, with this definition, that it is an integer. We also present its definition using the linking number of two fibers. With these tools, we study the cohomotopy groups of manifolds, and, in particular, the homotopy groups of spheres, ultimately proving Hopf’s theorem: the degree is the only homotopy invariant of continuous maps on spheres. From there, we proceed to study the results where the Hopf invariant is relevant, which leads us to Hopf fibrations. We explicitly calculate the Hopf invariant of the complex Hopf fibration and we conclude our work seeing how it induces isomorphisms between the homotopy groups of the involved spheres. 
Keywords: Smooth manifolds, Brouwer-Kronecker degree, Hopf invariant, linking number, homotopy groups, Hopf theorem, Hopf fibrations. MSC: 55M25, 55Q25, 55Q40, 55R05. [DFP2022] D. Ruiz: The de Rham theorem. Abstract: The aim of this work is to provide a complete and self-contained proof of the de Rham theorem, which can be stated as follows: de Rham cohomology is equivalent to singular cohomology with real coefficients. To this end, we shall make use of various tools from different areas such as homological algebra, algebraic topology and differential topology – cochain complexes, the Mayer-Vietoris sequence or cohomology groups on a manifold– to finally establish the main result in the most direct and explicit way possible. Keywords: Cohomology, cochain complexes, induction on open sets, Mayer-Vietoris, singular homology, smooth singular homology, de Rham. MSC: 57R19, 58A05, 58A12. [DFP2022] V. Álvarez: The Poincaré-Hopf theorem. Abstract: In this work we study the Poincaré-Hopf Index Theorem. Firstly, we present several relevant results regarding the approximation and homotopy of proper mappings. Then, we define the Brouwer-Kronecker degree using the de Rham cohomology, and we prove the very significant Boundary Theorem. Furthermore, we study the singularities of vector fields, in particular non-degenerate ones, which lead to the notion of index of a vector field. We then prove the Poincaré-Hopf Index Theorem and we study in more detail gradient fields, in particular those of Morse functions. Finally, we make the link with the Euler characteristic and the Gauss-Bonnet Theorem. Keywords: approximation, homotopy, proper mapping, de Rahm cohomology, Brouwer-Kronecker degree, index of a tangent vector field, Poincaré-Hopf theorem, Morse function, Euler characteristic, Gauss-Bonnet Theorem. MSC: 57R19, 57N65, 58A05, 57R45, 37C99. [MFP2022] A. Calleja: Homotopy of mappings for real projective spaces. 
Abstract: The goal of this work is to classify through the degree the homotopy types of continuous maps into real projective spaces with domain the sphere (of the same dimension) or the given projective space. To that end, we will study the liftings of these maps to the universal covering (another sphere) and the homotopies between them, as well as the conditions that these homotopies must satisfy to give rise to homotopies between the original maps. A really helpful tool for this end will be the Brouwer-Hopf Theorem, which classifies the homotopy type of continuous maps from a compact manifold to the sphere. In the last sections we study the functoriality on the domain and make the comparison with homotopy groups. Keywords: Degree, even/odd homotopy, Brouwer-Hopf Theorem, covering spaces, homotopy lifting, real projective space, homotopy type. MSC: 55P15, 55M30.
We prove the Mayer-Vietoris theorem, the topological invariance of the de Rham cohomology groups, the Poincaré duality for compact and non-compact support, and the Künneth theorem, which explores the cohomology of the product of two manifolds. As applications, we compute the cohomology groups of several manifolds, as well as the top cohomology group and the degree 1 group in the general case. Besides, we prove the Jordan-Brouwer theorem and study some properties of the Euler characteristic. Keywords: differential form, exact form, closed form, cochain, de Rham cohomology, Mayer-Vietoris, duality, Künneth, Jordan-Brouwer theorem. MSC: 57R19, 57N65, 58A05. [DFP2021] J. González: De Rham cohomology: The Poincaré-Hopf theorem. Abstract: We introduce cohomology’s basic concepts and results, as to be applied to the de Rham cohomology. First non-compact support theory is developed, then with compact support. Afterwards we study the required theory about degree and Morse functions to be able to state and prove the Poincaré-Hopf theorem for Morse functions. We will also include a nice topological application of degree theory. Then we prove the Gauss-Bonnet theorem using a result from Morse theory. Finally we prove the Reeb theorem which gives a topological characterization of spheres through Morse functions. Keywords: de Rham cohomology, Poincaré operator, Mayer-Vietoris sequence, degree theory, Morse function, gradient-like vector field, Poincaré-Hopf theorem, Gauss-Bonnet theorem, Reeb theorem. MSC: 57R19, 57N65, 58A05, 57R45, 37C99. [DFP2020] A. Rodríguez: The Schoenflies theorem for the torus. Abstract: We study the Jordan-Schoenflies theorem for the torus, which classifies Jordan curves on the torus modulo ambient homeomorphism depending on whether they disconnect the torus or they do not. We show that there are two types of curves: those that are nullhomotopic, which disconnect the torus; and the rest, which do not. 
Besides, the complements of two curves of the same type are homeomorphic: to an annulus for those which do not disconnect, and to a disk and a punctured torus for those which do. Furthermore, we study which ambient homeomorphisms can be refined to isotopies.
Keywords: Jordan curves, polygonal, Jordan-Schoenflies theorem, torus, covering, lifting, homotopy, fundamental group, isotopy.
MSC: 57N05, 57N50.

[DFP2020] J. Polo: Brouwer-Kronecker degree theory.
Abstract: In this work we study the basics of Brouwer-Kronecker degree theory. In order to define consistently the degree of a smooth mapping and its extension to continuous mappings, we prove results regarding the existence of diffeotopies and the approximation/homotopy of (proper) continuous mappings. The homotopy invariance of the degree is applied in order to tackle some deep topological theorems, such as the Borsuk-Hirsch theorem on the degree of even and odd mappings of spheres and the Jordan-Brouwer theorem on the separation of affine spaces by hypersurfaces.
Keywords: Orientation, diffeotopy, approximation, proper mapping, degree, spheres, essential mapping, parity.
MSC: 55M25, 55P57, 55Q40.

[DFP2019] P. Esteban: De Rham cohomology and Brouwer-Kronecker degree.
Abstract: In this work we study some aspects of the de Rham cohomology. We will introduce the Lie derivative and the interior product, and we will explore the integration of top degree cohomology classes, showing that it is an isomorphism. We will also prove the Poincaré lemma, linking cohomology to homotopy. In the end, we will present the Brouwer-Kronecker degree and use the previous tools to prove some important theorems: Brouwer’s fixed point theorem, invariance of domain, and Hopf’s theorem on the homotopy of mappings of spheres.
Keywords: Differential form, Lie derivative, exterior derivative, cohomology group, integration of cohomology classes, degree, homotopy.
MSC: 57R35, 58A12, 55M25, 55Q40, 55Q25.

[DFP2019] A. Martín: The Gauss-Bonnet Theorem.
Abstract: In this work we study in full the proof of the Gauss-Bonnet theorem. The notions of differential geometry of curves and surfaces involved are recalled (curvatures and integrals), and more advanced tools such as the Umlaufsatz and differentiable triangulations of surfaces are presented in detail. The goal is to include details mostly assumed without explanation in texts on the matter.
Keywords: First fundamental form, Gaussian curvature, compact surfaces, winding number, Umlaufsatz, differentiable triangulations, geodesic curvature, curvatura integra.
MSC: 53A05, 53B20, 53C22, 55M25.

[DFP2018] G. Gallego: Introduction to symplectic geometry and integrable systems.
Abstract: The main goal of this work is to prove the Arnold-Liouville theorem, which gives a sufficient condition for a Hamiltonian mechanical system to be integrable by quadratures. To that end we define and develop the concepts involved in the theorem, giving some elementary notions of symplectic geometry and its application to Classical Mechanics.
Keywords: Symplectic geometry, integrable systems, Arnold-Liouville theorem, Hamiltonian flows, Hamilton equations, Lie derivative, time-dependent vector fields.
MSC: 37J05, 37J35, 53D05, 58A05, 70H05.

[DFP2018] M. Jaenada: The Schoenflies theorem.
Abstract: In this work we will prove the Schoenflies theorem. We will first introduce the Jordan curve theorem and some variations; this is at the basis of the Schoenflies theorem. Then we will prove the Schoenflies theorem in the polygonal case, and produce a polygonal approximation of any Jordan curve, which will give way to the final step of the proof. As an application we will study the theorem in the (real) projective plane, which shows how Jordan-Schoenflies fails in compact surfaces other than the sphere, and provides a criterion to distinguish Jordan curves.
Keywords: Jordan curve, Jordan theorem, polygonal Jordan curve, linear accessibility, polygonal approximation, Schoenflies theorem, fundamental group, (real) projective plane, Jordan curves in the projective plane.
MSC: 57N05, 57N50.

[DFP2017] M.A. Berbel: The real symplectic group.
Abstract: In this work we study the real symplectic group, consisting of the linear endomorphisms that preserve an antisymmetric bilinear form. A basic algebraic approach shows that its special group is the whole group, and provides generators known as symplectic transvections. From the geometric viewpoint we see that the symplectic group is a Lie group to which a Lie algebra can be associated. Its orthogonal subgroup is a compact Lie subgroup isomorphic to the unitary group and, furthermore, a deformation retract of the symplectic group. Analytical tools such as the exponential mapping, the logarithm and real powers of positive definite symmetric matrices are developed in the process.
Keywords: Symplectic group, transvection, Lie group, orthogonal subgroup, exponential mapping, real powers of matrices.
MSC: 15A16, 15A60, 22E15, 22E60, 51A50.

[DFP2016] A. Alonso: The topology of division problems: envy-free fair division and consensus division.
Abstract: We study two division problems and the topology behind their solutions. The problems are fair division and consensus division, and the topological results involved are the Brouwer Fixed Point Theorem and the Borsuk-Ulam Theorem. These theorems are deduced from their discrete versions: the Sperner Lemma and the Tucker Lemma. In fact, we prove a generalization of the latter: a weak version of the Ky Fan lemma. The proofs involve the use of suitable graphs associated to polyhedra. We also discuss the formal equivalences among all these results.
Keywords: Envy-free fair division, consensus division, Sperner’s lemma, Brouwer’s fixed point theorem, weak Ky Fan’s lemma, Tucker’s lemma, Borsuk-Ulam’s theorem.
MSC: 52B11, 55M20, 55M25, 91A12, 91B02.

[DFP2016] F. Coltraro: The Poincaré-Hopf theorem.
Abstract: We explore the relationships between functions (vector fields and real functions) defined on smooth manifolds and the topology of the manifolds themselves. We will mostly use tools from Differential Topology. The main results are the Poincaré-Hopf Index Theorem and the Gauss-Bonnet formula for hypersurfaces of even dimension. Basically, both results show that certain geometrical quantities (the total index of a vector field and the integral curvature) are invariants of the manifolds where they are defined. In order to obtain these theorems our main tool will be the Brouwer-Kronecker topological degree; with it we will be able to define the key notion of this article: the index of a vector field at an isolated singularity. Along the way we will also give a short introduction to Morse Theory, which in turn allows us to prove the Reeb Theorem. Finally, we study under which hypotheses we can be certain that non-zero vector fields exist on a manifold.
Keywords: Differential topology, topological degree, index of a vector field, Morse theory, non-zero vector fields, Poincaré-Hopf theorem, Gauss-Bonnet formula.
MSC: 57R19, 57R25, 54C20.

[DFP2015] E. Fernández: Paracompactness and metrization.
Abstract: The aim of this report is to study the notions of Point Set Topology stated in the title. The topology of metric spaces is an important topic in different fields: differential topology, Riemannian geometry and Banach spaces, among others. The study of the metrization problem inherently entails an in-depth study of the concepts of normality and paracompactness. One of its most remarkable applications is the construction of special continuous functions (Urysohn functions, partitions of unity) which allow a better treatment of the topological properties of spaces. Among other results proved here we single out E.
Michael’s characterizations of paracompactness and the Bing and Nagata-Smirnov metrization theorems; also the Dugundji-Borsuk and Rudin theorems on mappings into locally convex real topological vector spaces.
Keywords: Metrization, normality, paracompactness, Stone theorem, Bing metrization theorem, Nagata-Smirnov metrization theorem, partition of unity.
MSC: 54D15, 54D20, 54E35, 54E30.

[DFP2015] J. Porras: The Jordan curve theorem and planar graphs.
Abstract: The author studies the Jordan curve theorem, with special emphasis on some aspects connected with Graph Theory, namely planarity. He explains how the famous non-planar graphs K[5] and K[3,3] contain a relevant part of the topological essence of the Jordan theorem. This is half of the famous Kuratowski theorem; the other half is that those graphs are the minimal non-planar examples. He discusses carefully the usually understated constructions behind the proof of that second half, providing rigorous arguments for some facts that are usually taken for granted. In particular, extreme care is given to the topological properties of plane graphs with polygonal edges, their faces and the boundaries of the latter.
Keywords: Jordan curve theorem, Brouwer fixed point theorem, planar graph, complete graph K[5], complete bipartite graph K[3,3], Kuratowski theorem.
MSC: 05C10, 57M15, 57N05.

[DFP2015] F. Criado: Azimuth: design and development of a non-euclidean video game. Main Tutor: Marco Antonio Gómez.
Abstract: The author proposes a uniparametric algebraic model of hyperbolic, euclidean and elliptic geometries, considering the real curvature k as the parameter. This expresses the intuition that these models are similar for small values of |k|. He studies in detail the geometric invariants of the models: first fundamental form, curvature, geodesics and parallel transport.
Then the model is applied to define some polyhedral surfaces, an extended notion of geodesics on them, and some computationally efficient methods for their simulation involving computations of geodesic distances.
Keywords: Differential geometry, non-euclidean geometry, video game development, Unity, computational geometry, computer graphics.
MSC: 53A05, 53A35, 53B20, 53C23.
CCS: Geometric topology, Computational geometry, Algorithmic game theory, Software design engineering.

[DFP2015] M. Pulido: Singular homology.
Abstract: The author gives a direct and self-contained presentation of Singular Homology: construction, relative homology, exact sequences, Mayer-Vietoris. Then he applies it to deduce some important classical results such as the Brouwer fixed point theorem, local invariance of dimension and invariance of domain, and the Jordan-Brouwer separation theorem. He also compares Singular Homology with Simplicial Homology. He briefly introduces the notion of mapping degree, presenting its main properties and using them to prove the Brouwer hairy ball theorem. As a final point, he presents a generalization of the Jordan-Brouwer theorem that explains the difference between homology and homotopy: the Alexander horned sphere shows that the Schoenflies theorem fails in dimension bigger than 2, but the failure is homotopic, not homologic.
Keywords: Singular homology, exact sequence of a pair, Mayer-Vietoris, degree, invariance theorems, Brouwer theorems.
MSC: 55N10, 57N65, 55Q99, 55M25.

[DFP2013] I. González: The Brouwer-Kronecker degree.
Abstract: The author studies the Brouwer-Kronecker degree, using de Rham cohomology and integration of forms on manifolds. Two first important applications are the fixed point theorem and the so-called hairy ball theorem, both due to Brouwer. But the main motivation is the homotopy groups of spheres. These homotopy groups are all trivial for order smaller than the dimension, as is explained by using the Sard-Brown theorem.
For order equal to the dimension the Brouwer-Kronecker degree gives the solution: the group is infinite cyclic, a famous theorem by Hopf which is the central result here. For order bigger than the dimension the situation becomes much more complicated, except in the case of the circle (all homotopy groups are trivial for order bigger than 1, the proof of which is included for completeness). Still, the degree gives a method to understand something more through the Hopf invariant. After defining this invariant and obtaining its basic properties, the author computes the Hopf invariant of the famous Hopf fibration, to conclude that the third homotopy group of the 2-sphere is infinite.
Keywords: Homotopy groups, de Rham cohomology, Brouwer-Kronecker degree, Hopf invariant, Hopf fibration.
MSC: 57R19, 57R35, 58A12.

[DFP2013] J.A. Rojo: Approximation and homotopy.
Abstract: In this paper the author deals with differentiable approximation and homotopy of continuous and proper mappings whose target is a manifold with boundary. The initial motivation is the direct computation by differentiable methods of the de Rham cohomology (with or without compact supports) of such a manifold, as is usually done for boundaryless manifolds. When there is no boundary, tubular neighborhoods and differentiable retractions are the key tool; but as is well known, there cannot be differentiable retracts in the presence of a boundary. To amend this requires a careful examination of the construction of collars in manifolds with boundary, which provides the means to find differentiable pseudoretracts. In addition, one must embed any given manifold with boundary into a boundaryless manifold of the same dimension. This is easy up to diffeomorphism, but it is achieved here without any modification of either the manifold or the ambient space.
Keywords: Boundary, approximation of mappings, homotopy, tubular neighborhood, retraction, collar, pseudoretraction.
MSC: 57R19, 57R35, 58A12.

[PhD2001] J.F.
Fernando: Sums of squares in surface germs.
Abstract: The author shows first that the Pythagoras number of a real analytic surface germ is finite, bounded by a function of its multiplicity and its codimension. This he gets by solving a question concerning Pythagoras numbers of finitely generated modules over power series and polynomials in two variables. Secondly, he completes the full classification in 3-space of the surface germs on which every positive semidefinite analytic germ is a sum of squares of analytic germs (in fact, he shows, of two squares). He also finds examples of non-embedded surface germs with that property.
Keywords: Positive semidefinite germ, sum of squares, analytic germ.
MSC: 14P20, 32S10.
J.F. Fernando: On the Pythagoras number of real analytic rings. J. Algebra 243 (2001) 321-338.
J.F. Fernando, J.M. Ruiz: Positive semidefinite germs on the cone. Pacific J. Math. 205 (2002) 109-118.
J.F. Fernando: Positive semidefinite germs in real analytic surfaces. Math. Ann. 322 (2002) 49-67.

[PhD2000] J. Escribano: Definable triviality of families of definable mappings in o-minimal structures. Main advisor: M. Coste.
Abstract: The author shows the triviality of pairs of proper submersions in any o-minimal structure expanding a real closed field. This means not only that the involved submersions are definable, but, most importantly, that the trivialization is definable too. This requires the use of the definable spectrum in an essential way, to make up for the lack of integration means. The author previously proves a basic result: an approximation theorem for definable differentiable maps, which has interest by itself and implies other interesting applications (smoothing of corners, for instance). He finally obtains a nice application concerning the definable triviality of bifurcation sets.
Keywords: Triviality of submersions, definable approximation, definable spectrum.
MSC: 14P20, 03C68, 57R12.
J.
Escribano: Nash triviality in families of Nash mappings. Ann. Inst. Fourier 51 (2001) 1209-1228.
J. Escribano: Bifurcation sets of definable functions in o-minimal structures. Proc. AMS 130 (2002) 2419-2424.
J. Escribano: Approximation theorems in o-minimal structures. Illinois J. Math. 46 (2002) 111-128.

[PhD1994] P. Vélez: The geometry of fans in dimension 2.
Abstract: The author finds a geometric criterion for basicness of semialgebraic sets in algebraic surfaces. This involves separation and approximation, and the description of two types of obstructions: local and global. Since these geometric ideas are connected to the algebraic notion of fan, it is essential to know well the theory of fans in a surface, which is achieved by means of generalized Puiseux expansions. Finally, both the geometric and the algebraic views are mixed to produce an algorithm that checks basicness, and exhibits a fan obstruction if there is any.
Keywords: Basic semialgebraic set, fan, separation.
MSC: 14P10, 13A18.
F. Acquistapace, F. Broglia, P. Vélez: An algorithmic criterion for basicness in dimension 2. Manuscripta Math. 85 (1995) 45-66.
P. Vélez: On fans in real surfaces. J. Pure Appl. Algebra 136 (1999) 285-296.

[PhD1986] J. Ortega: Pythagoras numbers of real irreducible algebroid curves.
Abstract: The author studies the Pythagoras number of a real irreducible algebroid curve in terms of its value semigroup. Given a semigroup, he describes explicitly an algebraic set parametrizing all curves with that semigroup of values. Then the Pythagoras number defines a partition of that set into finitely many semialgebraic sets, which yields algorithmic (upper and lower) bounds for the Pythagoras number. An application of this is the determination of Pythagorean curves. Other matters discussed are the semicontinuity of the Pythagoras number, Hilbert’s 17th Problem, and change of ground field.
Keywords: Pythagoras number, value semigroup.
MSC: 14P15, 32B10.
J.
Ortega: On the Pythagoras number of a real algebroid irreducible curve. Math. Ann. 289 (1991) 111-123.
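Several of the works listed above revolve around the Brouwer-Kronecker degree. As a reminder of the shared notion (a standard formulation, not quoted from any one abstract), in de Rham terms it reads:

```latex
% For a smooth map f: M -> N between compact, connected, oriented
% n-manifolds without boundary, the degree is the unique integer with
\int_M f^*\omega \;=\; \deg(f) \int_N \omega
\qquad \text{for every } \omega \in \Omega^n(N).

% Equivalently, for any regular value y of f,
\deg(f) \;=\; \sum_{x \in f^{-1}(y)} \operatorname{sign}\det(df_x).
```

The first characterization is the cohomological one used in the de Rham-flavoured theses; the second, via regular values and the Sard-Brown theorem, is the differential-topological one.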
4,300 tonnes of space junk and rising: Another satellite breakup adds to orbital debris woes

There are no confirmed reports about what caused the breakup of Intelsat 33e.

A large communications satellite has broken up, affecting users in Europe, Central Africa, the Middle East, Asia and Australia, and adding to the growing swarm of space junk clouding our planet's orbital environment. The satellite provided broadband communication from a point some 35,000 kilometres above the Indian Ocean, in a geostationary orbit around the equator. Initial reports on October 20 said Intelsat 33e had experienced a sudden power loss. Hours later, it was confirmed the satellite appears to have broken up into at least 20 pieces. So what happened? And is this a sign of things to come as more and more satellites head into orbit? There are no confirmed reports about what caused the breakup of Intelsat 33e. However, it is not the first event of its kind: satellites have broken up in orbit before, and others have been lost to malfunctions and collisions. What we do know is that Intelsat 33e has a history of issues while in orbit. Designed and manufactured by Boeing, the satellite was launched in August 2016. In 2017, the satellite reached its desired orbit three months later than anticipated, due to a problem with its primary thruster, which controls its altitude and acceleration. More propulsion troubles emerged when the satellite performed something called a station keeping activity, which keeps it at the right altitude. It was using more propellant than expected, which meant its mission would end around 3.5 years early, in 2027. Intelsat lodged a $78 million insurance claim as a result of these problems. However, at the time of its breakup, the satellite was reportedly not insured. Intelsat is investigating what went wrong, but we may never know exactly what caused the satellite to fragment. We do know another Intelsat satellite, a Boeing-built EpicNG 702 MP, failed in 2019.
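The "some 35,000 kilometres" quoted for Intelsat 33e's geostationary perch can be sanity-checked from Kepler's third law. A minimal sketch — GM, the sidereal day and Earth's radius are textbook values assumed here, not figures from the article:

```python
import math

# A circular orbit whose period matches one sidereal day (a geostationary
# orbit) has radius r = (GM * T^2 / (4 * pi^2))^(1/3) by Kepler's third law.
GM = 3.986004418e14       # Earth's gravitational parameter, m^3 s^-2
T = 86164.1               # one sidereal day, s
R_EARTH = 6_378_137.0     # Earth's equatorial radius, m

r = (GM * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"geostationary altitude = {altitude_km:.0f} km")  # ≈ 35786 km
```

The result, about 35,786 km, is the standard geostationary altitude the article rounds to "some 35,000 kilometres".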
More importantly, we can learn from the aftermath of the breakup: space junk.

30 blue whales of space junk

The amount of debris in orbit around Earth is increasing rapidly. The European Space Agency (ESA) estimates there are more than 40,000 pieces larger than 10 centimetres in orbit, and more than 130,000,000 smaller than 1 cm. The total mass of human-made space objects in Earth orbit is roughly the same as that of 90 adult male blue whales. About one third of this mass is debris (4,300 tonnes), mostly in the form of leftover rocket bodies. Tracking and identifying space debris is a challenging task. At higher altitudes, such as Intelsat 33e's orbit around 35,000 km up, we can only see objects above a certain size. One of the most concerning things about the loss of Intelsat 33e is that the breakup likely produced debris that is too small for us to see from ground level with current facilities. The past few months have seen a string of breakups of decommissioned and abandoned objects in orbit. In June, a decommissioned satellite fractured in low Earth orbit (an altitude of around 470 km), creating more than 100 trackable pieces of debris. This event also likely created many more pieces of debris too small to be tracked. In July, another decommissioned satellite broke up. In August, the upper stage of a rocket broke apart, creating at least 283 pieces of trackable debris, and potentially hundreds of thousands of untrackable fragments. It is not yet known whether this most recent event will affect other objects in orbit. This is where continuous monitoring of the sky becomes vital, to understand these complex space debris events. When space debris is created, who is responsible for cleaning it up or monitoring it? In principle, the country that launched the object into space has the burden of responsibility where fault can be proved. This was explored in the 1972 Liability Convention. In practice, there is often little accountability. The first fine for space debris was issued in 2023 by the US Federal Communications Commission.
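The mass figures quoted above fit together arithmetically; a quick check (the roughly 140-tonne mass of an adult male blue whale is an assumption of this sketch, not a figure from the article):

```python
# If debris (4,300 tonnes) is about one third of the total mass of
# human-made objects in orbit, the total is roughly three times that.
debris_tonnes = 4_300
total_tonnes = debris_tonnes * 3          # ≈ 12,900 tonnes

# Assumed mass of an adult male blue whale (~140 t) — illustrative only.
BLUE_WHALE_TONNES = 140
whales = total_tonnes / BLUE_WHALE_TONNES
print(f"total = {total_tonnes:,} t, roughly {whales:.0f} blue whales")
```

The implied total of roughly 13,000 tonnes divides into about 90 whale-equivalents, matching the article's comparison.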
It’s not clear whether a similar fine will be issued in the case of Intelsat 33e. As the human use of space accelerates, Earth orbit is growing increasingly crowded. To manage the hazards of orbital debris, we will need continuous monitoring and improved tracking technology alongside deliberate efforts to minimise the amount of debris. Most satellites are much closer to Earth than Intelsat 33e. Often these low Earth orbit satellites can be safely brought down from orbit (or “de-orbited”) at the end of their missions without creating space debris, especially with a bit of forward planning. Of course, the bigger the space object, the more debris it can produce. NASA’s Orbital Debris Program Office estimates the International Space Station would produce more than 220 million debris fragments if it broke up in orbit, for example. Accordingly, planning for de-orbiting the station (ISS) at the end of its operational life in 2030 is now well underway, with the contract for a deorbit vehicle awarded to SpaceX. This article is republished from The Conversation under a Creative Commons license. Read the original article.
Graphical index of GRASS GIS modules
Vector modules:
• v.buffer Creates a buffer around vector features of given type.
• v.build.all Rebuilds topology on all vector maps in the current mapset.
• v.build Creates topology for vector map.
• v.build.polylines Builds polylines from lines or boundaries.
• v.category Attaches, deletes or reports vector categories to/from/of map geometry.
• v.centroids Adds missing centroids to closed boundaries.
• v.class Classifies attribute data, e.g. for thematic mapping
• v.clean Toolset for cleaning topology of vector map.
• v.clip Extracts features of input map which overlay features of clip map.
• v.cluster Performs cluster identification.
• v.colors Creates/modifies the color table associated with a vector map.
• v.colors.out Exports the color table associated with a vector map.
• v.db.addcolumn Adds one or more columns to the attribute table connected to a given vector map.
• v.db.addtable Creates and connects a new attribute table to a given layer of an existing vector map.
• v.db.connect Prints/sets DB connection for a vector map to attribute table.
• v.db.dropcolumn Drops a column from the attribute table connected to a given vector map.
• v.db.droprow Removes a vector feature from a vector map through attribute selection.
• v.db.droptable Removes existing attribute table of a vector map.
• v.db.join Joins a database table to a vector map table.
• v.db.reconnect.all Reconnects attribute tables for all vector maps from the current mapset to a new database.
• v.db.renamecolumn Renames a column in the attribute table connected to a given vector map.
• v.db.select Prints vector map attributes.
• v.db.univar Calculates univariate statistics on selected table column for a GRASS vector map.
• v.db.update Updates a column in the attribute table connected to a vector map.
• v.decimate Decimates a point cloud
• v.delaunay Creates a Delaunay triangulation from an input vector map containing points or centroids.
• v.dissolve Dissolves adjacent or overlapping features sharing a common category number or attribute.
• v.distance Finds the nearest element in vector map 'to' for elements in vector map 'from'.
• v.drape Converts 2D vector features to 3D by sampling of elevation raster map.
• v.edit Edits a vector map, allows adding, deleting and modifying selected vector features.
• v.external Creates a new pseudo-vector map as a link to an OGR-supported layer or a PostGIS feature table.
• v.external.out Defines vector output format.
• v.extract Selects vector features from an existing vector map and creates a new vector map containing only the selected features.
• v.extrude Extrudes flat vector features to 3D vector features with defined height.
• v.fill.holes Fill holes in areas by keeping only outer boundaries
• v.generalize Performs vector based generalization.
• v.hull Produces a 2D/3D convex hull for a given vector map.
• v.import Imports vector data into a GRASS vector map using OGR library and reprojects on the fly.
• v.in.ascii Creates a vector map from an ASCII points file or ASCII vector file.
• v.in.db Creates new vector (points) map from database table containing coordinates.
• v.in.dxf Converts file in DXF format to GRASS vector map.
• v.in.e00 Imports E00 file into a vector map.
• v.in.geonames Imports geonames.org country files into a vector points map.
• v.in.lines Imports ASCII x,y[,z] coordinates as a series of lines.
• v.in.mapgen Imports Mapgen or Matlab-ASCII vector maps into GRASS.
• v.in.ogr Imports vector data into a GRASS vector map using OGR library.
• v.in.pdal Converts LAS LiDAR point clouds to a GRASS vector map with PDAL.
• v.in.region Creates a vector polygon from the current region extent.
• v.in.wfs Imports GetFeature from a WFS server.
• v.info Outputs basic information about a vector map.
• v.kcv Randomly partition points into test/train sets.
• v.kernel Generates a raster density map from vector points map.
• v.label Creates paint labels for a vector map from attached attributes.
• v.label.sa Create optimally placed labels for vector map(s)
• v.lidar.correction Corrects the v.lidar.growing output. It is the last of the three algorithms for LIDAR filtering.
• v.lidar.edgedetection Detects the object's edges from a LIDAR data set.
• v.lidar.growing Building contour determination and Region Growing algorithm for determining the building inside
• v.lrs.create Creates a linear reference system.
• v.lrs.label Creates stationing from input lines, and linear reference system.
• v.lrs.segment Creates points/segments from input lines, linear reference system and positions read from stdin or a file.
• v.lrs.where Finds line id and real km+offset for given points in vector map using linear reference system.
• v.mkgrid Creates a vector map of a user-defined grid.
• v.neighbors Neighborhood analysis tool for vector point maps.
• v.net.alloc Allocates subnets for nearest centers.
• v.net.allpairs Computes the shortest path between all pairs of nodes in the network.
• v.net.bridge Computes bridges and articulation points in the network.
• v.net.centrality Computes degree, centrality, betweenness, closeness and eigenvector centrality measures in the network.
• v.net.components Computes strongly and weakly connected components in the network.
• v.net.connectivity Computes vertex connectivity between two sets of nodes in the network.
• v.net.distance Computes shortest distance via the network between the given sets of features.
• v.net.flow Computes the maximum flow between two sets of nodes in the network.
• v.net Performs network maintenance.
• v.net.iso Splits subnets for nearest centers by cost isolines.
• v.net.path Finds shortest path on vector network.
• v.net.salesman Creates a cycle connecting given nodes (Traveling salesman problem).
• v.net.spanningtree Computes minimum spanning tree for the network.
• v.net.steiner Creates Steiner tree for the network and given terminals.
• v.net.timetable Finds shortest path using timetables.
• v.net.visibility Performs visibility graph construction.
• v.normal Tests for normality for vector points.
• v.out.ascii Exports a vector map to a GRASS ASCII vector representation.
• v.out.dxf Exports vector map to DXF file format.
• v.out.ogr Exports a vector map layer to any of the supported OGR vector formats.
• v.out.postgis Exports a vector map layer to PostGIS feature table.
• v.out.pov Converts GRASS x,y,z points to POV-Ray x,z,y format.
• v.out.svg Exports a vector map to SVG file.
• v.out.vtk Converts a vector map to VTK ASCII output.
• v.outlier Removes outliers from vector point data.
• v.overlay Overlays two vector maps offering clip, intersection, difference, symmetrical difference, union operators.
• v.pack Exports a vector map as GRASS GIS specific archive file
• v.parallel Creates parallel line to input vector lines.
• v.patch Creates a new vector map by combining other vector maps.
• v.perturb Random location perturbations of vector points.
• v.profile Vector map profiling tool
• v.proj Re-projects a vector map from one project to the current project.
• v.qcount Indices for quadrat counts of vector point lists.
• v.random Generates random 2D/3D vector points.
• v.rast.stats Calculates univariate statistics from a raster map based on a vector map and uploads statistics to new attribute columns.
• v.reclass Changes vector category values for an existing vector map according to results of SQL queries or a value in attribute table column.
• v.rectify Rectifies a vector by computing a coordinate transformation for each object in the vector based on the control points.
• v.report Reports geometry statistics for vector maps.
• v.sample Samples a raster map at vector point locations.
• v.segment Creates points/segments from input vector lines and positions.
• v.select Selects features from vector map (A) by features from other vector map (B).
• v.split Splits vector lines to shorter segments.
• v.support Updates vector map metadata.
• v.surf.bspline Performs bicubic or bilinear spline interpolation with Tykhonov regularization.
• v.surf.idw Provides surface interpolation from vector point data by Inverse Distance Squared Weighting.
• v.surf.rst Performs surface interpolation from vector points map by splines.
• v.timestamp Modifies a timestamp for a vector map.
• v.to.3d Performs transformation of 2D vector features to 3D.
• v.to.db Populates attribute values from vector features.
• v.to.lines Converts vector polygons or points to lines.
• v.to.points Creates points along input lines in new vector map with 2 layers.
• v.to.rast Converts (rasterize) a vector map into a raster map.
• v.to.rast3 Converts a vector map (only points) into a 3D raster map.
• v.transform Performs an affine transformation (shift, scale and rotate) on vector map.
• v.type Changes type of vector features.
• v.univar Calculates univariate statistics of vector map features.
• v.unpack Imports a GRASS GIS specific vector archive file (packed with v.pack) as a vector map
• v.vect.stats Count points in areas, calculate statistics from point attributes.
• v.vol.rst Interpolates point data to a 3D raster map using regularized spline with tension (RST) algorithm.
• v.voronoi Creates a Voronoi diagram constrained to the extents of the current region from an input vector map containing points or centroids.
• v.what Queries a vector map at given locations.
• v.what.rast Uploads raster values at positions of vector points to the table.
• v.what.rast3 Uploads 3D raster values at positions of vector points to the table.
• v.what.strds Uploads space time raster dataset values at positions of vector points to the table.
• v.what.vect Uploads vector values at positions of vector points to the table.
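To illustrate the kind of geometry a module like `v.hull` computes (a 2D convex hull of a point set), here is a minimal pure-Python monotone-chain sketch. It is illustrative only: it is not GRASS code and makes no claim about how `v.hull` is actually implemented.

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """2D convex hull via Andrew's monotone chain, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

# The interior point (1, 1) is dropped; only the square's corners remain.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# → [(0, 0), (2, 0), (2, 2), (0, 2)]
```

In GRASS itself the same result would come from running `v.hull` on a points map; the sketch above just shows the underlying computation on raw coordinate pairs.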
© 2003-2024 GRASS Development Team, GRASS GIS 8.5.0dev Reference Manual
Calcul.io · math constants (light speed, euler...) · online calculator

Complete overview of constants available on calcul.io

• atomicMass: Atomic mass constant
• avogadro: Avogadro's number
• bohrMagneton: Bohr magneton
• bohrRadius: Bohr radius
• boltzmann: Boltzmann constant
• classicalElectronRadius: Classical electron radius
• conductanceQuantum: Conductance quantum
• coulomb: Coulomb's constant
• deuteronMass: Deuteron mass
• e: Euler's number, the base of the natural logarithm. Approximately equal to 2.71828
• efimovFactor: Efimov factor
• electricConstant: Electric constant (vacuum permittivity)
• electronMass: Electron mass
• elementaryCharge: Elementary charge
• false: Boolean value false
• faraday: Faraday constant
• fermiCoupling: Fermi coupling constant
• fineStructure: Fine-structure constant
• firstRadiation: First radiation constant
• gasConstant: Gas constant
• gravitationConstant: Newtonian constant of gravitation
• gravity: Standard acceleration of gravity (standard acceleration of free-fall on Earth)
• hartreeEnergy: Hartree energy
• i: Imaginary unit, defined as i*i=-1. A complex number is described as a + b*i, where a is the real part, and b is the imaginary part.
• Infinity: Infinity, a number which is larger than the maximum number that can be handled by a floating point number.
• inverseConductanceQuantum: Inverse conductance quantum
• klitzing: Von Klitzing constant
• LN10: Returns the natural logarithm of 10, approximately equal to 2.302
• LN2: Returns the natural logarithm of 2, approximately equal to 0.693
• LOG10E: Returns the base-10 logarithm of E, approximately equal to 0.434
• LOG2E: Returns the base-2 logarithm of E, approximately equal to 1.442
• loschmidt: Loschmidt constant at T=273.15 K and p=101.325 kPa
• magneticConstant: Magnetic constant (vacuum permeability)
• magneticFluxQuantum: Magnetic flux quantum
• molarMass: Molar mass constant
• molarMassC12: Molar mass constant of carbon-12
• molarPlanckConstant: Molar Planck constant
• molarVolume: Molar volume of an ideal gas at T=273.15 K and p=101.325 kPa
• NaN: Not a number
• neutronMass: Neutron mass
• nuclearMagneton: Nuclear magneton
• null: Value null
• phi: Phi is the golden ratio. Two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Phi is defined as `(1 + sqrt(5)) / 2` and is approximately 1.618034...
• pi: The number pi is a mathematical constant that is the ratio of a circle's circumference to its diameter, and is approximately equal to 3.14159
• planckCharge: Planck charge
• planckConstant: Planck constant
• planckLength: Planck length
• planckMass: Planck mass
• planckTemperature: Planck temperature
• planckTime: Planck time
• protonMass: Proton mass
• quantumOfCirculation: Quantum of circulation
• reducedPlanckConstant: Reduced Planck constant
• rydberg: Rydberg constant
• sackurTetrode: Sackur-Tetrode constant at T=1 K and p=101.325 kPa
• secondRadiation: Second radiation constant
• speedOfLight: Speed of light in vacuum
• SQRT1_2: Returns the square root of 1/2, approximately equal to 0.707
• SQRT2: Returns the square root of 2, approximately equal to 1.414
• stefanBoltzmann: Stefan-Boltzmann constant
• tau: Tau is the ratio constant of a circle's circumference to radius, equal to 2 * pi, approximately 6.2832.
• thomsonCrossSection: Thomson cross section
• true: Boolean value true
• vacuumImpedance: Characteristic impedance of vacuum
• weakMixingAngle: Weak mixing angle
• wienDisplacement: Wien displacement law constant

All constants
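Several of the purely mathematical constants in this list (pi, e, tau, the golden ratio, and the logarithm and square-root constants) can be cross-checked with Python's standard math module; a small sketch:

```python
import math

# Counterparts of calcul.io's pi, e, tau, Infinity and NaN
print(math.pi, math.e, math.tau)   # ~3.14159, ~2.71828, ~6.2832
print(math.inf, math.nan)

# phi (the golden ratio) computed from its definition (1 + sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2
print(phi)                         # ~1.618034

# LN2, LN10, LOG2E, LOG10E, SQRT2, SQRT1_2
print(math.log(2), math.log(10))              # ~0.693, ~2.302
print(math.log2(math.e), math.log10(math.e))  # ~1.442, ~0.434
print(math.sqrt(2), math.sqrt(0.5))           # ~1.414, ~0.707
```

The physical constants (speed of light, Avogadro's number, and so on) are defined values from CODATA and are not in the Python standard library.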
crypto/rsa/blinding.c - boringssl - Git at Google

/* ====================================================================
 * Copyright (c) 1998-2006 The OpenSSL Project.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * 3. All advertising materials mentioning features or use of this
 *    software must display the following acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
 *
 * 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
 *    endorse or promote products derived from this software without
 *    prior written permission. For written permission, please contact
 *    openssl-core@openssl.org.
 *
 * 5. Products derived from this software may not be called "OpenSSL"
 *    nor may "OpenSSL" appear in their names without prior written
 *    permission of the OpenSSL Project.
 *
 * 6. Redistributions of any form whatsoever must retain the following
 *    acknowledgment:
 *    "This product includes software developed by the OpenSSL Project
 *    for use in the OpenSSL Toolkit (http://www.openssl.org/)"
 *
 * THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
 * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE OpenSSL PROJECT OR
 * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 * ====================================================================
 *
 * This product includes cryptographic software written by Eric Young
 * (eay@cryptsoft.com).  This product includes software written by Tim
 * Hudson (tjh@cryptsoft.com).
 *
 * Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
 * All rights reserved.
 *
 * This package is an SSL implementation written
 * by Eric Young (eay@cryptsoft.com).
 * The implementation was written so as to conform with Netscapes SSL.
 *
 * This library is free for commercial and non-commercial use as long as
 * the following conditions are aheared to.  The following conditions
 * apply to all code found in this distribution, be it the RC4, RSA,
 * lhash, DES, etc., code; not just the SSL code.  The SSL documentation
 * included with this distribution is covered by the same copyright terms
 * except that the holder is Tim Hudson (tjh@cryptsoft.com).
 *
 * Copyright remains Eric Young's, and as such any Copyright notices in
 * the code are not to be removed.
 * If this package is used in a product, Eric Young should be given attribution
 * as the author of the parts of the library used.
 * This can be in the form of a textual message at program startup or
 * in documentation (online or textual) provided with the package.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *    "This product includes cryptographic software written by
 *     Eric Young (eay@cryptsoft.com)"
 *    The word 'cryptographic' can be left out if the rouines from the library
 *    being used are not cryptographic related :-).
 * 4. If you include any Windows specific code (or a derivative thereof) from
 *    the apps directory (application code) you must include an acknowledgement:
 *    "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
 *
 * THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * The licence and distribution terms for any publically available version or
 * derivative of this code cannot be changed.  i.e. this code cannot simply be
 * copied and put under another distribution licence
 * [including the GNU Public Licence.]
 */

#include <openssl/rsa.h>

#include <string.h>

#include <openssl/bn.h>
#include <openssl/err.h>
#include <openssl/mem.h>

#include "internal.h"


#define BN_BLINDING_COUNTER 32

struct bn_blinding_st {
  BIGNUM *A;  /* The base blinding factor, Montgomery-encoded. */
  BIGNUM *Ai; /* The inverse of the blinding factor, Montgomery-encoded. */
  unsigned counter;
};

static int bn_blinding_create_param(BN_BLINDING *b, const BIGNUM *e,
                                    const BN_MONT_CTX *mont, BN_CTX *ctx);

BN_BLINDING *BN_BLINDING_new(void) {
  BN_BLINDING *ret = OPENSSL_malloc(sizeof(BN_BLINDING));
  if (ret == NULL) {
    OPENSSL_PUT_ERROR(RSA, ERR_R_MALLOC_FAILURE);
    return NULL;
  }
  memset(ret, 0, sizeof(BN_BLINDING));

  ret->A = BN_new();
  if (ret->A == NULL) {
    goto err;
  }

  ret->Ai = BN_new();
  if (ret->Ai == NULL) {
    goto err;
  }

  /* The blinding values need to be created before this blinding can be used. */
  ret->counter = BN_BLINDING_COUNTER - 1;

  return ret;

err:
  BN_BLINDING_free(ret);
  return NULL;
}

void BN_BLINDING_free(BN_BLINDING *r) {
  if (r == NULL) {
    return;
  }

  BN_free(r->A);
  BN_free(r->Ai);
  OPENSSL_free(r);
}

static int bn_blinding_update(BN_BLINDING *b, const BIGNUM *e,
                              const BN_MONT_CTX *mont, BN_CTX *ctx) {
  if (++b->counter == BN_BLINDING_COUNTER) {
    /* re-create blinding parameters */
    if (!bn_blinding_create_param(b, e, mont, ctx)) {
      goto err;
    }
    b->counter = 0;
  } else {
    if (!BN_mod_mul_montgomery(b->A, b->A, b->A, mont, ctx) ||
        !BN_mod_mul_montgomery(b->Ai, b->Ai, b->Ai, mont, ctx)) {
      goto err;
    }
  }

  return 1;

err:
  /* |A| and |Ai| may be in an inconsistent state so they both need to be
   * replaced the next time this blinding is used. Note that this is only
   * sufficient because support for |BN_BLINDING_NO_UPDATE| and
   * |BN_BLINDING_NO_RECREATE| was previously dropped. */
  b->counter = BN_BLINDING_COUNTER - 1;
  return 0;
}

int BN_BLINDING_convert(BIGNUM *n, BN_BLINDING *b, const BIGNUM *e,
                        const BN_MONT_CTX *mont, BN_CTX *ctx) {
  /* |n| is not Montgomery-encoded and |b->A| is. |BN_mod_mul_montgomery|
   * cancels one Montgomery factor, so the resulting value of |n| is
   * unencoded. */
  if (!bn_blinding_update(b, e, mont, ctx) ||
      !BN_mod_mul_montgomery(n, n, b->A, mont, ctx)) {
    return 0;
  }

  return 1;
}

int BN_BLINDING_invert(BIGNUM *n, const BN_BLINDING *b, BN_MONT_CTX *mont,
                       BN_CTX *ctx) {
  /* |n| is not Montgomery-encoded and |b->Ai| is. |BN_mod_mul_montgomery|
   * cancels one Montgomery factor, so the resulting value of |n| is
   * unencoded. */
  return BN_mod_mul_montgomery(n, n, b->Ai, mont, ctx);
}

static int bn_blinding_create_param(BN_BLINDING *b, const BIGNUM *e,
                                    const BN_MONT_CTX *mont, BN_CTX *ctx) {
  int retry_counter = 32;

  do {
    if (!BN_rand_range_ex(b->A, 1, &mont->N)) {
      OPENSSL_PUT_ERROR(RSA, ERR_R_INTERNAL_ERROR);
      return 0;
    }

    /* |BN_from_montgomery| + |BN_mod_inverse_blinded| is equivalent to, but
     * more efficient than, |BN_mod_inverse_blinded| + |BN_to_montgomery|. */
    if (!BN_from_montgomery(b->Ai, b->A, mont, ctx)) {
      OPENSSL_PUT_ERROR(RSA, ERR_R_INTERNAL_ERROR);
      return 0;
    }

    int no_inverse;
    if (BN_mod_inverse_blinded(b->Ai, &no_inverse, b->Ai, mont, ctx)) {
      break;
    }

    if (!no_inverse) {
      OPENSSL_PUT_ERROR(RSA, ERR_R_INTERNAL_ERROR);
      return 0;
    }

    /* For reasonably-sized RSA keys, it should almost never be the case that a
     * random value doesn't have an inverse. */
    if (retry_counter-- == 0) {
      OPENSSL_PUT_ERROR(RSA, RSA_R_TOO_MANY_ITERATIONS);
      return 0;
    }
  } while (1);

  if (!BN_mod_exp_mont(b->A, b->A, e, &mont->N, ctx, mont)) {
    OPENSSL_PUT_ERROR(RSA, ERR_R_INTERNAL_ERROR);
    return 0;
  }

  if (!BN_to_montgomery(b->A, b->A, mont, ctx)) {
    OPENSSL_PUT_ERROR(RSA, ERR_R_INTERNAL_ERROR);
    return 0;
  }

  return 1;
}
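Stripped of the Montgomery arithmetic and the re-randomization counter, the blinding idea in this file is small: pick a random r invertible mod n, multiply the input by r^e before the private-key exponentiation, and multiply the result by r^(-1) afterwards, so the exponentiation never operates directly on attacker-chosen input. A toy Python sketch (the tiny textbook key n=3233, e=17, d=2753 is an illustrative assumption; this is not BoringSSL's API and is far too small for real use):

```python
import random
from math import gcd

def blinded_rsa_private_op(m, d, e, n):
    """Compute m^d mod n using multiplicative blinding.

    A random r blinds the input (m * r^e mod n), and the result is
    unblinded with r^{-1} mod n, mirroring BN_BLINDING_convert /
    BN_BLINDING_invert above, minus the Montgomery encoding.
    """
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:  # r must be invertible mod n
            break
    blinded = (m * pow(r, e, n)) % n       # like BN_BLINDING_convert
    s_blind = pow(blinded, d, n)           # the private-key operation
    return (s_blind * pow(r, -1, n)) % n   # like BN_BLINDING_invert

# Toy key (assumed for illustration): p=61, q=53, n=3233, e=17, d=2753
n, e, d = 3233, 17, 2753
m = 65
assert blinded_rsa_private_op(m, d, e, n) == pow(m, d, n)
```

Because (m * r^e)^d = m^d * r^(ed) = m^d * r (mod n) when ed = 1 mod the group order, multiplying by r^(-1) recovers exactly m^d; `pow(r, -1, n)` requires Python 3.8+.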
Oceanography 540--Marine Geological Processes--Winter Quarter 2002

Convection and Rayleigh Criteria

To motivate the Rayleigh number, we consider a scaling argument from ((13), p. 208 ff.). Imagine two horizontal plates, separated by a distance h, with temperature increasing downwards such that the temperature difference between them is T_h. Consider a parcel of fluid of scale d (d x d x d) located between these plates.

Figure 10-3

If displaced upwards it will have a buoyancy force acting on it, i.e., its density will be different from its surroundings. The buoyancy force is:

Eq 10-2: F_b = g Δρ d^3

where g is gravitational acceleration and

Eq 10-3: Δρ = ρ α T

where T is temperature, ρ is the reference density, and α is the coefficient of thermal expansion. Because the parcel is less dense than its surroundings, it will tend to move upwards. A viscous force:

Eq 10-4: F_v = μ d (dz/dt)

will act against the buoyancy force. In this expression μ is viscosity, z is the vertical spatial coordinate, and t is time. When these two forces balance:

Eq 10-5: dz/dt = g ρ α T d^2 / μ

In addition, the parcel will be losing heat to its new surroundings by conduction, thereby reducing its temperature. The rate at which heat is lost will be proportional to the surface area of the parcel. Thus temperature (and so the density term) is a function of time:

Eq 10-6: dT/dt = -(κ / d^2) T

Eq 10-7: T = T_h exp(-κ t / d^2)

where κ is the thermal diffusivity. How far can the parcel move in infinite time? We integrate from zero to infinity:

Eq 10-8: z = ∫[0,∞] (g ρ α T_h d^2 / μ) exp(-κ t / d^2) dt

to find that the distance traversed, z, is:

Eq 10-9: z = g ρ α T_h d^4 / (μ κ)

The terms in the numerator of equation 10-9 have their origin in the buoyancy force and promote upward motion, while increasing either viscosity or thermal conductivity in the denominator limits upward motion.

To be able to convect heat, there must be movement across the distance between the two plates separated by a distance h in finite time and so:

Eq 10-10: g ρ α T_h d^4 / (μ κ) > h

By scaling the fluid parcel to the plate separation, through an arbitrary factor f:

Eq 10-11: d = f h

substituting into equation 10-10 and rearranging, it follows that:

Eq 10-12: g ρ α T_h h^3 / (μ κ) > 1 / f^4

The left hand side of equation 10-12 is called the Rayleigh number:

Eq 10-13: Ra = g ρ α T_h h^3 / (μ κ)

Lord Rayleigh showed through linear perturbation analysis (for a general discussion see (13), section 7.1.2 or the derivation in (11), section 6-18), that for thermal instabilities to grow in a fluid, the Rayleigh number would have to exceed a critical value (the critical Rayleigh number, Ra_c). When:

Eq 10-14: Ra < Ra_c

the temperature profiles between the two plates will be conductive, while when:

Eq 10-15: Ra > Ra_c

a cellular convection will be established, driving the isotherms upwards in the zone of rising fluid and downwards in the zone of downwelling. The value of the critical Rayleigh number depends on the particular geometry and boundary conditions. For relevant geometries it lies between 657 (free boundaries) and 1708 (rigid boundaries).

Will fluids in the ocean crust convect? Consider an open fracture penetrating to great depth, say a kilometer. The Rayleigh number is:

Eq 10-16: Ra = g α T_h h^3 / (ν κ)

(Here the viscosity and density terms have been replaced with the kinematic viscosity ν = μ/ρ.) For some representative values the Rayleigh number is about 10

Pages maintained by Russ McDuff (mcduff@ocean.washington.edu). Copyright © 1994-2002 Russell E. McDuff and G. Ross Heath; Copyright Notice. Content last modified 1/17/2002 | Page last built 1/17/2002.
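The closing order-of-magnitude estimate can be sketched numerically, using the kinematic-viscosity form of the Rayleigh number (Eq 10-16). The property values below are illustrative assumptions for hot water in a kilometer-scale fracture; the lecture text does not list them:

```python
# Rayleigh number in kinematic-viscosity form: Ra = g*alpha*dT*h^3 / (nu*kappa)
g = 9.8         # gravitational acceleration, m/s^2
alpha = 1e-3    # thermal expansion of hot water, 1/K     (assumed)
dT = 100.0      # temperature difference T_h, K           (assumed)
h = 1000.0      # fracture depth, m
nu = 1e-6       # kinematic viscosity of water, m^2/s     (assumed)
kappa = 1.5e-7  # thermal diffusivity of water, m^2/s     (assumed)

Ra = g * alpha * dT * h**3 / (nu * kappa)
print(f"Ra = {Ra:.2e}")

# Compare with the critical values quoted in the lecture
# (657 for free boundaries, 1708 for rigid boundaries)
print("convects" if Ra > 1708 else "conducts")
```

With any plausible choice of water properties, Ra exceeds the critical value by many orders of magnitude, which is why crustal fluids are expected to convect vigorously.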
ECE 515 - Control System Theory & Design

Homework 3 - Due: 02/08

Problem 1

Consider the following matrices: \[ A_1 = \begin{bmatrix} 1/2 & -1/2 \\ 1/2 & -1/2 \end{bmatrix}, \qquad A_2=\begin{bmatrix}0&1\\-1&0\end{bmatrix},\qquad A_3=\begin{bmatrix}1&0&0\\0&-2&0\\-3&0&-2\end{bmatrix}. \]

a. Compute \(e^{A_i t}\) for \(i=1,2,3\). Note that \(A_2\) is a special case of a matrix whose exponential was computed in class.

b. For \(i=1,2,3\), write down the solution of \(\dot x=A_i x\) for a general initial condition.

c. In each case, determine whether the solutions of \(\dot x=A_i x\) decay to 0, stay bounded, or go to \(\infty\) (for various choices of initial conditions).

d. Try to state the general rule which can be used to determine, by looking at the eigenstructure of \(A\), whether the solutions of \(\dot x=A x\) decay to 0, stay bounded, or go to \(\infty\).

Problem 2

The pictures in this demo page show possible trajectories of a linear system \(\dot x= Ax\) in the plane, when the eigenvalues \(\lambda_1\) and \(\lambda_2\) of \(A\) are in one of the following six configurations:

(a) \(\lambda_1<\lambda_2<0\)
(b) \(\lambda_1<0<\lambda_2\)
(c) \(0<\lambda_1<\lambda_2\)
(d) \(\lambda_1,\lambda_2=a\pm ib, \; a<0\)
(e) \(\lambda_1,\lambda_2=a\pm ib, \; a=0\)
(f) \(\lambda_1,\lambda_2=a, \; a>0\)

Match each picture with the corresponding eigenvalue distribution. Justify your answers using your knowledge of the solution of \(\dot x = Ax\). Plotting in MATLAB or use of the demo applet cannot be used as a justification.

Problem 3

This exercise illustrates the phenomenon known as resonance. Consider the system \[\begin{align*} \begin{bmatrix} \dot x_1\\ \dot x_2 \end{bmatrix} &= \begin{bmatrix} 0 & \omega \\ -\omega & 0 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} +\begin{bmatrix} 0\\ 1 \end {bmatrix}u,\qquad y=x_2 \end{align*}\] where \(\omega>0\), \(u\) is the control input, and \(y\) is the output. Let the input be \(u(t)=\cos\nu t\), \(\nu>0\).
Using the variation-of-constants formula, compute the response \(y(t)\) to this input from the zero initial condition \(x(0)=0\), considering separately two cases: \(\nu\ne\omega\) and \(\nu=\omega\). In each case, determine whether this response is decaying to 0, bounded, or unbounded.

Problem 4

Compute the state transition matrix \(\Phi(t,t_0)\) of \(\dot x=A(t)x\) with \[A(t) = \begin{bmatrix}-1+\cos t&0 \\0&-2+\cos t \end{bmatrix}\]

Problem 5

Consider the LTV system \(\dot x=A(t)x\), where \(A(t)\) is periodic with period \(T\), i.e., \(A(t+T)=A(t)\). Let \(\Phi(t,t_0)\) denote the corresponding state transition matrix. The goal of this exercise is to show that we can simplify the system and characterize its transition matrix with the help of a suitable (time-varying) coordinate transformation. Being a nonsingular matrix, \(\Phi(T,0)\) can be written as an exponential: \(\Phi(T,0)=e^{RT}\), where \(R\) is some (possibly complex-valued) matrix. Define also \(P(t):=\Phi(t,0)e^{-Rt}\).

a. Using the properties of state transition matrices from class, show that \(\Phi(t,t_0)=P(t)e^{R(t-t_0)}P^{-1}(t_0).\)

b. Deduce from a) that \(\bar x(t):=P^{-1}(t)x(t)\) satisfies the time-invariant differential equation \(\dot{\bar x}=R\bar x\).

c. Prove that \(P(t)\) is periodic with period \(T\).

d. Consider the LTV system from Problem 4. Find new coordinates \(\bar x(t)\) in which, according to part b), this system should become time-invariant. Confirm this fact by differentiating \(\bar x(t)\).

Problem 6

Prove that the variation-of-constants formula for linear time-varying control systems stated in class, \[ x(t) = \Phi(t,t_0)x_0 + \int \limits _{t_0} ^{t} \Phi \left(t, s \right) B(s) u(s)\, ds, \] satisfies the LTV controlled system's differential equation. Hint: Differentiate both sides. See also Class Notes, end of Section 3.7.
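Because the \(A(t)\) in Problem 4 is diagonal, each diagonal entry of \(\Phi(t,t_0)\) is a scalar exponential of the integral of the corresponding diagonal entry of \(A(t)\), e.g. \(\Phi_{11}(t,t_0)=e^{-(t-t_0)+\sin t-\sin t_0}\). A quick stdlib-only numerical check (a sketch, not part of the assignment) that these candidate entries satisfy the composition and identity properties of state transition matrices:

```python
import math

def phi11(t, t0):
    """(1,1) entry: x1' = (-1 + cos t) x1, so
    Phi11 = exp(integral of (-1 + cos s) ds from t0 to t)."""
    return math.exp(-(t - t0) + math.sin(t) - math.sin(t0))

def phi22(t, t0):
    """(2,2) entry: x2' = (-2 + cos t) x2."""
    return math.exp(-2 * (t - t0) + math.sin(t) - math.sin(t0))

t0, t1, t2 = 0.3, 1.1, 2.7
for phi in (phi11, phi22):
    # Composition property: Phi(t2, t0) = Phi(t2, t1) * Phi(t1, t0)
    assert math.isclose(phi(t2, t0), phi(t2, t1) * phi(t1, t0))
    # Identity property: Phi(t0, t0) = I
    assert math.isclose(phi(t0, t0), 1.0)
print("composition and identity properties hold")
```

The scalar check works because in one dimension \(\Phi(t,t_0)=\exp\int_{t_0}^{t}a(s)\,ds\), and the additivity of the integral over \([t_0,t_1]\cup[t_1,t_2]\) is exactly the composition property.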
EViews Help: @meansby

@meansby — By-Group Statistics

Mean of observations in a series for each specified group defined by distinct values.

Syntax: @meansby(x, y1[, y2, ..., yn][, s])

x: series
y1, ..., yn: series, alpha, or group
s: (optional) sample string or object

Return: series

Compute the mean of observations in x for group identifiers defined by distinct values of y1, ..., yn, using the current or specified workfile sample.

show @meansby(x, g1, g2)

produces a linked series of by-group means of the series x, where members of the same group have identical values for both g1 and g2.
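Outside EViews, the same computation (a per-group mean broadcast back to every observation, like the linked series that @meansby returns) can be sketched in plain Python; the sample data below are made up for illustration:

```python
from collections import defaultdict

def meansby(x, *groups):
    """Mean of x within each group defined by distinct tuples of group
    identifiers, broadcast back to a per-observation list, roughly what
    @meansby(x, g1, g2) produces in EViews."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for xi, *key in zip(x, *groups):
        k = tuple(key)
        sums[k] += xi
        counts[k] += 1
    # Broadcast each group's mean back to every observation in that group
    return [sums[k] / counts[k] for k in zip(*groups)]

x  = [1.0, 2.0, 3.0, 4.0]
g1 = ["a", "a", "b", "b"]
g2 = [1, 1, 1, 2]
print(meansby(x, g1, g2))  # → [1.5, 1.5, 3.0, 4.0]
```

Observations 1 and 2 share the group key ("a", 1), so both receive the group mean 1.5; the other two observations are singleton groups and receive their own values.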
How to calculate retirement savings in Python :: Coding Finance

Retirement problem

In the previous posts and examples we saw how saving at different ages/time periods can affect the amount one has in retirement. Now let's go up a notch in complexity. In this example we look at a similar problem but from another angle.

Suppose we have an individual, Jack, who is currently 55 years old and intends to retire at 60 (5 years to retirement). Jack expects to live only 10 years after his retirement. After retirement, his plan is to live in Thailand and travel around Southeast Asia. He has estimated that he can live comfortably on $30,000 per year for those 10 years. Jack expects to earn 8% returns on his investments. How much should Jack save each year before retirement?

What do we know about Jack?

• Age 55
• Retires in 5 years
• Lives 10 years in retirement
• Yearly cost in retirement: $30,000
• Expected return: 8% per year

To solve this problem, we will divide Jack's time horizon into two parts.

1. Time horizon 1 - Age 55 to 60
2. Time horizon 2 - Age 60 to 70

First, let's load the necessary library.

```python
import numpy as np  # np.pv and np.pmt were removed in NumPy 1.20; on modern
                    # NumPy, install numpy-financial and use npf.pv / npf.pmt
```

First we assume that Jack has already retired and calculate the present value of the $30,000 payments each year. These payments will be withdrawn at the beginning of each year.

```python
interest = 0.08
n1 = 5   # Time horizon 1
n2 = 10  # Time horizon 2
pmt_in_retirement = 30000

# when=1: withdrawals occur at the beginning of each period
retirement_amount = np.pv(rate=interest, nper=n2, pmt=pmt_in_retirement, when=1)
print(retirement_amount * -1)
## 217406.63732570293
```

So Jack will need about $217,406 to cover his expenses in retirement. Next we will calculate the amount he needs to save each year to accumulate $217,406.

```python
saving = np.pmt(rate=interest, nper=n1, pv=0, fv=retirement_amount, when=1)
print(saving)
## 34313.30055355311
```

So Jack needs to save $34,313 each year to have enough money for his retirement.
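As a sanity check, the two figures above also follow from the closed-form annuity-due formulas, independent of NumPy's financial functions; this sketch uses only plain Python:

```python
def pv_annuity_due(rate, nper, pmt):
    """Present value of nper beginning-of-period payments of pmt."""
    return pmt * (1 - (1 + rate) ** -nper) / rate * (1 + rate)

def pmt_for_fv_due(rate, nper, fv):
    """Beginning-of-period payment that accumulates to fv after nper periods."""
    return fv / (((1 + rate) ** nper - 1) / rate * (1 + rate))

# Value needed at retirement: 10 beginning-of-year withdrawals of $30,000 at 8%
retirement_amount = pv_annuity_due(0.08, 10, 30000)
print(retirement_amount)                           # ~217406.64

# Beginning-of-year saving over 5 years that grows to that amount
print(pmt_for_fv_due(0.08, 5, retirement_amount))  # ~34313.30
```

The trailing (1 + rate) factor in both formulas is what the `when=1` argument adds: each payment earns one extra period of interest because it occurs at the start of the period rather than the end.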
Standard C.2. Pedagogical Knowledge and Practices for Teaching Mathematics

Well-prepared beginning teachers of mathematics have foundations of pedagogical knowledge, effective and equitable mathematics teaching practices, and positive and productive dispositions toward teaching mathematics to support students' sense making, understanding, and reasoning.

Teaching mathematics entails not only knowing the mathematics but also knowing how to design and implement rich mathematics-learning experiences that advance students' mathematical knowledge and proficiencies. Effective teachers are skilled in their use of high-leverage mathematics-teaching practices and use those pedagogical practices to guide both their preparation and enactment of mathematics lessons. The development of these content-focused skills and abilities forms the core of work in the preparation of mathematics teachers for the upper elementary grades. In the following section, we elaborate on the knowledge and pedagogical practices specific to the teachers for the upper elementary grades.

Well-prepared beginning teachers of mathematics at the upper elementary level develop pedagogical knowledge and practices to cultivate students' mathematical proficiency, including such components as conceptual understanding, procedural fluency, problem-solving ability, and facility with the mathematical processes essential for learning. [Elaboration of C.2.2 and C.2.3]

Central to any efforts to deliver high-quality instruction to each and every student is the development of pedagogical knowledge (Grossman, 1990; Shulman, 1986). Whereas Shulman pointed to two categorical groupings of pedagogical knowledge, general pedagogical knowledge and pedagogical content knowledge (PCK), both Ball et al.
(2008) and Sowder (2007) made more specific connections to the role of PCK when one considers the unique components and elements essential to teaching mathematics. The well-prepared beginning teacher at the upper elementary level can engage students with the mathematics underlying the standard algorithms that are taught at these grades, including providing effective tools and ways to have students generate procedures themselves through multiple experiences. They help upper elementary students learn to select appropriate tools and use them to engage in mathematical practices such as seeing patterns and structure. Well-prepared beginners also are fully cognizant of the common errors and naive conceptions that emerge from not only these procedures with whole numbers, fractions, and decimals but also from the larger concepts in which they are embedded. The deeper the mathematical background of the well-prepared beginning teacher, the greater her or his potential for showing pedagogical sophistication (Holm & Kajander, 2012). Holm and Kajander suggested that when dividing a whole number by a fraction, the teacher should be able to choose an appropriate problem to clearly illustrate the repeated-subtraction or measurement-division approach, the strongest visual model, the most salient way of discussing the linkages to prior knowledge about division of whole numbers and a discussion of how to interpret the result. The representations, notation, strategies, and language that are used in the classroom drive upper elementary grades students’ understanding of procedures and concepts. Well-prepared beginning teachers align with conventions for proper notation, (e.g., distinguishing between a multiplication symbol and the variable x) and precise language (e.g., using the verb regroup rather than borrow) so that the message to students across all three grades is consistent, providing a smooth path toward building on prior knowledge in meaningful ways that last. 
They recognize that rather than teaching rules or shortcuts (e.g., just append a zero at the end of a number when multiplying by 10) that are taught but are applicable for only a short time (or even not very well at all), they can use more effective instructional strategies to support students in identifying patterns and identifying constraints or boundaries of the usage of those rules when they emerge. Discussing boundaries is particularly important in upper elementary grades, where students see that rules they may have learned about whole numbers do not apply to fractions and decimals (e.g., the longer the number, the larger the number) (Karp, Bush, & Dougherty, 2014). Well-prepared beginners also understand that short cuts such as searching for key words are not effective and that, instead, word problems require attention to reading-comprehension strategies. Vignette 5.2 describes a beginning teacher's work with students who were struggling with solving word problems. Ms. Morgan was working with a small group of third graders who were having trouble solving multiplication word problems. She asked the students to meet to discuss their strategy use. Nela was asked how she decided to use addition to solve the problem “There are three baskets of apples on the table. Each basket contains six apples. How many apples are there in all?” Nela responded that she saw the words “in all,” and that meant that the numbers listed in the problem should be added. She arrived at an answer of 9. For the problem “Each student was given an equal share of stickers. If there are 25 stickers and 4 students, how many stickers will each student receive?” Rory said, “You use the word “each,” and then you know to multiply—so they each get 100 stickers.” At this point, Ms. Morgan realized that both students were describing the use of an ineffective key-words strategy, possibly learned in previous grades. 
These rules or shortcuts that may have started in the primary grades were now causing serious issues. As in this case, sometimes students are mistakenly encouraged to skim through a word problem and locate the key words as a strategy to quickly choose an operation to solve the problem and then use the number(s) from the problem to carry out that operation. Ms. Morgan had seen, in other classrooms, lists of key words that linked particular words with corresponding operations, for example, “each = multiply,” and so on. But as Nela and Rory demonstrated, these words frequently do not accurately indicate the operation that corresponds with the problem. (Also, the key-word strategy cannot be used for problems that have no key words or with multistep problems). Ms. Morgan decided to show the students three word problems with the same key words but that would be successfully solved using different operations to illustrate the pitfalls and limitations of the key-words approach. She then transitioned to an annotation approach in which one student comes to the document camera and uses the suggestions of other students to mark the word problem with highlighting and written comments to identify the important information. She next moved the group to acting out the problems using the data from the annotations with paper plates and counters. The discussion then centered on the actions and how those actions relate to the meaning of the operation selected and how problems can be sorted by their structures. Well-prepared beginning teachers focus on sense making and reasoning when they prepare students to grasp the full meaning of a problem by comprehending the entire situation and trying to use structures, such as schema, properties of the operations, and representations to come to a reasoned solution. Also, well-prepared beginners support the learning of each and every student. 
This approach is particularly important in Multi-Tiered Systems of Support (MTSS) such as Response to Intervention (RtI), because students are usually identified for formal special education services starting in the third grade. This process requires the careful assessment of students to pinpoint their strengths and gaps so that instruction and interventions can be targeted, whether for students with disabilities or for students who may be identified as gifted with a high interest in or a talent for mathematics. With reference to emerging multilingual learners, the well-prepared beginning teacher incorporates the appropriate linguistic practices and strategies needed, including home-language connections and relevant academic language and discourse practices, to support students when they move to more complex mathematics vocabulary. Instruction builds on relevant contexts and on students’ lived experiences in and out of the school setting. All this knowledge about, and emphasis on, teaching individual learners precludes the use of curriculum interventions via generic computer programs, basic worksheets, or Internet searches for merely attractive or fun ideas that do not support the development of significant mathematical thinking.

Although all well-prepared beginning teachers strive to align mathematical concepts across the grades, this alignment is particularly crucial for teachers of upper elementary grades, who bridge work in primary grades and later work in such courses as Algebra I. A pressing challenge is teaching in ways that support the development of mathematical ideas over time while resisting the practice of teaching only the mathematics that appears in the standards for one’s own grade level. For example, well-prepared beginning teachers of Grade 5 invest in knowing middle level content so that they are positioned to support students’ readiness even when some of those ideas are not well represented in the fifth-grade standards.
The idea of continuity of development certainly applies to teaching across upper elementary grades. For example, the responsibility for the use of number lines is represented most strongly in third-grade standards. Well-prepared beginning teachers in Grades 4 and 5 build on the development and use of the number line even though it is not specifically articulated in the standards for their grade levels. In sum, well-prepared beginners have strategic understandings of the trajectory of the representations used and take responsibility for meeting grade-level standards and reinforcing what came before (e.g., the meaning of the equal sign) and what is still to come in later grades (e.g., fifth-graders' more sophisticated use of vertical and horizontal number lines for locating points on a coordinate graph). Finally, teachers of the upper elementary grades likely have some students who need help on early childhood content and some who are ready to learn middle level content. Well-prepared beginning teachers of mathematics at the upper elementary level effectively use technology tools, physical models, and mathematical representations to build student understanding of the topics at these grade levels. [Elaboration of C.1.6 and C.2.3] Well-prepared beginners know when to use different manipulatives and various technologies to support students in developing understanding of mathematical concepts and to create opportunities for collective consideration of mathematical ideas such as multiplication, fractions, area, volume, and coordinate geometry. They judiciously select particular representations on the basis of mathematical considerations, knowledge of their students, and other relevant factors. For example, to develop deep understandings of fractions, students must flexibly use three representations: area, linear measurement, and set models. 
The set model is the most complex of the three representations, so well-prepared beginners begin fractions modeling using area models and linear measurement models that connect to the number line before using the set model. Furthermore, they flexibly and resourcefully think about what representations are available in their current classrooms, schools, and wider communities; they advocate for resources to enhance their abilities to convey mathematical ideas for students to explore and discuss. This consideration of resources might include helping students use calculators responsibly, giving them access to operations with large numbers and decimals that would be extraordinarily cumbersome to calculate by hand. Well-prepared beginners also understand that meaning is not inherent in a tool or representation but that it needs to be developed through a combination of exploration, carefully orchestrated experiences, and explicit dialog focused on meaning-making (Ball, 1992). As a result, they support students in developing connections among these representations, attending to links between and among equations, situations, manipulatives, tables, and graphs, using various tools including technology.

Well-prepared beginning teachers of mathematics at the upper elementary level learn to use both formal and informal assessment tools and strategies to gather evidence of students’ mathematical thinking in ways appropriate for young learners, such as the use of observations, interviews, questioning, paper-and-pencil and computer-based tasks, and digital records, including audio and video. [Elaboration of C.3.1] Well-prepared beginners recognize the many valued mathematical-learning outcomes that need to be assessed. They do not focus on particular outcomes to the detriment of gaining insights into others.
For instance, in a unit on geometric measurement, they assess more than students’ application of learned formulas; they also examine outcomes such as students’ understanding of the concept of area, ability to use mathematical tools such as protractors, and attention to precision when they measure the volume of a prism. They seek to assess valued learning outcomes such as engagement in mathematical practices and mathematical dispositions, even when routes to assessing them may not be straightforward. Well-prepared beginners utilize multiple ways to assess learning outcomes. For instance, when focusing on students’ fluency with multiplication facts, they know that timed tests are not the only, first, or necessarily best approach. They recognize that fluency has several components and that timed tests do not support assessing strategy use, efficiency, or flexibility. They recognize that timed tests may give them a sense of students’ accuracy, but primarily accuracy under time pressure. Well-prepared beginning teachers are fully aware of the negative outcomes of timed tests, which include movement away from number sense and mental computation and toward planting a seed for a negative attitude toward the study of mathematics. Instead of relying on timed tests, the well-prepared beginner appreciates the value of looking at individual performance on assessments to pinpoint the strengths of students who are struggling (e.g., two or more grades below their peers). Using diagnostic interviews and other individualized assessments of students’ thinking, they can find the gaps in foundational knowledge from previous grades as well as position instruction near the point at which students are strong in their understanding. In this way the movement forward is not in fits and leaps (as it would be with a coarser measure of student performance in a large-scale assessment) but targeted to specific needs and built on sound footing from the learner’s perspective.
Paper IPM / Astronomy / 15595
School of Astronomy
Title: Quasinormal Modes of a Black Hole with Quadrupole Moment
Author(s):
1. A. Allahyari
2. H. Firouzjahi
3. B. Mashhoon
Status: Published
Journal: Phys. Rev. D
Vol.: 99
Year: 2019
Supported by: IPM

We analytically determine the quasinormal mode (QNM) frequencies of a black hole with quadrupole moment in the eikonal limit using the light-ring method. The generalized black holes that are discussed in this work possess arbitrary quadrupole and higher mass moments in addition to mass and angular momentum. Static collapsed configurations with mass and quadrupole moment are treated in detail, and the QNM frequencies associated with two such configurations are evaluated to linear order in the quadrupole moment. Furthermore, we touch upon the treatment of rotating systems. In particular, the generalized black hole that we consider for our extensive QNM calculations is a completely collapsed configuration whose exterior gravitational field can be described by the Hartle-Thorne spacetime [Astrophys. J. 153, 807-834 (1968), doi:10.1086/149707]. This collapsed system as well as its QNMs is characterized by mass M, quadrupole moment Q and angular momentum J, where the latter two parameters are treated to first and second orders of approximation, respectively. When the quadrupole moment is set equal to the relativistic quadrupole moment of the corresponding Kerr black hole, J²/(Mc²), the Hartle-Thorne QNMs reduce to those of the Kerr black hole to second order in angular momentum J. Using ringdown frequencies, one cannot observationally distinguish a generalized Hartle-Thorne black hole with arbitrary quadrupole moment from a Kerr black hole provided the dimensionless parameter given by |QMc² - J²|c²/(G²M⁴) is sufficiently small compared to unity.
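The closing claim can be checked numerically in the Kerr case: when Q takes the Kerr value J²/(Mc²), the dimensionless parameter vanishes up to floating-point rounding. The mass and spin chosen below are illustrative assumptions, not values from the paper.

```python
# Sanity check (illustrative, not from the paper): for the Kerr quadrupole
# moment Q = J^2/(M c^2), the parameter |Q M c^2 - J^2| c^2 / (G^2 M^4)
# is zero up to floating-point rounding. SI units throughout.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

M = 10 * M_sun                 # a 10-solar-mass collapsed object (assumed)
J = 0.5 * G * M**2 / c         # spin giving dimensionless a* = 0.5 (assumed)

Q_kerr = J**2 / (M * c**2)     # relativistic quadrupole moment of Kerr
eps = abs(Q_kerr * M * c**2 - J**2) * c**2 / (G**2 * M**4)
print(eps < 1e-10)  # parameter is negligible: indistinguishable from Kerr
```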
Rational Numbers On Number Line Class 8 Worksheet

Rational Numbers On Number Line Class 8 Worksheets serve as foundational tools in mathematics, offering a structured yet practical platform for learners to explore and grasp mathematical ideas. These worksheets provide an organized approach to understanding numbers, nurturing a strong foundation on which mathematical proficiency flourishes. From the simplest counting exercises to the intricacies of advanced calculations, they cater to students of varied ages and skill levels.

Introducing the Essence of Rational Numbers On Number Line Class 8 Worksheet

Sample answer-key facts from such worksheets include: there are countless rational numbers less than 2; 1 is the multiplicative identity; the reciprocal of 4/26 is 26/4. Topics covered include representation of rational numbers on the number line, comparison of rational numbers, and operations such as addition, subtraction, multiplication, and division; in Class 8, students recap these operations, study their properties, and learn to find a rational number between any two given rational numbers.

At their core, these worksheets are vehicles for conceptual understanding. They encapsulate a myriad of mathematical principles, guiding students through the labyrinth of numbers with a collection of engaging and deliberate exercises. They transcend the borders of rote learning, encouraging active engagement and promoting an intuitive understanding of numerical relationships.
Nurturing Number Sense and Reasoning

CBSE Class 8 Mathematics Worksheet: Rational Numbers (PDF). In one sixth-grade math worksheet, students use their number sense to plot a variety of rational numbers on number lines; for an extra challenge, learners can try plotting several rational numbers on the same number line. [Sample Class 8 items: true/false questions on whether two rational numbers lie on opposite sides of zero on the number line, and multiple-choice questions on which property allows a sum to be regrouped.]

The heart of these worksheets lies in cultivating number sense: a deep comprehension of what numbers mean and how they relate. They encourage exploration, inviting students to dissect arithmetic operations, discern patterns, and unlock the structure of sequences. Through thought-provoking problems and logical puzzles, these worksheets become gateways to refining reasoning skills, nurturing the analytical minds of budding mathematicians.

From Theory to Real-World Application

[Sample worksheet items: multiple-choice exercises on comparing, ordering, and simplifying rational numbers.] Some important facts about rational numbers for Class 8: rational numbers are closed under the operations of addition, subtraction, and multiplication, and the operations of addition and multiplication are commutative and associative for rational numbers.

Rational Numbers On Number Line Class 8 Worksheets act as avenues bridging theoretical abstractions with the palpable realities of daily life.
By embedding realistic scenarios in mathematical exercises, learners witness the relevance of numbers in their surroundings. From budgeting and measurement conversions to interpreting statistical data, these worksheets empower students to apply their mathematical skills beyond the confines of the classroom.

Varied Tools and Techniques

Flexibility is inherent in these worksheets, which draw on an arsenal of instructional tools to address diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources help in visualizing abstract concepts. This varied approach ensures inclusivity, accommodating students with different preferences, strengths, and cognitive styles.

Inclusivity and Cultural Relevance

In an increasingly diverse world, such worksheets embrace inclusivity. They cross cultural borders, incorporating examples and problems that resonate with learners from varied backgrounds. By integrating culturally relevant contexts, these worksheets foster an environment where every learner feels represented and valued, enriching their connection with mathematics.

Crafting a Path to Mathematical Mastery

These worksheets chart a course toward mathematical fluency. They instill perseverance, critical thinking, and problem-solving skills, essential not only in mathematics but in many facets of life. They equip learners to navigate the intricate terrain of numbers, nurturing a deep appreciation for the elegance and logic inherent in mathematics.

Welcoming the Future of Education

In an age marked by technological advancement, these worksheets adapt seamlessly to digital platforms. Interactive interfaces and electronic resources augment traditional learning, offering immersive experiences that transcend spatial and temporal boundaries.
This blend of traditional methods with technological innovation promises a more dynamic and engaging learning environment.

Final Thought: Embracing the Magic of Numbers

Rational Numbers On Number Line Class 8 Worksheets illustrate the magic inherent in mathematics: a journey of exploration, discovery, and mastery. They go beyond traditional pedagogy, serving as catalysts for sparking curiosity and inquiry. Through them, learners begin an odyssey, unlocking the enigmatic world of numbers, one problem and one solution at a time.

Rational Number Worksheet (PBS LearningMedia): Use your best judgment to place rational numbers on the number line provided. First convert any fractions to decimals; place decimals above the line and fractions below it. Then write a rational number greater than 1 but less than 2.
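The worksheet's convert-then-place procedure (turn each fraction into a decimal, then order the values on the number line) is easy to automate. The sample values below are illustrative, not taken from any particular worksheet.

```python
from fractions import Fraction

# Convert each fraction to a decimal, then sort: the sorted order is the
# left-to-right placement on the number line. Sample values are made up.
values = [Fraction(1, 2), Fraction(-3, 4), Fraction(11, 8),
          Fraction(-1, 3), Fraction(7, 4)]
placed = sorted((float(v), v) for v in values)
for dec, frac in placed:
    print(f"{frac} -> {dec:.3f}")   # e.g. -3/4 -> -0.750 comes first
```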
Girard’s paradox [1] is a proof that type is not a type, an adaptation of set theory’s Burali-Forti paradox into type theory. The proof works by observing that the collection of all well-founded posets looks like a poset itself, using strict embedding as the order. That poset of well-founded posets, let’s call it P, would need to reside one level higher in the universe hierarchy, but if the universe hierarchy is collapsed, P is a poset at the same level. Moreover, (if the universe hierarchy is collapsed) we can show that P is well-founded. However, P embeds into itself, which is a contradiction. We express the proposition that type is a type in Istari by saying that U (1 + i) <: U i, for some level i. From that it would follow that U i : U i. However, unlike the direct statement, the statement using subtyping is negatable. girard_paradox : forall (i : level) . not (U (1 + i) <: U i) [1] Jean-Yves Girard. Une extension de l’interprétation de Gödel à l’analyse, et son application à l’élimination de coupures dans l’analyse et la théorie des types, 1972.
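The same universe-level bookkeeping that blocks the paradox appears in other proof assistants. A minimal Lean 4 sketch (illustrative only, not part of the Istari development):

```lean
-- Lean 4's universe hierarchy mirrors Istari's `U i`:
#check Type 0      -- Type 0 : Type 1  (each universe lives one level up)
universe u
#check Type u      -- Type u : Type (u + 1)
-- An ascription like `(Type 0 : Type 0)` is rejected by the elaborator;
-- collapsing the hierarchy this way would readmit Girard's paradox.
```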
Get Median of Each Group in Pandas Groupby - Data Science Parichay

Pandas is a versatile data manipulation library in Python. It allows you to perform a number of different operations to clean, modify, and/or extract insights from the underlying tabular data. You can use pandas groupby to group the underlying data on one or more columns and estimate useful statistics like count, mean, median, etc. In this tutorial, we will look at how to compute the median of each group in a pandas groupby.

Pandas Groupby Median

To get the median of each group, you can directly apply the pandas median() function to the selected columns from the result of pandas groupby. The following is a step-by-step guide of what you need to do.

1. Group the dataframe on the column(s) you want.
2. Select the field(s) for which you want to estimate the median.
3. Apply the pandas median() function directly or pass 'median' to the agg() function.

The following is the syntax:

# groupby column Col1 and estimate the median of column Col2
df.groupby('Col1')['Col2'].median()
# alternatively, you can pass 'median' to the agg() function
df.groupby('Col1')['Col2'].agg('median')

Let's look at the usage of the above syntax with the help of some examples. First, we will create a sample dataframe that we will be using throughout this tutorial.

import pandas as pd

# create a dataframe of car models by two companies
df = pd.DataFrame({
    'Company': ['A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'Model': ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'B4'],
    'Year': [2019, 2020, 2021, 2018, 2019, 2020, 2021],
    'Transmission': ['Manual', 'Automatic', 'Automatic', 'Manual', 'Automatic', 'Automatic', 'Manual'],
    'EngineSize': [1.4, 2.0, 1.4, 1.5, 2.0, 1.5, 1.5],
    'MPG': [55.4, 67.3, 58.9, 52.3, 64.2, 68.9, 83.1]
})

# display the dataframe
print(df)

Here we created a dataframe storing the specifications of the car models by two different companies. The "EngineSize" column is the size of the engine in litres and "MPG" is the mileage of the car in miles per gallon.

1. Groupby Median of a single column

Let's compute the median mileage of the cars from the two companies. For this, we need to group the data on "Company" and then calculate the median of the "MPG" column.

# median MPG for each Company
df.groupby('Company')['MPG'].median()

Company
A    58.90
B    66.55
Name: MPG, dtype: float64

You can see that we get the median "MPG" for each "Company" in df. It shows that, at the median, the mileage of cars from Company B is better than that of cars from Company A. Alternatively, you can also use the pandas agg() function on the resulting groups.

# median MPG for each Company
df.groupby('Company')['MPG'].agg('median')

Company
A    58.90
B    66.55
Name: MPG, dtype: float64

We get the same results as above. You can also group the above data by multiple columns. For example, let's group the data on "Company" and "Transmission" to get the median "MPG" for each group.

# median MPG for each Company and Transmission
df.groupby(['Company', 'Transmission'])['MPG'].median()

Company  Transmission
A        Automatic    63.10
         Manual       55.40
B        Automatic    66.55
         Manual       67.70
Name: MPG, dtype: float64

2. Groupby Median of multiple columns

You can also get the median of multiple columns at a time for each group resulting from the groupby. For example, let's get the median "MPG" and "EngineSize" for each "Company" in df.

# median MPG and EngineSize for each Company
df.groupby('Company')[['MPG', 'EngineSize']].median()

Here we selected the columns that we wanted to compute the median on from the resulting groupby object and then applied the pandas median() function. Let's now do the same thing with the pandas agg() function.

# median MPG and EngineSize for each Company
df.groupby('Company')[['MPG', 'EngineSize']].agg('median')

We get the same results as above. With this, we come to the end of this tutorial. The code examples and results presented in this tutorial have been implemented in a Jupyter Notebook with a Python (version 3.8.3) kernel having pandas version 1.0.5.
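The steps above can be condensed into one self-contained script. It also shows passing a list to agg() (a standard pandas feature) to compute several statistics per group in a single call; the reduced column set here is only for brevity.

```python
import pandas as pd

# Recompute the tutorial's medians, then extend agg() with a list of
# statistics to get several summaries per group at once.
df = pd.DataFrame({
    "Company": ["A", "A", "A", "B", "B", "B", "B"],
    "EngineSize": [1.4, 2.0, 1.4, 1.5, 2.0, 1.5, 1.5],
    "MPG": [55.4, 67.3, 58.9, 52.3, 64.2, 68.9, 83.1],
})

# single statistic per group, as in the tutorial
mpg_median = df.groupby("Company")["MPG"].median()
print(mpg_median)   # A -> 58.90, B -> 66.55, matching the tutorial

# a list of statistics per group; columns become a (column, stat) MultiIndex
summary = df.groupby("Company")[["MPG", "EngineSize"]].agg(["median", "count"])
print(summary)
```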
Wiles’ Proof of Fermat’s Last Theorem viewed as a Glass Bead Game In this piece — written in 2003 or perhaps earlier — I offer an exploration of Andrew Wiles’ proof as described in the book, Fermat’s Last Theorem by Simon Singh, in the light of the Glass Bead Game posited by Hermann Hesse in his Nobel-winning novel Das Glasperlenspiel, together with my own suggestion that the richest “move” in such a game would consist of a rich isomorphism between rich chunks of knowledge in widely separated disciplines… — essentially, that’s a Sembl move. The mathematician Pierre de Fermat scribbled a note in the margin of his copy of Diophantus‘ Arithmetica in 1637 or thereabouts claiming that there were no solutions to the equation x^n + y^n = z^n where n is greater than 2, along with a note saying “I have a marvellous demonstration of this proposition which this margin is too narrow to contain”. Mathematicians as great as Leonhard Euler strove to prove or disprove the theorem without success for three and a half centuries: Consider the leaps in understanding in physics, chemistry, biology, medicine and engineering that have occurred since the seventeenth century. We have progressed from ‘humours’ in medicine to gene-splicing, we have identified the fundamental atomic particles, and we have placed men on the moon, but in number theory Fermat’s Last Theorem remained inviolate. [Singh, p. xi] Andrew Wiles‘ proof of Fermat’s Last Theorem appeared in the May 1995 issue of the “Annals of Mathematics”, and now there’s this fascinating book by Simon Singh, which talks the layman through the process by which Wiles arrived at it. 
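A toy brute-force search (a sketch, and emphatically not a proof) illustrates what the theorem asserts: over a small range, no sum of two cubes is a cube, while the n = 2 case has plenty of Pythagorean solutions.

```python
# Exhaustive search for x^n + y^n = z^n with 1 <= x <= y <= bound.
# This only probes a finite range; Fermat's Last Theorem (proved by Wiles)
# says no solutions exist for any n > 2.
def counterexamples(n, bound):
    # z can be at most 2*bound, since z^n = x^n + y^n <= 2*bound^n <= (2*bound)^n
    powers = {z**n: z for z in range(1, 2 * bound + 1)}
    hits = []
    for x in range(1, bound + 1):
        for y in range(x, bound + 1):
            z = powers.get(x**n + y**n)
            if z is not None:
                hits.append((x, y, z))
    return hits

print(counterexamples(3, 50))       # [] -- no cube is a sum of two cubes here
print(counterexamples(2, 50)[:3])   # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
```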
From a mathematical point of view, what is interesting about Wiles’ proof — beyond the fact that it lays to rest the great mathematical puzzle contained in Fermat’s marginal note — is that it consisted in the proof of the “Taniyama-Shimura conjecture”, a brilliant mathematical guess, in effect, which suggested that there was an exact correspondence between two areas at “opposite ends” of mathematics, which nobody would otherwise suppose had anything to do with one another: “modular forms” and “elliptic equations”. Yutaka Taniyama made this suggestion in a rather roundabout way at a symposium in Tokyo in 1955, and his work was continued after his suicide by his friend and colleague Goro Shimura. Barry Mazur of Harvard describes the conjecture thus: It was a wonderful conjecture — the surmise that every elliptic equation is associated with a modular form — but to begin with it was ignored because it was so ahead of its time. When it was first proposed it was not taken up because it was so astounding. On the one hand you have the elliptic world, and on the other you have the modular world. Both these branches of mathematics have been studied intensively but separately. Mathematicians studying elliptic equations might not be well versed in things modular, and conversely. Then along comes the Taniyama-Shimura conjecture which is the grand surmise that there’s a bridge between these two completely different worlds. Mathematicians love to build bridges. [211-12] In 1984, Gerhard Frey showed a strong connection between Fermat’s Theorem and the Taniyama-Shimura conjecture, and Ken Ribet added a crucial piece to the puzzle, finally proving that *if* the conjecture could be proved, that would be enough to prove Fermat’s Theorem — but the conjecture remained a conjecture, and Fermat’s Theorem remained unproven. Until Wiles proved the Taniyama-Shimura conjecture, and thus Fermat’s Last Theorem as well.
Wiles, in other words, had not only solved Fermat’s puzzle, but along the way had definitively linked two widely separate areas of mathematics. The value of mathematical bridges is enormous. They enable communities of mathematicians who have been living on separate islands to exchange ideas and explore each others’ creations. Mathematics consists of islands of knowledge in a sea of ignorance. For example, there is an island occupied by geometers who study shape and form, and then there is an island of probability where mathematicians discuss risk and chance. There are dozens of such islands, each one with its own unique language… Barry Mazur thinks of the Taniyama-Shimura conjecture as a translating device similar to the Rosetta stone… [212] And here’s where the Glass Bead Game comes in. Not surprisingly, Wiles’ proof of the Taniyama-Shimura conjecture had a profound impact on mathematics. In Ken Ribet’s words: The landscape is different, in that you know that all elliptic equations are modular and therefore when you prove a theorem for elliptic equations you’re also attacking modular forms and vice versa. [305] In Mazur’s: It’s as if you know one language and this Rosetta stone is going to give you an intense understanding of the other language… But the Taniyama-Shimura conjecture is a Rosetta stone with a certain magical power. The conjecture has the very pleasant feature that simple intuitions in the modular world translate into deep truths in the elliptic world, and conversely. What’s more, very profound problems in the elliptic world can get solved sometimes by translating them using this Rosetta stone into the modular world, and discovering that we have the insights and tools in the modular world to treat the translated problem. Back in the elliptic world we would have been at a loss. 
[212-13] And in Singh’s: Via the Taniyama-Shimura conjecture Wiles had unified the elliptic and modular worlds, and in so doing provided mathematics with a short cut to many other proofs — problems in one domain could be solved by analogy with problems in the parallel domain. [305] The analogy, in other words, illuminates both the fields which it joins. And this kind of deep analogical thinking across disciplinary boundaries lies at the very heart of the Bead Game, and is a hallmark of creativity in general: Relationships between apparently different subjects are as creatively important in mathematics as they are in any discipline. The relationship hints at some underlying truth which enriches both subjects. For instance, originally scientists had studied electricity and magnetism as two completely separate phenomena. Then, in the nineteenth century, theorists and experimentalists realised that electricity and magnetism were intimately related. This resulted in a deeper understanding of both of them. Electric currents generate magnetic fields, and magnets can induce electricity in wires passing close to them. This led to the invention of dynamos and electric motors, and ultimately the discovery that light itself is the result of magnetic and electric fields oscillating in harmony. [204-5] But proving the analogy which Taniyama and Shimura conjectured between the two fields of modular forms and elliptic equations involved Wiles in a very wide ranging process: During Wiles’s eight-year ordeal he had brought together virtually all the breakthroughs in twentieth-century number theory and incorporated them into one almighty proof. He had created completely new mathematical techniques and combined them with traditional ones in ways that had never been considered possible. In doing so he had opened up new lines of attack on a whole host of other problems. 
According to Ken Ribet the proof is a perfect synthesis of modern mathematics and an inspiration for the future: ‘I think that if you were on a desert island and you had only this manuscript then you would have a lot of food for thought. You would see all of the current ideas of number theory. You turn to a page and there’s a brief appearance of some fundamental theorem by Deligne and then you turn to another page and in some incidental way there’s a theorem by Hellegouarch — all of these things are just called into play and used for a moment before going on to the next idea.’ [304]

Wiles’ work, in other words, is not only a rigorous analogical bridge between two distant branches of mathematics, but also a *symphonic* work.

Looking to the future, Wiles’ work can be seen as a first major contribution — and booster — to Robert Langlands’ proposal for a grand unified scheme which will embrace all of mathematics by means of other “bridging” conjectures and proofs…

During the 1960s Robert Langlands, at the Institute for Advanced Study, Princeton, was struck by the potency of the Taniyama-Shimura conjecture. Even though the conjecture had not been proved, Langlands believed it was just one element of a much grander scheme of unification. He was confident that there were links between all the main mathematical topics and began to look for these unifications. Within a few years a number of links began to emerge. … Langlands’ dream was to see each of these conjectures proved one by one, leading to a grand unified mathematics.

And that’s about as far as my layman’s brain can go…

2 replies

R.A.D.Piyadasa says:

Now it is time to look for simpler proofs of Fermat’s last theorem than that of Wiles. Simpler correct proofs are possible according to Harvey Friedman’s grand conjecture and so on. “A simple and short analytical proof of Fermat’s last theorem” can be read on the internet.
Maciej Marosz says:

X^n + Y^n = Z^n, X, Y, Z and n are Natural Positive (N+) and n > 2

Marosz’s conditions: X = [2^(n/n)], Y = [2^n/n], Z = [2^(n+1)/n] and n = 3

n = 3 > 2 [ok]
2^(n/n) = 2, it is (N+) [ok]
2^((n+1)/n) = 2^(4/3) = 16^(1/3) = 4^(1/2) = 2, it is N+ [ok]

Fermat and Marosz’s conditions, basic equation: [2^(n/n)]^n + [2^n/n]^n = [2^(n+1)/n]^n
[2^(n/n)]^n = 2^[(n/n)*n] = 2^n
2^n + 2^n = 2*(2^n), n = 3
2*(2^3) = 16
[2^((n+1)/n)]^n = [2^(4/3)]^3 = 2^4 = 16

Author, Inventor, Engineer (I’m only 32 years old). Right now I live in a ghetto (small Polish town, 25% of people without a job). I wait… can someone from the universities support my art? My patents and vision (design): Your city can fly! http://tesla4.blogspot.com/ Physics: a new type of compass (I made it at home).
Prim’s Algorithm And Example

In the field of computer science, Prim’s algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph; it is a good example of the greedy approach to the minimum spanning tree problem.

To see the greedy choice in action, suppose vertex A has been chosen. Now take a look at the edges connecting our chosen vertex (A) to unchosen vertices: the edge from A to B of weight 1, and the edge from A to C of weight 2. Prim’s algorithm takes the cheaper edge, A-B.

A graph can have many spanning trees. By Cayley’s formula, a complete graph on n vertices has n^(n-2) spanning trees; in the example addressed above, n is 3, hence 3^(3-2) = 3 spanning trees are possible. Prim’s algorithm finds one of minimum total weight.

Theorem: Prim’s algorithm finds a minimum spanning tree. The proof is by mathematical induction on the number of edges in T, using the MST Lemma.

That cost tables can be used makes the algorithm more suitable for automation than Kruskal’s algorithm. Prim’s and Kruskal’s algorithms are two notable algorithms which can be used to find a minimum-weight subset of edges connecting all nodes of a weighted undirected graph. Prim’s algorithm becomes much easier to understand and implement with the right approach and data structures.
Example of Prim’s algorithm:

1. Start with a weighted graph and choose a vertex.
2. Choose the shortest edge from this vertex and add it.
3. Choose the nearest vertex not yet in the solution.
4. Choose the nearest edge not yet in the solution; if there are multiple choices, choose one at random.
5. Repeat until you have a spanning tree.

In computer science, Prim’s (also known as Jarník’s) algorithm is a greedy algorithm that finds a minimum spanning tree for a connected weighted undirected graph. It finds a subset of the edges that forms a tree including every vertex, where the total weight of all the edges in the tree is minimized. The algorithm is directly based on the MST (minimum spanning tree) property: it builds the MST one vertex at a time, from an arbitrary starting vertex, and at each step it makes the most cost-effective choice. Prim’s algorithm is therefore helpful when dealing with dense graphs that have lots of edges, for example when the initial graph is given as an adjacency cost matrix (cost[1:n, 1:n] for an n-vertex graph).

Proof: Let G = (V, E) be a weighted, connected graph, and let T be the edge set that is grown in Prim’s algorithm; by induction on the number of edges, using the MST Lemma, T is always contained in some minimum spanning tree. It is an excellent example of a greedy algorithm.
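The steps above can be sketched in Python using a min-heap of candidate edges. This is a minimal illustration, not code from any assignment or book referenced here; `prim_mst` and the adjacency-list format are names chosen for this sketch.

```python
import heapq

def prim_mst(adj, start):
    """Grow a minimum spanning tree from `start` using a min-heap of
    candidate edges. `adj` maps vertex -> list of (neighbor, weight)."""
    visited = {start}
    frontier = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(frontier)
    total, tree_edges = 0, []
    while frontier and len(visited) < len(adj):
        w, u, v = heapq.heappop(frontier)   # cheapest edge leaving the tree
        if v in visited:
            continue                        # stale entry: v was reached earlier
        visited.add(v)                      # greedily take the crossing edge
        total += w
        tree_edges.append((u, v, w))
        for x, wx in adj[v]:
            if x not in visited:
                heapq.heappush(frontier, (wx, v, x))
    return total, tree_edges

# The small example from the text: edge A-B of weight 1, A-C of weight 2,
# plus a heavier B-C edge of weight 3 that the algorithm never needs.
graph = {
    "A": [("B", 1), ("C", 2)],
    "B": [("A", 1), ("C", 3)],
    "C": [("A", 2), ("B", 3)],
}
weight, edges = prim_mst(graph, "A")   # weight == 3
```

The "choose one at random" rule for ties is handled implicitly here: the heap breaks ties by vertex name, which is one valid tie-break among many.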
The time complexity of this algorithm has also been discussed: the classic array-based version is an O(n^2) algorithm, and we have seen how that bound is achieved. Kruskal’s algorithm likewise uses the greedy approach to find a minimum spanning tree, and we now understand that one graph can have more than one spanning tree. If you read the theorem and the proof carefully, you will notice that the choice of a cut (and hence the corresponding light edge) in each iteration is immaterial. As a sample C++ assignment, one can implement Prim’s algorithm to find a minimum spanning tree, for example following the presentation in Introduction to Algorithms; the code comes out very similar to a Dijkstra’s-algorithm implementation, with a min-heap choosing the next edge as the tree is formed.
As discussed in the previous post, in Prim’s algorithm two sets are maintained: one contains the vertices already included in the MST, the other the vertices not yet included. In every iteration we consider the minimum-weight edge among the edges that connect the two sets, starting from a single vertex and adding edges until we finally get a minimum cost tree; for a graph with n nodes the tree has n - 1 edges, and each step adds an edge (x, y) where y is an as-yet unreached node, which thereby becomes reached. Note that Prim’s algorithm doesn’t allow us much control over the chosen edges when multiple edges with the same weight occur, so different tie-breaks can yield different minimum spanning trees of the same (minimal) total weight.

Prim’s algorithm is also suitable for use on distance tables, or the equivalent for the problem; this is useful for large problems where drawing the network diagram would be hard or time-consuming. Considering roads as a graph, a road network is a natural instance of the minimum spanning tree problem. Whereas a depth-first traversal uses a stack as its data structure to maintain the list of open nodes and a breadth-first traversal uses a queue, Prim’s uses a priority queue. We start with a root vertex r, which can be any vertex, and at any time the subset of selected edges forms a single tree (in Kruskal’s algorithm it forms a forest). To compile a C++ implementation on Linux: g++ -std=c++14 prims.cpp
Earlier we have seen what Prim’s algorithm is and how it works; here we see its implementation using an adjacency matrix. The problem combines a number of interesting challenges and algorithmic approaches, namely sorting, searching, and greediness. Important note: this algorithm is based on the greedy approach. It is very similar to Dijkstra’s algorithm for finding the shortest path from a given source, maintaining two sets, one representing the vertices included in the MST and the other the vertices not yet included. Notice that Prim’s algorithm always adds an edge (x, y) where y is an unreached node. As a general property of spanning trees, a connected graph G can have more than one spanning tree; Prim’s algorithm is an approach to determine a minimum cost spanning tree among them.
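The adjacency-matrix formulation just described can be sketched as follows. This is an illustrative O(n^2) version using the two-sets idea (the function name and INF convention are choices made for this sketch, not from any referenced implementation).

```python
INF = float("inf")

def prim_mst_matrix(cost):
    """O(n^2) Prim's algorithm on an n x n adjacency cost matrix.
    Keeps the two sets described above: vertices already in the MST
    (`in_mst`) and, for each outside vertex, the cheapest known edge
    connecting it to the tree (`best`)."""
    n = len(cost)
    in_mst = [False] * n
    best = [INF] * n
    best[0] = 0                 # grow the tree from vertex 0
    total = 0
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree
        u = min((v for v in range(n) if not in_mst[v]), key=lambda v: best[v])
        in_mst[u] = True
        total += best[u]
        for v in range(n):      # update cheapest crossing edges out of u
            if not in_mst[v] and cost[u][v] < best[v]:
                best[v] = cost[u][v]
    return total

# Same triangle example as before: weights 1, 2, 3; INF marks "no edge".
cost = [
    [INF, 1, 2],
    [1, INF, 3],
    [2, 3, INF],
]
mst_weight = prim_mst_matrix(cost)   # mst_weight == 3
```

Scanning the whole `best` array each round is what makes this O(n^2); for sparse graphs the heap-based version above is preferable, but on dense adjacency matrices this simple scan is competitive.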
Quantum field theory In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics.^[1]^:xi QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. The current standard model of particle physics is based on quantum field theory. Quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. Theoretical background Magnetic field lines visualized using iron filings. When a piece of paper is sprinkled with iron filings and placed above a bar magnet, the filings align according to the direction of the magnetic field, forming arcs allowing viewers to clearly see the poles of the magnet and to see the magnetic field generated. Quantum field theory results from the combination of classical field theory, quantum mechanics, and special relativity.^[1]^:xi A brief overview of these theoretical precursors follows. 
The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Isaac Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact".^[2]^:4 It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector in the case of gravitational field) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.^[3]^:18 Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day.^[2]^[4]^:301^[5]^:2 The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. 
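For reference, the relationships just named can be written in modern SI vector notation (a later reformulation, not Maxwell's original 1864 presentation):

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
```

Combining the two curl equations in vacuum ($\rho = 0$, $\mathbf{J} = 0$) yields wave equations whose propagation speed is $c = 1/\sqrt{\mu_0 \varepsilon_0}$, the finite speed equal to that of light.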
Action-at-a-distance was thus conclusively refuted.^[2]^:19 Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation in different wavelengths.^[6] Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.^[7]^:Ch.2 Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that the electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles.^[6] In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave–particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances.^[6] Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, Louis de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.^[3]^:22–23 In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. 
New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.^[3]^:19 It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field. Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators.^[6] Quantum electrodynamics Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.^[8]^:1 Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.^[8]^:1 With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.^[3]^:22 In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. 
Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence and non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.^[6]^:71 In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.^[6]^:71–72 The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). 
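In modern notation (natural units, $\hbar = c = 1$), the wave equation Dirac wrote down reads:

```latex
\left( i \gamma^\mu \partial_\mu - m \right) \psi = 0,
```

where $\gamma^\mu$ are the $4 \times 4$ gamma matrices and $\psi$ is a four-component spinor. Its plane-wave solutions carry energies $E = \pm\sqrt{|\mathbf{p}|^2 + m^2}$; the negative branch is precisely the source of the negative energy states just mentioned.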
Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.^[3]^:22–23 It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. 
Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory.^[6]^:72^[3]^:23 QFT naturally incorporated antiparticles in its formalism.^[3] Infinities and renormalization Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields,^[6] suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.^[3]^:25 It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Such achievements were not understood and recognized by the theoretical community.^[6] Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.^[3]^:26 In 1947, Willis Lamb and Robert Retherford measured the minute difference in the ^2S[1/2] and ^2P[1/2] energy levels of the hydrogen atom, also called the Lamb shift. 
By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.^[6]^[3]^:28 Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations.^[6] The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Richard Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the calculated values of mass and charge, infinite though they may be, by their finite measured values. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory.^[6] As Tomonaga said in his Nobel lecture: Since those parts of the modified mass and charge due to field reactions [become infinite], it is impossible to calculate them by the theory. However, the mass and charge observed in experiments are not the original mass and charge but the mass and charge as modified by field reactions, and they are finite. On the other hand, the mass and charge appearing in the theory are… the values modified by field reactions. Since this is so, and particularly since the theory is unable to calculate the modified mass and charge, we may adopt the procedure of substituting experimental values for them phenomenologically... This procedure is called the renormalization of mass and charge… After long, laborious calculations, less skillful than Schwinger's, we obtained a result... which was in agreement with the Americans'.^[9] By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarization.
These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities".^[6] At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.^[8]^:2 The latter can be used to visually and intuitively organize and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.^[1]^:5 It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.^[8]^:2 Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.^[3]^:30 The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.^[3]^:30 The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. 
In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.^[3]^:31 With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations.^[3]^:31 Source theory Schwinger, however, took a different route. For more than a decade he and his students had been nearly the only exponents of field theory,^[10]^:454 but in 1951^[11]^[12] he found a way around the problem of the infinities with a new method using external sources as currents coupled to gauge fields.^[13] Motivated by the former findings, Schwinger kept pursuing this approach in order to "quantumly" generalize the classical process of coupling external forces to the configuration space parameters known as Lagrange multipliers. 
He summarized his source theory in 1966^[14] and then expanded the theory's applications to quantum electrodynamics in his three-volume set titled Particles, Sources, and Fields.^[15]^[16]^[17] Developments in pion physics, in which the new viewpoint was most successfully applied, convinced him of the great advantages of mathematical simplicity and conceptual clarity that its use bestowed.^[15] In source theory there are no divergences, and no renormalization. It may be regarded as the calculational tool of field theory, but it is more general.^[18] Using source theory, Schwinger was able to calculate the anomalous magnetic moment of the electron, which he had done in 1947, but this time with no ‘distracting remarks’ about infinite quantities.^[10]^:467 Schwinger also applied source theory to his QFT theory of gravity, and was able to reproduce all four of Einstein's classic results: gravitational red shift, deflection and slowing of light by gravity, and the perihelion precession of Mercury.^[19] The neglect of source theory by the physics community was a major disappointment for Schwinger: The lack of appreciation of these facts by others was depressing, but understandable. -J. Schwinger^[15] See "the shoes incident" between J. Schwinger and S. Weinberg.^[10] Standard model Elementary particles of the Standard Model: six types of quarks, six types of leptons, four types of gauge bosons that carry fundamental interactions, as well as the Higgs boson, which endows elementary particles with mass. In 1954, Yang Chen-Ning and Robert Mills generalized the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups.^[20]^:5 In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons.
Unlike photons, these gauge bosons themselves carry charge.^[3]^:32^[21] Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.^[22] Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.^[20]^:5–6 By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,^[22]^[20]^:6 until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion.^[22] Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) 
^[20]^:11 Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.^[3]^:32 These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and quantum chromodynamics, is referred to today as the Standard Model of elementary particles.^[23] The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades.^[8]^:3 The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model.^[24] Other developments The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory.^[8]^:4 Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973.^[8]^:7 Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description.
Various attempts at a theory of quantum gravity led to the development of string theory, ^[8]^:6 itself a type of two-dimensional QFT with conformal symmetry.^[25] Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity.^[26] Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter.^[27] Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems.^[28] Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.^[28] For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one. Classical fields A classical field is a function of spatial and time coordinates.^[29] Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. 
Hence, it has infinitely many degrees of freedom.^[29] Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantization and path integrals are two common formulations of QFT.^[31]^:61 To motivate the fundamentals of QFT, an overview of classical field theory follows. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as ϕ(x, t), where x is the position vector, and t is the time. Suppose the Lagrangian of the field, ${\displaystyle L}$, is ${\displaystyle L=\int d^{3}x\,{\mathcal {L}}=\int d^{3}x\,\left[{\frac {1}{2}}{\dot {\phi }}^{2}-{\frac {1}{2}}(\nabla \phi )^{2}-{\frac {1}{2}}m^{2}\phi ^{2}\right],}$ where ${\displaystyle {\mathcal {L}}}$ is the Lagrangian density, ${\displaystyle {\dot {\phi }}}$ is the time-derivative of the field, ∇ is the gradient operator, and m is a real parameter (the "mass" of the field).
Applying the Euler–Lagrange equation on the Lagrangian:^[1]^:16 ${\displaystyle {\frac {\partial }{\partial t}}\left({\frac {\partial {\mathcal {L}}}{\partial {\dot {\phi }}}}\right)+\sum _{i=1}^{3}{\frac {\partial }{\partial x^{i}}}\left({\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial x^{i})}}\right)-{\frac {\partial {\mathcal {L}}}{\partial \phi }}=0,}$ we obtain the equations of motion for the field, which describe the way it varies in time and space: ${\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}+m^{2}\right)\phi =0.}$ This is known as the Klein–Gordon equation.^[1]^:17 The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows: ${\displaystyle \phi (\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left(a_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+a_{\mathbf {p} }^{*}e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right),}$ where a is a complex number (normalized by convention), * denotes complex conjugation, and ω[p] is the frequency of the normal mode: ${\displaystyle \omega _{\mathbf {p} }={\sqrt {|\mathbf {p} |^{2}+m^{2}}}.}$ Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ω[p].^[1]^:21,26 Canonical quantization The quantization procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by ${\displaystyle x(t)={\frac {1}{\sqrt {2\omega }}}ae^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}a^{*}e^{i\omega t},}$ where a is a complex number (normalized by convention), and ω is the oscillator's frequency. Note that x is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label x of a quantum field.
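The dispersion relation ω[p] = √(|p|² + m²) can be checked numerically. The sketch below (pure Python; the values of m and p are illustrative assumptions) verifies by finite differences that a single plane-wave normal mode satisfies the Klein–Gordon equation in one spatial dimension:

```python
import cmath
import math

# One normal mode of the free scalar field in 1+1 dimensions:
# phi(x, t) = exp(-i*omega*t + i*p*x), with omega = sqrt(p^2 + m^2).
m, p = 1.3, 0.7  # illustrative (assumed) mass and momentum
omega = math.sqrt(p * p + m * m)

def phi(x, t):
    return cmath.exp(-1j * omega * t + 1j * p * x)

def second_derivative(f, u, h=1e-4):
    # central finite difference for f''(u)
    return (f(u + h) - 2 * f(u) + f(u - h)) / h**2

x0, t0 = 0.4, 0.9
d2t = second_derivative(lambda t: phi(x0, t), t0)
d2x = second_derivative(lambda x: phi(x, t0), x0)

# Klein-Gordon residual (d^2/dt^2 - d^2/dx^2 + m^2) phi,
# ~0 up to finite-difference error:
residual = d2t - d2x + m * m * phi(x0, t0)
print(abs(residual))
```

Choosing any other ω would leave a residual of order (ω[p]² − ω²), which is one way to see that the mode frequencies are fixed by the mass parameter.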
For a quantum harmonic oscillator, x(t) is promoted to a linear operator ${\displaystyle {\hat {x}}(t)}$: ${\displaystyle {\hat {x}}(t)={\frac {1}{\sqrt {2\omega }}}{\hat {a}}e^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}{\hat {a}}^{\dagger }e^{i\omega t}.}$ Complex numbers a and a^* are replaced by the annihilation operator ${\displaystyle {\hat {a}}}$ and the creation operator ${\displaystyle {\hat {a}}^{\dagger }}$, respectively, where † denotes Hermitian conjugation. The commutation relation between the two is ${\displaystyle \left[{\hat {a}},{\hat {a}}^{\dagger }\right]=1.}$ The Hamiltonian of the simple harmonic oscillator can be written as ${\displaystyle {\hat {H}}=\hbar \omega {\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\hbar \omega .}$ The vacuum state ${\displaystyle |0\rangle }$, which is the lowest energy state, is defined by ${\displaystyle {\hat {a}}|0\rangle =0}$ and has energy ${\displaystyle {\frac {1}{2}}\hbar \omega .}$ One can easily check that ${\displaystyle \left[{\hat {H}},{\hat {a}}^{\dagger }\right]=\hbar \omega {\hat {a}}^{\dagger },}$ which implies that ${\displaystyle {\hat {a}}^{\dagger }}$ increases the energy of the simple harmonic oscillator by ${\displaystyle \hbar \omega }$. For example, the state ${\displaystyle {\hat {a}}^{\dagger }|0\rangle }$ is an eigenstate of energy ${\displaystyle 3\hbar \omega /2}$.
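These operator relations can be made concrete in a truncated Fock space. The sketch below (pure Python; the dimension N = 8 is an arbitrary truncation introduced for illustration, so the topmost state carries an edge artifact) represents â as a matrix with â|n⟩ = √n |n−1⟩ and checks [â, â†] = 1 together with the spectrum n + 1/2 of â†â + 1/2, in units ħ = ω = 1:

```python
import math

N = 8  # truncated Fock-space dimension (truncation is an artifact of the sketch)

# annihilation operator: a|n> = sqrt(n) |n-1>
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
# creation operator is the Hermitian conjugate (real matrix: transpose)
adag = [[a[j][i] for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

a_adag = matmul(a, adag)
adag_a = matmul(adag, a)

# [a, a†] = 1 on every state below the truncation edge
commutator_diag = [a_adag[n][n] - adag_a[n][n] for n in range(N - 1)]
print(commutator_diag)

# H = a†a + 1/2 is diagonal with eigenvalues n + 1/2 (hbar = omega = 1)
energies = [adag_a[n][n] + 0.5 for n in range(N)]
print(energies)
```

The full infinite-dimensional algebra satisfies [â, â†] = 1 exactly; in the sketch the identity fails only on the top basis state, which is the usual price of any finite truncation.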
Any energy eigenstate of a single harmonic oscillator can be obtained from ${\displaystyle |0\rangle }$ by successively applying the creation operator ${\displaystyle {\hat {a}}^{\dagger }}$:^[1]^:20 ${\displaystyle |n\rangle \propto \left({\hat {a}}^{\dagger }\right)^{n}|0\rangle ,}$ and any state of the system can be expressed as a linear combination of the states ${\displaystyle |n\rangle }$. A similar procedure can be applied to the real scalar field ϕ, by promoting it to a quantum field operator ${\displaystyle {\hat {\phi }}}$, while the annihilation operator ${\displaystyle {\hat {a}}_{\mathbf {p} }}$, the creation operator ${\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }}$ and the angular frequency ${\displaystyle \omega _{\mathbf {p} }}$ are now for a particular p: ${\displaystyle {\hat {\phi }}(\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left({\hat {a}}_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+{\hat {a}}_{\mathbf {p} }^{\dagger }e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right).}$ Their commutation relations are:^[1]^:21 ${\displaystyle \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=(2\pi )^{3}\delta (\mathbf {p} -\mathbf {q} ),\quad \left[{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }\right]=\left[{\hat {a}}_{\mathbf {p} }^{\dagger },{\hat {a}}_{\mathbf {q} }^{\dagger }\right]=0,}$ where δ is the Dirac delta function. The vacuum state ${\displaystyle |0\rangle }$ is defined by ${\displaystyle {\hat {a}}_{\mathbf {p} }|0\rangle =0,\quad {\text{for all }}\mathbf {p} .}$ Any quantum state of the field can be obtained from ${\displaystyle |0\rangle }$ by successively applying creation operators ${\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }}$ (or by a linear combination of such states), e.g.
^[1]^:22 ${\displaystyle \left({\hat {a}}_{\mathbf {p} _{3}}^{\dagger }\right)^{3}{\hat {a}}_{\mathbf {p} _{2}}^{\dagger }\left({\hat {a}}_{\mathbf {p} _{1}}^{\dagger }\right)^{2}|0\rangle .}$ While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems.^[32] The process of quantizing an arbitrary number of particles instead of a single particle is often also called second quantization.^[1]^:19 The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantize (complex) scalar fields, Dirac fields,^[1]^:52 vector fields (e.g. the electromagnetic field), and even strings.^[33] However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:^[1]^:77 ${\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi )\left(\partial ^{\mu }\phi \right)-{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {\lambda }{4!}}\phi ^{4},}$ where μ is a spacetime index, ${\displaystyle \partial _{0}=\partial /\partial t,\ \partial _{1}=\partial /\partial x^{1}}$, etc. The summation over the index μ has been omitted following the Einstein notation. 
If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. Path integrals The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state ${\displaystyle |\phi _{I}\rangle }$ at time t = 0 to some final state ${\displaystyle |\phi _{F}\rangle }$ at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. generator of time evolution), then^[31]^:10 ${\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int d\phi _{1}\int d\phi _{2}\cdots \int d\phi _{N-1}\,\langle \phi _{F}|e^{-iHT/N}|\phi _{N-1}\rangle \cdots \langle \phi _{2}|e^{-iHT/N}|\phi _{1}\rangle \langle \phi _{1}|e^{-iHT/N}|\phi _{I}\rangle .}$ Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:^[1]^:282^[31]^:12 ${\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int {\mathcal {D}}\phi (t)\,\exp \left\{i\int _{0}^{T}dt\,L\right\},}$ where L is the Lagrangian involving ϕ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian H via Legendre transformation.
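The time-slicing step behind this limit is an instance of the Lie–Trotter product formula, exp(−iHT) = lim_{N→∞} (exp(−iAT/N) exp(−iBT/N))^N for H = A + B. A minimal sketch with 2×2 matrices standing in for the kinetic and potential parts of H (the matrices and T are illustrative assumptions, not taken from the text) shows the slicing error shrinking as N grows:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(X, terms=40):
    # matrix exponential via its power series (fine for these tiny matrices)
    n = len(X)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, X)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

def scale(X, c):
    return [[c * x for x in row] for row in X]

A = [[0, 1], [1, 0]]   # toy "kinetic" term (assumed)
B = [[1, 0], [0, -1]]  # toy "potential" term (assumed); A and B do not commute
H = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
T = 1.0

def trotter_error(N):
    # N slices of exp(-iAT/N) exp(-iBT/N) versus the exact exp(-iHT)
    step = mat_mul(mat_exp(scale(A, -1j * T / N)), mat_exp(scale(B, -1j * T / N)))
    prod = [[1.0 if i == j else 0.0 for j in range(2)] for i in range(2)]
    for _ in range(N):
        prod = mat_mul(prod, step)
    exact = mat_exp(scale(H, -1j * T))
    return max(abs(prod[i][j] - exact[i][j]) for i in range(2) for j in range(2))

e1, e100 = trotter_error(1), trotter_error(100)
print(e1, e100)  # the error shrinks roughly like 1/N
```

In the field-theory case the role of the matrix product is played by the integrals over intermediate field configurations, and the N → ∞ limit produces the measure 𝒟ϕ of the path integral.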
The initial and final conditions of the path integral are respectively ${\displaystyle \phi (0)=\phi _{I},\quad \phi (T)=\phi _{F}.}$ In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. Two-point correlation function In calculations, one often encounters expressions like ${\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \quad {\text{or}}\quad \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle }$ in the free or interacting theory, respectively. Here, ${\displaystyle x}$ and ${\displaystyle y}$ are position four-vectors, ${\displaystyle T}$ is the time ordering operator that shuffles its operands so the time-components ${\displaystyle x^{0}}$ and ${\displaystyle y^{0}}$ increase from right to left, and ${\displaystyle |\Omega \rangle }$ is the ground state (vacuum state) of the interacting theory, different from the free ground state ${\displaystyle |0\rangle }$. This expression represents the probability amplitude for the field to propagate from y to x, and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short.^[1]^:82 The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantization or path integrals to be^[1]^:31,288^[31]^:23 ${\displaystyle \langle 0|T\{\phi (x)\phi (y)\}|0\rangle \equiv D_{F}(x-y)=\lim _{\epsilon \to 0}\int {\frac {d^{4}p}{(2\pi )^{4}}}{\frac {i}{p_{\mu }p^{\mu }-m^{2}+i\epsilon }}e^{-ip_{\mu }(x^{\mu }-y^{\mu })}.}$ In an interacting theory, where the Lagrangian or Hamiltonian contains terms ${\displaystyle L_{I}(t)}$ or ${\displaystyle H_{I}(t)}$ that describe interactions, the two-point function is more difficult to define.
However, through both the canonical quantization formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function. In canonical quantization, the two-point correlation function can be written as:^[1]^:87 ${\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\left\langle 0\left|T\left\{\phi _{I}(x)\phi _{I}(y)\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }{\left\langle 0\left|T\left\{\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}\right|0\right\rangle }},}$ where ε is an infinitesimal number and ϕ[I] is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in ${\displaystyle \phi ^{4}}$-theory, the interacting term of the Hamiltonian is ${\textstyle H_{I}(t)=\int d^{3}x\,{\frac {\lambda }{4!}}\phi _{I}(x)^{4}}$,^[1]^:84 and the expansion of the two-point correlator in terms of ${\displaystyle \lambda }$ becomes ${\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle ={\frac {\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(x)\phi _{I}(y)\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }{\displaystyle \sum _{n=0}^{\infty }{\frac {(-i\lambda )^{n}}{(4!)^{n}n!}}\int d^{4}z_{1}\cdots \int d^{4}z_{n}\langle 0|T\{\phi _{I}(z_{1})^{4}\cdots \phi _{I}(z_{n})^{4}\}|0\rangle }}.}$ This perturbation expansion expresses the interacting two-point function in terms of quantities ${\displaystyle \langle 0|\cdots |0\rangle }$ that are evaluated in the free theory.
In the path integral formulation, the two-point correlation function can be written^[1]^:284 ${\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\int {\mathcal {D}}\phi \,\phi (x)\phi (y)\exp \left[i\int _{-T}^{T}d^{4}x\,{\mathcal {L}}\right]}{\int {\mathcal {D}}\phi \,\exp \left[i\int _{-T}^{T}d^{4}x\,{\mathcal {L}}\right]}},}$ where ${\displaystyle {\mathcal {L}}}$ is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in λ, reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduces any n-point correlation function in the free theory to a sum of products of two-point correlation functions. For example, {\displaystyle {\begin{aligned}\langle 0|T\{\phi (x_{1})\phi (x_{2})\phi (x_{3})\phi (x_{4})\}|0\rangle &=\langle 0|T\{\phi (x_{1})\phi (x_{2})\}|0\rangle \langle 0|T\{\phi (x_{3})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{3})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{4})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{3})\}|0\rangle .\end{aligned}}} Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.^[1]^:90 This makes the Feynman propagator one of the most important quantities in quantum field theory. Feynman diagram Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram.
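Wick's theorem amounts to a sum over all ways of pairing the fields, (2n − 1)!! terms for a 2n-point function. A small sketch enumerating these pairings (pure Python; the field labels are illustrative):

```python
def wick_pairings(points):
    """All ways to split `points` into unordered pairs (Wick contractions)."""
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for i, partner in enumerate(rest):
        # pair `first` with each remaining field, then pair up the rest
        for sub in wick_pairings(rest[:i] + rest[i + 1:]):
            result.append([(first, partner)] + sub)
    return result

# Four fields give exactly the three pairings in the expansion above.
for pairing in wick_pairings(["x1", "x2", "x3", "x4"]):
    print(pairing)

# Six fields give 15 = 5!! pairings, eight give 105 = 7!!, and so on.
print(len(wick_pairings(list(range(6)))))
```

Each pairing maps to a product of Feynman propagators D_F, so this enumeration is also the combinatorial backbone of the diagram expansion that follows.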
For example, the λ^1 term in the two-point correlation function in the ϕ^4 theory is ${\displaystyle {\frac {-i\lambda }{4!}}\int d^{4}z\,\langle 0|T\{\phi (x)\phi (y)\phi (z)\phi (z)\phi (z)\phi (z)\}|0\rangle .}$ After applying Wick's theorem, one of the terms is ${\displaystyle 12\cdot {\frac {-i\lambda }{4!}}\int d^{4}z\,D_{F}(x-z)D_{F}(y-z)D_{F}(z-z).}$ This term can instead be obtained from the corresponding Feynman diagram. The diagram consists of • external vertices connected with one edge and represented by dots (here labeled ${\displaystyle x}$ and ${\displaystyle y}$). • internal vertices connected with four edges and represented by dots (here labeled ${\displaystyle z}$). • edges connecting the vertices and represented by lines. Every vertex corresponds to a single ${\displaystyle \phi }$ field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the so-called Feynman rules: 1. For every internal vertex ${\displaystyle z_{i}}$, write down a factor ${\textstyle -i\lambda \int d^{4}z_{i}}$. 2. For every edge that connects two vertices ${\displaystyle z_{i}}$ and ${\displaystyle z_{j}}$, write down a factor ${\displaystyle D_{F}(z_{i}-z_{j})}$. 3. Divide by the symmetry factor of the diagram. With the symmetry factor ${\displaystyle 2}$, following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space.^[1]^:91–94 In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use Feynman rules to obtain the expression for each term.
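The combinatorial factor 12 in the term above can be checked by brute force: among all Wick contractions of the six fields ϕ(x)ϕ(y)ϕ(z)⁴, exactly 12 have x and y each attached to the internal vertex z (note 12 = 4!/2, i.e. 4! divided by the symmetry factor 2). A sketch in pure Python, with the four z factors temporarily distinguished so each contraction is counted once:

```python
def wick_pairings(points):
    # all ways to split `points` into unordered pairs (Wick contractions)
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    result = []
    for i, partner in enumerate(rest):
        for sub in wick_pairings(rest[:i] + rest[i + 1:]):
            result.append([(first, partner)] + sub)
    return result

# Fields in <0|T{phi(x) phi(y) phi(z)^4}|0>, z factors labeled z1..z4.
fields = ["x", "y", "z1", "z2", "z3", "z4"]

# The term D_F(x-z) D_F(y-z) D_F(z-z) collects every contraction in which
# x and y each pair with a z copy, i.e. x is NOT paired directly with y.
connected = [P for P in wick_pairings(fields)
             if not any(set(pair) == {"x", "y"} for pair in P)]
print(len(connected))  # 12, the factor multiplying -i*lambda/4! above
```

The remaining 3 of the 15 total pairings contract x directly with y; they produce the disconnected term D_F(x−y) D_F(z−z) D_F(z−z) instead.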
To be precise, ${\displaystyle \langle \Omega |T\{\phi (x_{1})\cdots \phi (x_{n})\}|\Omega \rangle }$ is equal to the sum of (expressions corresponding to) all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the ϕ^4 interaction theory discussed above, every vertex must have four legs.^[1]^:98 In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.

Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.^[31]^:44 Lines whose end points are vertices can be thought of as the propagation of virtual particles.^[1]^:31 Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalization procedure is a systematic process for removing such infinities.

Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning — m, λ, and the field strength ϕ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities.
While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ→∞. This is an example of regularization, a class of methods to treat divergences in QFT, with Λ being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalized perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of ϕ^4 theory, the field strength is first redefined: ${\displaystyle \phi =Z^{1/2}\phi _{r},}$ where ϕ is the bare field, ϕ[r] is the renormalized field, and Z is a constant to be determined. The Lagrangian density becomes: ${\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}m_{r}^{2}\phi _{r}^{2}-{\frac {\lambda _{r}}{4!}}\phi _{r}^{4}+{\frac {1}{2}}\delta _{Z}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}\delta _{m}\phi _{r}^{2}-{\frac {\delta _{\lambda }}{4!}}\phi _{r}^{4},}$ where m[r] and λ[r] are the experimentally measurable renormalized mass and coupling constant, respectively, and ${\displaystyle \delta _{Z}=Z-1,\quad \delta _{m}=m^{2}Z-m_{r}^{2},\quad \delta _{\lambda }=\lambda Z^{2}-\lambda _{r}}$ are constants to be determined. The first three terms are the ϕ^4 Lagrangian density written in terms of the renormalized quantities, while the latter three terms are referred to as "counterterms". Since the Lagrangian now contains more terms, the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularization scheme (such as the cut-off regularization introduced above or dimensional regularization); call the regulator Λ.
Compute Feynman diagrams, in which divergent terms will depend on Λ. Then, define δ[Z], δ[m], and δ[λ] such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ→∞ is taken. In this way, meaningful finite quantities are obtained.^[1]^:323–326 It is only possible to eliminate all infinities to obtain a finite result in renormalizable theories, whereas in non-renormalizable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalizable QFT,^[1]^:719–727 while quantum gravity is non-renormalizable.^[1]^:798^[31]^:421

Renormalization group

The renormalization group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.^[1]^:393 The way in which each parameter changes with scale is described by its β function.^[1]^:417 Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.^[1]^:410–411 As an example, the coupling constant in QED, namely the elementary charge e, has the following β function: ${\displaystyle \beta (e)\equiv {\frac {1}{\Lambda }}{\frac {de}{d\Lambda }}={\frac {e^{3}}{12\pi ^{2}}}+O{\mathord {\left(e^{5}\right)}},}$ where Λ is the energy scale at which the measurement of e is performed.
This differential equation implies that the observed elementary charge increases as the scale increases.^[34] The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.^[1]^:420 The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function: ${\displaystyle \beta (g)\equiv {\frac {1}{\Lambda }}{\frac {dg}{d\Lambda }}={\frac {g^{3}}{16\pi ^{2}}}\left(-11+{\frac {2}{3}}N_{f}\right)+O{\mathord {\left(g^{5}\right)}},}$ where N[f] is the number of quark flavours. In the case where N[f] ≤ 16 (the Standard Model has N[f] = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.^[1]^:531

Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β functions. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.)^[35] Examples include string theory^[25] and N = 4 supersymmetric Yang–Mills theory.^[36]

According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments.
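The opposite running behaviours of the two couplings above follow from the signs of their β functions and can be checked numerically. A rough sketch with a naïve Euler integration and arbitrary, unphysical starting values (not a physical calculation):

```python
import math

def run_coupling(beta, g0, lam0, lam1, steps=10000):
    """Integrate dg/dln(Lambda) = beta(g) from scale lam0 to lam1
    with a simple Euler scheme (illustration only)."""
    g, t = g0, math.log(lam0)
    dt = (math.log(lam1) - t) / steps
    for _ in range(steps):
        g += beta(g) * dt
    return g

beta_qed = lambda e: e**3 / (12 * math.pi**2)                   # positive
beta_qcd = lambda g: g**3 / (16 * math.pi**2) * (-11 + 2 * 6 / 3)  # N_f = 6, negative

e_high = run_coupling(beta_qed, 0.3, 1.0, 1e6)
g_high = run_coupling(beta_qcd, 1.0, 1.0, 1e6)
print(e_high > 0.3)   # True: the QED coupling grows with energy
print(g_high < 1.0)   # True: the QCD coupling shrinks (asymptotic freedom)
```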
Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalizable effective field theory.^[1]^:402–403 The difference between renormalizable and non-renormalizable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.^[8]^:2 According to this view, non-renormalizable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.^[31]^:156

Other theories

The quantization and renormalization procedures outlined in the preceding sections are performed for the free theory and ϕ^4 theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction. As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field A^μ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is: ${\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\gamma ^{\mu }\partial _{\mu }-m\right)\psi -{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }-e{\bar {\psi }}\gamma ^{\mu }\psi A_{\mu },}$ where γ^μ are Dirac matrices, ${\displaystyle {\bar {\psi }}=\psi ^{\dagger }\gamma ^{0}}$, and ${\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }}$ is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e.
The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.^[1]^:78 Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, which then decays into a new electron–positron pair. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg.

Gauge symmetry

If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant: ${\displaystyle \psi (x)\to e^{i\alpha (x)}\psi (x),\quad A_{\mu }(x)\to A_{\mu }(x)+ie^{-1}e^{-i\alpha (x)}\partial _{\mu }e^{i\alpha (x)},}$ where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.^[1]^:482–483 Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations ${\displaystyle e^{i\alpha (x)}}$ and ${\displaystyle e^{i\alpha '(x)}}$ is yet another symmetry transformation ${\displaystyle e^{i[\alpha (x)+\alpha '(x)]}}$. For any α(x), ${\displaystyle e^{i\alpha (x)}}$ is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.^[1]^:496 The photon field A[μ] may be referred to as the U(1) gauge boson.
U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories).^[1]^:489 Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry. It contains three Dirac fields ψ^i, i = 1,2,3 representing quark fields as well as eight vector fields A^a,μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.^[1]^:547 The QCD Lagrangian density is:^[1]^:490–491 ${\displaystyle {\mathcal {L}}=i{\bar {\psi }}^{i}\gamma ^{\mu }(D_{\mu })^{ij}\psi ^{j}-{\frac {1}{4}}F_{\mu \nu }^{a}F^{a,\mu \nu }-m{\bar {\psi }}^{i}\psi ^{i},}$ where D[μ] is the gauge covariant derivative: ${\displaystyle D_{\mu }=\partial _{\mu }-igA_{\mu }^{a}t^{a},}$ where g is the coupling constant, t^a are the eight generators of SU(3) in the fundamental representation (3×3 matrices), ${\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c},}$ and f^abc are the structure constants of SU(3). Repeated indices i,j,a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation: ${\displaystyle \psi ^{i}(x)\to U^{ij}(x)\psi ^{j}(x),\quad A_{\mu }^{a}(x)t^{a}\to U(x)\left[A_{\mu }^{a}(x)t^{a}+ig^{-1}\partial _{\mu }\right]U^{\dagger }(x),}$ where U(x) is an element of SU(3) at every spacetime point x: ${\displaystyle U(x)=e^{i\alpha (x)^{a}t^{a}}.}$ The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantization, some theories will no longer exhibit their classical symmetries, a phenomenon called an anomaly.
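The defining algebra of the non-Abelian generators, [t^a, t^b] = i f^{abc} t^c, can be verified concretely. The sketch below uses SU(2) (generators σ^a/2, structure constants ε^{abc}) as a smaller stand-in for the SU(3) case in the text, with a hand-rolled 2×2 matrix product to stay dependency-free:

```python
# Check [t^a, t^b] = i f^{abc} t^c for SU(2), where t^a = sigma^a / 2
# and f^{abc} = epsilon^{abc} (the Levi-Civita symbol).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

sigma = [
    [[0, 1], [1, 0]],      # sigma^1
    [[0, -1j], [1j, 0]],   # sigma^2
    [[1, 0], [0, -1]],     # sigma^3
]
t = [[[x / 2 for x in row] for row in s] for s in sigma]

# [t^1, t^2] should equal i * epsilon^{123} t^3 = i t^3
comm = msub(matmul(t[0], t[1]), matmul(t[1], t[0]))
expected = [[1j * x for x in row] for row in t[2]]
print(all(abs(comm[i][j] - expected[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```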
For instance, in the path integral formulation, despite the invariance of the Lagrangian density ${\displaystyle {\mathcal {L}}}$ under a certain local transformation of the fields, the measure ${\textstyle \int {\mathcal {D}}\phi }$ of the path integral may change.^[31]^:243 For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.^[1]^:705–707 The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.

Noether's theorem states that every continuous symmetry, i.e. one whose transformation parameter varies continuously rather than discretely, leads to a corresponding conservation law.^[1]^:17–18^[31]^:73 For example, the U(1) symmetry of QED implies charge conservation.^[38]

Gauge transformations do not relate distinct quantum states; rather, they relate two equivalent mathematical descriptions of the same quantum state. As an example, the photon field A^μ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarization. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing A^μ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.^[31]^:168 To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts".
Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.^[1]^:512–515 A more rigorous generalization of the Faddeev–Popov procedure is given by BRST quantization.^[1]^:517

Spontaneous symmetry breaking

Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.^[1]^:347 To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density: ${\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\phi ^{i}\right)\left(\partial ^{\mu }\phi ^{i}\right)+{\frac {1}{2}}\mu ^{2}\phi ^{i}\phi ^{i}-{\frac {\lambda }{4}}\left(\phi ^{i}\phi ^{i}\right)^{2},}$ where μ and λ are real parameters. The theory admits an O(N) global symmetry: ${\displaystyle \phi ^{i}\to R^{ij}\phi ^{j},\quad R\in \mathrm {O} (N).}$ The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field ϕ[0] satisfying ${\displaystyle \phi _{0}^{i}\phi _{0}^{i}={\frac {\mu ^{2}}{\lambda }}.}$ Without loss of generality, let the ground state be in the N-th direction: ${\displaystyle \phi _{0}^{i}=\left(0,\cdots ,0,{\frac {\mu }{\sqrt {\lambda }}}\right).}$ The original N fields can be rewritten as: ${\displaystyle \phi ^{i}(x)=\left(\pi ^{1}(x),\cdots ,\pi ^{N-1}(x),{\frac {\mu }{\sqrt {\lambda }}}+\sigma (x)\right),}$ and the original Lagrangian density as: ${\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\partial _{\mu }\pi ^{k}\right)\left(\partial ^{\mu }\pi ^{k}\right)+{\frac {1}{2}}\left(\partial _{\mu }\sigma \right)\left(\partial ^{\mu }\sigma \right)-{\frac {1}{2}}\left(2\mu ^{2}\right)\sigma ^{2}-{\sqrt {\lambda }}\mu \sigma ^{3}-{\sqrt {\lambda }}\mu \pi ^{k}\pi ^{k}\sigma -{\frac {\lambda }{2}}\pi ^{k}\pi ^{k}\sigma ^{2}-{\frac {\lambda }{4}}\left(\pi ^{k}\pi ^{k}\right)^{2},}$ where k = 1, ..., N − 1.
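The ground-state condition ϕ₀^i ϕ₀^i = μ²/λ can be checked numerically along the radial direction of field space; a crude grid-search sketch with arbitrary parameter values:

```python
import math

# Classical potential of the linear sigma model along one field direction
# (radial mode): V(r) = -mu^2 r^2 / 2 + lam * r^4 / 4.  Its minimum should
# sit at r = mu / sqrt(lam), as stated above.  The values of mu and lam
# here are arbitrary choices for illustration.
mu, lam = 2.0, 0.5

def V(r):
    return -0.5 * mu**2 * r**2 + 0.25 * lam * r**4

# crude grid search for the minimizer on [0, 10)
rs = [i * 1e-4 for i in range(100000)]
r_min = min(rs, key=V)
print(abs(r_min - mu / math.sqrt(lam)) < 1e-3)  # True
```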
The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N − 1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.^[1]^:349–350 Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, O(N) has N(N − 1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N − 1) has (N − 1)(N − 2)/2. The number of broken symmetries is their difference, N − 1, which corresponds to the N − 1 massless fields π^k.^[1]^:351

On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarized massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.^[1]^:743–744

In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.^[31]^:199 In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs field, a process called the Higgs mechanism.^[1]^:690

All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions.
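The counting of broken generators above, dim O(N) − dim O(N − 1) = N − 1, is elementary and easy to verify:

```python
def dim_o(n):
    """Dimension of the Lie algebra of O(n): number of independent rotations."""
    return n * (n - 1) // 2

# Broken generators when O(N) breaks to O(N-1): always N - 1,
# matching the N - 1 massless Goldstone fields pi^k.
for N in range(2, 10):
    assert dim_o(N) - dim_o(N - 1) == N - 1
print("broken generators = N - 1 for all tested N")
```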
Theorists have hypothesized the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.^[1]^:795^[31]^:443 The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations P^μ and the Lorentz transformations J[μν].^[39]^:58–60 In addition to these generators, supersymmetry in (3+1) dimensions includes additional generators Q[α], called supercharges, which themselves transform as Weyl fermions.^[1]^:795^[31]^:444 The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, Q[α]^I, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.^[1]^:795^[31]^:450 Supersymmetry can also be constructed in other dimensions,^[40] most notably in (1+1) dimensions for its application in superstring theory.^[41] The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.^[31]^:448 Examples of such theories include: the Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang–Mills theory,^[31]^:450 and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.^[31]^:444 If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity.^[42]

Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalization) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its superpartner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops.
Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.^[1]^:796–797^[43] Nevertheless, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than those achievable by present-day experiments.^[1]^:797^[31]^:443

Other spacetimes

The ϕ^4 theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime. In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases.^[44] In high-energy physics, string theory is a type of (1+1)-dimensional QFT,^[31]^:452^[25] while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.^[31]^:428–429 In Minkowski space, the flat metric η[μν] is used to raise and lower spacetime indices in the Lagrangian, e.g. ${\displaystyle A_{\mu }A^{\mu }=\eta _{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,}$ where η^μν is the inverse of η[μν] satisfying η^μρη[ρν] = δ^μ[ν]. For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used: ${\displaystyle A_{\mu }A^{\mu }=g_{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =g^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,}$ where g^μν is the inverse of g[μν].
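The index-raising conventions for the flat metric can be made concrete with a few lines of code (plain lists, no libraries assumed):

```python
# Minkowski metric eta_{mu nu} = diag(1, -1, -1, -1); verify that its
# inverse satisfies eta^{mu rho} eta_{rho nu} = delta^mu_nu, and use it
# to raise an index of a sample four-vector.
eta = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
for i in range(1, 4):
    eta[i][i] = -1
eta_inv = eta  # diag(1, -1, -1, -1) is its own inverse

delta = [[sum(eta_inv[m][r] * eta[r][n] for r in range(4)) for n in range(4)]
         for m in range(4)]
print(delta == [[1 if m == n else 0 for n in range(4)] for m in range(4)])  # True

A_lower = [2, 3, -1, 5]  # components A_mu (sample values)
A_upper = [sum(eta_inv[m][n] * A_lower[n] for n in range(4)) for m in range(4)]
print(A_upper)  # [2, -3, 1, -5]: spatial components flip sign
```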
For a real scalar field, the Lagrangian density in a general spacetime background is ${\displaystyle {\mathcal {L}}={\sqrt {|g|}}\left({\frac {1}{2}}g^{\mu \nu }\nabla _{\mu }\phi \nabla _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right),}$ where g = det(g[μν]), and ∇[μ] denotes the covariant derivative.^[45] The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background.

Topological quantum field theory

The correlation functions and physical predictions of a QFT depend on the spacetime metric g[μν]. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.^[46]^:36 QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of TQFT and has been used to construct models of quantum gravity.^[47] Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.^[48]^:1–5 The world line trajectory of fractionalized particles (known as anyons) can form a link configuration in the spacetime,^[49] which relates the braiding statistics of anyons in physics to the link invariants in mathematics.
Topological quantum field theories (TQFTs) applicable to the frontier research of topological quantum matter include Chern–Simons–Witten gauge theories in 2+1 spacetime dimensions and other exotic TQFTs in 3+1 spacetime dimensions and beyond.^[50]

Perturbative and non-perturbative methods

Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as the 't Hooft–Polyakov monopole, the domain wall, the flux tube, and the instanton.^[8] Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory^[51] and the Thirring model.^[52]

Mathematical rigor

In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation.
For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined.^[53] However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory^[54] provides a rigorous formulation of perturbative renormalization that combines the effective-field-theory approaches of Kadanoff, Wilson, and Polchinski with the Batalin–Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired by finite-dimensional integration theory,^[55] can be given a sound mathematical interpretation from their finite-dimensional analogues.

Since the 1950s,^[57] theoretical physicists and mathematicians have attempted to organize all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,^[58]^:2 which has led to such results as the CPT theorem, the spin–statistics theorem, and Goldstone's theorem,^[57] and also to mathematically rigorous constructions of many interacting QFTs in two and three spacetime dimensions, e.g.
two-dimensional scalar field theories with arbitrary polynomial interactions,^[59] the three-dimensional scalar field theories with a quartic interaction, etc.^[60] Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms.

Algebraic quantum field theory is another approach to the axiomatization of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include the Wightman axioms and the Haag–Kastler axioms.^[58]^:2–3 One way to construct theories satisfying the Wightman axioms is to use the Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).^[58]^:10

Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows:

Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on ${\displaystyle \mathbb {R} ^{4}}$ and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973) and Osterwalder & Schrader (1975).

See also

1. ^ Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0-201-50397-5. 2. ^ Hobson, Art (2013). "There are no particles, there are only fields". American Journal of Physics. 81 (211): 211–223. arXiv:1204.4616. Bibcode:2013AmJPh..81..211H. doi:10.1119/1.4789885. S2CID 18254182. 3.
^ Weinberg, Steven (1977). "The Search for Unity: Notes for a History of Quantum Field Theory". Daedalus. 106 (4): 17–35. JSTOR 20024506. 4. ^ Heilbron, J. L., ed. (2003). The Oxford companion to the history of modern science. Oxford; New York: Oxford University Press. ISBN 978-0-19-511229-0. 5. ^ Thomson, Joseph John; Maxwell, James Clerk (1893). Notes on recent researches in electricity and magnetism, intended as a sequel to Professor Clerk-Maxwell's 'Treatise on Electricity and Magnetism'. Clarendon Press. 6. ^ Weisskopf, Victor (November 1981). "The development of field theory in the last 50 years". Physics Today. 34 (11): 69–85. Bibcode:1981PhT....34k..69W. doi 7. ^ Heisenberg, Werner (1999). Physics and philosophy: the revolution in modern science. Great minds series. Amherst, N.Y: Prometheus Books. ISBN 978-1-57392-694-2. 8. ^ Shifman, M. (2012). Advanced Topics in Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-19084-8. 9. ^ Tomonaga, Shinichiro (1966). "Development of Quantum Electrodynamics". Science. 154 (3751): 864–868. Bibcode:1966Sci...154..864T. doi:10.1126/science.154.3751.864. PMID 17744604. 10. ^ Milton, K. A.; Mehra, Jagdish (2000). Climbing the Mountain: The Scientific Biography of Julian Schwinger (Repr ed.). Oxford: Oxford University Press. ISBN 978-0-19-850658-4. 11. ^ Schwinger, Julian (July 1951). "On the Green's functions of quantized fields. I". Proceedings of the National Academy of Sciences. 37 (7): 452–455. Bibcode:1951PNAS...37..452S. doi:10.1073/pnas.37.7.452. ISSN 0027-8424. PMC 1063400. PMID 16578383. 12. ^ Schwinger, Julian (July 1951). "On the Green's functions of quantized fields. II". Proceedings of the National Academy of Sciences. 37 (7): 455–459. Bibcode:1951PNAS...37..455S. doi:10.1073/pnas.37.7.455. ISSN 0027-8424. PMC 1063401. PMID 16578384. 13. ^ Schweber, Silvan S.
(2005-05-31). "The sources of Schwinger's Green's functions". Proceedings of the National Academy of Sciences. 102 (22): 7783–7788. doi:10.1073/pnas.0405167101. ISSN 0027-8424. PMC 1142349. PMID 15930139. 14. ^ Schwinger, Julian (1966). "Particles and Sources". Phys Rev. 152 (4): 1219. Bibcode:1966PhRv..152.1219S. doi:10.1103/PhysRev.152.1219. 15. ^ Schwinger, Julian (1998). Particles, Sources and Fields vol. 1. Reading, MA: Perseus Books. p. xi. ISBN 0-7382-0053-0. 16. ^ Schwinger, Julian (1998). Particles, sources, and fields. 2 (1. print ed.). Reading, Mass: Advanced Book Program, Perseus Books. ISBN 978-0-7382-0054-5. 17. ^ Schwinger, Julian (1998). Particles, sources, and fields. 3 (1. print ed.). Reading, Mass: Advanced Book Program, Perseus Books. ISBN 978-0-7382-0055-2. 18. ^ C.R. Hagen; et al., eds. (1967). Proc. of the 1967 Int. Conference on Particles and Fields. NY: Interscience. p. 128. 19. ^ Schwinger, Julian (1998). Particles, Sources and Fields vol. 1. Reading, MA: Perseus Books. pp. 82–85. 20. ^ 't Hooft, Gerard (2015-03-17). "The Evolution of Quantum Field Theory". The Standard Theory of Particle Physics. Advanced Series on Directions in High Energy Physics. Vol. 26. pp. 1–27. arXiv:1503.05007. Bibcode:2016stpp.conf....1T. doi:10.1142/9789814733519_0001. ISBN 978-981-4733-50-2. S2CID 119198452. 21. ^ Yang, C. N.; Mills, R. L. (1954-10-01). "Conservation of Isotopic Spin and Isotopic Gauge Invariance". Physical Review. 96 (1): 191–195. Bibcode:1954PhRv...96..191Y. doi:10.1103/PhysRev.96.191. 22. ^ Coleman, Sidney (1979-12-14). "The 1979 Nobel Prize in Physics". Science. 206 (4424): 1290–1292. Bibcode:1979Sci...206.1290C. doi:10.1126/science.206.4424.1290. JSTOR 1749117. PMID 23. ^ Sutton, Christine. "Standard model". britannica.com. Encyclopædia Britannica. Retrieved 2018-08-14. 24. ^ Kibble, Tom W. B. (2014-12-12). "The Standard Model of Particle Physics". arXiv:1412.4094. 25. ^ Polchinski, Joseph (2005).
String Theory. Vol. 1. Cambridge University Press. ISBN 978-0-521-67227-6. 26. ^ Schwarz, John H. (2012-01-04). "The Early History of String Theory and Supersymmetry". arXiv:1201.0981 . 27. ^ "Common Problems in Condensed Matter and High Energy Physics" (PDF). science.energy.gov. Office of Science, U.S. Department of Energy. 2015-02-02. Retrieved 2018-07-18. 28. ^ ^a ^b Wilczek, Frank (2016-04-19). "Particle Physics and Condensed Matter: The Saga Continues". Physica Scripta. 2016 (T168): 014003. arXiv:1604.05669. Bibcode:2016PhST..168a4003W. doi:10.1088/ 0031-8949/T168/1/014003. S2CID 118439678. 29. ^ ^a ^b Tong 2015, Chapter 1 30. ^ In fact, its number of degrees of freedom is uncountable, because the vector space dimension of the space of continuous (differentiable, real analytic) functions on even a finite dimensional Euclidean space is uncountable. On the other hand, subspaces (of these function spaces) that one typically considers, such as Hilbert spaces (e.g. the space of square integrable real valued functions) or separable Banach spaces (e.g. the space of continuous real-valued functions on a compact interval, with the uniform convergence norm), have denumerable (i. e. countably infinite) dimension in the category of Banach spaces (though still their Euclidean vector space dimension is uncountable), so in these restricted contexts, the number of degrees of freedom (interpreted now as the vector space dimension of a dense subspace rather than the vector space dimension of the function space of interest itself) is denumerable. 31. ^ ^a ^b ^c ^d ^e ^f ^g ^h ^i ^j ^k ^l ^m ^n ^o ^p ^q ^r ^s ^t Zee, A. (2010). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 978-0-691-01019-9. 32. ^ Fock, V. (1932-03-10). "Konfigurationsraum und zweite Quantelung". Zeitschrift für Physik (in German). 75 (9–10): 622–647. Bibcode:1932ZPhy...75..622F. doi:10.1007/BF01344458. S2CID 186238995. 33. ^ Becker, Katrin; Becker, Melanie; Schwarz, John H. (2007). 
String Theory and M-Theory. Cambridge University Press. p. 36. ISBN 978-0-521-86069-7. 34. ^ Fujita, Takehisa (2008-02-01). "Physics of Renormalization Group Equation in QED". arXiv:hep-th/0606101. 35. ^ Aharony, Ofer; Gur-Ari, Guy; Klinghoffer, Nizan (2015-05-19). "The Holographic Dictionary for Beta Functions of Multi-trace Coupling Constants". Journal of High Energy Physics. 2015 (5): 31. arXiv:1501.06664. Bibcode:2015JHEP...05..031A. doi:10.1007/JHEP05(2015)031. S2CID 115167208. 36. ^ Kovacs, Stefano (1999-08-26). "N = 4 supersymmetric Yang–Mills theory and the AdS/SCFT correspondence". arXiv:hep-th/9908171. 37. ^ Veltman, M. J. G. (1976). Methods in Field Theory, Proceedings of the Les Houches Summer School, Les Houches, France, 1975. 38. ^ Brading, Katherine A. (March 2002). "Which symmetry? Noether, Weyl, and conservation of electric charge". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 33 (1): 3–22. Bibcode:2002SHPMP..33....3B. CiteSeerX 10.1.1.569.106. doi:10.1016/S1355-2198(01)00033-8. 39. ^ Weinberg, Steven (1995). The Quantum Theory of Fields. Cambridge University Press. ISBN 978-0-521-55001-7. 40. ^ de Wit, Bernard; Louis, Jan (1998-02-18). "Supersymmetry and Dualities in various dimensions". arXiv:hep-th/9801132. 41. ^ Polchinski, Joseph (2005). String Theory. Vol. 2. Cambridge University Press. ISBN 978-0-521-67228-3. 42. ^ Nath, P.; Arnowitt, R. (1975). "Generalized Super-Gauge Symmetry as a New Framework for Unified Gauge Theories". Physics Letters B. 56 (2): 177. Bibcode:1975PhLB...56..177N. doi:10.1016/ 43. ^ Munoz, Carlos (2017-01-18). "Models of Supersymmetry for Dark Matter". EPJ Web of Conferences. 136: 01002. arXiv:1701.05259. Bibcode:2017EPJWC.13601002M. doi:10.1051/epjconf/201713601002. S2CID 44. ^ Morandi, G.; Sodano, P.; Tagliacozzo, A.; Tognetti, V. (2000). Field Theories for Low-Dimensional Condensed Matter Systems. Springer. ISBN 978-3-662-04273-1. 45. 
^ Parker, Leonard E.; Toms, David J. (2009). Quantum Field Theory in Curved Spacetime. Cambridge University Press. p. 43. ISBN 978-0-521-87787-9. 46. ^ Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008-12-11). "Undergraduate Lecture Notes in Topological Quantum Field Theory". arXiv:0810.0344v5 . 47. ^ Carlip, Steven (1998). Quantum Gravity in 2+1 Dimensions. Cambridge University Press. pp. 27–29. arXiv:2312.12596. doi:10.1017/CBO9780511564192. ISBN 9780511564192. 48. ^ Carqueville, Nils; Runkel, Ingo (2018). "Introductory lectures on topological quantum field theory". Banach Center Publications. 114: 9–47. arXiv:1705.05734. doi:10.4064/bc114-1. S2CID 49. ^ Witten, Edward (1989). "Quantum Field Theory and the Jones Polynomial". Communications in Mathematical Physics. 121 (3): 351–399. Bibcode:1989CMaPh.121..351W. doi:10.1007/BF01217730. MR 0990772 . S2CID 14951363. 50. ^ Putrov, Pavel; Wang, Juven; Yau, Shing-Tung (2017). "Braiding Statistics and Link Invariants of Bosonic/Fermionic Topological Quantum Matter in 2+1 and 3+1 dimensions". Annals of Physics. 384 (C): 254–287. arXiv:1612.09298. Bibcode:2017AnPhy.384..254P. doi:10.1016/j.aop.2017.06.019. S2CID 119578849. 51. ^ Di Francesco, Philippe; Mathieu, Pierre; Sénéchal, David (1997). Conformal Field Theory. Springer. ISBN 978-1-4612-7475-9. 52. ^ Thirring, W. (1958). "A Soluble Relativistic Field Theory?". Annals of Physics. 3 (1): 91–112. Bibcode:1958AnPhy...3...91T. doi:10.1016/0003-4916(58)90015-0. 53. ^ Haag, Rudolf (1955). "On Quantum Field Theories" (PDF). Dan Mat Fys Medd. 29 (12). 54. ^ Kevin Costello, Renormalization and Effective Field Theory, Mathematical Surveys and Monographs Volume 170, American Mathematical Society, 2011, ISBN 978-0-8218-5288-0 55. ^ Gerald B. Folland, Quantum Field Theory: A Tourist Guide for Mathematicians, Mathematical Surveys and Monographs Volume 149, American Mathematical Society, 2008, ISBN 0821847058 | chapter=8 56. ^ Nguyen, Timothy (2016). 
"The perturbative approach to path integrals: A succinct mathematical treatment". J. Math. Phys. 57 (9): 092301. arXiv:1505.04809. Bibcode:2016JMP....57i2301N. doi: 10.1063/1.4962800. S2CID 54813572. 57. ^ ^a ^b Buchholz, Detlev (2000). "Current Trends in Axiomatic Quantum Field Theory". Quantum Field Theory. Lecture Notes in Physics. Vol. 558. pp. 43–64. arXiv:hep-th/9811233. Bibcode: 2000LNP...558...43B. doi:10.1007/3-540-44482-3_4. ISBN 978-3-540-67972-1. S2CID 5052535. 58. ^ ^a ^b ^c Summers, Stephen J. (2016-03-31). "A Perspective on Constructive Quantum Field Theory". arXiv:1203.3991v2 . 59. ^ Simon, Barry (1974). The P(phi)_2 Euclidean (quantum) field theory. Princeton, New Jersey: Princeton University Press. ISBN 0-691-08144-1. OCLC 905864308. 60. ^ Glimm, James; Jaffe, Arthur (1987). Quantum Physics : a Functional Integral Point of View. New York, NY: Springer New York. ISBN 978-1-4612-4728-9. OCLC 852790676. 61. ^ Sati, Hisham; Schreiber, Urs (2012-01-06). "Survey of mathematical foundations of QFT and perturbative string theory". arXiv:1109.0955v2 . 62. ^ Jaffe, Arthur; Witten, Edward. "Quantum Yang–Mills Theory" (PDF). Clay Mathematics Institute. Archived from the original (PDF) on 2015-03-30. Retrieved 2018-07-18. Further reading General readers Introductory texts Advanced texts External links
Encapsulated Pumped Storage, Series 2, Part 2: Hydrodynamics

In the last post, I described a fast, inexpensive way of building evaporation-proof water storage on gently sloping terrain. One risk of building a pumped storage solution around this method is that the penstocks will tend to be long, compared to more traditional hydroelectric systems. Penstocks are the large pipes (one or more) that carry water between the upper storage area and the powerhouse. They must be very strong because the peak pressure of the system is realized at the bottom of the penstock, where it enters the turbine. The combined requirements of strength and size make the penstocks a significant contributor to the overall system cost. Headraces and tailraces (defined in the previous post) don't have to withstand the high water pressures that the penstocks do, which should help limit the cost of those components.

In a traditional hydropower plant where a river is dammed to create a reservoir, penstocks can be relatively short (not much more than the thickness of the dam itself). Closed-loop pumped storage systems (such as EPS) generally need longer penstocks, because the elevation difference is created by natural terrain, rather than a dam. (We could minimize penstock length by aiming the penstocks vertically down from the upper storage area, but this would typically require placing the powerhouse in a location deep within the earth, resulting in higher construction costs and a much longer time from the start of construction until the site comes online.) The more gradual the average slope between upper and lower storage, the longer the penstocks need to be. This will make them more expensive and less hydrodynamically efficient. At the same time, there are sites that are very appealing aside from requiring long penstocks; so we need to quantify how long penstocks can be before the costs outweigh the benefits.
Water flowing in a container, such as a pipe, is constantly losing some of its energy to friction against the container walls, and to internal friction against itself. This lost energy reduces how much energy a pumped storage system can store and later retrieve. So it's a key design goal to minimize these losses. This trades off against cost. The simplest way to reduce energy loss in a pipe, assuming it can't be made shorter, is to make it larger in diameter, but larger pipes cost more. So it's an optimization problem: we want to maximize the amount of stored energy we can get out of the system, per dollar spent building it.

A 6 meter (20 foot) diameter penstock from the 1960s being repainted. (source)

I'll look at penstock losses in the context of high, medium, and low head (1,000 m, 670 m, and 362 m), representing three actual sites of interest for EPS. The cross-section drawing above is of the medium-head scenario, which looks like this as a Google Earth profile: This image has been scaled to show true elevation (horizontal and vertical scale are the same). The elevation at the far left is 2,020 meters (6,630 ft), and at the far right, it's about 1,350 meters (4,430 ft). So the usable vertical drop is 670 meters (2,200 ft). The full horizontal span is about 5.5 km (about 18,000 feet, or 3.4 miles).

Modeling Pipe Flow

Fluid flow in cylindrical pipes has been of intense interest to engineers ever since pipes were invented^1, so it's a well-studied problem. There are a number of empirical formulas for estimating energy loss in pipe flow. The Darcy–Weisbach equation^2 seems to be regarded as the best^3, short of a full CFD (computational fluid dynamics) simulation, which has its own perils. The Darcy–Weisbach equation, in head-loss form, is this^4:

h_f = f × (L/D) × v² / (2g)

where h_f is the head loss (the energy lost, expressed as a height of water), f is the Darcy friction factor, L is the pipe length, D is its diameter, v is the mean flow velocity, and g is the acceleration due to gravity.

The question I'll use this formula to answer is: for a given penstock diameter, length, elevation change, and flow rate, how much of the water's original energy is lost to friction?
An Example

I'll specify some basic parameters first: a theoretical power of 1 gigawatt (1,000 megawatts), and a head of 1 km (1,000 meters). Some simple physics dictates how fast water must flow down that 1 km drop to produce 1 GW:

P = ρ g h Q, so Q = P / (ρ g h) = 10⁹ W / (1,000 kg/m³ × 9.81 m/s² × 1,000 m) ≈ 100 m³/s

So we need a flow rate of 100 cubic meters per second. We'll assume that we want to use one penstock per 1 GW of power, rather than multiple penstocks (which I'll examine later in this post). With those parameters fixed, I'll look at these penstock diameters (in meters): 4, 5, 6, 7, 8, 9, and 10. Penstock lengths will be 2.5, 5, 7.5, and 10 km. This will give us an idea of the bounds of feasibility: as we make the pipe longer and/or narrower, at what point do the losses become intolerable? For each combination of length and diameter, the result will be an efficiency figure between 0 and 1, where 1 means none of the potential energy was dissipated in the pipe, and 0 means that by the time the water reached the bottom, it had no energy left with which to spin a turbine.

Penstock efficiency in context

I mentioned in the last post that a reasonable number for overall system efficiency, round-trip, was 75%. Penstock losses contribute to this, but are not the only place energy is lost: turbines and/or pumps are not 100% efficient, and neither are motor/generators. Fortunately, all these components can individually operate above 90% efficiency, often well above. So how much efficiency do we need from the penstock? The water must pass through it twice per storage cycle: once from bottom to top to store the energy, and a second time from top to bottom to turn it back into electricity. This is equivalent, in terms of hydrodynamic losses, to a one-way trip through a penstock twice as long. To come out with a final round-trip efficiency of 75% for the whole system, as a ballpark number, the double-length penstock should be at least 90% efficient.

High-head scenario

Here is our system with 1 km head, producing 1 GW of power.
On the x axis, we see four penstock lengths, from 2.5 km to 10 km. On the y axis, we see the round-trip efficiency, which means the fraction of the original energy in the water that it retains after two trips through a penstock of the given length and diameter. First of all, as predicted above, we see that these are straight lines—the loss of energy is linear with penstock length. Next, the pipe diameter has little effect until it gets too small. The lines for 7- and 8-meter diameter pipes are almost on top of each other, at close to 100% efficiency even for the longest penstock. 6 meters is still excellent, keeping 95% of the energy in two trips through a 10 km pipe (beating our goal of 90%). If we reduce the diameter from there, things start to go bad. The 5 m pipe is well over twice as inefficient as the 6 m and does not meet our goal beyond 5 km length, and the 4 m pipe is useless, falling short of our goal at any length, and at 10 km length wasting 53% of the energy we tried to store. We can see the benefits of shorter penstocks. On a site where 2.5-kilometer penstocks are enough, a 5 meter (16 foot) diameter penstock will work quite well. This is a modest size for 1 gigawatt of power from a single penstock, largely because 1 kilometer of head is unusually high. For comparison, Hoover Dam’s outlet pipes are 30 feet (9 meters) in diameter and total 4,700 feet (1.4 km) in length^5. The penstocks are 13 feet (4 m) in diameter and there are sixteen of them, totaling 5,800 feet (1.8 km). Financial viability is a different question, but these Hoover Dam statistics do show that the sort of pipe diameters and lengths we’re contemplating are achievable. The chart also shows that with a 10 km penstock (four times longer) at the same power and head, we’ll have to deal with larger pipe, but not much: 6 m instead of 5. 
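The high-head numbers are easy to reproduce. Here is a minimal sketch in Python, assuming a constant Darcy friction factor of 0.015 (the post instead derives f from a roughness height, so the chart values differ slightly), and counting the pumping pass as lifting against head + h_f while the generating pass delivers head − h_f:

```python
import math

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def flow_rate(power_w, head_m):
    """Flow (m^3/s) needed to produce a given power from a given head,
    ignoring losses: P = rho * g * h * Q."""
    return power_w / (RHO * G * head_m)

def round_trip_efficiency(head_m, length_m, diameter_m, flow_m3s, f=0.015):
    """Fraction of energy retained over one full storage cycle.
    Darcy-Weisbach head loss: h_f = f * (L/D) * v^2 / (2g).
    Pumping costs head + h_f; generating delivers head - h_f."""
    v = flow_m3s / (math.pi * (diameter_m / 2) ** 2)  # mean velocity
    h_f = f * (length_m / diameter_m) * v ** 2 / (2 * G)
    return max(head_m - h_f, 0.0) / (head_m + h_f)

q = flow_rate(1e9, 1000)          # 1 GW at 1 km head
print(round(q))                   # -> 102, i.e. roughly 100 m^3/s
print(round(round_trip_efficiency(1000, 10_000, 6.0, q), 3))  # -> 0.967
```

With these assumptions, the 6 m pipe at 10 km keeps about 97% of the energy, in the same ballpark as the roughly 95% read off the chart.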
Multiple Penstocks Though one giant penstock will always be the most efficient for a given cross-sectional area, many hydroelectric systems do use multiple, smaller penstocks. (I mentioned that Hoover Dam uses 16, for example.) There could be a number of practical reasons for this: lighter and more readily sourced pipe sections, simpler mounting and support, easier and quicker installation, redundancy in case of damage, aesthetics, and probably others. For the next experiment, I’ll stay with 1 km of head, and 1 GW of total power, but will use four penstocks, each of which is half the diameter of the one large one. Because area scales as the square of diameter, the total cross sectional area of the four will be the same as the single large penstock, and each will be required to contribute 250 MW of power with a water flow of 25 cubic meters of water per second. Here is the result: Switching from one pipe, to four pipes of half the diameter, has hurt the efficiency significantly. In the worst case, the single 4 m pipe was able to operate at 10 km length (though at a useless 47% efficiency), but four 2 m pipes weren’t able to function at all at 10 km—there’s no efficiency value, not even zero. What this tells us is that the pressure of a 1 km drop, which is around 100 atmospheres or 1,400 psi, is just not enough to push 25 cubic meters per second of water through a 2 meter diameter pipe 10 km long, let alone do useful work on reaching the bottom. Things get less bleak as the pipes get bigger. A single 5 m pipe was about 84% efficient at 10 km; the equivalent four 2.5 m pipes are around 60% efficient at doing the same job. The single 6 m pipe was around 95% efficient, versus 85% for four 3 m pipes. Once we get to four 3.5 m pipes, we’re up to 93% efficiency, meeting our goal. This is at 20 km of water travel (10 km penstock), so with shorter penstocks, we might be fine with four 3 m pipes, which perform about the same as one 5 m pipe. 
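The halved-diameter penalty has a simple explanation: four pipes of half the diameter have the same total cross-sectional area, so the water velocity is unchanged, but L/D doubles for each pipe, and the Darcy–Weisbach head loss doubles with it. A quick sketch (the 0.015 friction factor is an assumed constant):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def head_loss(length_m, diameter_m, flow_m3s, f=0.015):
    """Darcy-Weisbach head loss for one pipe: h_f = f * (L/D) * v^2 / (2g)."""
    v = flow_m3s / (math.pi * (diameter_m / 2) ** 2)
    return f * (length_m / diameter_m) * v ** 2 / (2 * G)

# 1 GW at 1 km head: ~100 m^3/s total through a 10 km penstock
single = head_loss(10_000, 6.0, 100.0)   # one 6 m pipe, full flow
quad = head_loss(10_000, 3.0, 25.0)      # each of four 3 m pipes, 1/4 flow
print(round(quad / single, 2))           # -> 2.0: same velocity, twice the loss
```

(In reality the friction factor also shifts slightly with diameter and Reynolds number, so the measured ratio won't be exactly 2, but the geometric effect dominates.)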
We do sacrifice considerable hydrodynamic efficiency by splitting up the water into smaller pipes, but in some cases it could be a better option. In other cases, one big pipe will be best. An interesting use case for multiple penstocks goes back to the idea of incremental rollout: plan for two penstocks, but only build one at first. This might allow the plant to come online sooner, with less up-front expense, at a somewhat reduced peak power level. After the site has started paying for itself, the second penstock can be added.

Medium-head scenario

The analysis above was for the high-head scenario with a 1 km elevation difference. Now let's look at the medium-head scenario: 670 meters. We'll need a faster flow rate to get 1 GW of power, because less potential energy is stored in each cubic meter of water:

Q = P / (ρ g h), which with only about two-thirds of the head works out to roughly 50% more flow than before.

This faster flow rate in turn will mean we either need a bigger penstock diameter, or have to accept greater friction losses by putting water through the same pipe faster. Here is the outcome with a single penstock (as in the first chart), but with only 670 m head: The lower head, and resulting need for 50% more water flow, has resulted in much higher losses. The 4 meter pipe just can't provide 156 cubic meters per second at 5 km or more, because the pressure isn't sufficient. 5 meter pipe is also useless, and 6 m only meets our goal for the shortest penstock. We can get to 10 km with a 7 meter (23 foot) diameter penstock, with efficiency of 90%, but when we had 1 km of head we would have only needed about 5.5 meters (18 feet). This reinforces my belief that higher head is well worth the trouble: not only do we need less stored water for the same stored energy, we also can use a smaller, cheaper, easier-to-install penstock and get the same peak power, with less energy wasted. The longer the penstock, the more important this becomes.

Low-head scenario

I'll look at one more head value, because of a specific site I'll talk about in the next post.
This site has only 362 meters of head. (Though I'm calling that "low," it's actually in the middle of the pack when compared to existing pumped storage sites.) Now the flow rate must be:

Q = P / (ρ g h) = 10⁹ / (1,000 × 9.81 × 362) ≈ 280 m³/s

This large flow will call for a larger pipe to keep the losses reasonable: With this large flow, driven by less pressure, nothing under 6 meters diameter will even work. To get our target of 90% efficiency, if we need 7.5 km of penstock length, we will have to use 9 meter (30 ft) diameter pipe. That's very large. For 10 km penstocks, we would need 10 meter (33 ft) pipe. The only saving grace for the low-head scenario is that the pressure requirements are lower: 35 atmospheres (515 psi), compared to 97 atmospheres (1,422 psi) for a system with 1 km head. Even so, 10 meter diameter pipe will be expensive, and challenging to install.

What if we compromise on efficiency?

My target of 90% round-trip efficiency (which is a rule of thumb in the first place) might be expensive to achieve for some sites, particularly those with low head and long penstocks. What would happen if we lowered this target? The answer depends on how much of the time the site will need to operate at full power, either for storing or generating. The smaller this proportion is, the less input energy from sun and wind we will waste. Because water flow rate is a dominating influence on friction losses, when operating at less than peak power, the efficiency will be higher. The computer simulation that is needed to answer questions like these should certainly factor in what the friction losses will be as a function of flow rate, and not assume a constant value.

Roughness factor

The roughness of the pipe interior has a large impact on losses in some flow regimes, including ours. The wikipedia article about Darcy-Weisbach goes into some detail.
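For turbulent pipe flow, the friction factor can be estimated from the roughness height and the Reynolds number. As a sketch, here is the Swamee–Jain explicit approximation to the Colebrook equation, applied to the low-head worst case (roughly 276 m³/s through an 8 m pipe; the kinematic viscosity of water, 1.0 × 10⁻⁶ m²/s, is an assumed round value):

```python
import math

def swamee_jain(reynolds, roughness_m, diameter_m):
    """Explicit approximation to the Colebrook equation for the Darcy
    friction factor in fully turbulent pipe flow."""
    term = roughness_m / (3.7 * diameter_m) + 5.74 / reynolds ** 0.9
    return 0.25 / math.log10(term) ** 2

NU = 1.0e-6                              # kinematic viscosity of water, m^2/s
d = 8.0                                  # pipe diameter, m
v = 276.0 / (math.pi * (d / 2) ** 2)     # mean velocity, ~5.5 m/s
re = v * d / NU                          # Reynolds number
f = swamee_jain(re, 0.025e-3, d)         # epsilon = 0.025 mm roughness height
print(f"Re = {re:.1e}, f = {f:.4f}")     # -> Re = 4.4e+07, f = 0.0074
```

A smooth pipe at this scale comes out near the low end of the friction-factor range considered below, which is consistent with the post's choice of a 0.025 mm roughness height.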
We are well into the turbulent flow regime, with a Reynolds number on the order of 10⁷. In our low-head worst case, if I vary the friction factor, here is what happens to the penstock efficiency for an 8 meter diameter penstock, 10 km long:

friction factor    efficiency
0.015              84%
0.030              68%
0.060              36%

This is a very wide range of friction factor values, just to show how significant the effect is. For all the efficiency charts in this post, I used a value of 0.025 mm for epsilon, the roughness height (listed for "new smooth concrete" or "structural or forged steel" in the Moody diagram above) and computed friction factors from that. If a lining material can improve on that, so much the better.

Energy Losses In Canals

In the previous post, I discussed a design for water storage in canals (that is, long, narrow reservoirs) that run along contour lines. When water is flowing out of a canal, into a pipe (headrace or tailrace) connected to the underside of the canal, the surface of the water will be slightly depressed near the pipe, and water will flow into that area to make up for the water that's being removed. Put another way, due to gravity, the water will always be trying to maintain the same level at all points on the canal's surface, and will move accordingly. This is an example of "open-channel flow" because the top surface of the water isn't constrained by a solid boundary^6. (The floating cover isn't heavy or rigid enough to make any difference.) Like any water flow, this will cause energy loss due to friction. We need to quantify that and make sure it's small enough to be compatible with our overall efficiency goals. The canal flow is complex enough that it really calls for a simulation in computational fluid dynamics software, or even a scale model, but I'll do the best I can for now.
Since all hydro facilities have things like headraces, tailraces, inlets and transitions, and they seem to work well, I'll focus on the one unusual aspect here, which is the movement of water a significant distance along the length of the canal. Friction losses are always smaller when the water is moving slowly, and for a canal, the larger the cross-section of water, the slower it needs to move. So the losses will be greatest when the canal is at its lowest allowed level, because the cross-section is smallest then. (This is one reason we can't drain the canal completely.) The large canal design from the previous post, which is 250 m wide at the top, has a cross section of 7,500 square meters, of which 6,250 square meters were deemed usable. The difference, 1,250 square meters, is the smallest cross section for flow we'll allow. That works out to be a trapezoidal shape (of course) with a water depth of 15.5 meters. This is quite large: several 15-meter-diameter pipes would fit in that blue area, with some left over. So if we connect this to a single penstock of 10 m diameter or less, we'd expect the friction losses in the canal to be negligible compared to those in the pipe. (We might even be able to reduce the minimum further to get more usable volume from the canal.) To confirm this optimistic view, we need a formula for open-channel flow. Here's Manning's Equation:

Q = (1/n) × A × R^(2/3) × S^(1/2)

where Q is the flow rate, n is an empirical roughness coefficient, A is the cross-sectional area of the flow, R is the hydraulic radius, and S is the slope of the channel.

This isn't quite what we need. It solves for the flow rate (Q), which we already know. And it assumes the channel is sloped (S). Ours has to be horizontal, because it supports flow in both directions, filling and draining. But I think I can make it work. Suppose I plug in the knowns (or things I can estimate): the flow rate (Q), cross-sectional area (A), hydraulic radius (R), surface roughness (n). Then I could solve for the slope S, but what would be the point when I already know the canal isn't sloped?
The answer is that S would be the slope of a hypothetical canal, of the same dimensions and materials as ours, which would convey our known flow rate of water, getting just enough energy input along the way to keep up with the frictional losses incurred. And that energy input would come from, of course, gravity: the conversion of gravitational potential energy into kinetic energy due to the water flowing down a slope. So S would tell us the head loss in our real canal per unit length, which is what we want to know. (The water in our real canal will also become sloped, lower at the draining end, otherwise there'd be nothing driving the flow, even though the canal itself isn't sloped.) Converting the Manning formula to solve for S, we get:

S = (n Q / (A × R^(2/3)))²

Before we plug in numbers, let's look at the relationships. Losses (S) scale as the square of the flow rate (Q), which is true of flow in pipes as well; faster flow makes losses worse, strongly. A is the channel cross-sectional area, and losses scale as the inverse square of that; a bigger channel or pipe makes losses smaller, strongly. R is the hydraulic radius, which is the area (A) divided by the wetted perimeter (the total length of the channel's cross section that is in contact with the flowing water). This wetted perimeter is a bad thing for flow, because the water velocity right at a channel surface is very low due to friction against the non-moving surface. R will have its largest value for a circle, which is the most "compact" shape (least channel surface per unit area). Our trapezoidal channel has lower R, because it has more wetted surface than a circle of the same area. An extremely wide, shallow channel (the opposite of "compact") would be even worse. As R goes down, losses go up, slightly worse than linearly. The last input is n, the roughness coefficient^7. Channels with rougher surfaces lose more energy to friction, which isn't surprising. So we want the smoothest possible surfaces touching the water.
(I would think that since the floating cover does touch the water at all times, it will exert drag on the flow, and so its roughness matters as much as that of the canal sides and bottom.)

Losses in the larger canal design

Now let's compute an actual value for S. First we need to compute some of the inputs. For the flow rate Q, we might as well use the worst case: the biggest flow we considered above, which was 276 m³/s. The cross-sectional area A is the 1,250 square meter minimum from above, and dividing that by the wetted perimeter gives the hydraulic radius R. Plugging these in yields S ≈ 7.3 × 10⁻⁷. How much slope is that? A canal one kilometer long would have a drop of 0.73 millimeters. That's utterly negligible, to the point of being a little suspect. There are (at least) three ways to interpret this number:

1. I made a mistake (always a possibility).
2. Manning's equation, which is empirical, is outside its range of applicability here and is giving invalid results^8.
3. The number is correct (within an order of magnitude or two). This would mean that the canal is so large compared to the flow rate that almost no energy is lost to friction.

Losses in the smaller canal design

What about our smaller canal design (still pretty big at 100 meters wide)? Here are the inputs:

flow rate: 276 m³/s
area: 200 m²
wetted perimeter: 93 m
hydraulic radius: 200 / 93 ≈ 2.15 m

Plugging these in gives S ≈ 9.9 × 10⁻⁵. This is roughly 2 orders of magnitude (100x) worse than the big canal, due to the much smaller area for water flow. Over a one-kilometer-long canal, the head loss would be 99 millimeters (about 4 inches), which is still negligible. For both canal designs, though we'd certainly want to test a physical or CFD model, energy losses in canals don't look worrisome. A final place where friction losses are a concern is where the pipes (headraces and tailraces) meet the reservoirs (canals in this case). This is a well-studied topic, and while energy is always lost whenever the flow is disturbed (by a change in direction, a restriction, etc.), the key is to make the change as smoothly and gradually as possible.
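The smaller-canal arithmetic can be checked in a few lines of Python. The roughness coefficient n = 0.012 (a typical value for smooth concrete) is my assumption; it reproduces the roughly 99 mm/km figure:

```python
def manning_slope(q, area, wetted_perimeter, n=0.012):
    """Energy-grade slope from Manning's equation solved for S:
    S = (n*Q / (A * R^(2/3)))^2, with hydraulic radius R = A / P."""
    r = area / wetted_perimeter
    return (n * q / (area * r ** (2.0 / 3.0))) ** 2

# smaller canal design: 276 m^3/s through a 200 m^2 minimum cross-section
s = manning_slope(276.0, 200.0, 93.0)
print(f"head loss: {s * 1000 * 1000:.0f} mm per km")   # -> head loss: 99 mm per km
```

Because S scales as n², an estimate of the combined roughness of the liner and the floating cover is the main thing a physical or CFD model would need to pin down.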
So the connection at the bottom of a canal should be bell-shaped, and bends in the headrace, tailrace, and penstock should also be as smooth and gradual as possible. I've done my best to find show-stopping flaws in the hydrodynamics of my plan, but it seems to be holding up so far. In the next post, I'll apply this design to a real site.

Previous: Encapsulated Pumped Storage, Series 2, Part 1: More Water Containment Options
Next: Encapsulated Pumped Storage, Series 2, Part 3: An Example System

1. In 2600 BC, according to this History of Plumbing Timeline
2. Darcy–Weisbach equation (wikipedia)
3. "The Darcy formula or the Darcy-Weisbach equation as it tends to be referred to, is now accepted as the most accurate pipe friction loss formula, and although more difficult to calculate and use than other friction loss formula [sic], with the introduction of computers, it has now become the standard equation for hydraulic engineers." (https://www.pipeflow.com/
4. adapted from https://www.pipeflow.com/pipe-pressure-drop-calculations/pipe-friction-loss.
6. It's really only open-channel flow in a steady state and when far from the outlet. Near the outlet, it's analogous to drainage from a bathtub, where there is a large downward component to the flow. (As usual, all these phenomena are reversed when water is being pumped into a tank rather than draining out of it.)
7. Unless otherwise stated, values from https://www.caee.utexas.edu/prof/maidment/CE365KSpr14/Visual/OpenChannels.pdf
8. Understanding Open-Channel Flow Equations for Hydro Applications says: "It is known that Manning's equation loses accuracy with very steep or shallow slopes"
Research - Baptiste COQUINOT

I am a theoretical physicist interested in dynamics, condensed matter and statistical physics. Below are my past and current subjects of research.

Interactions and Dynamics at the Nanoscale Solid-Liquid Interface

Credits: Maggie Chiang (Simon's Foundation)

At the solid-liquid interface, the fluid interacts with both the phonons and the electrons of the solid. These interactions, called respectively phonon drag and Coulomb drag, can only be explained properly using a quantum formalism. While the collisions (generating the phonon drag) are at the origin of the "classical" friction, the coupling between the collective modes of the fluid and the electronic excitations at the surface of the solid (generating the Coulomb drag) is the source of a new kind of friction, which is dominant in certain conditions of nanofluidics. In particular, this provides an understanding of the surprising friction in carbon nanotubes, where the friction drops as the radius is reduced. When imposing a flow, the solid reaches a quasi-equilibrium state where its different quantum particles fulfil a modified fluctuation-dissipation theorem which includes a frequency shift. In particular, we predict the generation of an electric current, which has been measured and studied experimentally. The form of the current is tunable by choosing the internal structure of the solid: this is a quantum effect. Such a flow-induced electric current may have crucial applications if the solid is well engineered: this opens the door to sensing and controlling the flow velocity at the nanoscale, and to energy production at larger scales. Physically, the liquid exchanges momentum with the phonons and the electrons through, respectively, the phonon drag and the Coulomb drag: this is like a wind blowing on the Fermi sea. In most situations the former is dominant while the latter is negative: this means that the momentum path is from the flow to the phonons, then the electrons, and then back to the flow.
In practice this phenomenon reduces the total friction. This opens the door to quantum engineering of the friction by controlling the internal structure of the solid. Moreover, everything may be affected by confinement, paving the way to new experimental studies. Selected References: [4] Baptiste Coquinot, Lydéric Bocquet, Nikita Kavokine, Quantum feedback at the solid-liquid interface: flow-induced electronic current and its negative contribution to friction, Physical Review X. An electronic current driven through a conductor can induce a current in another conductor through the famous Coulomb drag effect. Similar phenomena have been reported at the interface between a moving fluid and a conductor, but their interpretation has remained elusive. Here, we develop a quantum-mechanical theory of the intertwined fluid and electronic flows, taking advantage of the non-equilibrium Keldysh framework. We predict that a globally neutral liquid can generate an electronic current in the solid wall along which it flows. This hydrodynamic Coulomb drag originates from both the Coulomb interactions between the liquid’s charge fluctuations and the solid’s charge carriers, and the liquid-electron interaction mediated by the solid’s phonons. We derive explicitly the Coulomb drag current in terms of the solid’s electronic and phononic properties, as well as the liquid’s dielectric response, a result which quantitatively agrees with recent experiments at the liquid-graphene interface. Furthermore, we show that the current generation counteracts momentum transfer from the liquid to the solid, leading to a reduction of the hydrodynamic friction coefficient through a quantum feedback mechanism. Our results provide a roadmap for controlling nanoscale liquid flows at the quantum level, and suggest strategies for designing materials with low hydrodynamic friction. 
[3] Mathieu Lizée, Alice Marcotte, Baptiste Coquinot, Nikita Kavokine, Karen Sobnath, Clément Barraud, Ankit Bhardwaj, Boya Radha, Antoine Niguès, Lydéric Bocquet, Alessandro Siria, Strong electronic winds blowing under liquid flows on carbon surfaces, Physical Review X. The interface between a liquid and a solid is the location of a plethora of intricate mechanisms at the nanoscale, at the root of their specific emerging properties in natural processes or technological applications. However, while the structural properties and chemistry of interfaces have been intensively explored, the effect of the solid-state electronic transport at the fluid interface has been broadly overlooked up to now. It has been reported that water flowing against carbon-based nanomaterials, such as carbon nanotubes or graphene sheets, does induce electronic currents, but the mechanism at stake remains controversial. Here, we unveil the molecular mechanisms underlying the hydro-electronic couplings by investigating the electronic conversion under flow at the nanoscale. We use a tuning-fork Atomic Force Microscope (AFM) to deposit and displace a micrometric droplet of both ionic and non-ionic liquids on a multilayer graphene sample, while recording the electrical current across the carbon flake. We report measurements of an oscillation-induced current which is several orders of magnitude larger than previously reported for water on carbon, and further boosted by the presence of surface wrinkles on the carbon layer. Our results point to a peculiar momentum transfer mechanism between fluid molecules and charge carriers in the carbon walls mediated by phonon excitations in the solid. Our findings pave the way for active control of fluid transfer at the nanoscale by harnessing the complex interplay between collective excitations in the solid and the molecules in the fluid. 
Geometric Theory of Mechanics and Thermodynamics The standard formalism to study dynamics is Hamiltonian mechanics, which is founded on a symplectic form (or Poisson bracket) and a dynamical function: the Hamiltonian. Such a formalism is powerful for studying the dynamics of point objects and fluids in many physical situations and is well suited for numerical integration. However, Hamiltonian dynamics does not include dissipation and can therefore only describe systems at thermodynamic equilibrium. Yet, some dissipative models can be written in a Hamiltonian structure using non-standard formulations of the theory. This is, in particular, the case with b-symplectic geometry, which allows singularities in the phase space. These situations are fascinating for numerical and mathematical studies of dynamics and apply to basic dissipative systems. Another approach is the metriplectic (or GENERIC) framework, which has been developed to address the limitations of the standard Hamiltonian methods. In this formalism the symplectic form is completed by a pseudo-Riemannian metric and the free energy is used as the dynamical function. Under reasonable assumptions, the two principles of thermodynamics arise from the geometric structure. Such a formalism is well suited to study close-to-equilibrium systems where the thermodynamics is described by Onsager’s linear response. This applies in particular to the majority of models of fluid dynamics, like the Navier-Stokes equations. In these models, the metric (or dissipative bracket) contains the microscopic physics and is the geometric realisation of Onsager’s transport tensor. It can also be derived as an emerging property from large deviation theory and kinetic theory. 
Selected References: [2] Baptiste Coquinot, Pau Mir, Eva Miranda, Singular cotangent models and complexity in fluids with dissipation, Physica D. In this article we analyze several mathematical models with singularities where the classical cotangent model is replaced by a b-cotangent model. We provide physical interpretations of the singular symplectic geometry underlying b-cotangent bundles, featuring two models: the canonical (or non-twisted) model and the twisted one. The first one models systems on manifolds with boundary, and the twisted model represents Hamiltonian systems where the singularity of the system is in the fiber of the bundle. The twisted cotangent model includes (for linear potentials) the case of fluids with dissipation. We relate the complexity of the fluids in terms of the Reynolds number and the (non)-existence of cotangent lift dynamics. We also discuss more general physical interpretations of the twisted and non-twisted b-symplectic models. These models offer a Hamiltonian formulation for systems which are dissipative, extending the horizons of Hamiltonian dynamics and opening a new approach to study non-conservative systems. [1] Baptiste Coquinot, Philip J. Morrison, A General Metriplectic Framework With Application To Dissipative Extended Magnetohydrodynamics, Journal of Plasma Physics. General equations for conservative yet dissipative (entropy producing) extended magnetohydrodynamics are derived from two-fluid theory. Keeping all terms generates unusual cross-effects, such as thermophoresis and a current viscosity that mixes with the usual velocity viscosity. While the Poisson bracket of the ideal version of this model has already been discovered, we determine its metriplectic counterpart that describes the dissipation. 
This is done using a new and general thermodynamic point of view to derive dissipative brackets, a means of derivation that is natural for understanding and creating dissipative dynamics without appealing to underlying kinetic theory orderings. Finally, the formalism is used to study dissipation in the Lagrangian variable picture where, in the context of extended magnetohydrodynamics, non-local dissipative brackets naturally emerge.
Introduction to Solutions to Systems of Equations What you’ll learn to do: Define and identify solutions for systems of equations. The way a river flows depends on many variables including how big the river is, how much water it contains, what sorts of things are floating in the river, whether or not it is raining, and so forth. If you want to best describe its flow, you must take into account these other variables. A system of linear equations can help with that. A system of linear equations consists of two or more linear equations made up of two or more variables such that all equations in the system are considered simultaneously. You will find systems of equations in every application of mathematics. They are a useful tool for discovering and describing how behaviors or processes are interrelated. It is rare to find, for example, a pattern of traffic flow that is only affected by weather. Accidents, time of day, and major sporting events are just a few of the other variables that can affect the flow of traffic in a city. In this section, we will explore some basic principles for graphing and describing the intersection of two lines that make up a system of equations (which will make you one step closer to claiming your million dollar prize from the Clay Mathematics Institute!). Specifically, in this section you’ll learn how to: • Evaluate ordered pairs as solutions to systems • Classify solutions for systems • Graph systems of equations
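The first skill listed — evaluating an ordered pair as a solution — can be sketched in a few lines. This is a minimal illustration (the helper name and the example system are ours, not from the lesson): a pair (x, y) solves a system only if it satisfies every equation simultaneously.

```python
def is_solution(x, y, equations):
    """Return True when (x, y) satisfies every equation in the system.

    equations: list of functions f(x, y) that return True when satisfied.
    """
    return all(eq(x, y) for eq in equations)

# Example system:  y = 2x + 1   and   x + y = 4
system = [
    lambda x, y: y == 2 * x + 1,
    lambda x, y: x + y == 4,
]

print(is_solution(1, 3, system))  # True: 3 = 2*1 + 1 and 1 + 3 = 4
print(is_solution(0, 1, system))  # False: the first equation holds, the second fails
```

A pair that satisfies one equation but not the other, like (0, 1) above, lies on one line but not at the intersection of the two.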
14404 – Version of an IDBDatabase from an aborted version change transaction needs to be specified

Consider the following for a database that does not already exist:

    var db;
    var request = indexedDB.open(name, 1);
    request.onupgradeneeded = function() {
      db = request.result;
      throw "STOP";
    };
    request.onerror = function() {
      // What should the version be here? The new version? The old version? Something else?
    };

By my reading of the spec, since the version is not supposed to change throughout the lifetime of an IDBDatabase, the answer is the new version, which seems quite unintuitive since the upgrade failed.

Comment 1 Jonas Sicking (Not reading bugmail) 2011-10-07 17:27:35 UTC
Microsoft requested .version should be set to 'undefined' before firing the "abort" event in a recent thread. So the question also applies in the following situation:

    var db;
    var request = indexedDB.open(name, 1);
    request.onupgradeneeded = function() {
      db = request.result;
      trans = request.transaction;
      trans.onabort = function() {
        alert("in onabort: " + db.version);
      };
      throw "STOP";
    };
    request.onerror = function() {
      alert("in onerror: " + db.version);
    };

Comment 2 Jonas Sicking (Not reading bugmail) 2011-11-02 15:57:25 UTC
I'd rather not set the version to a non-integer value. I.e. I'd like for the sake of simplicity to keep the type of .version to simply be a non-nullable long long.

Comment 3 Israel Hilerio [MSFT] 2011-11-30 21:19:35 UTC
The only way we get to onupgradeneeded when the version is 1 is when there is no existing DB in the system. The reason is that we are not allowed to pass in a version of 0 to the open API and versions are unsigned long long values. Therefore, if we fail inside the onupgradeneeded handler, I would expect the database version to be undefined, thus alerting the developer that the database creation failed. The reason for the undefined is that no new version existed before the open API was called. This will keep things consistent with our request. 
I understand we internally start with a 0 value for version when the db is created, but I don't believe we want to expose that value to developers.

Comment 4 Israel Hilerio [MSFT] 2011-11-30 21:23:52 UTC
Jonas, ignore my last request, I was answering the wrong thread. What do you then suggest we should surface as the value of version if the initial creation of the database fails? Are you suggesting we surface the 0 value and explain to developers that if you get this value things went wrong?

Comment 5 Eliot Graff 2011-12-27 22:04:37 UTC
Added step 9.5 to VERSION_CHANGE transaction steps, including table of default values for IDBDatabase attributes. Thanks for the bug.

Comment 6 Jonas Sicking (Not reading bugmail) 2012-01-24 03:52:24 UTC
The change made here is a bit unclear. The text says "If the transaction is aborted and there is an existing database, the values remain unchanged". There's two things that are unclear here. By "database" you mean the IDBDatabase(Sync) instance, right? Not the on-file database. I think we should clarify that. Second, does "remain unchanged" mean unchanged compared to the on-disk values, or unchanged based on the values on the IDBDatabase instance before the transaction was aborted? Almost the same questions also apply to the text saying that if the VERSION_CHANGE transaction used to create a database is aborted, "the database will remain in the system with the default attributes". Is "database" referring to the on-disk database or the IDBDatabase instance? I would have assumed that the on-disk database is simply deleted if the transaction that created it is aborted. I think I would prefer to not have any special treatment for VERSION_CHANGE transactions that create a database vs. ones that just upgrade the version number. This seems simplest from an implementation point of view. 
Also, there's the minor nit that aborting can happen outside of the upgradeneeded event handler too, since the transaction can span multiple success and error events. I would recommend something like the following text for step 9.5: If for any reason the VERSION_CHANGE transaction is aborted, the IDBDatabase instance which represents <var>connection</var> will remain unchanged. I.e. its <code>name</code>, <code>version</code> and <code>objectStoreNames</code> properties will remain the values they were before the transaction was aborted. (Note that objectStoreNames per the IDL is never null, though it can be an empty list).

Comment 7 Eliot Graff 2012-03-09 20:51:14 UTC
I made the suggested change in today's Editor's draft.

Comment 8 Jonas Sicking (Not reading bugmail) 2012-03-26 11:52:41 UTC
Actually, this is still all sorts of wrong. The text for the individual properties still says that they remain unchanged, whereas in reality it seemed like we wanted to say that they do change value when the transaction is aborted. And step 9.5 of "versionchange transaction steps" still says that the properties will "remain unchanged". And it seems very out-of-place to say what the "default values" are when there's no text to tie in to the default values. And I believe there was agreement on the list to revert to an empty list, rather than null, for objectStoreNames. Sorry guys, but I think this still needs more work :(

Comment 9 Eliot Graff 2012-05-08 15:09:44 UTC
Section 4.9 Steps for aborting a "versionchange" transaction was added in the Editor's Draft of 7 May.
CPM Homework Help The parabola $y = -(x - 3)^2 + 4$ is graphed below. Use four trapezoids of equal width to approximate the area under the parabola for $1 ≤ x ≤ 5$. Is this area an overestimate or an underestimate of the true area under the parabola? Explore this using the Estimating Area Under a Curve eTool (Desmos). The height of each trapezoid is $1$. The bases are determined by the function.
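The four-trapezoid sum can be checked numerically. The sketch below (our own code, not part of the CPM problem set) evaluates the parabola at x = 1, 2, 3, 4, 5 and averages adjacent values; comparing against the exact integral, 32/3, shows which way the estimate errs.

```python
def f(x):
    # The parabola from the problem: y = -(x - 3)^2 + 4
    return -(x - 3) ** 2 + 4

def trapezoid_sum(f, a, b, n):
    """Approximate the area under f on [a, b] with n equal-width trapezoids."""
    w = (b - a) / n
    xs = [a + i * w for i in range(n + 1)]
    return sum((f(xs[i]) + f(xs[i + 1])) / 2 * w for i in range(n))

approx = trapezoid_sum(f, 1, 5, 4)
exact = 32 / 3  # integral of -(x-3)^2 + 4 from 1 to 5

print(approx)          # 10.0
print(exact - approx)  # positive difference: the trapezoids underestimate
```

Because the parabola is concave down, each chord connecting sample points lies below the curve, so the trapezoid sum (10) is an underestimate of the true area (32/3 ≈ 10.667).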
Understanding Equivalent Ratios | Math Ratios What are Equivalent Ratios? Welcome to this in-depth tutorial on equivalent ratios. We will embark on an exciting journey to explore and understand this fundamental mathematical concept, which has a wide range of applications in various fields, including geometry, physics, and economics. Understanding Ratios Before we dive into equivalent ratios, let's first revisit the concept of ratios. In simple terms, a ratio is a comparison or relation between two or more quantities. The study of ratios can be traced back to the ancient Greeks, notably the mathematician Euclid, who made extensive use of ratios in his geometric proofs. Introduction to Equivalent Ratios Equivalent ratios, as the term suggests, are ratios that express the same relationship between quantities. For instance, the ratios 1:2 and 2:4 are equivalent because they both represent the same comparison - one quantity being half of the other. Determining Equivalent Ratios There's a simple rule to find out if two ratios are equivalent: cross-multiply. If the product of the means (inner terms) equals the product of the extremes (outer terms), then the two ratios are equivalent. Real-Life Applications of Equivalent Ratios Equivalent ratios are more than just an abstract mathematical concept. They have real-world applications. In cooking, for example, we use equivalent ratios to scale recipes up or down. In map reading and model building, equivalent ratios (or scale factors) help us understand the actual size of objects. Equivalent Ratios in Geometry Geometry, the branch of mathematics concerned with shapes and their properties, makes extensive use of equivalent ratios. The Greek mathematician Thales, for instance, used equivalent ratios to calculate distances that couldn't be measured directly, such as the height of pyramids. Equivalent Ratios in Physics Equivalent ratios also play a critical role in physics. 
For example, in mechanics, ratios of displacement, velocity, and acceleration often need to be equivalent for various calculations. Similarly, in electrical circuits, Ohm's law presents a constant ratio of voltage to current, leading to equivalent ratios when we compare different parts of a circuit. Equivalent Ratios in Economics In economics, equivalent ratios are often used to understand relationships between different economic variables. For instance, ratios such as cost to income, output to input, and risk to reward, when equated, can provide useful insights for economic analysis and decision making. Calculating Equivalent Ratios To calculate equivalent ratios, we can either multiply or divide both terms of a ratio by the same number (other than zero). This operation maintains the original relationship between the quantities. Practice Makes Perfect The best way to understand equivalent ratios is by practicing. Start by identifying equivalent ratios in simple scenarios, then gradually work your way up to more complex situations. Remember, like most mathematical concepts, equivalent ratios become more intuitive with practice. Understanding equivalent ratios is fundamental to mastering many areas of mathematics and applying this knowledge to real-world scenarios. As you delve deeper into this topic, remember the words of the renowned mathematician Carl Friedrich Gauss: "Mathematics is the queen of the sciences and number theory is the queen of mathematics." By studying equivalent ratios, you are indeed studying the very queen of mathematics. So, keep exploring, keep practicing, and enjoy the journey! Understanding Equivalent Ratios Tutorials If you found this ratio information useful then you will likely enjoy the other ratio lessons and tutorials in this section:
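The cross-multiplication rule and the scaling rule described above both fit in a few lines of code. This is a small illustration of ours (the helper name is not from the tutorial): a:b and c:d are equivalent exactly when a·d = b·c, and multiplying both terms by the same nonzero number preserves the ratio.

```python
from fractions import Fraction

def are_equivalent(a, b, c, d):
    """Cross-multiplication test: a:b is equivalent to c:d iff a*d == b*c."""
    return a * d == b * c

print(are_equivalent(1, 2, 2, 4))   # True  (1:2 and 2:4 express the same relationship)
print(are_equivalent(2, 3, 4, 5))   # False

# Scaling both terms by the same nonzero number keeps the ratio equivalent:
print(Fraction(3, 4) == Fraction(3 * 5, 4 * 5))  # True
```

Representing a ratio as a `Fraction` makes the equivalence automatic, since fractions are compared in lowest terms.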
Convergence to a distribution A non-stats colleague asked me yesterday about what happens to an MCMC chain when the posterior is multimodal. I believe their mindset is that convergence happens to a point, since this is the way many algorithms work, e.g. hill-climbing algorithms. MCMC chains, as typically used in Bayesian analysis, don't converge to a point; rather, they converge to a distribution. So an MCMC chain will explore the entire posterior, which includes all modes of that posterior. Take a simple example where the posterior is an equal-weighted mixture of two normal distributions, both with variance 1. The means of these distributions are plus and minus some known constant; in the example below I used 3. If the constant is sufficiently large, then the distribution is multimodal and an MCMC algorithm will alternate (although not every iteration) between the two modes as it samples. Below I implemented a random-walk Metropolis algorithm that samples from this distribution. From the partial traceplot it is clear that this algorithm gets stuck in each mode for a few iterations before making its way to the other mode. Of course problems can get much more complicated, and more sophisticated algorithms, e.g. simulated annealing and parallel tempering (see chapter 10 of this book for recent developments), are necessary for exploring these posteriors. Edit: Trying out . How can I decrease the font size? 15 December 2011
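The post's original implementation is not reproduced here. The sketch below is our own minimal random-walk Metropolis sampler for the same target, the equal-weight mixture 0.5·N(−3,1) + 0.5·N(3,1) (the constant 3 is from the post; the step size and seed are our tuning choices).

```python
import math
import random

def target(x):
    # Unnormalized mixture density; the normalizing constant cancels in the
    # Metropolis acceptance ratio, so it can be dropped.
    return math.exp(-0.5 * (x - 3) ** 2) + math.exp(-0.5 * (x + 3) ** 2)

def metropolis(n_iter, x0=0.0, step=2.5, seed=42):
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x))
        if rng.random() < target(proposal) / target(x):
            x = proposal
        chain.append(x)  # on rejection the chain repeats the current value
    return chain

chain = metropolis(20000)
# The chain converges to the distribution, not a point: over many iterations
# it spends time in both modes, near -3 and near +3.
print(min(chain) < -2, max(chain) > 2)
```

A traceplot of `chain` shows the behavior described in the post: stretches of samples around one mode, punctuated by occasional jumps to the other.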
On C_J and C_T in the Gross-Neveu and O(N) models We apply large N diagrammatic techniques for theories with double-trace interactions to the leading corrections to C_J, the coefficient of a conserved current two-point function, and C_T, the coefficient of the stress-energy tensor two-point function. We study in detail two famous conformal field theories in continuous dimensions, the scalar O(N) model and the Gross-Neveu (GN) model. For the O(N) model, where the answers for the leading large N corrections to C_J and C_T were derived long ago using analytic bootstrap, we show that the diagrammatic approach reproduces them correctly. We also carry out a new perturbative test of these results using the O(N) symmetric cubic scalar theory in 6 − ε dimensions. We go on to apply the diagrammatic method to the GN model, finding explicit formulae for the leading corrections to C_J and C_T as a function of dimension. We check these large N results using regular perturbation theory for the GN model in 2 + ε dimensions and the Gross-Neveu-Yukawa model in 4 − ε dimensions. For small values of N, we use Padé approximants based on the 2 + ε and 4 − ε expansions to estimate the values of C_J and C_T in d = 3. For the O(N) model our estimates are close to those found using the conformal bootstrap. For the GN model, our estimates suggest that, even when N is small, C_T differs by no more than 2% from that in the theory of free fermions. We find that the inequality applies both to the GN and the scalar O(N) models in d = 3. All Science Journal Classification (ASJC) codes • Statistical and Nonlinear Physics • Statistics and Probability • Modeling and Simulation • Mathematical Physics • General Physics and Astronomy • conformal field theory • large N expansion • renormalization group
Programming Puzzle: iCar Oct 11, 2015 Here’s a programming puzzle I came up with. It was selected as one of eight problems in The 2015 Nordic Collegiate Programming Contest. The full set of problems for this competition can be found here and the solutions here. You are at home and about to drive to work. The road you will take is a straight line with no speed limit. There are, however, traffic lights precisely every kilometer, and you cannot pass a red light. The lights change instantaneously between green and red, and you can pass a light whenever it is green. You can also pass through a light at the exact moment it changes colour. There are no traffic lights at the start or the end of the road. Now your car is special; it is an iCar, the first Orange car, and it has only one button. When you hold down the button, the car accelerates at a constant rate of 1 m/s^2; when you release the button the car stops on the spot. You have driven to work many times, so you happen to know the schedules of the traffic lights. The problem How quickly can you get to work? The first line contains a single integer n, the length of the road in kilometers (1 ≤ n ≤ 16). Each of the next n−1 lines contains 3 integers t[i], g[i] and r[i]: the first time the ith light will switch from red to green after the moment you start driving the car, the green light duration, and the red light duration (40 ≤ g[i], r[i] ≤ 50; 0 ≤ t[i] < g[i] + r[i]). Times are given in seconds. You may assume that any light with t[i] > r[i] is green at the time you start driving the car, and switches to red t[i] − r[i] seconds later. Output the minimum time required to reach the end of the road. Answers within a relative or absolute error of 10^−6 will be accepted. Example Input 1 Solution 1 Example Input 2 Solution 2 Example Input 3 Solution 3
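A solver is left to the reader, but the car's kinematics reduce to one formula worth writing down. This is our own sketch (not the contest solution): under constant acceleration a = 1 m/s² from a standing start, d = t²/2, so both the travel time and the speed after d meters are √(2d).

```python
import math

def time_from_rest(d):
    """Seconds to cover d meters from a standing start at 1 m/s^2: t = sqrt(2d)."""
    return math.sqrt(2.0 * d)

def speed_after(d):
    """Speed in m/s after covering d meters from rest: v = a*t = sqrt(2d)."""
    return math.sqrt(2.0 * d)

# Reaching the first light, 1 km away, without ever releasing the button:
t1 = time_from_rest(1000)   # ~44.72 s
v1 = speed_after(1000)      # ~44.72 m/s when crossing the light
print(round(t1, 2), round(v1, 2))
```

Note why the puzzle is subtle: releasing the button resets the speed to zero, so any stop restarts the √(2d) clock from the stopping point, and the choice of where (and whether) to stop for a red light trades waiting time against the speed carried through later lights.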
Careful "for" Loops Chris Torek torek at elf.ee.lbl.gov Wed Mar 20 05:45:19 AEST 1991

In article <MCDANIEL.91Mar19124111 at dolphin.adi.com> mcdaniel at adi.com (Tim McDaniel) writes:
>Case a)
>	semi-open intervals, like [low,high). low is in the range, but
>	high is not. If high==0, the range extends from low through all
>	higher memory. The problem is that high==0 is likely to prove a
>	special case in iteration.
>Case b)
>	closed intervals, like [low,high]. The range is inclusive: high is
>	in the range. The problem is that upper bounds look ugly, like
>	0x01ffffff.
>... Zero-length ranges are a possibility.

Languages that do this sort of thing usually take closed intervals, although there are ways to handle both. For instance, the loop

	for i := 1 to 300 do foo

(in Pascal) is usually generated as

	if (1 <= 300) {
		for (i = 1;; i++) {
			foo;
			if (i == 300)
				break;
		}
	}

(equivalent C code). (Yes, it is OK to leave `i' unset; Pascal index variables may not be examined after the loop ends, until they are set to some other values.) If there is a step, the loop must compute the terminator value (or the number of iterations):

	for i := 1 to 4 by 2

should compare against 3 (for ending) or compute a count of ((4-1)+1)/2 iterations. In some cases this can be done at compile time. To do the same for half-open intervals, simply subtract one from the end value (using unsigned arithmetic, if necessary, to avoid underflow) and do the same. The loop

	for i in [m..n) do foo;

can be `compiled' to

	if (m < n) {
		stop = n - 1;
		for (i = m;; i++) {
			foo;
			if (i == stop)
				break;
		}
	}

Iteration count computations are identical except that instead of

	((end - start) + 1) / incr

you simply use

	(end - start) / incr

In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427) Berkeley, CA Domain: torek at ee.lbl.gov
More information about the Comp.lang.c mailing list
Discover the World of Mathematical Games and Puzzles with Kubiya Games Mathematical puzzles are an integral part of recreational mathematics. They challenge the solver to find a solution that satisfies the given conditions and often require mathematics and creative thinking to solve. They are not typically competitive and instead, the solver is expected to solve the puzzle on their own. There are several types of mathematical puzzles including numbers, arithmetic, and algebra puzzles, combinatorial puzzles, analytical or differential puzzles, probability puzzles, tiling, packing, and dissection puzzles, and puzzles that involve a board. One of the most popular types of mathematical puzzles is the logic puzzle. These puzzles require the solver to use logic and deduction to find the solution. For example, Sudoku is a well-known logic puzzle that requires the solver to fill in the numbers in a 9x9 grid so that each column, row, and 3x3 box contains all the numbers from 1 to 9. Another popular type of mathematical puzzle is the combinatorial puzzle. These puzzles involve arranging elements to satisfy certain conditions. For example, the 15 Puzzle involves sliding tiles to rearrange them into the correct order. The Rubik's Cube is another well-known combinatorial puzzle that involves twisting and turning a cube to rearrange its colors. The Pentomino puzzle is a polygon made of 5 equal-sized squares connected edge-to-edge. It is a popular puzzle and game subject in recreational mathematics. There are 12 different free pentominoes when rotations and reflections are not considered distinct, 18 when only reflections are considered distinct, and 63 when both rotations and reflections are considered distinct. The Soma cube is a solid dissection puzzle made up of 7 unit cubes. 
The pieces of the cube consist of all possible combinations of three or four unit cubes joined at their faces, such that at least one inside corner is formed, resulting in 1 combination of 3 cubes and 6 combinations of 4 cubes, which make up the 27 cells of a 3x3x3 cube. The puzzle has 240 distinct solutions and is made of 6 polycubes of order 4 and one of order 3. It has been used as a task to measure individuals' performance and effort in psychology experiments, with one possible way of solving the cube being to place the "T" piece in the bottom center of the large cube. Probability puzzles are another type of mathematical puzzle that involves using probability and statistics to find the solution. The Monty Hall problem is a well-known probability puzzle that asks the solver to consider a game show where a prize is hidden behind one of three doors. The solver is asked to choose a door and then the host opens another door to reveal a goat. The solver is then given the opportunity to switch their choice or stick with their original choice. The puzzle asks the solver to determine the probability of winning the prize if they switch or stick with their original choice. Puzzles that involve a board are also popular. Peg Solitaire is a board puzzle that involves jumping pegs to remove them from the board until only one peg is left. Conway's Game of Life is a board puzzle that involves cells that can be alive or dead. The solver sets the initial conditions and then the rules of the puzzle determine all subsequent changes and moves. Tiling, packing, and dissection puzzles involve arranging geometric shapes to fill a space. The Bedlam cube is a tiling puzzle that involves arranging cubes to fill a space. The Mutilated chessboard problem is a packing puzzle that involves finding the maximum number of pieces that can fit into a space. 
The rules of the game are to make you think, not to stop you thinking

Mathematical puzzles often require creative thinking to find a solution. As Piet Hein, the Danish poet, mathematician, and inventor of the Soma cube, said, "The rules of the game are to make you think, not to stop you thinking." These puzzles are not only entertaining but also help to develop problem-solving skills and logical thinking, and they are sometimes used in the classroom to teach elementary school math and problem-solving techniques.

Mathematical puzzles are an important part of recreational mathematics. They challenge the solver to find a solution that satisfies specific conditions and often require both mathematics and creative thinking. Popular examples include logic puzzles, combinatorial puzzles, probability puzzles, board puzzles, and tiling, packing, and dissection puzzles. Cheers, as always, and happy puzzling!
Ch. 4 Key Terms - Introductory Statistics | OpenStax

Bernoulli Trials: an experiment with the following characteristics: 1. There are only two possible outcomes, called "success" and "failure," for each trial. 2. The probability p of a success is the same for any trial (so the probability q = 1 − p of a failure is the same for any trial).

Binomial Experiment: a statistical experiment that satisfies the following three conditions: 1. There are a fixed number of trials, n. 2. There are only two possible outcomes, called "success" and "failure," for each trial. The letter p denotes the probability of a success on one trial, and q denotes the probability of a failure on one trial. 3. The n trials are independent and are repeated using identical conditions.

Binomial Probability Distribution: a discrete random variable (RV) that arises from Bernoulli trials; there are a fixed number, n, of independent trials. "Independent" means that the result of any trial (for example, trial one) does not affect the results of the following trials, and all trials are conducted under the same conditions. Under these circumstances the binomial RV X is defined as the number of successes in n trials. The notation is X ~ B(n, p). The mean is $\mu = np$ and the standard deviation is $\sigma = \sqrt{npq}$. The probability of exactly x successes in n trials is $P(X = x) = \binom{n}{x} p^x q^{n-x}$.

Expected Value: the expected arithmetic average when an experiment is repeated many times; also called the mean. Notation: μ. For a discrete random variable (RV) with probability distribution function P(x), the definition can also be written in the form $\mu = \sum x P(x)$.

Geometric Distribution: a discrete random variable (RV) that arises from Bernoulli trials; the trials are repeated until the first success. The geometric variable X is defined as the number of trials until the first success. Notation: X ~ G(p). The mean is $\mu = \frac{1}{p}$ and the standard deviation is $\sigma = \sqrt{\frac{1}{p}\left(\frac{1}{p} - 1\right)}$.
The probability that the first success occurs on trial x is given by the formula $P(X = x) = p(1 - p)^{x-1}$.

Geometric Experiment: a statistical experiment with the following properties: 1. There are one or more Bernoulli trials, with all failures except the last one, which is a success. 2. In theory, the number of trials could go on forever; there must be at least one trial. 3. The probability, p, of a success and the probability, q, of a failure do not change from trial to trial.

Hypergeometric Experiment: a statistical experiment with the following properties: 1. You take samples from two groups. 2. You are concerned with a group of interest, called the first group. 3. You sample without replacement from the combined groups. 4. Each pick is not independent, since sampling is without replacement. 5. You are not dealing with Bernoulli trials.

Hypergeometric Probability: a discrete random variable (RV) that is characterized by: 1. A fixed number of trials. 2. The probability of success is not the same from trial to trial. We sample from two groups of items when we are interested in only one group. X is defined as the number of successes out of the total number of items chosen. Notation: X ~ H(r, b, n), where r = the number of items in the group of interest, b = the number of items in the group not of interest, and n = the number of items chosen.

Mean: a number that measures the central tendency; a common name for mean is "average." The term "mean" is a shortened form of "arithmetic mean." By definition, the mean for a sample (denoted by $\bar{x}$) is $\bar{x} = \frac{\text{sum of all values in the sample}}{\text{number of values in the sample}}$, and the mean for a population (denoted by μ) is $\mu = \frac{\text{sum of all values in the population}}{\text{number of values in the population}}$.
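The binomial and geometric formulas above translate directly into code. A short Python sketch (the function names are my own) that evaluates both pmfs and sanity-checks that each sums to 1:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ B(n, p): C(n, x) * p^x * q^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def geometric_pmf(x, p):
    """P(X = x) for X ~ G(p), where X = trial number of the first success."""
    return p * (1 - p)**(x - 1)

n, p = 10, 0.5
mean = n * p                   # mu = np = 5.0
sd = (n * p * (1 - p)) ** 0.5  # sigma = sqrt(npq)
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))
print(mean, sd, total)         # mean = 5.0, sd ≈ 1.581, total ≈ 1.0
```

For the geometric pmf, summing over x = 1, 2, 3, ... likewise approaches 1, matching the fact that a success eventually occurs with probability 1 when p > 0.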
Mean of a Probability Distribution: the long-term average of many trials of a statistical experiment.

Poisson Probability Distribution: a discrete random variable (RV) that counts the number of times a certain event will occur in a specific interval; characteristics of the variable: □ The probability that the event occurs in a given interval is the same for all intervals. □ The events occur with a known mean and independently of the time since the last event. The distribution is defined by the mean μ of the event in the interval. Notation: X ~ P(μ). When the Poisson distribution is used to approximate a binomial, the mean is μ = np. The standard deviation is $\sigma = \sqrt{\mu}$. The probability of exactly x occurrences in an interval is $P(X = x) = \frac{\mu^x e^{-\mu}}{x!}$. The Poisson distribution is often used to approximate the binomial distribution when n is "large" and p is "small" (a general rule is that n should be greater than or equal to 20 and p should be less than or equal to 0.05).

Probability Distribution Function (PDF): a mathematical description of a discrete random variable (RV), given either in the form of an equation (formula) or in the form of a table listing all the possible outcomes of an experiment and the probability associated with each outcome.

Random Variable (RV): a characteristic of interest in a population being studied; common notation for variables is upper case Latin letters X, Y, Z, ...; common notation for a specific value from the domain (the set of all possible values of a variable) is lower case Latin letters x, y, and z. For example, if X is the number of children in a family, then x represents a specific integer 0, 1, 2, 3, .... Variables in statistics differ from variables in intermediate algebra in the two following ways:
□ The domain of the random variable (RV) is not necessarily a numerical set; the domain may be expressed in words; for example, if X = hair color, then the domain is {black, blond, gray, green, ...}. □ We can tell what specific value x the random variable X takes only after performing the experiment.

Standard Deviation of a Probability Distribution: a number that measures how far the outcomes of a statistical experiment are from the mean of the distribution: $\sigma = \sqrt{\sum [(x - \mu)^2 \cdot P(x)]}$.

The Law of Large Numbers: as the number of trials in a probability experiment increases, the difference between the theoretical probability of an event and the relative frequency probability approaches zero.
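The mean, standard deviation, and Law of Large Numbers definitions above can be illustrated on a concrete discrete RV. A Python sketch using a fair six-sided die as the example distribution (the choice of RV is mine, not from the glossary):

```python
import random

# A discrete RV: X = the number showing on a fair die.
support = [1, 2, 3, 4, 5, 6]
P = {x: 1 / 6 for x in support}

# mu = sum of x * P(x); sigma = sqrt(sum of (x - mu)^2 * P(x))
mu = sum(x * P[x] for x in support)                        # 3.5
sigma = sum((x - mu) ** 2 * P[x] for x in support) ** 0.5  # ~1.708

# Law of Large Numbers: the sample mean approaches mu as the trials grow.
random.seed(0)
n = 100_000
sample_mean = sum(random.choice(support) for _ in range(n)) / n
print(mu, round(sigma, 4), round(sample_mean, 3))
```

With 100,000 simulated rolls, the relative-frequency mean lands very close to the theoretical 3.5, which is exactly what the Law of Large Numbers predicts.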
GETPIVOTDATA Function: Definition, Formula Examples and Usage GETPIVOTDATA Function Are you tired of manually searching for specific data in your pivot tables? Well, the GETPIVOTDATA function is here to save the day. This handy function allows you to extract specific data from a pivot table with just a few simple arguments. No more digging through rows and columns trying to find the right data – GETPIVOTDATA will do it for you in a snap. But that’s not all! The GETPIVOTDATA function is also very flexible. You can use it to extract data from multiple pivot tables at once, or even use it in conjunction with other functions to create even more powerful formulas. In this blog post, we’ll go over all the ins and outs of the GETPIVOTDATA function, including how to use it, some common use cases, and some tips and tricks for getting the most out of it. So if you’re ready to streamline your data analysis process with the GETPIVOTDATA function, let’s dive in! Definition of GETPIVOTDATA Function The GETPIVOTDATA function in Google Sheets is a built-in function that allows you to extract specific data from a pivot table. It takes a few arguments, such as the pivot table range, the data field, and the item you want to retrieve data for. It then returns the value of that item in the pivot table. You can use the GETPIVOTDATA function to quickly retrieve specific data from a pivot table without manually searching through the rows and columns, and it can be used in conjunction with other functions to create more powerful formulas. Syntax of GETPIVOTDATA Function The syntax of the GETPIVOTDATA function in Google Sheets is as follows: =GETPIVOTDATA(data_field, pivot_table, [field1, item1, field2, item2, ...]) The data_field argument specifies the data field that you want to retrieve data from. This can be a cell reference or a string enclosed in quotation marks. The pivot_table argument specifies the range of the pivot table that you want to retrieve data from. 
This can be a cell reference or a named range. The optional field1, item1, field2, item2, etc. arguments allow you to specify which items you want to retrieve data for. For example, if your pivot table has a “Country” field and a “Year” field, you can use the field1, item1, field2, item2 arguments to specify which country and year you want to retrieve data for. If you do not specify any of these arguments, the GETPIVOTDATA function will return data for all items in the pivot table. Here is an example of the GETPIVOTDATA function in action: =GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021) This formula would retrieve the sales data for the USA in 2021 from the pivot table in the range A1:F20. Examples of GETPIVOTDATA Function Here are three examples of how you can use the GETPIVOTDATA function in Google Sheets: 1. Retrieve data for a specific item: Suppose you have a pivot table that shows sales data by country and year. You can use the GETPIVOTDATA function to retrieve the sales data for a specific country and year, like this: =GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021) This formula would retrieve the sales data for the USA in 2021 from the pivot table in the range A1:F20. 2. Retrieve data for multiple items: You can also use the GETPIVOTDATA function to retrieve data for multiple items by using the field1, item1, field2, item2, etc. arguments multiple times. For example, suppose you want to retrieve the sales data for the USA and Canada in 2021. You could use the following formula: =GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021) + GETPIVOTDATA("Sales", A1:F20, "Country", "Canada", "Year", 2021) This formula would retrieve the sales data for the USA and Canada in 2021 and add them together. 3. Use the GETPIVOTDATA function in conjunction with other functions: You can also use the GETPIVOTDATA function in conjunction with other functions to create more powerful formulas. 
For example, suppose you want to find the average sales for the USA in 2021. You could use the following formula: =AVERAGE(GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021)) This formula would retrieve the sales data for the USA in 2021 from the pivot table and then calculate the average of that data. Use Case of GETPIVOTDATA Function Here are a few real-life examples of using the GETPIVOTDATA function in Google Sheets: 1. Extracting data from a pivot table to create a report: Suppose you have a pivot table that shows sales data by region and product. You want to create a report that shows the total sales for each region. You could use the GETPIVOTDATA function to extract the data from the pivot table and sum it up. For example: =SUM(GETPIVOTDATA("Sales", A1:F20, "Region", "East")) This formula would retrieve the sales data for the East region from the pivot table in the range A1:F20 and sum it up. 2. Retrieving data for multiple items to create a chart: Suppose you want to create a chart that shows the sales data for the USA and Canada in 2021. You could use the GETPIVOTDATA function to pull each country's value into its own helper cell and then build the chart from those cells (Insert > Chart). For example, put =GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021) in one cell and =GETPIVOTDATA("Sales", A1:F20, "Country", "Canada", "Year", 2021) in another, then chart the two cells. 3. Retrieving data for a specific item to use in a formula: Suppose you want to calculate the profit margin for a specific product in a specific region. You could use the GETPIVOTDATA function to retrieve the sales and cost data for that product and region and then use a formula to calculate the profit margin.
For example: =(GETPIVOTDATA("Sales", A1:F20, "Region", "East", "Product", "Widget") - GETPIVOTDATA("Cost", A1:F20, "Region", "East", "Product", "Widget")) / GETPIVOTDATA("Sales", A1:F20, "Region", "East", "Product", "Widget") This formula would retrieve the sales and cost data for the East region and the product "Widget" and then calculate the profit margin (profit divided by sales). Limitations of GETPIVOTDATA Function The GETPIVOTDATA function in Google Sheets is a powerful tool for extracting data from pivot tables, but it does have some limitations. Here are a few things to keep in mind when using the GETPIVOTDATA function: 1. The GETPIVOTDATA function only works with pivot tables: The GETPIVOTDATA function is designed specifically for extracting data from pivot tables, so it will not work with regular data ranges. If you want to extract data from a regular data range, you will need to use a different function, such as VLOOKUP or INDEX/MATCH. 2. The GETPIVOTDATA function does not update automatically: The GETPIVOTDATA function returns a static value, which means that it does not update automatically when the data in the pivot table changes. If you want the data extracted by the GETPIVOTDATA function to update automatically, you will need to use the INDIRECT function or a dynamic named range. 3. The GETPIVOTDATA function can be slow with large pivot tables: The GETPIVOTDATA function can be slow with large pivot tables, especially if you are using it to extract data for multiple items. This can make your spreadsheet slow to respond and may cause performance issues. 4. The GETPIVOTDATA function can be difficult to use with complex pivot tables: The GETPIVOTDATA function can be difficult to use with pivot tables that have a large number of fields or a large number of items. It can be challenging to keep track of all the field and item arguments, and it can be easy to make mistakes when using the function. Despite these limitations, the GETPIVOTDATA function can still be a very useful tool for extracting data from pivot tables in Google Sheets.
Just be sure to keep these limitations in mind when using the function to avoid any issues. Commonly Used Functions Along With GETPIVOTDATA Here is a list of commonly used functions that can be used along with the GETPIVOTDATA function in Google Sheets: 1. SUM: The SUM function adds up a range of cells. You can use the SUM function to add up the data extracted by the GETPIVOTDATA function. For example: =SUM(GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021)) This formula would retrieve the sales data for the USA in 2021 from the pivot table in the range A1:F20 and then add up the data. 2. AVERAGE: The AVERAGE function calculates the average of a range of cells. You can use the AVERAGE function to calculate the average of the data extracted by the GETPIVOTDATA function. For example: =AVERAGE(GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021)) This formula would retrieve the sales data for the USA in 2021 from the pivot table in the range A1:F20 and then calculate the average of that data. 3. MIN: The MIN function returns the minimum value in a range of cells. You can use the MIN function to find the minimum value of the data extracted by the GETPIVOTDATA function. For example: =MIN(GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021)) This formula would retrieve the sales data for the USA in 2021 from the pivot table in the range A1:F20 and then find the minimum value of that data. 4. MAX: The MAX function returns the maximum value in a range of cells. You can use the MAX function to find the maximum value of the data extracted by the GETPIVOTDATA function. For example: =MAX(GETPIVOTDATA("Sales", A1:F20, "Country", "USA", "Year", 2021)) This formula would retrieve the sales data for the USA in 2021 from the pivot table in the range A1:F20 and then find the maximum value of that data. The GETPIVOTDATA function is a powerful tool for extracting specific data from pivot tables in Google Sheets.
It allows you to quickly retrieve data for a specific item or multiple items without manually searching through the rows and columns of the pivot table. The GETPIVOTDATA function is also very flexible and can be used in conjunction with other functions to create more powerful formulas. Here are the key points to remember about the GETPIVOTDATA function: • The GETPIVOTDATA function takes a few arguments, such as the pivot table range, the data field, and the item(s) you want to retrieve data for. • The GETPIVOTDATA function returns a static value, which means it does not update automatically when the data in the pivot table changes. • The GETPIVOTDATA function can be slow with large pivot tables and can be challenging to use with complex pivot tables. • The GETPIVOTDATA function can be used with other functions, such as SUM, AVERAGE, MIN, and MAX, to create more powerful formulas. If you haven’t tried using the GETPIVOTDATA function in your own Google Sheets, we encourage you to give it a try! It can save you a lot of time and effort when working with pivot tables. Just be sure to keep the limitations of the GETPIVOTDATA function in mind to avoid any issues. Happy data analysis! Video: GETPIVOTDATA Function In this video, you will see how to use the GETPIVOTDATA function. We suggest you watch the video to understand the usage of the GETPIVOTDATA formula. Related Posts Worth Your Attention Leave a Comment
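GETPIVOTDATA itself exists only inside a spreadsheet, but its core behavior — filter records by field/item pairs, then aggregate one data field — is easy to mimic outside Sheets. A rough pure-Python analog (the function, records, and field names below are invented for illustration; this is not part of any Sheets API):

```python
def get_pivot_data(data_field, rows, *pairs):
    """Sum `data_field` over rows matching every (field, item) pair,
    mimicking =GETPIVOTDATA(data_field, pivot_table, field1, item1, ...)."""
    filters = list(zip(pairs[::2], pairs[1::2]))
    matching = [r for r in rows if all(r[f] == v for f, v in filters)]
    return sum(r[data_field] for r in matching)

rows = [
    {"Country": "USA",    "Year": 2021, "Sales": 120},
    {"Country": "USA",    "Year": 2021, "Sales": 80},
    {"Country": "Canada", "Year": 2021, "Sales": 50},
    {"Country": "USA",    "Year": 2020, "Sales": 95},
]
print(get_pivot_data("Sales", rows, "Country", "USA", "Year", 2021))  # 200
```

With no field/item pairs at all, the function falls through to summing every record, which mirrors the blog's note that omitting the optional arguments returns data for all items.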
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.STACS.2022.35 URN: urn:nbn:de:0030-drops-158458 URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/15845/ Gribling, Sander ; Nieuwboer, Harold Improved Quantum Lower and Upper Bounds for Matrix Scaling Matrix scaling is a simple to state, yet widely applicable linear-algebraic problem: the goal is to scale the rows and columns of a given non-negative matrix such that the rescaled matrix has prescribed row and column sums. Motivated by recent results on first-order quantum algorithms for matrix scaling, we investigate the possibilities for quantum speedups for classical second-order algorithms, which comprise the state-of-the-art in the classical setting. We first show that there can be essentially no quantum speedup in terms of the input size in the high-precision regime: any quantum algorithm that solves the matrix scaling problem for n × n matrices with at most m non-zero entries and with ℓ₂-error ε = Θ~(1/m) must make Ω(m) queries to the matrix, even when the success probability is exponentially small in n. Additionally, we show that for ε ∈ [1/n,1/2], any quantum algorithm capable of producing ε/100-ℓ₁-approximations of the row-sum vector of a (dense) normalized matrix uses Ω(n/ε) queries, and that there exists a constant ε₀ > 0 for which this problem takes Ω(n^{1.5}) queries. To complement these results we give improved quantum algorithms in the low-precision regime: with quantum graph sparsification and amplitude estimation, a box-constrained Newton method can be sped up in the large-ε regime, and outperforms previous quantum algorithms. For entrywise-positive matrices, we find an ε-ℓ₁-scaling in time O~(n^{1.5}/ε²), whereas the best previously known bounds were O~(n² polylog(1/ε)) (classical) and O~(n^{1.5}/ε³) (quantum).
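For readers unfamiliar with the problem the abstract studies: the classical first-order baseline is Sinkhorn iteration, which alternately rescales rows and columns toward the target sums. A Python sketch of that basic iteration (not of the second-order or quantum methods the paper analyzes):

```python
def sinkhorn(A, r, c, iters=500):
    """Scale rows/columns of an entrywise-positive matrix A so that row sums
    approach r and column sums approach c (requires sum(r) == sum(c))."""
    n, m = len(A), len(A[0])
    A = [row[:] for row in A]  # work on a copy
    for _ in range(iters):
        for i in range(n):                       # scale each row to sum r[i]
            s = sum(A[i])
            A[i] = [a * r[i] / s for a in A[i]]
        for j in range(m):                       # scale each column to sum c[j]
            s = sum(A[i][j] for i in range(n))
            for i in range(n):
                A[i][j] *= c[j] / s
    return A

A = [[1.0, 2.0], [3.0, 4.0]]
B = sinkhorn(A, r=[1.0, 1.0], c=[1.0, 1.0])  # doubly stochastic target
print([sum(row) for row in B])               # row sums approach [1.0, 1.0]
```

For entrywise-positive matrices this iteration converges; each sweep only normalizes one side, so the other side's sums are approximate until convergence, which is exactly the ε-ℓ₁-error the abstract quantifies.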
BibTeX - Entry

@InProceedings{gribling_et_al:LIPIcs.STACS.2022.35,
  author = {Gribling, Sander and Nieuwboer, Harold},
  title = {{Improved Quantum Lower and Upper Bounds for Matrix Scaling}},
  booktitle = {39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages = {35:1--35:23},
  series = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN = {978-3-95977-222-8},
  ISSN = {1868-8969},
  year = {2022},
  volume = {219},
  editor = {Berenbrink, Petra and Monmege, Benjamin},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address = {Dagstuhl, Germany},
  URL = {https://drops.dagstuhl.de/opus/volltexte/2022/15845},
  URN = {urn:nbn:de:0030-drops-158458},
  doi = {10.4230/LIPIcs.STACS.2022.35},
  annote = {Keywords: Matrix scaling, quantum algorithms, lower bounds}
}

Keywords: Matrix scaling, quantum algorithms, lower bounds
Collection: 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)
Issue Date: 2022
Date of publication: 09.03.2022
Dror Bar-Natan: Talks: Everything around $sl_{2+}^\epsilon$ is DoPeGDO. So what?

Abstract. I'll explain what "everything around" means: classical and quantum $m$, $\Delta$, $S$, $tr$, $R$, $C$, and $\theta$, as well as $P$, $\Phi$, $J$, ${\mathbb D}$, and more, and all of their compositions. What DoPeGDO means: the category of Docile Perturbed Gaussian Differential Operators. And what $sl_{2+}^\epsilon$ means: a solvable approximation of the semi-simple Lie algebra $sl_2$. Knot theorists should rejoice because all this leads to very powerful and well-behaved poly-time-computable knot invariants. Quantum algebraists should rejoice because it's a realistic playground for testing complicated equations and theories. This is joint work with Roland van der Veen and continues work by Rozansky, Ohtsuki, and Overbay.

URL: http://drorbn.net/k23. Handout: DoPeGDO.html, DoPeGDO.pdf, DoPeGDO.png.
DaNang-1905 and at CRM-1907.